> From: Douglas Handy
>
> >That's fine while you're stepping through code, but sometimes I want to
> >know the current condition of a variable once I've hit a breakpoint.
> >With an indicator, I could, with a BIF I cannot.
>
> But the same argument can be made for any code you encapsulate behind a
> function.

I guess I just can't explain my position to you: using an indicator, I had
visibility, using a BIF I no longer do. IBM wants me to use the BIF, but
does not give me the visibility that I had with the indicator. Can you see
this point, or am I just not being clear?

> >What if I had hundreds of programs that don't use the INFDS - is it your
> >opinion that I should add an INFDS to them all?
>
> I don't see a problem with adding an INFDS.

From this answer, I see I'm not being clear, or at least that our
backgrounds are different enough that we see things very differently. I
come from a development shop, and the concept of touching, even slightly,
hundreds of different programs is frightening. Even if nothing goes wrong,
it's still a matter of making the changes. And it's not that simple,
either - you have to add one INFDS for every file. Even at a couple of
minutes per change, modifying 1000 programs means dozens of hours of work.
If you have a change management system, it's even more complex. Saying
"just add an INFDS for every file" may be more work than you imagine.

> >The loss of functionality is not purely the issue of the BIF, Doug. It's
> >the fact that, before the BIFs were available, I would write an I/O
> >opcode, set a breakpoint immediately after the opcode, and be able to
> >immediately determine the results of the operation without having to
> >step through other code. With the new coding style, I am told to stop
> >using indicators, and use BIFs instead, but now I can no longer execute
> >that same procedure.
>
> As I said, it works basically that way for me. Set the breakpoint
> immediately after the opcode, and when it breaks just do a quick
> single-step to see where it goes. (In my coding style, there is normally
> an IF test or loop condition or such immediately after an I/O operation.)

This is exactly the point that I just don't seem to get across to you.
What you do is NOT what I do! I do NOT do the single step. And in fact, I
may not have an IF immediately following my I/O operation. If I do not,
then I have to wait until I get to the IF statement to see the results of
my I/O operation, where I didn't have to before. Can you NOT see that this
is different? The fact that it only matters slightly to you doesn't make
it not different - it only makes it less important to YOU.
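To make the difference concrete, here is a rough sketch of the two styles
(the file CUSTMAST and indicator 90 are made up for illustration, and the
fixed-form columns are only approximate):

     FCUSTMAST  IF   E           K DISK
      *
      * Old style: the resulting indicator comes on at end of file, so
      * at a breakpoint on the line after the READ, "EVAL *IN90" in the
      * debugger shows the result of the operation immediately.
     C                   READ      CUSTMAST                             90
      *
      * New style: %EOF replaces the indicator, but the debugger cannot
      * evaluate a BIF, so there is nothing to display at that same
      * breakpoint until execution reaches a line that tests it.
     C                   READ      CUSTMAST
     C                   IF        %EOF(CUSTMAST)
     C                   RETURN
     C                   ENDIF

That gap between "after the READ" and "at the IF" is exactly the
visibility I'm talking about.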
> Where's the beef?

Right where I said it was - it is wrong for me to have to change the way I
debug in order to implement a syntactic change on IBM's part. Or perhaps
"wrong" is an invalid term, since it implies a value judgement. Rather,
the whole issue implies a lack of concern on IBM's part as to what impact
their design decisions have on existing implementations. But from your
comments, you don't think that's a bad thing, and you're willing to change
how you do things every time IBM changes their mind. Perhaps I'm just not
as flexible as you are.

> >It really wouldn't have taken that long to figure out a way to show the
> >results of those particular BIFs.
>
> Nor does it seem hard to make an ever so slight modification to the way
> you use debug. It sounds like you set breakpoints like I do -- just
> after an I/O. I can single step faster than displaying an indicator, or
> simply move the breakpoint. Either way I don't feel like I've really
> lost anything.

IBM is the vendor. I am the customer. If every time I made a change, my
clients had to change how they ran their business, I suspect I would be
out of clients very soon. You don't seem to think that's a problem. We
have different world views.

> And the INFDS still seems like a viable option if you want to tell when
> you are at other breakpoints. The Prefix() keyword makes external
> definitions for stuff like the INFDS trivial to implement.

Again, you and I have very different views on the concept of "trivial". I
think in terms of users with thousands of programs, where implementing a
change like yours could actually cost thousands of dollars. (For anyone
curious what the change actually involves, there's a sketch at the end of
this note.)

> >This is my opinion, clearly, and you may not share it. But I think it's
> >kind of annoying that I need to change my development style
> >unnecessarily.
>
> It doesn't change my debugging style. I just use F10 to single-step
> instead of F9 and Enter to display an indicator -- that is, assuming I
> had recently displayed the indicator.

Well, I outlined how it affects me. Evidently since it doesn't affect you,
then it's inconsequential. I think that's a rather narrow view, but hey,
that's me. But this is a lot of bandwidth for what seems to be essentially
an unresolvable difference in viewpoint, so I suspect this is the right
time to let the conversation go.
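P.S. For completeness, the INFDS change being suggested looks roughly like
this (a sketch only - CUSTMAST and the subfield name are hypothetical, the
columns are approximate, and the status values are as I remember them, so
check the reference before relying on them):

     FCUSTMAST  IF   E           K DISK    INFDS(CustInf)
      *
      * With the INFDS attached, "EVAL CustSts" works at any breakpoint:
      * status 00011 is end of file on a READ, and 00012 is a
      * record-not-found on a CHAIN or SETLL.
     D CustInf         DS
     D  CustSts                *STATUS
      *
      * An externally described layout reused with Prefix(), something
      * like:
      *   D CustInf       E DS            EXTNAME(INFDSLYT) PREFIX(C_)
      * cuts down the typing, but it is still one data structure per
      * file, in every program you touch.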