Sure, but the difference between CHAIN and SETLL will not make the code any more or less maintainable. That being the case, and especially since there was initially discussion about making a corporate standard out of this, it makes sense to identify the best-performing route.
'Most efficient' is a nasty thing to gauge, and some of the worst code I have ever been exposed to was the result of premature optimisation. There's just no simple answer to whether SETLL or CHAIN will perform better without a deep understanding of the underlying data population. Perhaps the most significant performance enhancement is to narrow the width of the returned record. [1]
For Shannon's particular case, he ought to model it both ways and run a simulation to see which one performs better. This is pretty much my standard performance tuning advice these days. Oh, and keep the test bed handy because when you add memory or change disk drives or add a PTF or change the composition of the database, performance will change.
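One rough way to model it both ways is to time each version over the same test key set with %timestamp and %diff. This is a minimal free-format sketch, not a full harness; the comment placeholders stand in for whichever access method is being measured:

```
       dcl-s start  timestamp;
       dcl-s micros int(20);

       start = %timestamp();
       // ... run the CHAIN (or SETLL) version over the test key set ...
       micros = %diff(%timestamp() : start : *mseconds);

       // *mseconds yields microseconds in RPG duration arithmetic
       dsply ('Elapsed: ' + %char(micros) + ' us');
```

Run both variants against the same data, several times each, and keep the results with the test bed so they can be re-run after hardware or database changes.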
As far as a standard goes, I think that I'd settle (no pun intended) on a standard that makes the code easier to work on. An example might be to use SETLL when the intent is to retrieve a group of records like all the items bought by a particular customer. And to use CHAIN to retrieve a single record such as the customer master record. Then, anyone seeing SETLL knows that the intent is to fill a subfile or some similar 'set' chore.
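The convention above might look like this in RPG. A minimal free-format sketch; CUSTMAST, ORDDTL, custNo, and the key list are hypothetical names for a customer master and an order detail file:

```
       // CHAIN: fetch a single record by full key,
       // e.g. the customer master record
       chain (custNo) CUSTMAST;
       if %found(CUSTMAST);
          // exactly one record expected; use its fields
       endif;

       // SETLL + READE: position to a partial key, then
       // read the whole set, e.g. to load a subfile
       setll (custNo) ORDDTL;
       reade (custNo) ORDDTL;
       dow not %eof(ORDDTL);
          // process one order detail record for this customer
          reade (custNo) ORDDTL;
       enddo;
```

Seen this way, the op code itself documents the intent: CHAIN means "one record", SETLL/READE means "a set".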
--buck

[1] Let's say the record we're looking at has 300 fields and is 4000 bytes wide. If we have to read/chain to that file a thousand times, we've told OS400 to shuffle 4 million bytes and made RPG populate three hundred thousand fields. If we only need the name, or an active flag, or something much smaller, AND we've determined that performance needs improvement, create a logical file with the same key structure. It'll share the access path, causing virtually no extra overhead in that regard. Include only the fields we need to see; say it's a name of 30 characters. Now instead of 4 million bytes, OS400 only needs to move thirty thousand. Quite an improvement, and the only program change needed is either changing the F-specification or issuing an OVRDBF. This is just an example with round numbers; what real iSeries program will have a performance problem with a thousand records?
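As a sketch of the fix the footnote describes (all names hypothetical, using modern free-format declarations): suppose CUSTNAML is a logical file built over CUSTMAST with the same key, containing only the key and the 30-byte name field. The only RPG change is the file declaration:

```
       // Hypothetical narrow logical file CUSTNAML over CUSTMAST:
       // same key (CUSTNO), but only the key and the 30-byte name,
       // so each read moves ~30 bytes instead of the full 4000.
       dcl-f CUSTNAML keyed usage(*input);

       chain (custNo) CUSTNAML;
       if %found(CUSTNAML);
          // the name is available; the other ~300 fields of the
          // master record were never moved into the program buffer
       endif;
```

Because the logical shares the physical file's access path, the lookup cost is unchanged; only the buffer traffic shrinks.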
Reducing I/O can happen either by issuing fewer I/O operations or by requesting less data to be moved into the RPG buffers.
This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].