Dean, I have to correct you here on one thing, but more to add something else to the intuitive logicals story, which will, I hope, explain why SSA is not always doing what our clients expect us to do :-)

The SQL optimizer does not evaluate access paths based on their creation date. It does that based on a huge number of factors, primarily its estimate of the time needed to access a specific set of data, i.e. records, which is in turn based on many factors: the size of the file, the degree of recent file size change, current key range estimates (i.e. the number of records having key values equal to 'A', 'B', etc.), current statistical information about the number of logical reads of each record from the file, the maximum size of the access path (remember the 1 TByte limit requirement!), whether the access path was created for a logical file or for an SQL index, the size of the memory pools and of main memory in total, system configuration, CPU size (i.e. CPW value), the time needed to build an index from an existing index if that is applicable in the specific implementation of the analyzed SQL statement, and many, many more.

What it does do by taking creation dates into account is this: it evaluates all existing access paths starting from the most recently created one and going back to the oldest one, but it will evaluate all of them unless you limit the time available for this evaluation (by playing with the QQQOPTIONS data area in the past, and with the QAQQINI options file in the latest OS/400 releases).

It is important here to distinguish between an access path and the actual logical file or index it was created for. If you create a new SQL index or logical file whose access path would be identical to an existing access path created for another index/logical file, the system will not build a second one; it will declare that the two indexes/logical files share one access path instead (see the sketch below). This explains why some new indexes you create do not get evaluated by the optimizer at all. Also, any index/logical file owning an access path which is damaged or corrupted will not be evaluated by the query optimizer either, but I think that has been discussed on this list in the past.

The reason many people think the optimizer only takes the creation date into account is that they usually learn about a new SQL performance problem after adding a new access path (by creating a new index or logical file) to the existing configuration, which leads them to the logical conclusion that the optimizer took the most recently created index into account and messed things up. This is only partially true: the problem was indeed introduced by the latest access path, but the reasons for it are much more complex than they seem. Sometimes it is simply because the logic of the optimizer (the way it has been coded) is not perfect, i.e. has a hidden bug, and the appearance of some new, rare or particularly complex index makes that bug surface for the first time now and not earlier. Sometimes it is because the new index caused the access plan for the particular SQL statement to be rebuilt (which means all existing access paths are re-evaluated from scratch), and with the recently collected statistics the optimizer decides that some different implementation (whether this new access path, or an existing one different from the previously chosen one) is the better choice, even though it is sometimes obvious even to the 'naked eye' that this does not make sense - yes, that is (just another) bug in the query optimizer.
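To make the access-path sharing concrete, here is a small SQL sketch; the library, table, column and index names are invented for illustration and are not actual BPCS objects.

  -- Hypothetical objects: table MYLIB.ORDHIST with columns CUSTNO and ORDDATE.
  -- The first index builds a new access path keyed on (CUSTNO, ORDDATE).
  CREATE INDEX MYLIB.ORDHIST_IX1
      ON MYLIB.ORDHIST (CUSTNO, ORDDATE);

  -- The second index has an identical key definition, so the system does
  -- not build a second access path: the new index shares the existing one
  -- and gives the optimizer nothing new to evaluate.
  CREATE INDEX MYLIB.ORDHIST_IX2
      ON MYLIB.ORDHIST (CUSTNO, ORDDATE);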
In theory a new access path should never cause a problem, but that is true only if the query optimizer code is perfect and has no hidden bugs left waiting for some special occasion to show up, which we all know is not possible. Finally, the query optimizer will sometimes decide to change the query implementation even without any new access path being added, simply because some other factor affecting its decision logic has changed on the system - the best examples are an upgrade to a later OS/400 release, applying the latest SQL PTFs, adding more memory to the system, or even changing the memory pool configuration.

This is why performance tuning is such a complex story, and why we cannot always know in advance which logical files would benefit each client's configuration. You are right that the results SSA obtains using a small test database internally will not always be applicable to all clients, and that is why we always insist that serious analysis has to be performed on the actual client box and BPCS database. But I believe Genyphyr did not have only this in mind when suggesting that our clients contact the helpline when looking for performance improvements; we also have a performance team which can assist a client in such a situation by running a performance analysis and finding out whether the problem is caused by the lack of a logical file which would improve performance in their particular configuration (but would not necessarily help all other clients), or whether the problem is caused by something else. The biggest benefit of contacting the SSA helpline in this situation is that you make sure you are not dealing with an already known problem (whether it is a missing logical which, because of the nature of the problem, applies to most clients, a missing BMR, or a missing OS/400 PTF). After checking this with the helpline you will have a better idea of the most productive way of pursuing it: looking into it yourself, getting assistance from SSA, or contacting IBM to report a new OS/400 SQL problem.

Finally, be careful with the usefulness of logicals based on BPCS work files. Some of them are cleared and repopulated with data in everyday processing, which causes their access plans to be rebuilt and the access paths associated with them to be re-evaluated each time this happens; that is not a problem with small files but can be a significant overhead with larger work files. Work files are actually the files to which intuitive logicals apply the least! To my knowledge, the most efficient approach for work files which affect performance is to create 'intuitive' indexes on the fly, i.e. once the files have been populated with data within the process, and to remove the indexes after the files are cleaned up (see the sketch below). This is not possible in many cases, and you also have to weigh the benefit of having the index and saving the time of processing the file sequentially against the time the system spends building the index.
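A minimal sketch of that on-the-fly approach, again with invented (non-BPCS) object names:

  -- Hypothetical work file MYLIB.WRKORD, just populated by the process.
  -- Build the index only after the data is loaded, so the load itself
  -- does not pay any index-maintenance cost.
  CREATE INDEX MYLIB.WRKORD_IX1
      ON MYLIB.WRKORD (ITEMNO, WHSE);

  -- ... run the processing step that benefits from keyed access ...

  -- Drop the index before the work file is cleared and repopulated.
  DROP INDEX MYLIB.WRKORD_IX1;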
Regards,
Goran

Marie,

In a message dated 9/20/00 7:33:29 PM Eastern Daylight Time, mgraziano@badgermeter.com writes:

> Well that is ok to call SSA. But we have proven out our logicals. When we
> called Helpline no one had any answers. If SSA is aware of performance
> problems, then they should publish them. We had our invoicing go from 8+
> hours down to 45 minutes with our inhouse logical. SSA can't beat that!
> Thanks for the update. We will stick with our inhouse logicals.

Yes, SSA _should_ provide some rather intuitive logicals; look at ECHW and ECLW and, as someone that doesn't even know SQL, tell me what intuitive logicals are missing. However, the rest of the performance logicals are highly subjective, based upon your configuration and usage of BPCS. What would be a boon to one site could prove to be a bane to another. The SQL optimizer evaluates access paths based upon creation date, not usefulness. If you don't do the analysis, you could end up creating a logical that destroys otherwise optimal performance. DBMON doesn't always provide accurate results, and the ability to evaluate those results against your own database is required. SSA evaluates performance based upon a minimal database, not 500K records in ITH or HPO. Each company is different, and don't let anyone tell you how to optimize your database without a thorough analysis...

JMHO,

Dean Asmussen
Enterprise Systems Consulting, Inc.
Fuquay-Varina, NC  USA
E-mail: DAsmussen@aol.com

+---
| This is the BPCS Users Mailing List!
| To submit a new message, send your mail to BPCS-L@midrange.com.
| To subscribe to this list send email to BPCS-L-SUB@midrange.com.
| To unsubscribe from this list send email to BPCS-L-UNSUB@midrange.com.
| Questions should be directed to the list owner: dasmussen@aol.com
+---