A couple of notes that I think were not covered:

1) *CALC works well when you have similar jobs using the same sets of database files and you have lots of memory in the pool. You have to keep memory in the pool using the WRKSHRPOOL minimums, because the autotuner basically cannot see the effect of the extra memory in use by *CALC and will remove it without regard to *CALC. When the jobs are not using the same set of database files, *CALC will have little effect - it is all about keeping records in memory (caching). When you have a small amount of memory, *CALC will also appear to have little effect.

2) Using shared pools is a great idea - just be careful not to overdo it. Separate your work into like jobs and set them up to run together in shared pools. Use the maximum and minimum limits to your advantage: let the autotuner do its work and watch where the limits are. For example, if it always tunes a pool up to the maximum, increase the maximum %; if it always tunes down to the minimum, reduce the minimum %, and so on. Pools like *SPOOL can often be set to a minimum of 0.01% and will still work OK.

3) The autotuner sucks at tuning activity levels. The idea is to have the activity level at the most efficient number - where the system is handling the greatest number of jobs in the most effective way. You will find that basic autotuning will just keep adding more and more activity levels until it has "too many". I had one situation where the normal activity level was 150 and one day, when there were too many runaway jobs, it went up to 650 - and stayed there. This is usually not a major problem, though, because most systems these days can handle the processing fairly well if you set up and tune using a good set of shared pools.

4) Without ANY doubt, 99% of performance problems are application related. We recently spent several days tuning an iSeries to handle the workload from several SQL servers, which were getting slow response at certain times of the day.
Even if we had improved performance by 10% with a more efficient work management configuration, the users would not have noticed the difference between 10 seconds and 9 seconds. Even if we had got a 25% improvement, 7.5 seconds is still a long time. It turned out to be a query that had not been written well, had NO supporting indexes pre-built, and was running during these peak times. By improving the performance of the query, we removed the performance problem completely. Lesson learned - don't spend too much time on performance configuration. Do it once, then review regularly, and prevent programmers from changing things on the fly.

5) Increasing the length of a timeslice is the opposite of what you should do... oh, that is not related to pools :-)

- Trevor

----- Original Message -----
From: <rob@xxxxxxxxx>
To: <midrange-l@xxxxxxxxxxxx>
Sent: Monday, November 22, 2004 3:19 PM
Subject: Why separate pools?

> Since many people are setting QPFRADJ at 2 or 3, I think this begs the
> question: "Why use separate memory pools?" QPFRADJ is supposed to
> balance memory between pools. However, IBM puts some limits on it to make
> sure that not everything swings one way or the other too radically. This
> causes a problem when you do have a major job shift and you do want a more
> radical shift. If I am using QPFRADJ in this manner, then why not just put
> everything into one or two pools? I could still have numerous subsystems
> (if that buys me anything), but why not run them all through the same
> pool(s)?
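In case it helps anyone trying the setup Trevor describes, the pieces above boil down to a handful of CL commands. This is only a rough sketch - the pool chosen and the QPFRADJ value are illustrative, so adjust them for your own workload:

  /* Turn on expert cache (*CALC) for a shared pool */
  CHGSHRPOOL POOL(*SHRPOOL1) PAGING(*CALC)

  /* The per-pool minimum/maximum tuning percentages are entered on the
     WRKSHRPOOL tuning display - this is where you protect a *CALC pool
     from having its memory taken away by the autotuner                 */
  WRKSHRPOOL

  /* Turn on the performance adjuster so it moves memory between pools:
     '2' adjusts at IPL and dynamically, '3' dynamically only           */
  CHGSYSVAL SYSVAL(QPFRADJ) VALUE('2')

After that, watch WRKSHRPOOL over a few days and move the min/max percentages as described in point 2.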
This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].