Hello Rob,
On 10.02.2020 at 13:56, Rob Berendt <rob@xxxxxxxxx> wrote:
I think I stated in that thread that I knew the vast majority of the total jobs were finished and were awaiting disposition of their spooled files.
Maybe you did. Please don't expect me to memorize every message (thread) of yours and apply statements made there to other messages/threads. :-)
I do like your comment on reviewing periodically, and that is something we gained from this performance review. However, some general recommendations would be nice.
Honestly, I obtain my general recommendations by reading the help text and applying a healthy dose of common sense.
One simple review would be to print system values and there is a comparison in that report between your values and IBM defaults.
This could be one way to do it. The question is: do the IBM defaults matter that much? I think it's far more important to tune the values in question according to the condition of the current system, not according to IBM's view of an average system. Defaults are a good starting point, though.
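For reference, a minimal sketch of how such a report could be produced, assuming Rob means the Work with System Values command; the exact layout and columns of the printed report may vary by OS release:

```
/* Print all system values to a spooled file. The printed report     */
/* lists each current value and, on the releases I've seen, the      */
/* shipped IBM default next to it for easy comparison.               */
/* Run from a 5250 command line.                                     */
WRKSYSVAL SYSVAL(*ALL) OUTPUT(*PRINT)
```

Filing that report away periodically also gives you a baseline to diff against after the next release upgrade.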
To me, it's a matter of how much time you want to spend on fine tuning. To tune properly, you need to do actual measuring, which has to be done many times throughout the day to even out the varying production workload.
From my experience with Linux kernel variable tuning, applying in-depth knowledge about what the variable in question actually influences, together with knowing what your system does throughout the day and a dose of common sense, yields a very high rate of successful improvements. To this day, I'm content with this approach's outcome, also with OS/400 and its later incarnations.
(This is also a reason why I stick with a slow machine: proper optimisation makes a much more perceptible difference than on an already fast machine.)
Perhaps with newer versions of the OS they changed defaults and one should be able to justify why you vary from the defaults?
They do, because with newer hardware more work can be done, which requires adjustments mainly in memory allocation and job concurrency. That's IBM's justification, I guess.
My justification is almost always: there's no "one size fits all". Allocating too much memory is less of an issue today, but allocating too little has a lot of impact.
Justification should have enough details to see if the justification is dated.
This assumes that you're referring to absolute values. I'm referring to increasing or decreasing values according to your particular knowledge of what your system is actually doing.
For example, "Based on performance evaluation we set these values to..." is not enough detail, while "Based on a performance evaluation on our B10 running OS/400 Version 1.2, mainly running BPCS V..., we set these values to..." is.
Don't trust any statistics you didn't compile yourself. ;-) Yes, you're right: measuring is preferable to guessing. But when measuring eats up a lot of time for preparing an environment to automate the measuring and the accompanying collection of values, maybe my proposal for handling tuning is not perfect but just good enough. At least better than leaving the default values IBM set for an average usage scenario.
:wq! PoC
PGP-Key: DDD3 4ABF 6413 38DE -
https://www.pocnet.net/poc-key.asc