|
Having them in a shared pool is one part. Another part is getting
everything else out of *BASE.
In case anyone else wants to help add thoughts, I'll spill out my
understanding of some of this.
My understanding is that the performance adjuster doesn't move
memory directly from a shared pool that no longer needs it to one
that does. Rather, memory is always moved out of *BASE to pools that
need it, and memory that has been released from a pool always goes
back into *BASE.
*BASE is the 'base' of memory allocations. For memory to get from
pool X to pool Y, it takes (at least) a two-step process -- first
into *BASE, then back out of *BASE. (Two steps... assuming nothing
running in *BASE grabs it along the way.)
My concern centers on high-priority jobs that commonly run in
*BASE. If such jobs are active, I can't see any reason why they
wouldn't have a chance at grabbing memory as soon as it arrives back
in the *BASE pool.
So here are a bunch of subsystem monitor jobs, jobs like QTCPIP
and QTCPMONITR, prestart jobs like QSQSRVR, routing entries like
QPWFSERVER, and so on -- all using *BASE directly. Paging
happens in *BASE just as it does anywhere else.
But with so many jobs competing for *BASE, how is any simple
measurement going to indicate which function is actually having trouble?
By assigning a few shared pools and directing different kinds of
work into them, you get the first real indication of where problems
are. When *SHRPOOL10 shows high numbers, that at least narrows the
trouble down to a particular group of jobs. You don't get any extra
memory, but you do get an indication of whether more memory might be
useful and an indication of why. By (maybe temporarily) splitting
that group across a couple of additional pools, you narrow things further.
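In CL terms, that's one CHGSHRPOOL plus a pool definition and a
routing-entry change. A sketch only -- the pool number, sizes,
subsystem name, and sequence number are placeholders to replace with
your own (DSPSBSD shows the real routing entries):

    /* Give the shared pool a starting size (KB) and activity      */
    /* level.                                                        */
    CHGSHRPOOL POOL(*SHRPOOL10) SIZE(200000) ACTLVL(20)

    /* Make that shared pool the subsystem's pool 2...             */
    CHGSBSD SBSD(QGPL/MYSBS) POOLS((1 *BASE) (2 *SHRPOOL10))

    /* ...and point a routing entry at pool ID 2 so its jobs run   */
    /* there instead of in *BASE.                                   */
    CHGRTGE SBSD(QGPL/MYSBS) SEQNBR(9999) POOLID(2)

WRKSHRPOOL also has a tuning-data view where the adjuster's minimum
and maximum sizes and the priority for each shared pool can be set,
which keeps the adjuster from shrinking a pool you've split out down
to nothing.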
With normal 'work' removed from *BASE, you can look at *BASE over
almost any interval of a few minutes and get a quick idea of how much
'spare' memory you have at that time. The memory should be in *BASE
when it isn't needed elsewhere. (With some definite exceptions, but
those can often be calculated.)
When your system is short on memory, *BASE should stay down close to
its minimum level -- an early clue that a memory constraint is near
or already here. When you have more than is needed, *BASE will rise.
When extra memory is in *BASE, the performance adjuster can move it
to where it's needed in essentially a single adjustment cycle. Memory
will still route through *BASE as it moves from pool to pool, but it
is much less likely to get hijacked on the way. And a job in *BASE
won't be paged out just so its memory can be allocated somewhere
else. Jobs can still be paged out, but I'm more likely to know what
is being paged because I've limited what's in _that_ pool.
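About that minimum level: the floor under *BASE is itself a system
value, so it's easy to see how much room *BASE has above it. A quick
sketch:

    /* QBASPOOL is the minimum size, in KB, reserved for the base  */
    /* pool; the system won't shrink *BASE below it.                */
    DSPSYSVAL SYSVAL(QBASPOOL)

    /* QBASACTLVL is the base pool's activity level.                */
    DSPSYSVAL SYSVAL(QBASACTLVL)

    /* A few refreshes of WRKSYSSTS show whether *BASE is sitting   */
    /* near that floor or has memory to spare.                       */
    WRKSYSSTS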
The performance adjuster (apparently) has less work to do. Memory
moves more freely. Measurements become more meaningful. Guesswork is
reduced.
To me, it just makes sense to start a system right off the bat by
tossing out the IBM default pool assignments on subsystems, routing
entries, prestart jobs -- all of them. All of the great built-in
work and performance management features might as well be put to
use. If my employer is going to buy an AS/400 (or later), I might as
well make it work the best it can.
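For the specific entries named above, the changes are small. Here's
the rough shape for the QSQSRVR prestart jobs -- on the systems I've
worked with that entry lives in QSYSWRK, but verify with DSPSBSD
first, and remember that POOLS replaces the whole pool list, so
include the pools that are already there:

    /* Define a second pool in the subsystem that owns the entry.  */
    CHGSBSD SBSD(QSYS/QSYSWRK) POOLS((1 *BASE) (2 *SHRPOOL11))

    /* Point the prestart job entry at that pool ID instead of its */
    /* default.                                                      */
    CHGPJE SBSD(QSYS/QSYSWRK) PGM(QSYS/QSQSRVR) POOLID(2)

The routing entries get the same treatment with CHGRTGE and a
POOLID, as in the earlier sketch.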
Enough of a rant... I can't figure out why this doesn't get more
discussion. Are there so few out there who have any feeling of
certainty?