Hi Paul
On your first point, I think if an object is in memory then it is "in
memory": with single-level storage, as I understand it, a page that is in
main storage is addressable by any job regardless of which pool faulted it
in, so the object does not need to be paged in again from disk while it is
in use by a job. Probably the second job just gets a pointer to the
existing object in memory (I expect tables will behave differently....). I
will be interested to see what other comments there are on this point, to
find out whether I am completely wrong or someone can provide a deeper
explanation.
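As an aside, if you ever need an object resident in a particular pool you
can pre-load it yourself with SETOBJACC (the library, file and pool here
are just placeholder examples):

    SETOBJACC OBJ(MYLIB/CUSTMAST) OBJTYPE(*FILE) POOL(*SHRPOOL3)
    SETOBJACC OBJ(MYLIB/CUSTMAST) OBJTYPE(*FILE) POOL(*PURGE)

The first call reads the object into *SHRPOOL3; the second purges it from
memory again when you are done.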
In terms of collapsing the subsystems, I personally would not do this; I
certainly would not run every batch job in the interactive subsystem. Apart
from objects (or parts of objects) that might be in memory, the other thing
that subsystems and pools give you is control of timeslice and activity
levels. You ideally don't want a batch job competing with an interactive
job for the next available activity level.
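The time slice lives on the class object attached to the routing entry,
while the activity level is set on the pool itself. Creating a class looks
something like this (the names and values are illustrative, not a
recommendation):

    CRTCLS CLS(MYLIB/NIGHTBATCH) RUNPTY(50) TIMESLICE(5000) DFTWAIT(30)

TIMESLICE is in milliseconds.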
This is my simple view of the world:
If you run interactive jobs you (probably) want them to have a shortish
time slice, and you would expect them to get paged in and out quite
quickly, either due to reaching time slice end or, more normally, because
the current interactive transaction completed. The smaller time slice
should allow relatively less memory to support a larger number of jobs, as
not all of them will be active at the same time (key/think time).
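You can check the shipped values with DSPCLS - on the boxes I have looked
at, QGPL/QINTER has around a 2 second time slice versus around 5 seconds
for QGPL/QBATCH, but verify on your own system:

    DSPCLS CLS(QGPL/QINTER)
    DSPCLS CLS(QGPL/QBATCH)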
Batch jobs, on the other hand, you want to get a decent crack at
processing, as they are longer/bigger jobs; the more they get paged in and
out, the longer they take to run and the more work the system has to do
just managing them. Each time a batch job gets paged out, likely due to
time slice end (or maybe end of job), another batch job in that pool gets
a run at the processor for a while. The memory would be sized to run some
tuned number of jobs in parallel.
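With a shared pool for batch, the sizing and activity level are one
command (the numbers are placeholders only - SIZE is in kilobytes):

    CHGSHRPOOL POOL(*SHRPOOL1) SIZE(4000000) ACTLVL(4)

ACTLVL(4) means roughly four batch jobs get to be active in the pool at
once; the rest wait their turn.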
With your server jobs you would probably want them NOT to be paged out, so
you would try to size the pool with enough memory to keep them permanently
resident, with the activity level set so they didn't get paged out.
Obviously, this depends somewhat on how busy the jobs in question are and
how important they are. If you have them in a pool with interactive jobs,
you can just about guarantee they will get paged out regularly, because
they are competing with a heap of short-running jobs or transactions. If
they are in the batch pool they will (theoretically at least) get paged
out less often, but will get paged out for longer due to the longer time
slice of the batch jobs.
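One extra knob worth knowing for a pool like that is the paging option -
*CALC turns on expert cache and lets the system decide how aggressively to
bring database pages in (the pool and numbers are again just examples):

    CHGSHRPOOL POOL(*SHRPOOL2) SIZE(2000000) ACTLVL(50) PAGING(*CALC)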
If you are lucky enough to have enough memory to do all this, then no
worries (it was a lot tougher back in the old days of 32MB, like on the
System/38.... - does anyone remember PAGs?). On many systems I see these
days there is almost no tuning, and everything runs in *BASE apart from
*INTERACT and *SPOOL; between autotuning and much faster processors,
things are kind of brute-forced through.
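The autotuning I mean is the QPFRADJ system value - '2' (adjust at IPL and
automatically) is what most of those untuned systems run with:

    WRKSYSVAL SYSVAL(QPFRADJ)
    CHGSYSVAL SYSVAL(QPFRADJ) VALUE('2')

Bear in mind the adjuster only moves memory between the pools you already
have; it won't create a batch pool for you.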
Depending on how much batch work actually goes through (and what it is) I
*might* let it run in *BASE (though I prefer a separate BATCH pool), but
if I had genuine background server jobs/NEPs that were servicing
interactive jobs, or say something like WAS running, I would try to get
them into their own subsystem and allocate memory to it, ideally enough so
that they did not get paged in and out. These jobs (presumably) have the
ability to impact a lot of dependent processes if they are paged out, so
you would want to prevent or minimise that.
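The bare bones of a separate subsystem for those jobs would look something
like this (SRVSBS, MYLIB and the queue name are all made up for
illustration):

    CRTSBSD SBSD(MYLIB/SRVSBS) POOLS((1 *SHRPOOL2)) TEXT('Server/NEP jobs')
    CRTJOBQ JOBQ(MYLIB/SRVJOBQ)
    ADDJOBQE SBSD(MYLIB/SRVSBS) JOBQ(MYLIB/SRVJOBQ) MAXACT(*NOMAX)
    ADDRTGE SBSD(MYLIB/SRVSBS) SEQNBR(9999) CMPVAL(*ANY) PGM(QSYS/QCMD) CLS(QGPL/QBATCH) POOLID(1)
    STRSBS SBSD(MYLIB/SRVSBS)

Then point the server jobs' job descriptions at SRVJOBQ and they land in
their own pool.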
Are you seeing issues with jobs, temporary storage use, or some other
symptom that is making you ask, or are you just trying to simplify your
life?
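If it is the former, WRKSYSSTS is where I would start - let it sit for a
few refresh intervals and watch the DB and non-DB fault rates per pool:

    WRKSYSSTS ASTLVL(*ADVANCED)

High, sustained faulting in the pool the server jobs live in is the
classic sign they are being pushed out.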
I expect I might get a lot of corrections or additions - or hopefully some
elaboration - on what I said above, which would be great as this is an area
I find pretty interesting.
On Fri, Apr 6, 2018 at 9:08 AM, Steinmetz, Paul <PSteinmetz@xxxxxxxxxx>
wrote:
If an object is in memory from an interactive job (*INTERACT pool) and a
batch job needs the same object, but batch runs in *SHRPOOL1, does the
object have to be brought into memory again because it's in a different
pool?
Years ago, it made sense to keep interactive and batch in their own pool,
for multiple reasons.
I'm thinking the rules could be changing.
Batch and Interactive now use many of the same objects.
We also now have many "service" jobs, which are receiving requests from
either batch or interactive.
So wouldn't it make sense to now run them out of the same memory pool?
My thought is to collapse the batch pool *SHRPOOL1.
Interactive, Batch, "service" jobs would all run out of *INTERACT.
Result would be only 4 pools.
*MACHINE
*BASE
*INTERACT
*SPOOL
Any thoughts from the group?
Thank You
_____
Paul Steinmetz
IBM i Systems Administrator
Pencor Services, Inc.
462 Delaware Ave
Palmerton Pa 18071
610-826-9117 work
610-826-9188 fax
610-349-0913 cell
610-377-6012 home
psteinmetz@xxxxxxxxxx
http://www.pencor.com/