Nathan:
CPU and Memory, true, but don't forget I/O requirements. I/O can affect the
other two quite a bit, so heavy I/O jobs should be separated from light I/O
jobs as well. Now, 90% of shops don't have that big of a difference in
I/O patterns, except maybe for overnight batch, so this is a bit of a nitpick;
still, it's the other consideration.
When I refer to workloads I differentiate 5250, Web Interactions (WAS, PHP,
iNode, etc), SQL, and batch as my main categories.
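For illustration only, one common way to keep those categories apart is to
give each one its own shared pool and point its subsystem at that pool. A
rough sketch with made-up library/subsystem names, pool sizes (in KB), and
activity levels, assuming the subsystems already exist:

/* Carve out a shared pool for 5250 interactive and one for batch */
CHGSHRPOOL POOL(*SHRPOOL1) SIZE(2000000) ACTLVL(50)
CHGSHRPOOL POOL(*SHRPOOL2) SIZE(4000000) ACTLVL(10)

/* Map pool 2 of each subsystem description to its shared pool */
CHGSBSD SBSD(MYLIB/INTERSBS) POOLS((1 *BASE) (2 *SHRPOOL1))
CHGSBSD SBSD(MYLIB/BATCHSBS) POOLS((1 *BASE) (2 *SHRPOOL2))

The routing entries in each subsystem then name POOLID(2), so the jobs land
in their own pool rather than all competing in *BASE.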
--
Jim Oberholtzer
Agile Technology Architects
-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of
Nathan Andelin
Sent: Friday, December 15, 2017 11:11 AM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Subject: Re: Subsystem and memory pools
On Fri, Dec 15, 2017 at 8:52 AM, Jim Oberholtzer <
midrangel@xxxxxxxxxxxxxxxxx> wrote:
The rule with memory is simple; keep like workloads together, so
interactive with interactive, batch with batch, SQL ...
In regard to "like workloads", I interpret that to mean similar CPU and
memory requirements. When you look at Jobs that are running on the system,
you might notice a Java process that is consuming 200-400 meg of "temporary
storage" and and the CPU time consumed may be tens of thousands of
milliseconds.
Then you look at a 5250 Job. The temporary storage might be 2 meg. Say the
user is viewing a monolithic source member in SEU. Every time the user
presses the PgDn key a request is sent to the server to get the next page.
Every 10-20 pages or so the Job is shown to consume a millisecond of CPU
time.
So, you don't want that Java Job paging in and out of memory.
Consider PHP Jobs. Running 200 of them concurrently has been known to
destabilize and bring down entire systems running Windows and Linux. It may
make sense to run PHP in separate subsystems and memory pools.
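One way to do that, for what it's worth: give the PHP work its own subsystem,
job queue, and private pool so its paging can't squeeze the 5250 and batch
pools. A rough sketch with made-up names and a made-up pool size (in KB), not
a sizing recommendation:

/* Subsystem with a private pool (pool 2): ~2 GB, activity level 50 */
CRTSBSD SBSD(PHPLIB/PHPSBS) POOLS((1 *BASE) (2 2000000 50))

/* Job queue that feeds the subsystem, capped at 200 active jobs */
CRTJOBQ JOBQ(PHPLIB/PHPJOBQ)
ADDJOBQE SBSD(PHPLIB/PHPSBS) JOBQ(PHPLIB/PHPJOBQ) MAXACT(200)

/* Route all incoming work into pool 2 */
ADDRTGE SBSD(PHPLIB/PHPSBS) SEQNBR(9999) CMPVAL(*ANY) PGM(QSYS/QCMD)
        CLS(QGPL/QBATCH) POOLID(2)

STRSBS SBSD(PHPLIB/PHPSBS)

From there the PHP jobs get submitted to PHPJOBQ (or the web server
configuration is pointed at that subsystem) and they page against their own
pool instead of *BASE.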