I used to run a PC-based stress-test tool to test the performance of my
IBM i web applications, which handled HTTP POST and GET methods. The tool
could drive IBM i CPU utilization to 100%.

Interestingly, smaller time slices for those IBM i workloads resulted in
higher overall throughput. I eventually reduced the time slice to 1
millisecond, which gave the best throughput of all the values I tried.
That seemed counterintuitive to me.
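
For reference, this is roughly the kind of adjustment involved. The class
and job names below are only illustrative, not the actual objects I
changed back then:

/* TIMESLICE is specified in milliseconds; 1 ms was where throughput */
/* peaked in my tests.                                                */
CHGCLS CLS(MYLIB/WEBCLS) TIMESLICE(1)

/* Or, for a job that is already active: */
CHGJOB JOB(123456/QTMHHTTP/WEBJOB) TIMESLICE(1)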

Frank Soltis asserted that IBM i provided faster task-swapping than any
other operating system out there.

On Thu, Dec 16, 2021 at 9:22 AM Patrik Schindler <poc@xxxxxxxxxx> wrote:


I've read through the Work Management manual again and again, but with my
Linux background I can't make sense of those attributes.

While RUNPTY might be closely related to "niceness" (see
https://en.wikipedia.org/wiki/Nice_(Unix)), I struggle to understand the
real-world impact of different time slice values.
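
To make the comparison concrete, this is the kind of adjustment I have in
mind (the job name is made up):

/* Lower a batch job's priority, roughly like renicing a process: */
/* a higher RUNPTY number means a lower priority.                 */
CHGJOB JOB(123456/PSCHINDLER/COMPILE) RUNPTY(60)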

https://en.wikipedia.org/wiki/Preemption_(computing)#Time_slice says:

The period of time for which a process is allowed to run in a preemptive
multitasking system is generally called the time slice or quantum. The
scheduler is run once every time slice to choose the next process to run.
The length of each time slice can be critical to balancing system
performance vs process responsiveness — if the time slice is too short then
the scheduler will consume too much processing time, but if the time slice
is too long, processes will take longer to respond to input.

Is this definition true for IBM i also?

Time slices on Linux are much, much smaller than the 2 or 5 second
defaults on IBM i. See

In general, it's said that larger time slices for batch jobs allow more
work to be done in a given time frame. But won't that affect interactive
jobs waiting to be scheduled? I've never had the impression that I had to
wait 5 seconds for a background compile to let me page down a subfile on
an interactive 5250 screen. Not even remotely, not even on my slow 150.
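
For reference, the shipped defaults I mean can be displayed with DSPCLS;
the values in the comments are what I believe them to be, please correct
me if my memory is off:

DSPCLS CLS(QGPL/QINTER)  /* interactive: RUNPTY(20), TIMESLICE(2000)? */
DSPCLS CLS(QGPL/QBATCH)  /* batch: RUNPTY(50), TIMESLICE(5000)? */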

I'd be grateful for some enlightenment.

(Background of my question: Imagine a compiler running in batch and
consuming CPU, while a concurrent, unrelated process needs to measure the
time between two events coming in from the network and somehow save the
result. I want that time-measuring program to be as small and as efficient
as possible, and to run with the highest possible priority for user
applications, which means RUNPTY=0. That should ensure the timing is
measured with sufficient accuracy, without the system doing nothing but
serve the counting program, because it's small, efficient and thus quickly
done with its job after each packet. But what would a meaningful TIMESLICE
value for that program be?)
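
Concretely, I imagine something along these lines. The library, class,
subsystem and program names are made up, RUNPTY(1) is only there because
I'm not sure a user class accepts 0, and the TIMESLICE value is exactly
what I'm asking about:

/* Class for the time-measuring job */
CRTCLS CLS(MYLIB/TIMECLS) RUNPTY(1) TIMESLICE(100) +
       TEXT('Timing class - sensible TIMESLICE unclear')

/* Attach it to the job through a routing entry in my subsystem */
ADDRTGE SBSD(MYLIB/MYSBS) SEQNBR(100) CMPVAL('TIMEMEAS') +
        PGM(QSYS/QCMD) CLS(MYLIB/TIMECLS)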

:wq! PoC

