Time slice on the IBM i really only comes into play in a CPU-intensive
workload...

If your job is waiting on screen or disk I/O, the system will swap it out
for something that is ready to use the CPU now.

Having said that, last I heard the default time slices assigned to the
IBM-supplied classes (e.g. 2000 ms interactive / 5000 ms batch) haven't
changed since the first AS/400 shipped.

CPUs were much slower then. :)

A time slice more appropriate to a modern CPU is 200 / 500 (milliseconds), I believe.
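
For reference, the time slice lives on the class object, so it can be checked and changed there. A hedged sketch — QGPL/QBATCH is the usual IBM-supplied batch class, but verify the library and class names on your own system:

```cl
/* Show the current attributes of the batch class, including TIMESLICE */
DSPCLS CLS(QGPL/QBATCH)

/* Drop the batch time slice from the shipped 5000 ms to 500 ms */
CHGCLS CLS(QGPL/QBATCH) TIMESLICE(500)
```

I believe the change is picked up by jobs that start (or enter a new routing step) after the CHGCLS, not by jobs already running — worth verifying before relying on it.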

Charles


On Thu, Dec 16, 2021 at 9:22 AM Patrik Schindler <poc@xxxxxxxxxx> wrote:

Hello,

I've read through the Work Management manual again and again, but with my
Linux background I can't make sense of those attributes.

While RUNPTY might be closely related to Unix "niceness" (see
https://en.wikipedia.org/wiki/Nice_(Unix)), I struggle to understand the
real-world impact of different time slice values.

https://en.wikipedia.org/wiki/Preemption_(computing)#Time_slice says:

The period of time for which a process is allowed to run in a preemptive
multitasking system is generally called the time slice or quantum. The
scheduler is run once every time slice to choose the next process to run.
The length of each time slice can be critical to balancing system
performance vs process responsiveness — if the time slice is too short then
the scheduler will consume too much processing time, but if the time slice
is too long, processes will take longer to respond to input.

Is this definition true for IBM i also?

Time slices on Linux are much, much smaller than the default 2 or 5
seconds. See
https://stackoverflow.com/questions/16401294/how-to-know-linux-scheduler-time-slice

It's generally said that larger time slices for batch jobs allow more work
to be done in a given time frame. But won't that affect interactive jobs
waiting to be scheduled? I never had the impression I needed to wait
5 seconds until a background compiler run allowed me to page down a subfile
in an interactive 5250 screen. Not even remotely, not even on my slow 150.

I'd be grateful for some enlightenment.

(Background of my question: Imagine a compiler running in batch, consuming
CPU, while a concurrent, unrelated process needs to measure the time between
two events coming in from the network and somehow save the result. I want
that time-measuring program to be as small and as efficient as possible, and
to run at the highest priority available to user applications, which means
RUNPTY=0. That should ensure the timing is measured with sufficient
accuracy, without the system serving only the counting program — being
small and efficient, it is quickly done with its job after each packet. But
what would a meaningful TIMESLICE value be for that program?)
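
To make the question concrete, this is roughly the class I have in mind for the measuring job — a sketch only, with placeholder names (MYLIB/TIMER), and the TIMESLICE value chosen arbitrarily. Note that I believe CRTCLS only accepts RUNPTY values 1-99, so 1 is as high as a user-created class can actually go:

```cl
/* Hypothetical class for the time-measuring job: highest run priority
   a user class can request, and a short time slice (milliseconds).
   Both values are illustrative, not recommendations. */
CRTCLS CLS(MYLIB/TIMER) RUNPTY(1) TIMESLICE(100) +
       TEXT('High-priority class for network timing job')
```

The job would then need to be routed through that class, e.g. via the routing entry of its subsystem.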

:wq! PoC

--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: https://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxxxxxxxx
Before posting, please take a moment to review the archives
at https://archive.midrange.com/midrange-l.

Please contact support@xxxxxxxxxxxxxxxxxxxx for any subscription related
questions.

