


Hello Vernon,

On 09.10.2019 at 23:37, Vernon Hamberg <vhamberg@xxxxxxxxxxxxxxx> wrote:

No problem with asking back, Patrik!!

Thanks for the reassurance.

I guess I don't see that as a firm divide, I see it all as cut from the same cloth.

This is a perfectly valid viewpoint when considering the "all integrated" approach IBM has taken ever since Frank Soltis did his work. However, there are relatively few ways to implement software (and firmware) functionality on given hardware components. As different as the system we like may seem at first glance, the hardware in particular shares some components, and their usage, with common systems/platforms.

That said, putting it all together and observing that separating jobs into their own logical confinement has performance advantages is perfectly valid. But for me, this doesn't drill deeply enough into the real reasons for that assertion, especially seen from today's viewpoint, where RAM is no longer a scarce resource. While the single-level storage approach makes applications believe everything is RAM, we still have RAM and disk intertwined in the 128-bit address space, and thus a storage tiering based on speed (and persistence).

(Therefore I'm really curious what IBM i will feel like when, in a few years, we perhaps have hardware with only SSDs as the cheaper, somewhat slower storage, accompanied by fast RAM which doesn't lose its content when powered off.)

Please let me emphasize that "we can afford not to optimize, CPUs are fast enough today" makes me frown, too. Not only because of the explicit appeal to "fast enough" CPUs, but also because "good enough" is like an itch to scratch, and scratching means "do better". Programs doing their task in fewer CPU cycles means jobs get done faster. For a single system, optimizing might look like a waste of time and thus money; seen globally, power consumption can be lessened considerably. This is especially true in the x86 world, where power saving has been engineered to a great extent. IBM POWER seems to draw (almost) the same power from the mains, regardless of how busy the CPU(s) are. At least that's what I've observed with a handful of different systems of different ages over time.

Managing performance is a combination of the factors you list. Paging is not activity levels - maybe you left out a comma - and paging is more easily set for shared pools, as I recall from many years ago.

Sorry, this is where my knowledge is all too blurry. Activity level, as I understand it, means how many programs can be active in a pool at once. Being active, as I understand it, means these must be kept in RAM. If a pool isn't large enough and there are many active programs, they compete for RAM, which runs the pool into a thrashing condition.
I'm wondering how this can be a fixed number of programs, since I guess storage allocation within programs varies greatly with different purposes. Maybe it's calculated from a basic minimum allocation every active program needs just to run. Filling a load-all SFL within a program to its maximum size might not count toward that minimum but just add paging activity in that pool, which in turn invalidates my initial thought that pool size divided by minimum allocation equals activity level.

Maybe you can elaborate on the true meaning of the term activity level?

I'll just throw in a model I heard from an IBM instructor when I first worked with this - I was at Help Systems as a tech support person, mostly responsible for the AutoTune product, a dynamic performance tuning application. IBM took away some of the need for AutoTune when they gave us settings like we have today.

So the picture - say you have 50 people who want to go swimming, and your swimming pool has room for only 30 - that is your activity level for that pool. There is also a max active for the entire water sport recreational complex.

Time slices could be how long a person can stay in the pool before the lifeguard checks to see if anyone else wants to get in. If the number of people in the pool is less than the activity level (30), then someone else can jump in.

If there are 30 swimmers, then at a certain time, if someone else wants in, someone has to get out. Smaller pools might cause this thrashing.
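To make the analogy a bit more concrete for myself, here's a toy Python sketch (purely my own illustration, not how the OS actually schedules jobs): the activity level caps how many jobs may be active at once, each active job gets one time slice per round, and everyone else waits on an "ineligible" queue like swimmers waiting at the pool's edge.

```python
from collections import deque

def simulate(jobs, activity_level, time_slice):
    """Toy model: 'jobs' maps job name -> remaining work units.
    At most activity_level jobs are active at once; the rest wait
    on an ineligible queue until a slot frees up."""
    waiting = deque(jobs.items())       # jobs not yet admitted to the pool
    active = []                         # [name, remaining work] per active job
    timeline = []                       # order in which jobs consume slices
    while waiting or active:
        # admit waiting jobs while the pool is below its activity level
        while waiting and len(active) < activity_level:
            active.append(list(waiting.popleft()))
        # each active job consumes up to one time slice of work
        for job in list(active):
            job[1] -= time_slice
            timeline.append(job[0])
            if job[1] <= 0:             # job finished, frees a slot
                active.remove(job)
    return timeline

# Three jobs of 2 work units each, but only 2 may be active at once:
order = simulate({"A": 2, "B": 2, "C": 2}, activity_level=2, time_slice=1)
# → ['A', 'B', 'A', 'B', 'C', 'C'] - C waits until A and B are done
```

Note what the model deliberately leaves out: real pools hold pages, not whole programs, so the cost of a too-small pool shows up as paging rather than as a clean wait queue - which is exactly the RAM mapping I'm missing from the analogy.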

While this seems like a valid comparison, I fail to map the physical concepts of CPU and RAM onto this picture. I'm better with precise technical terms than with real-life comparisons.

Anyhow, enough silliness - it's been far too long since I worked in this area, anyhow.

Maybe I could help you refresh your knowledge by drilling deeper? ;-) But thanks for taking the time to enlighten a relatively young newcomer.

:wq! PoC

PGP-Key: DDD3 4ABF 6413 38DE - https://www.pocnet.net/poc-key.asc


