Thanks for your explanations. Let me point out a few corrections, though.
On 30.12.2019 at 18:13, <midrangel@xxxxxxxxxxxxxxxxx> wrote:
If your goal is to isolate the jobs using ODBC (or any other job for that matter) and make them as efficient as possible then we are discussing the correct methods in work management.
Yes and no. I want ODBC jobs not to interfere with interactive jobs in 5250; I don't care if the ODBC jobs take longer to run. While this is true for other jobs as well (the HTTP server comes to mind), the paging delay is most noticeable with ODBC doing bulk inserts: not much rattling of disks, just the CPU usage LED constantly on, until I return to an existing 5250 session (or open a new one).
From the disk noise, I suspect the OS is loading pages from disk that it still had in memory at the last signon but swapped out in the meantime for whatever reason.
If it helps in any way: we're talking about a 9401-150, maxed out at 192 MiB of RAM, with four 10k RPM disks running V4R5, just added to the ASP with balancing. No mirroring.
The machine is in private use, so most of the time there's one user job in QINTER, rarely two (if I need QSECOFR access as well).
Occasionally I copy records from my solar power database on MySQL/Linux over there for archival, using a simple shell script: it reads records from a MySQL SELECT on Linux, tinkers a bit to rewrite timestamps, etc., and feeds an INSERT line per record to isql (the ODBC command-line front end on Linux). That's one connection per database, fed insert by insert as lines come out of mysql; no connection closing and thus no expensive re-spawning of processes between inserts.
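For what it's worth, that pipeline can be sketched roughly like this. Table, column, and DSN names (SOLAR.READINGS, ts/watts, "AS400") are made up for illustration, and the timestamp rewrite assumes DB2/400's 'YYYY-MM-DD-hh.mm.ss' format:

```shell
#!/bin/sh
# Rough sketch of the copy job described above. Table/column names
# (SOLAR.READINGS, ts/watts) and the DSN "AS400" are placeholders.

# Turn tab-separated rows (timestamp, value) into INSERT statements,
# rewriting MySQL's "YYYY-MM-DD hh:mm:ss" timestamps into the
# "YYYY-MM-DD-hh.mm.ss" form DB2/400 expects.
to_inserts() {
    awk -F'\t' -v q="'" '{
        ts = $1
        gsub(/ /, "-", ts)   # date/time separator: space becomes dash
        gsub(/:/, ".", ts)   # time separators: colon becomes dot
        printf "INSERT INTO SOLAR.READINGS VALUES(%s%s%s, %s);\n", q, ts, q, $2
    }'
}

# One mysql process feeding one isql process, so a single ODBC connection
# carries every insert and no process is re-spawned per row:
#   mysql -N -B -e 'SELECT ts, watts FROM solar.readings' \
#     | to_inserts | isql -b AS400 myuser mypassword
```

The point of the single pipe is exactly what's described above: the connection stays open for the whole run, and each row costs only one INSERT, not a fresh process and ODBC handshake.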
Inserting roughly 1,500,000 records takes about eleven hours, with no special adjustments for the database jobs. :-) As said, I don't mind the run time of this ODBC thing.
As to using ASP or iASP for work management purposes, I would not worry about I/O for this situation, since the I/O for paging/faulting in V4 and beyond is ordinarily not a concern or even worth worrying about given sufficient memory.
Yes, I think likewise. Think of it as my example of separating work to the extent that "just" the CPU and buses are shared.
If however you are trying to stop the system from paging/faulting certain memory then that starts an entirely new problem. You have to assume the system is way smarter about how to manage its memory, both real and virtual, than anything we can do as application developers.
I know. But again, isn't a subsystem also a way of grouping jobs together so they compete for RAM only with other jobs in the same subsystem, while memory at large is unaffected? At least that's how I understand the documentation.
The only way I know of to minimize paging/faulting is to put a huge amount of memory into the pool and set the activity level high enough that jobs very infrequently lose an activity level.
This would mean that a job which holds no activity level is immediately paged to disk. Did I get that right?
Private pools just complicate the whole work management scheme and we don't use them except in very specific cases, this not being one of them.
I disagree. This seems to be precisely such a specific, and very practical, case.
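Just to illustrate what I mean by a specific case: on V4R5 the ODBC database server jobs (QZDASOINIT) run as prestart jobs, and giving them their own shared pool could look roughly like the CL below. This is a sketch from memory, not tested here; the subsystem name, pool number, SIZE, and ACTLVL are placeholders, and where the prestart job entry currently lives (QSERVER is my assumption) should be checked against the Work Management manual first.

```
/* Give the ODBC server jobs their own shared pool; SIZE (in KiB)   */
/* and ACTLVL are placeholder values, not recommendations:          */
CHGSHRPOOL SHRPOOL(*SHRPOOL2) SIZE(16384) ACTLVL(3)

/* Create a subsystem whose only pool is that shared pool:          */
CRTSBSD SBSD(QGPL/ODBCSBS) POOLS((1 *SHRPOOL2))

/* Move the QZDASOINIT prestart job entry over and start it:        */
ENDPJ SBS(QSERVER) PGM(QSYS/QZDASOINIT) OPTION(*IMMED)
ADDPJE SBSD(QGPL/ODBCSBS) PGM(QSYS/QZDASOINIT)
STRSBS SBSD(QGPL/ODBCSBS)
```

With that in place the bulk inserts would fault only against *SHRPOOL2, which is exactly the confinement I was asking about above.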
PGP-Key: DDD3 4ABF 6413 38DE - https://www.pocnet.net/poc-key.asc