I prefer the NEP waiting on a data or message queue [never any active
polling; albeit always on a timeout, looping to check for a /job ending/
condition, to enable a timely termination with cleanup]. This defers
all logging and tracking to that application, which may be desirable,
but perhaps not if the rest of the system likes to track activity to a
job or user -- for that, the chgacgcde job(*) acgcde(_requester) and the
switch-user API can wrap the report/spool activity.
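A minimal CL sketch of that wait loop follows; the data queue MYNEPQ,
library MYLIB, and shutdown data area ENDNEP are hypothetical names, and
the sixty-second timeout is arbitrary:

             PGM
             DCL        VAR(&DTAQ)    TYPE(*CHAR) LEN(10) VALUE('MYNEPQ')
             DCL        VAR(&DTAQLIB) TYPE(*CHAR) LEN(10) VALUE('MYLIB')
             DCL        VAR(&DTALEN)  TYPE(*DEC)  LEN(5 0)
             DCL        VAR(&DATA)    TYPE(*CHAR) LEN(256)
             DCL        VAR(&WAIT)    TYPE(*DEC)  LEN(5 0) VALUE(60)
             DCL        VAR(&ENDFLG)  TYPE(*CHAR) LEN(1)
 LOOP:       CALL       PGM(QRCVDTAQ) PARM(&DTAQ &DTAQLIB &DTALEN +
                          &DATA &WAIT)
             /* a returned length of zero means the wait timed out; */
             /* check the /job ending/ flag, then resume the wait   */
             IF         COND(&DTALEN = 0) THEN(DO)
             RTVDTAARA  DTAARA(MYLIB/ENDNEP (1 1)) RTNVAR(&ENDFLG)
             IF         COND(&ENDFLG = '1') THEN(GOTO CMDLBL(DONE))
             GOTO       CMDLBL(LOOP)
             ENDDO
             /* ... process the request now held in &DATA ... */
             GOTO       CMDLBL(LOOP)
 DONE:       /* cleanup, then end */
             ENDPGM

For the wrapping, QSYGETPH, QWTSETP, and QSYRLSPH are the APIs that
obtain, set, and release a profile handle around the report/spool
activity.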
It would be difficult to get a good cost measurement, because the job
tables will often have existing job structures waiting for reuse. As
long as the job tables are not being constantly compressed, and the job
structure creation rarely occurs for most of the jobs, one would hope
the cost for the processes alone is not so large. A big expense is if
the job actually produces a job log; i.e. LOG(x y *NOLIST) is by far the
cheapest.
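For instance [MYLIB/MYPGM is a hypothetical program], a submit like the
following keeps full message logging internally but spools no job log
unless the job ends abnormally:

   /* with text *NOLIST, no job log is produced on a normal end */
   SBMJOB CMD(CALL PGM(MYLIB/MYPGM)) JOB(CHEAPJOB) LOG(4 0 *NOLIST)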
A possible test would be to submit 5000 jobs [sbmjob cmd(chkobj
qsys/chkobj *cmd)] to a held jobq just after compressing the job tables [and
ensuring no new jobs were created in the same IPL, according to the
system values], and again the same 5000 jobs test just after 5K new jobs
were created in advance [according to the system values]. Both done
immediately after an IPL to restricted state [w/out performance adjust]
would be ideal. Running both tests with some performance measurement
tools would give better comparisons than just clock times; CPU and
paging across the system. CPU times and paging for the jobs themselves
might give some of the details for comparison, but much of the start/end
work is probably outside of the submitted jobs themselves - in the
subsystem monitor job, the submitting job, and possibly elsewhere. I am
not clear on how much of the job structure is created for a job sitting
on a queue; so, given all jobs can not start at once, there will
presumably be some reuse even in the first test, and presumably the
second test would not require a full 5K pre-created job structures.
Notes: Sysvals: QACTJOB, QADLACTJ, QADLTOTJ. See also DSPJOBTBL, and GO
PERFORM.
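A throwaway CL driver for the submit phase might look like the
following; the held job queue MYLIB/HELDQ is a hypothetical name,
created and held before the run:

             PGM
             DCL        VAR(&I) TYPE(*DEC) LEN(7 0) VALUE(0)
             /* queue 5000 trivial jobs; nothing runs until the */
             /* job queue is released for the measured interval */
 LOOP:       IF         COND(&I *LT 5000) THEN(DO)
             SBMJOB     CMD(CHKOBJ OBJ(QSYS/CHKOBJ) OBJTYPE(*CMD)) +
                          JOB(JOBCOST) JOBQ(MYLIB/HELDQ) LOG(4 0 *NOLIST)
             CHGVAR     VAR(&I) VALUE(&I + 1)
             GOTO       CMDLBL(LOOP)
             ENDDO
             ENDPGM

RLSJOBQ JOBQ(MYLIB/HELDQ) then starts the run, and DSPJOBTBL before and
after shows how many job structures were consumed versus reused.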
A bigger [and more calculable] payback might be from any reductions
in setup, for example object create/delete activity within the
application, that may be duplicated for each job, and which can be
eliminated by combining in the one job. Complex objects like database
files are expensive for create and delete; they are expected to be
created and live forever, where only the occasional member is added, and
where almost all other member activity is only on the data --
reset/clear, populate, update, delete. In the same manner, by avoiding
/aggressive/ reclamation of spool storage, spool files can reuse
members at significantly reduced overhead to the system; i.e. keep
QRCLSPLSTG larger than 7 [the default is 8], or set it to *NOMAX.
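For example [MYLIB/WORKF is a hypothetical work file], the cheap path is
to keep the object and only touch the data, and one CHGSYSVAL keeps the
spool member reuse in effect:

   /* reuse the member and its data space, rather than DLTF + CRTPF */
   CLRPFM     FILE(MYLIB/WORKF)
   /* retain emptied spool members indefinitely, available for reuse */
   CHGSYSVAL  SYSVAL(QRCLSPLSTG) VALUE(*NOMAX)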
Regards, Chuck