Richard:
These jobs are doing a lot of work apparently. They're also directly competing
for memory and activity levels against all your other work including
interactive jobs, subsystem monitors, TCP/IP and host server jobs, and who
knows what else.
Most likely you'll never get a decent picture of what's really going on
(performance-wise) until you separate these things out. When I get to choose,
about the _only_ things I allow in the *BASE pool are subsystem monitors and I
put those elsewhere whenever reasonable. Otherwise, if you watch
characteristics such as faulting rates, how can you ever know what function is
causing the most?
But that's simple fundamental tuning. Your original question, though, could be
handled with a scheme such as:
1. Create an object such as MYDTAARA.
2. In those ten jobs, have a sequence like:
Nxt_Entry:
alcobj obj((MYDTAARA *dtaara *shrrd)) wait(32767)
dlcobj obj((MYDTAARA *dtaara *shrrd))
call qrcvdtaq ( ... &nomax)
...process dtaq entry...
goto Nxt_Entry
3. Now, each of the jobs will attempt to get a *SHRRD lock on the data area (or
whatever object you choose). The jobs will wait up to some nine hours for the
lock if necessary (32767 seconds is the WAIT maximum). As soon as the lock is
granted, it's released. The job will then wait forever for a data queue entry.
After processing an entry, the lock is attempted again before going for another
entry.
4. When you want the jobs to stop, from your workstation run:
===> alcobj obj((MYDTAARA *dtaara *EXCL)) wait(300)
5. When you want to let them go again, run:
===> dlcobj obj((MYDTAARA *dtaara *EXCL))
Actually, I don't know if that will work exactly as written; perhaps the *SHRRD
ought to be a different lock state. I don't know if a *SHRRD will be granted
anyway if there's an outstanding *EXCL request pending; but the idea is to have
a fairly lightweight secondary control in the loop.
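Pulled together, the loop from step 2 might look something like this as a
complete CL program. This is only a sketch: MYLIB is an assumed library, the
CPF1002 retry on an expired wait is my guess at reasonable handling, and the
QRCVDTAQ call is left elided as above.

             PGM
 NXTENTRY:   /* Gate: blocks here while a controller holds *EXCL */
             ALCOBJ     OBJ((MYLIB/MYDTAARA *DTAARA *SHRRD)) WAIT(32767)
             MONMSG     MSGID(CPF1002) EXEC(GOTO CMDLBL(NXTENTRY))
             DLCOBJ     OBJ((MYLIB/MYDTAARA *DTAARA *SHRRD))
             /* ...CALL QRCVDTAQ with an unlimited wait and   */
             /* ...process the dtaq entry here...             */
             GOTO       CMDLBL(NXTENTRY)
             ENDPGM

The MONMSG simply loops back to try the allocation again if the nine-hour wait
ever expires, so a long hold doesn't end the job.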
Another option is a second "control" data queue that the jobs test with zero
wait time each iteration. If an entry is found that says 'HOLD', the job waits
on an entry that says 'RELEASE' or 'END'. Your workstation job would send the
appropriate command entries depending on what you wanted to accomplish.
However the control is done, each of the ten jobs would do no more than finish
their current task before waiting until you let them go again.
With a simple external control in place, you could then spend some time
organizing your subsystems and pools (and routing entries and pre-start entries
and time-slices and whatever else comes along that needs attention).
Tom Liotta
midrange-l-request@xxxxxxxxxxxx wrote:
> 2. RE: Performance of batch vs other batch and interactive
> (Richard Allen)
>
>Here is some additional information,
>We have a model 720 with two processors
>8g memory
>400g+ DASD (Running at about 70% capacity)
>
>Each of the 10 jobs is doing it's own tasks and barely keeps up with the
>Data Queue feeding it, so I don't think we can combine them into less jobs.
>
>The 10 batch jobs are running in Subsystem pool 2 (same as the interactive
>and other batch jobs)
>
>Is this the problem? Should it be running in it's own pool with a smaller
>amount of memory?
--
Tom Liotta
The PowerTech Group, Inc.
19426 68th Avenue South
Kent, WA 98032
Phone 253-872-7788 x313
Fax 253-872-7904
http://www.powertech.com