And in which periodical would that be?
Mark Murphy
STAR BASE Consulting, Inc.
mmurphy@xxxxxxxxxxxxxxx
-----midrange-l-bounces@xxxxxxxxxxxx wrote: -----
To: "Midrange Systems Technical Discussion" <midrange-l@xxxxxxxxxxxx>
From: "Michael Smith"
Sent by: midrange-l-bounces@xxxxxxxxxxxx
Date: 09/12/2011 04:26PM
Subject: RE: How do you determine when numerous SBMJOBs have ALL finished
Monitor Submitted Jobs
Article ID: 1823
Posted February 1st, 1997
in Load 'n' Go Utility
In my shop, as in many others, we often run two (or more) jobs in
parallel and then execute a third job after the first two are
successfully completed. Because OS/400 provides no simple way to monitor
for the completion of submitted jobs, we have to monitor the jobs
manually or resort to specialized programming.
-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx
[mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of CRPence
Sent: Monday, September 12, 2011 3:09 PM
To: midrange-l@xxxxxxxxxxxx
Subject: Re: How do you determine when numerous SBMJOBs have ALL
finished
On 12-Sep-2011 10:25 , Joe Pluta wrote:
On 9/12/2011 11:16 AM, CRPence wrote:
If you try to handle every conceivable possibility, no simple solution
works (in my opinion, submitting a list of jobs to a message queue
handling job is not a simple solution). But a little common sense
would make it work for a specific situation.
Agreed.
In reality, for the vast majority of cases, the original as posted
would work fine. A simple tweak makes it NEARLY invincible, without a
lot of code: the last job in the list submits the job that acquires
the exclusive lock (after it gets its own read lock).
A very good improvement.
There is almost no conceivable way this could fail. <<SNIP>>
I just prefer not to code to chance when there seems to be a means to
be thorough without significantly more effort. That there "is almost no
conceivable way this could fail" warns me [via those nagging voices in
my head ;-) ] that Murphy will surely rear his ugly head at the most
inopportune time, and that debugging those eventual failures will likely
be difficult.
I would just use the MSGQ() parameter and the job completion messages
it provides to detect when and how the submitted jobs complete. I would
probably pass the list of jobs [and the message queue to monitor] as
input to the final job, submitting that job without delay, and let that
final job do the polling for the completion messages for that list of
jobs; that frees the submitter to do something other than polling for
completions.
I think it's a matter of personal style. Passing a list requires
maintaining that list every time a new job is added or a job name
changes. Locking a data area works pretty well for me.
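[For readers outside the platform, the shared/exclusive lock pattern can be
sketched as a simulation with POSIX file locks: fcntl.flock LOCK_SH plays the
role of each submitted job's read lock on the data area, LOCK_EX plays the
final job's exclusive lock, and a Barrier stands in for the tweak of starting
the wrap-up job only after every read lock is held. All names here are
illustrative, not OS/400 code.]

```python
# Illustrative simulation only: threads + fcntl.flock stand in for
# submitted jobs holding shared locks on a data area, plus a final job
# requesting an exclusive lock. Names are hypothetical.
import fcntl
import os
import tempfile
import threading
import time

lock_path = os.path.join(tempfile.gettempdir(), "batch_demo.lock")
open(lock_path, "a").close()

results = []
holding = threading.Barrier(4)  # 3 workers + the coordinator

def worker(n):
    with open(lock_path) as f:
        fcntl.flock(f, fcntl.LOCK_SH)   # each "job" takes a shared (read) lock
        holding.wait()                  # wrapup starts only after every
                                        # read lock is actually held
        time.sleep(0.05)                # simulate the job's work
        results.append(f"job{n} done")
    # closing the file releases the shared lock

def wrapup():
    with open(lock_path) as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until all shared locks are gone
        results.append("wrapup ran after all jobs")

workers = [threading.Thread(target=worker, args=(n,)) for n in (1, 2, 3)]
for t in workers:
    t.start()
holding.wait()                          # all read locks held; now start wrapup
w = threading.Thread(target=wrapup)
w.start()
for t in workers + [w]:
    t.join()
print(results[-1])
```

[The Barrier removes the race Joe's tweak addresses: without it, the wrap-up
step could grab the exclusive lock before any worker has taken its read lock.]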
Perhaps. Updating a list within the controlling program seems to me
much less work and less error-prone than the maintenance required to
ensure that the [first] called program in each additional submitted job
is properly implemented to get the lock. I would prefer to be able to
add a new submitted job without having to code actions in external
entities, and instead take advantage of what the OS already provides.
Plus, if one of the [added] jobs is just a CL command rather than a
program, I would not even need a CLP to effect any locking; I would
need only to update the list of jobs being submitted.
Besides, I would probably implement a solution that used a CPC1221
"job submitted" message sent to the same message queue coded on each
SBMJOB request. The final job would, for each job named as submitted in
one message, find the corresponding "job completed" message on that same
queue, continuing to poll until every "job submitted" message has a
paired "job completed" message; any failure versus successful completion
code could be reacted to, perhaps as an early exit from the polling.
Something like the pseudo-code:
jnq = sbmrqs('call pgmx')    /* sbmjob cmd('...') msgq(l/mq) */
sndmsg msgid(cpc1221) msgdta(jnq) tomsgq(l/mq)
/* the above two statements repeat for each [added] request, then: */
jnq = sbmrqs('call wrapup')  /* pgm wrapup awaits cpc1221/cpf1241 pairs
                                for the repeated requests above, before
                                continuing */
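[The pairing scheme in the pseudo-code can be sketched, purely as a
simulation, in Python: queue.Queue stands in for the MSGQ() message queue,
and the "submitted"/"completed" tuples stand in for the CPC1221 and CPF1241
messages. Every name here is hypothetical, not an OS/400 API.]

```python
# Simulation only: queue.Queue plays the MSGQ() role; the tuples play the
# CPC1221 "job submitted" / CPF1241 "job completed" messages.
import queue
import threading
import time

msgq = queue.Queue()

def submit(name, work):
    msgq.put(("submitted", name))        # analogue of sndmsg msgid(cpc1221)
    def run():
        work()
        msgq.put(("completed", name))    # analogue of the MSGQ() completion msg
    threading.Thread(target=run).start()
    return name

def wrapup(job_list):
    # The final job receives the job list as input and polls the queue
    # until every name in the list has a paired "completed" message.
    pending = set(job_list)
    while pending:
        kind, name = msgq.get()
        if kind == "completed":
            pending.discard(name)
    return "all submitted jobs completed"

jobs = [submit(n, lambda: time.sleep(0.02)) for n in ("JOBA", "JOBB", "JOBC")]
result = wrapup(jobs)
print(result)
```

[Passing the job list into wrapup, as suggested above, is what makes the
termination condition deterministic: the poller knows exactly which
completions it is waiting for.]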
Regards, Chuck
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list.
To post a message, email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options, visit:
http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at
http://archive.midrange.com/midrange-l.