


   I am covered up in work during the week (currently tracking down a variety
   of dirty data, and scenarios where different reports that are supposed to
   have the same totals do not, so I have to figure out which one is
   correct), so I am late in replying.

   You are describing something very similar to current AS/400 capabilities.

   A program, written in COBOL or RPG, performs a series of application
   actions.  If there is an error in the data requiring operator
   intervention, the program, which can be running in batch, can halt and
   send a message to the operator message queue with a list of
   alternatives.  The operator's response to this directs how the main
   program reacts.
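   A minimal CL sketch of that halt-and-ask pattern (program names, message
   text, and reply values here are illustrative, not from any real
   application):

   ```cl
   /* Hypothetical sketch: pause a batch job and ask the operator    */
   /* how to proceed.  SNDUSRMSG sends an inquiry message to the     */
   /* system operator queue and waits for one of the listed replies. */
   PGM
              DCL        VAR(&REPLY) TYPE(*CHAR) LEN(1)
              SNDUSRMSG  MSG('Posting batch hit bad data.  +
                           C=Cancel, I=Ignore, R=Retry') +
                         VALUES(C I R) DFT(C) MSGTYPE(*INQ) +
                         TOMSGQ(*SYSOPR) MSGRPY(&REPLY)
              IF         COND(&REPLY *EQ 'C') THEN(DO)
                 /* clean up and end the job here */
                 RETURN
              ENDDO
              /* otherwise continue per the operator's choice */
   ENDPGM
   ```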

   Ideally your data processing design minimizes the opportunities for an
   operator to respond inappropriately to some error message.  Once upon a
   time we had a really bad problem with this.  The bosses had me spend
   almost a year figuring out how to have the software itself make the best
   choice based on the available info associated with each known error
   condition.  The result was that our volume of errors went down to a
   microscopic fraction of what it had been.  Typically no one human is an
   expert on all aspects of company data, yet you would need everyone to be
   that way to avoid a lot of human error when responding to error
   conditions, let alone "oops, that is not the response I intended to make,
   now what?"

   You can use a message architecture with standard responses, where several
   different programs can use the same error message choices.
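   One hypothetical way to set that up: keep the standard error
   descriptions, valid replies, and defaults in one message file, so every
   program signaling a given message ID presents identical operator
   choices (library, file, and message names below are made up):

   ```cl
   /* Hypothetical shared message file: any program that signals    */
   /* APP0001 gets the same reply choices and default.              */
   CRTMSGF    MSGF(APPLIB/APPMSGF) TEXT('Application error messages')
   ADDMSGD    MSGID(APP0001) MSGF(APPLIB/APPMSGF) +
              MSG('Record &1 failed validation.  C=Cancel, I=Ignore') +
              FMT((*CHAR 10)) +
              TYPE(*CHAR) LEN(1) VALUES('C' 'I') DFT('C')
   ```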

   Messages can come from the main COBOL, RPG, etc. programs, and can also
   come from the CL that they are typically within.  A program can call
   another program.

   You can have one CL that has embedded within it a string of others.  I
   don't do this very often, usually for specialized things like ... let's
   reorganize a bunch of files, then do a backup, then initiate an IPL.  Or
   perhaps there is some task that needs to be run one facility at a time:
   task-1 in facility 20, task-2 in facility 20, repeat that for facilities
   30 40 50 60, now task-3 in facility 20, etc., where the prompt parameters
   were the same for all facilities, and there may have been a list of
   facilities with Yes/No check marks for which ones to run the nightly
   stuff on.
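   The outer-CL-drives-a-string-of-tasks idea might look like this sketch
   (task programs, facility codes, and parameters are all hypothetical):

   ```cl
   /* Hypothetical outer CL: run each task across the facilities,   */
   /* passing the same prompted parameters to every call.  Adding   */
   /* a new task later is just one more CALL line per facility.     */
   PGM        PARM(&FROMDATE &TODATE)
              DCL        VAR(&FROMDATE) TYPE(*CHAR) LEN(8)
              DCL        VAR(&TODATE)   TYPE(*CHAR) LEN(8)
              CALL       PGM(TASK1) PARM('20' &FROMDATE &TODATE)
              CALL       PGM(TASK1) PARM('30' &FROMDATE &TODATE)
              CALL       PGM(TASK2) PARM('20' &FROMDATE &TODATE)
              CALL       PGM(TASK2) PARM('30' &FROMDATE &TODATE)
   ENDPGM
   ```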

   Task-1, task-2, etc. can be developed with simple-to-maintain parameters
   fed to them by the main CL.  By using an outer CL that has a list of
   tasks to be performed, it is then simple to later add a new task between
   1 and 2, where that task also has its own CL-driven whatever.  In my
   experience we are better off having small focused programs, because they
   are easier to maintain, make more efficient use of 400 resources, and are
   easier to troubleshoot.  I have a message queue associated with the
   progress made, which basically gets an informational message: "This step
   completed & incidentally here are the parameters it was using."  Later,
   if we suspect something went wrong, we can see that the steps ran in the
   right sequence, with the right parameters, by looking at what is in this
   special message queue.
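   A sketch of that audit-trail queue (queue name, step name, and message
   text are invented for illustration):

   ```cl
   /* Hypothetical audit trail: after each step completes, drop an  */
   /* informational message, with the parameters used, into a       */
   /* dedicated queue that can be reviewed later with DSPMSG.       */
   CRTMSGQ    MSGQ(APPLIB/AUDITMSGQ) TEXT('Batch step audit trail')

   /* ...then inside each task's CL, after the step completes:      */
   SNDPGMMSG  MSG('TASK1 completed for facility 20, +
                dates 20240101-20240131') +
              TOMSGQ(APPLIB/AUDITMSGQ) MSGTYPE(*INFO)
   ```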

   I have inherited software I need to maintain where the program is HUGE
   (like a million lines of code, and it calls at least 10 other programs of
   similar size, and they in turn call ...), but I am only interested in
   fixing a tiny thread of logic that is a nightmare to find.  Using
   modularized code means you do not pass this kind of nightmare on to
   whoever maintains it later.

   Typically the main program runs inside a CL that controls file overrides,
   with a prompt screen in front.

   I use DDS to design the prompt screen.

   The operator keys parameters into the prompt screen, such as a date
   range, other ranges, and various criteria; then within the CL there is
   checking to see that the parameters are A-OK, before launching the
   software to run off of a job queue (JOBQ).  On the 400 you can have a
   variety of special-purpose job queues which are single- or
   multi-threaded.  All of ours are single-threaded due to limitations of
   our main ERP package.
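   A hypothetical front-end CL for that pattern (the validation rule,
   program names, and queue names are illustrative):

   ```cl
   /* Hypothetical sketch: check the prompted parameters, then      */
   /* submit the real work to a batch job queue with SBMJOB.        */
   PGM        PARM(&FROMDATE &TODATE)
              DCL        VAR(&FROMDATE) TYPE(*CHAR) LEN(8)
              DCL        VAR(&TODATE)   TYPE(*CHAR) LEN(8)
              IF         COND(&FROMDATE *GT &TODATE) THEN(DO)
                 SNDUSRMSG  MSG('From-date is after to-date') +
                            MSGTYPE(*INFO)
                 RETURN
              ENDDO
              SBMJOB     CMD(CALL PGM(RPTPGM) +
                           PARM(&FROMDATE &TODATE)) +
                         JOB(NIGHTRPT) JOBQ(APPLIB/LONGQ)
   ENDPGM
   ```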

   In the absence of operator intervention, and when using a single-threaded
   JOBQ, the jobs execute in sequence: as each one completes, the next one
   is released.  The operator can rearrange the waiting jobs, or move some
   to another JOBQ, before they start executing.
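   Moving or re-prioritizing a job that is still waiting on a queue can be
   done with CHGJOB; the job name and queue below are hypothetical:

   ```cl
   /* Hypothetical: while a submitted job is still on a job queue,  */
   /* the operator can move it to another queue...                  */
   CHGJOB     JOB(123456/QPGMR/NIGHTRPT) JOBQ(APPLIB/QUICKQ)
   /* ...or change its priority within the queue (1 = high):        */
   CHGJOB     JOB(123456/QPGMR/NIGHTRPT) JOBPTY(1)
   ```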

   We have software steps that typically complete in a matter of seconds,
   and others that take like an hour to run.  So we have one JOBQ for the
   quickies, and another for the long-running jobs.  There are some steps
   which, due to design constraints, cannot run while other people are doing
   the same kind of stuff, so no matter who runs them, they had better be in
   the same JOBQ.  Depending on your 400's overall resources, there may be
   some limit on how many JOBQs you have active at the same time.
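   What makes a queue "single-threaded" is the MAXACT value on its job
   queue entry in the subsystem.  A hypothetical setup for the
   quick-vs-long split described above (library and queue names invented):

   ```cl
   /* Hypothetical: two queues feeding QBATCH, each allowed only    */
   /* one active job at a time (MAXACT(1)), i.e. single-threaded.   */
   CRTJOBQ    JOBQ(APPLIB/QUICKQ) TEXT('Short-running batch steps')
   CRTJOBQ    JOBQ(APPLIB/LONGQ)  TEXT('Long-running batch steps')
   ADDJOBQE   SBSD(QBATCH) JOBQ(APPLIB/QUICKQ) MAXACT(1) SEQNBR(10)
   ADDJOBQE   SBSD(QBATCH) JOBQ(APPLIB/LONGQ)  MAXACT(1) SEQNBR(20)
   ```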

   My experience does not involve moving jobs after they have started
   executing, other than to allocate 400 resources, to give one task more CPU
   priority for example. 

   I do know that if a running job needs to be cancelled, we first need to
   check what is about to go into execution right after it ends.  Perhaps it
   is related to the job we are aborting.

   I am also not experienced in the notion of moving stuff between different
   partitions, other than resources such as disk space.

   These many operator tasks can be on a menu that arranges them pretty much
   in the sequence they are to be done.

   I have many such menus I have created for myself, that have banner
   headlines ...
     * this stuff to be run after the labor input backflushing is completed
       (the people who are doing that input can get completed for the day
       anywhere between 3 and 6 pm)
     * actions to be taken after all shipping for the day is done (can be
       anywhere from 3:30 to 7 pm) ... mainly reports associated with billing
     * actions to be taken after invoice billing has successfully cleared the
       JOBQ
     * these options to be run after everyone is done for the evening doing
       input that ends up in the General Ledger
     * these options to be run after everyone done for the evening with input
       that affects the MRP (which includes actions that have no effect on
       General Ledger)
   I also have menus of actions taken only on FRI nite, at end of week, and
   at end of month.

   Scheduling software is available for the 400, such as ROBOT from Help
   Systems, and some offerings from IBM.  The simpler scheduling software
   merely launches a standard job (one that requires no operator
   involvement) to the JOBQ at a certain time, day of week, etc.  More
   advanced versions can respond to error messages and control which jobs
   are launched based on which ones completed successfully or
   unsuccessfully.
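   The built-in job scheduler (ADDJOBSCDE / WRKJOBSCDE) covers the simple
   time-based case.  A hypothetical weekly entry (program, job, and queue
   names invented):

   ```cl
   /* Hypothetical built-in schedule entry: launch a hands-off      */
   /* weekend job every Friday at 11 pm onto a batch queue.         */
   ADDJOBSCDE JOB(WEEKEND) CMD(CALL PGM(APPLIB/WKENDPGM)) +
              FRQ(*WEEKLY) SCDDAY(*FRI) SCDTIME(2300) +
              JOBQ(APPLIB/LONGQ)
   ```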

     List,

     I am trying to get a feel for how best to duplicate on the iSeries some
     of our nightly batch job runs that run on our VSE system.  I am open to
     ideas and suggestions.  We would of course want the same basic results
     and control, but realize things may have to be handled differently
     simply due to the differences between the two operating systems.

     Here is a "brief" description of our current process....

     The nightly batch run consists of many "jobs".  They are, for the most
     part, single-threaded.  We do not currently have scheduling software on
     VSE, so this is operator-controlled.  Each "job" will have one or more
     "steps".  A "step" is an execution of an application program.  After
     the jobs are submitted, each with any required parameters, the jobs run
     in sequence with virtually no operator intervention required.

     If a job fails, it requires an operator response, which prevents the
     jobs behind it from running.  Depending on the nature of the problem,
     the operator can release a "PAUSE" in that batch partition to keep the
     rest of the jobs from executing while the failing job is re-run in a
     different partition.  Then the "PAUSE" is released and the rest of the
     jobs continue on their merry way.

     What is the best way to duplicate this batch process on the iSeries?

     Will we need scheduling software to accomplish this?

     Are the built-in iSeries scheduling capabilities sufficient?

     TIA for any advice and/or ideas!

     --
     Regards,

     Michael Rosinger
     Systems Programmer / DBA
     Computer Credit, Inc.
     640 West Fourth Street
     Winston-Salem, NC  27101
     336-761-1524
     m rosinger at cciws dot com

     --
     This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing
     list
     To post a message email: MIDRANGE-L@xxxxxxxxxxxx
     To subscribe, unsubscribe, or change list options,
     visit: http://lists.midrange.com/mailman/listinfo/midrange-l
     or email: MIDRANGE-L-request@xxxxxxxxxxxx
     Before posting, please take a moment to review the archives
     at http://archive.midrange.com/midrange-l.
