Srikanth,

1) Are the subprograms returning without setting on *INLR?  Have them
return with *INLR off so they stay activated, ready to handle the next
transaction without having to close and reopen their files between calls.
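
The same idea applies in any language: keep an expensive resource open
across calls instead of opening and closing it per transaction.  A
language-agnostic sketch in Python (the class and file name are made up
for illustration):

```python
# Sketch: keep a file handle open across calls instead of reopening
# it for every transaction (the analog of returning with *INLR off).
# TranWriter and its file path are hypothetical names.

class TranWriter:
    def __init__(self, path):
        self.path = path
        self._fh = None          # opened lazily, then reused

    def handle(self, tran):
        if self._fh is None:     # open once, on the first call only
            self._fh = open(self.path, "a")
        self._fh.write(tran + "\n")

    def close(self):             # explicit close at end of the batch
        if self._fh is not None:
            self._fh.close()
            self._fh = None
```

The open happens on the first `handle()` call and the handle is reused
for every call after that, which is exactly the saving you get from
leaving *INLR off.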

2) If the subprograms leave files open between calls, do they read any
secondary files to build the records they write?  Make sure they aren't
needlessly chaining again for the same record already read for the
previous transaction.
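
In any language this amounts to caching the last lookup.  A hedged
Python sketch (the `fetch_record` function stands in for the CHAIN and
is hypothetical):

```python
# Sketch: avoid re-reading (re-chaining) the same secondary record
# when consecutive transactions share a key.  fetch_record is a
# hypothetical stand-in for the actual file lookup.

def make_cached_lookup(fetch_record):
    last = {"key": None, "rec": None}   # one-record cache

    def lookup(key):
        if key != last["key"]:          # hit the file only on a new key
            last["rec"] = fetch_record(key)
            last["key"] = key
        return last["rec"]

    return lookup
```

If transactions arrive grouped by key, most lookups become cache hits
and the file is touched once per key instead of once per transaction.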

3) Are the subprograms OPM or ILE programs?  If they are ILE, consider
making them modules instead of programs and binding them into the control
program.  This will save some time on the calls to these programs.

4) Is the control file being read in physical sequence?  Do you care about
the order in which transactions are processed?  If not, make sure there is
no K in the Record Address Type position on the F spec for the control
file.  If you do need the transactions in a particular sequence, consider
using FMTDTA to sort the data before the control program processes it
sequentially.  This will give some improvement, but 40K records isn't
really a lot of data, so I'm not sure it will help much.

5) DO NOT block the control file reads.  It appears the control program is
updating each control file record as it is processed, and a commit after
each update flushes the buffer, so the next read comes from the file
rather than from the buffer.  Blocking the reads would only bring extra
data into the buffer that is never used, which slows the program down
rather than speeding it up.

6) Using data queues could cause problems if the secondary programs do not
commit their writes but let the control program commit them.  Because they
would now be separate processes, they would have to commit their writes
individually, and the control program could not roll back their writes if
one of the other subprograms failed and you rolled back the transaction.

7) Consider committing every second or third transaction (or even the
entire batch) instead of after each one.  If a transaction fails you will
have to roll back more than one, which makes failure recovery more
difficult, but you will get a performance improvement.
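
The commit-batching idea is the same on any database.  A hedged sketch
using Python's sqlite3 module (the table name and batch size are
illustrative only, not from the original system):

```python
import sqlite3

# Sketch: commit once per batch of N transactions instead of once
# per transaction.  tran_log and batch_size are made-up examples.

def process(conn, transactions, batch_size=3):
    cur = conn.cursor()
    for i, tran in enumerate(transactions, start=1):
        cur.execute("INSERT INTO tran_log(data) VALUES (?)", (tran,))
        if i % batch_size == 0:
            conn.commit()        # one commit per batch ...
    conn.commit()                # ... plus one for any remainder
```

The trade-off is exactly as described above: fewer commits means less
journal flushing per transaction, but a failure forces you to roll back
and redo the whole current batch.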

8) Rather than use data queues, consider having three different control
programs that each read every transaction but call only one of the
subprograms.  You'd have three control fields on each transaction record
to show which parts of the transaction have completed.  Only when all
three control fields have been updated by the respective control programs
is the transaction complete.  Then run each control program with its
subprogram as three separate jobs that can run concurrently.
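
The pattern above can be sketched in Python with threads standing in for
the three concurrent jobs (the part names and data shapes are
hypothetical):

```python
import threading

# Sketch: three independent workers, each handling one part of every
# transaction and setting only its own completion flag.  A transaction
# is done when all three flags are set.  PARTS is a made-up example.

PARTS = ("tran", "cash", "collateral")

def run_workers(transactions):
    # one completion flag per part, per transaction
    flags = {t: {p: False for p in PARTS} for t in transactions}
    lock = threading.Lock()

    def worker(part):
        for t in transactions:         # each worker reads every transaction
            with lock:
                flags[t][part] = True  # ... but updates only its own flag

    threads = [threading.Thread(target=worker, args=(p,)) for p in PARTS]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return flags
```

Because each worker owns one flag, the three jobs never contend on the
same field, and a restart only needs to redo the parts whose flags are
still unset.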

Paul

-- 
Paul Morgan
Senior Programmer Analyst - Retail
J. Jill Group
100 Birch Pond Drive, PO Box 2009
Tilton, NH 03276-2009
Phone: (603) 266-2117
Fax:   (603) 266-2333

"Dwarakanath,Srikanth (Cognizant)" wrote

> I have the following scenario.

> The control program reads through a control file (the control file has a
> list of transactions to be processed) and calls programs as below:
> CREATE_TRAN_INFO -> this program writes a record in the CREATE_TRAN_INFO
> phy. file per transaction
> CREATE_CASH_INFO -> this program writes a record in the CREATE_CASH_INFO
> phy. file per transaction
> CREATE_COLLAERAL_INFO -> this program writes a record in the
> CREATE_COLLAERAL_INFO phy. file per transaction
> (Let's call these the work programs.)
> If all work programs run successfully, the control program flags the
> transaction as processed and commits.

> This is taking about 20 min. to process around 40,000 transactions. I
> need to reduce this by 40% to meet the SLA specified. Can I get some
> ideas on how best this can be achieved?

> I could think of creating IN and OUT data queues for each work program,
> taking them out (from inline calls) to separate jobs that run in batch.
> The control program feeds these programs through the IN data queue,
> picks up the processed part from OUT, and also writes to the respective
> phy. file.
> I am of the opinion that there would be too many dynamic calls to
> QSNDDTAQ and QRCVDTAQ, so I'm not sure the objective would be achieved.
> Just for note, all work programs can run independently at the same time.

> Any help is greatly appreciated.

> Best rgds.,
> Srikanth D



