Here are my thoughts:

I generally prefer to use one of the following methods to "sync" files:

1) A trigger program on the source that directly updates the target.
2) An always-running batch job that processes the journal receiver,
looking for entries for the source file.
3) A trigger program that places entries on a DTAQ that a batch job
processes.

The nice thing about the above techniques is that each is designed
(at least partially) with the "sync" purpose in mind. All three also
let you easily handle commitment control in the process that updates
the source files.
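To make option 3 concrete, here is a rough sketch of a trigger program
queuing work via the QSNDDTAQ API. The queue, library, and key names
are invented, and the trigger buffer DS is abbreviated down to the one
field used (a real trigger buffer carries file information plus the
before/after record images at offsets it gives you), so treat this as
an outline rather than compilable code:

```
      * Sketch: *AFTER trigger drops the changed record's key on a
      * data queue for the always-running batch sync job to process.
     D SndDtaQ         PR                  ExtPgm('QSNDDTAQ')
     D  QName                        10A   Const
     D  QLib                         10A   Const
     D  DtaLen                        5P 0 Const
     D  Dta                          20A   Const
      * abbreviated trigger buffer -- the real layout has many more
      * fields before the record images
     D TrgBuf          DS
     D  SrcKey                       20A
      /free
        SndDtaQ('SYNCQ': 'MYLIB': %size(SrcKey): SrcKey);
        *inlr = *off;     // stay active; triggers fire often
        return;
      /end-free
```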

You don't mention whether the source and/or target files are
journaled; if so, is commitment control being used in the process
updating the source, or in your process updating the target? If the
files are journaled but commitment control is not being used, it's
worth pointing out that any process updating a significant number of
records would see a 25-33% improvement in performance just by turning
on commitment control.
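As a sketch of what turning it on looks like from the RPG side (file
name hypothetical; the file must already be journaled, and STRCMTCTL
must have been run in the job first): add COMMIT to the F-spec and
issue one COMMIT per batch, so the updates share a single forced
journal write instead of each paying for its own.

```
     FTARGET    UF   E           K DISK    COMMIT
      /free
        // ...chain/update the batch of records under commitment
        // control...
        commit;       // one synchronous journal write for the group
        return;
      /end-free
```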

The problem with the way you're doing things is that there's a cost
involved in having your program determine which records have changed.
You don't mention how you're doing so, but if it's a flag field of
some sort, then the cost of finding the records is small, while the
cost of updating the flag field is high, particularly if you have an
index/logical file over it and/or the file is journaled without
commitment control being used.

You don't mention the interval you're using, but assuming it's more
than a few seconds, and that there's usually more than a couple of
records to be processed, then opening and closing the files vs.
leaving them open may not matter much. In any event, BLOCK(*NO) is
not a good idea. Assuming that your files are output only and not
update (which isn't blocked in the first place), and that you process
a significant number of records at once, a better solution is to let
the RPG program block the output and simply issue a FEOD(N) before
you return from your program, to flush the program's buffer to the DB.
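A minimal sketch of that (file name hypothetical): remove BLOCK(*NO)
so RPG blocks the writes, then flush with FEOD(N) on the way out,
leaving LR off so the file stays open between calls.

```
     FDETAIL    O    E             DISK
      /free
        // ...write the exploded detail records; RPG blocks them...
        feod(n) DETAIL;   // flush the buffered records to the DB
                          // without the cost of a full close
        return;           // LR off: file stays open for next call
      /end-free
```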

HTH,
Charles

On Mon, Oct 27, 2008 at 7:56 AM, Brown, Stephen GRNRC
<Stephen.Brown@xxxxxxxxx> wrote:
Folks,

Just looking for some general advice, so please add your tuppence worth
if you have any left after this credit crunch.

The idea behind this is keeping linked member details in sync with
each other (two main files). The source detail of any update is
recorded, then picked up by a CLP program that sits, checks, and
processes any updates. This will happen throughout the day. Note,
there are two files that need keeping in sync; the updates happen in
different programs, but the overall logic is the same. RPG1 is
basically the same as RPG3 but looks at different source/detail
files, and RPG2 = RPG4, again looking at/updating different files.

CLP - Unprocessed source 1? - Calls RPG1
      Unprocessed detail 1? - Calls RPG2
      Unprocessed source 2? - Calls RPG3
      Unprocessed detail 2? - Calls RPG4

Deep breath here goes .....

I have developed a CLP that calls four RPG programs (two per source
update), the CLP will sit and repeat these calls at pre-determined
intervals till it reaches a certain time and then it will close itself
and all called pgms down.

I will look at RPG1 and 2 as they are the same in logic as 3 and 4.

RPG1 takes a source update record and then "explodes" it into
multiple detail updates (for linked members) on a separate file. The
source record is flagged as processed, and that is pretty much it. A
logical over the source file is set to include only unprocessed
records.

RPG2 processes the detail update records (again, a logical over this
file includes only unprocessed recs), applies the details to our main
membership file, flags each detail record as processed, and that is
really all it does.

In both programs I do need to access other files for input only purposes
generally but there are others that are updated.

I am trying to make sure that because these programs get called
repeatedly throughout the day and that is updating two of our main files
that I minimise the potential for locks and that the programs themselves
are as efficient as possible.

I have used the following techniques.

CLP - Only calls Pgm1 if there are source1 updates to process
- uses logical selecting only unprocessed recs.
Only calls Pgm2 if there are details1 updates to process
- uses logical selecting only unprocessed recs.
Only calls Pgm3 if there are source2 updates to process
- uses logical selecting only unprocessed recs.
Only calls Pgm4 if there are details2 updates to process
- uses logical selecting only unprocessed recs.

End Time reached - calls RPG1/2/3/4 to close down files.
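For what it's worth, a stripped-down sketch of the polling CLP you
describe might look like the following. The file, program, and
variable names are made up, and checking the current record count of
the "unprocessed" logical via RTVMBRD is just one cheap way to decide
whether a call is worth making:

```
             PGM
             DCL        VAR(&NBRRCD) TYPE(*DEC) LEN(10 0)
 LOOP:       RTVMBRD    FILE(MYLIB/SRC1UNP) NBRCURRCD(&NBRRCD)
             IF         COND(&NBRRCD *GT 0) THEN(CALL PGM(RPG1))
             /* ...same check-and-call for RPG2, RPG3 and RPG4... */
             DLYJOB     DLY(60)
             /* ...at end time, call the RPG pgms in close-down
                mode instead, then leave the loop... */
             GOTO       CMDLBL(LOOP)
             ENDPGM
```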


RPG - Where appropriate, files are set to USROPN; the opens are
performed in *INZSR. These are left open till the CLP closes all
programs down. All update files are set to BLOCK(*NO); the assumption
here is to make sure all updates happen as soon as possible and are
available to other programs immediately. When updating our main files
(pgms 2/4), the error extension is used on the prior CHAIN to make
sure the record can be allocated for update. If an error is
encountered, the main file will not be updated and the update detail
record will not be flagged as processed; the next time the program is
called, the unprocessed updates should be picked up again. When not
in close-down mode, all files are UNLOCKed before RETURN is executed.
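On the error extension on the CHAIN: as a sketch (free form shown
just for brevity, and the file/format/key names are hypothetical),
the pattern described amounts to:

```
      /free
        chain(e) MbrKey MEMBERR;      // 'e' extender: a lock-wait
                                      // timeout won't halt the pgm
        if %error or not %found(MEMBER);
          return;                     // leave the detail record
                                      // unprocessed; retry next call
        endif;
        // ...apply the detail update to the membership record...
        update MEMBERR;
        // only now flag the detail record as processed
      /end-free
```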

The programs seem to do what I am asking of them, but I haven't had
the opportunity to properly test yet. I did come across one issue
where I was opening one file for input in pgm1 and then for update in
pgm2; pgm2 didn't like this and crashed out. This didn't happen in
the other pair of programs when doing something similar, because the
logical in question had "Share open data path" set to *NO. This
turned out to be a mistake in our test environment; once it was set
to *YES, it resulted in the same error. I removed the USROPN on this
file and the programs work OK. This got me wondering if there are
better ways and techniques I could be using to achieve the same
results, or maybe the techniques I am employing are not particularly
efficient and/or effective. Note: free format is not used.



Thanks Stevie




==============================================================================
--
This is the RPG programming on the AS400 / iSeries (RPG400-L) mailing list
