Hi James

I am not 100% clear on what you are doing, but I get the impression you
want to read an entire table just once and send each record to a web
service.

In a similar scenario, I had a table that was too large to run an update
over in the batch windows available to me. I broke the table up into
chunks of records and processed each chunk by relative record number;
the RRN range to be processed was passed in as a parameter.

The number of simultaneous jobs I ran was determined by disk activity.
Basically, I processed the table in chunks of a million records, from 1 to
x million, which meant I had no update collisions and could work on the
table concurrently. I named the jobs "BATCH1M", "BATCH2M", etc., so I
could gauge progress by batch number, queued the whole lot up, and ran
6 or 7 jobs concurrently until they were all done.
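
In RPG terms, a chunk worker along those lines could look roughly like
the sketch below. It is only an illustration of the idea: BIGTABLE, its
record format BIGREC, and the parameter names are invented, and the real
processing goes where the comment is.

**free
// Hypothetical chunk worker: reads BIGTABLE by relative record number
// from FROMRRN to TORRN, both passed in on the call. Submit several of
// these with non-overlapping ranges (1-1000000, 1000001-2000000, ...)
// to work the table in parallel.
ctl-opt dftactgrp(*no) main(processChunk);

dcl-f BIGTABLE disk usropn;        // arrival sequence, so CHAIN uses the RRN

dcl-proc processChunk;
  dcl-pi *n;
    fromRrn packed(10:0);
    toRrn   packed(10:0);
  end-pi;

  dcl-s rrn packed(10:0);

  open BIGTABLE;
  for rrn = fromRrn to toRrn;
    chain rrn BIGREC;              // not found = deleted record; skip it
    if %found(BIGTABLE);
      // ... process the record here (update it, or pass it to the web service)
    endif;
  endfor;
  close BIGTABLE;
end-proc;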

I tried some other approaches (data queues, data areas, etc.), but none of
them performed as well as processing the table in RRN chunks. If it's a
one-shot job and what you need to do is similar to what I needed to do,
then maybe this would work for you.

Maybe you've already thought of this, but just a thought.

On Fri, Oct 19, 2018 at 4:42 AM James H. H. Lampert <
jamesl@xxxxxxxxxxxxxxxxx> wrote:

Note: we aren't actually updating the records in the file. Just passing
them to a web service.

New idea:

Suppose we have a data area. When a job is ready to process a record, it
reads and locks the data area, SETGTs the file to the record after the
one whose key is in the data area, READs it, and writes the key of the
record it found back out to the data area.

That would (I hope) guarantee that regardless of the number of jobs
working on the file, and regardless of the number of records in the
file, each record would get processed exactly once.
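
For what it's worth, a minimal free-format sketch of that dispatch idea
might look like the following, using made-up names: file ORDERS with
record format ORDREC keyed on ORDNO, and a *DEC (9 0) data area NEXTORD
holding the last key claimed. The IN *LOCK / OUT pair is what serializes
the "claim the next record" step across jobs.

**free
// Hypothetical dispatcher: each job claims the next unprocessed record by
// advancing the key held in data area NEXTORD while holding the lock on
// it, then calls the web service outside the lock.
ctl-opt dftactgrp(*no);

dcl-f ORDERS keyed usropn;             // keyed on ORDNO in this example

dcl-s lastKey packed(9:0) dtaara('NEXTORD');

open ORDERS;
dow 1 = 1;
  in *lock lastKey;                    // lock so only one job claims at a time
  setgt lastKey ORDERS;                // position just past the last claimed key
  read ORDREC;
  if %eof(ORDERS);
    unlock lastKey;                    // nothing left: release the lock and quit
    leave;
  endif;
  lastKey = ORDNO;                     // claim this record's key ...
  out lastKey;                         // ... and release the lock
  // ... pass the record just read to the web service here ...
enddo;
close ORDERS;
*inlr = *on;

Only the claim (SETGT/READ/OUT) happens under the data-area lock; the
web-service call runs after the OUT, so the jobs contend only for the
brief claim step rather than for the slow part.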

--
JHHL