Vinay:

I am not sure how "busy" these programs are, but consider this approach:

For each physical file to be updated, you create a data queue with the same name as the target file. (These could all be in the same library or in different libraries.)
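
In CL, that setup might look something like this (the library name and MAXLEN are placeholder values -- size MAXLEN to your actual request message):

    /* One data queue per target file, named to match the file */
    CRTDTAQ DTAQ(APPLIB/FILE01) MAXLEN(512)
    CRTDTAQ DTAQ(APPLIB/FILE02) MAXLEN(512)
    /* ... and so on, for each of the 15 target files ... */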

Then, you split the current "busy" program into two parts: a "front-end" program that figures out which of those files needs to be updated, and a "back-end" that contains the rest of the program and does the actual "work". Think of the front-end as a "router" or "distributor": it reads each request from the original data queue, figures out which target file needs to be updated, and then writes that request to the new data queue for that target file.
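
A minimal free-format RPG sketch of that router loop, assuming the target file name is carried in the first 10 bytes of each request (the queue/library names and the 512-byte message size are placeholders):

    // Prototypes for the data queue APIs
    dcl-pr QRcvDtaq extpgm('QRCVDTAQ');
      dtaqName char(10) const;     // data queue name
      dtaqLib  char(10) const;     // data queue library
      dataLen  packed(5:0);        // length of data received
      data     char(512);          // receiver variable
      waitTime packed(5:0) const;  // -1 = wait forever
    end-pr;

    dcl-pr QSndDtaq extpgm('QSNDDTAQ');
      dtaqName char(10) const;
      dtaqLib  char(10) const;
      dataLen  packed(5:0) const;
      data     char(512) const;
    end-pr;

    dcl-s request char(512);
    dcl-s reqLen  packed(5:0);
    dcl-s target  char(10);

    dow *on;                                           // run until the job is ended
      QRcvDtaq('MAINQ' : 'APPLIB' : reqLen : request : -1);
      target = %subst(request : 1 : 10);               // assumed: file name leads the message
      QSndDtaq(target : 'APPLIB' : reqLen : request);  // forward to that file's own queue
    enddo;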

You will have 15 copies of the "back-end" (the remainder of the original program) that "does the work". Each copy listens on the data queue for just one of those 15 files and updates only that file. The program opens the file(s) it uses just once, when started, and then goes into a "loop" waiting on its data queue for requests to process. After processing each request, it returns to the top of the loop to wait for the next request.
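
Each back-end copy might look roughly like this (again just a sketch; names are placeholders, and the QRcvDtaq prototype is the same as above):

    // FILEA is the compile-time name; an OVRDBF issued before this
    // program starts (or the parameter technique described below)
    // points it at the actual target file.
    dcl-f FILEA usage(*update:*output) usropn;

    dcl-s request char(512);
    dcl-s reqLen  packed(5:0);

    open FILEA;                    // one open, for the life of the job
    dow *on;
      QRcvDtaq('FILE01' : 'APPLIB' : reqLen : request : -1);
      // ... map the request into the record and update/write it ...
    enddo;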

To set this all up, you will probably want to create a subsystem with a job queue whose maximum active jobs accommodates the number of target files. You could even set the workers up as "prestart" jobs; that way, you need only start that subsystem, and all 15 jobs (or however many may eventually be needed) will become active, each listening on its own data queue.
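
The subsystem plumbing might look something like this (all names and values are placeholders; the next paragraph shows how all 15 workers can share one *PGM):

    CRTSBSD  SBSD(APPLIB/UPDSBS) POOLS((1 *BASE))
    CRTJOBQ  JOBQ(APPLIB/UPDJOBQ)
    ADDJOBQE SBSD(APPLIB/UPDSBS) JOBQ(APPLIB/UPDJOBQ) MAXACT(15)
    /* Optionally, prestart job entries for the workers: */
    ADDPJE   SBSD(APPLIB/UPDSBS) PGM(APPLIB/BACKEND) INLJOBS(1)
    STRSBS   SBSD(APPLIB/UPDSBS)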

Also, you could pass the name of the data queue (and of the target file, since they share the same name) to the program as a parameter. That way, you have just one version of the source code for the "front-end" and "back-end" programs, and one copy of each executable *PGM, which gets submitted or started 15 times with a different parameter value each time. To use prestart jobs, create 15 *JOBDs, where each one specifies a different value for the "target data queue and file name" parameter on the CALL to the PGM.
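
For example (names assumed), each *JOBD carries its parameter value in the request data, or you can simply submit the same *PGM 15 times:

    CRTJOBD JOBD(APPLIB/UPDF01) JOBQ(APPLIB/UPDJOBQ) +
            RQSDTA('CALL PGM(APPLIB/BACKEND) PARM(''FILE01'')')
    CRTJOBD JOBD(APPLIB/UPDF02) JOBQ(APPLIB/UPDJOBQ) +
            RQSDTA('CALL PGM(APPLIB/BACKEND) PARM(''FILE02'')')
    /* ...or, without prestart jobs: */
    SBMJOB  CMD(CALL PGM(APPLIB/BACKEND) PARM('FILE01')) +
            JOB(UPDF01) JOBQ(APPLIB/UPDJOBQ)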

Note that with this architecture, you can still start multiple copies of the front-end (router) program, but they will all feed their results into the single target data queue for each file. Depending on the nature of the "work" that needs to be done between the time the request message arrives and the time the record is formatted for update of the target file, you might want to do most of that work in the "router" or "distributor" program, and then just write the image of the finished record into the correct target data queue. In that case, the "messages" on each data queue are simply a data structure containing the exact image of the record to be updated in that target file. (From what you described, all of these target files will always have the same record format name and the exact same record layout, etc.)
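
One way to share a single definition of that record image (a sketch; FILEA is the compile-time reference name from your current design):

    // Both the router and the workers declare the data queue
    // message as an externally described data structure, so the
    // queue entry is the exact image of the record:
    dcl-ds recImage extname('FILEA') end-ds;
    dcl-s  imgLen   packed(5:0) inz(%size(recImage));
    // Router: populate recImage, then
    //   QSndDtaq(target : 'APPLIB' : imgLen : recImage);
    // Worker: receive into recImage, then write/update the record.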

I hope this explanation "makes sense" to you ... Let me know if you have any questions.

Hope that helps,

Mark S. Waterbury

> On 12/11/2015 6:42 PM, Vinay Gavankar wrote:
Hi,

We have a program which receives some values thru a data queue read and
updates a file with the values received.

The problem is that there are multiple copies of the file it needs to
update.

Let us say it is updating FILEA, so that is what is in the compiled code.

On the system there are currently 15 files which are exact copies of this
file. They have different file names, but the same record format name,
field names, etc.

The program reads a 'control' file, which has the names of these 15 files.
For every file, it does an OVRDBF of FILEA to the actual file name, opens
the file, updates/writes the record, and then closes the file.
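
In CL terms, the per-request cycle for each file is roughly this (names assumed; the override is presumably issued from the program via QCMDEXC):

    OVRDBF FILE(FILEA) TOFILE(APPLIB/FILE01)
    /* ... open FILEA, update/write the record, close FILEA ... */
    DLTOVR FILE(FILEA)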

The system was built for 'flexibility' years back, when they started with 2
or 3 files. It is flexible in the sense that it did not need a code change
as the files being updated grew from 3 to 15.

This is a very 'busy' program, meaning it is constantly receiving data thru
the data queue. Actually, there are multiple copies of this program running
simultaneously, receiving data from the same data queue, and all of them
are pretty busy.

Now they are finding that all these open/closes are seriously impacting the
CPU usage.

An architectural change is obviously indicated, but that would take time to
implement.

As usual, they are looking for a 'short term', quick-and-dirty fix (to be
implemented in 2 weeks) to eliminate these opens and closes.

The only thing we could think of was to define the current 15 files
separately in the program and then update them as needed, losing all
flexibility.
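
A sketch of what that would look like (only two of the 15 shown; 'RECFMT' stands in for the shared record format name, which would have to be renamed on each file declaration to avoid a conflict):

    dcl-f FILE01 usage(*update:*output) usropn rename(RECFMT:FMT01);
    dcl-f FILE02 usage(*update:*output) usropn rename(RECFMT:FMT02);
    // ... FILE03 through FILE15 ...

    dcl-s target char(10);

    select;                        // route each request to its file
      when target = 'FILE01';
        write FMT01;               // record fields populated earlier
      when target = 'FILE02';
        write FMT02;
    endsl;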

Any other ideas or suggestions would be greatly appreciated.

TIA

