Hello, Charles:

I suggest that in OS/400 or i5/OS, perhaps the "best simple" approach to solving this kind of problem is to create a single request data queue for incoming requests, and then start one or more "server jobs" that "listen" on that data queue (by calling the QRCVDTAQ API). This is one of the nicest things about data queues: you can add as many "listener" jobs as you want or need, so the design "scales up" very nicely. As more requests come in, the next available server job pulls the next request from the queue, so you don't need to write any fancy or sophisticated code to submit additional jobs or create threads within the same job, etc.
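
Just to illustrate, the heart of such a server job might look something like this in free-form RPG. (The queue name REQUESTQ in library MYLIB and the 512-byte message size are only assumptions for the example, not anything the API requires.)

**free
// Server job main loop: pull the next request from the shared data queue.
// A negative wait time tells QRCVDTAQ to wait forever for a message.

dcl-pr QRcvDtaQ extpgm('QRCVDTAQ');
  dtaqName char(10) const;     // data queue name
  dtaqLib  char(10) const;     // data queue library
  dataLen  packed(5:0);        // length of data received
  data     char(512);          // receiver variable
  waitTime packed(5:0) const;  // wait time in seconds (<0 = wait forever)
end-pr;

dcl-s msg    char(512);
dcl-s msgLen packed(5:0);

dow '1';                       // loop until the job is ended
  QRcvDtaQ('REQUESTQ' : 'MYLIB' : msgLen : msg : -1);
  // ... process the request in msg, then send the result back to the
  //     requester's own "results" queue (described below) ...
enddo;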

To go along with this, each "requester" job can create its own "results" data queue and pass the qualified name (20 characters: object name and library name) of that queue as part of each request message, so that the server job knows where to send the results. Then, as soon as the request has been placed on the queue (by calling the QSNDDTAQ API), the requester "listens" or waits on its own "results" queue by calling QRCVDTAQ. (NOTE that this data queue cannot be in QTEMP.)
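
Again purely as a sketch, the requester side could look like this. Here I am assuming the results queue was created ahead of time (for example, CRTDTAQ DTAQ(MYLIB/RESULTQ) MAXLEN(512)) and that the first 20 bytes of each request message carry its qualified name; the names and layout are illustrative only:

**free
// Requester sketch: send a request carrying the qualified name of this
// job's "results" queue, then wait on that queue for the reply.

dcl-pr QSndDtaQ extpgm('QSNDDTAQ');
  dtaqName char(10) const;
  dtaqLib  char(10) const;
  dataLen  packed(5:0) const;
  data     char(512) const;
end-pr;

dcl-pr QRcvDtaQ extpgm('QRCVDTAQ');
  dtaqName char(10) const;
  dtaqLib  char(10) const;
  dataLen  packed(5:0);
  data     char(512);
  waitTime packed(5:0) const;
end-pr;

dcl-ds request qualified;
  replyQ    char(10) inz('RESULTQ');   // results queue object name
  replyQLib char(10) inz('MYLIB');     // its library (never QTEMP)
  payload   char(492);                 // application-specific request data
end-ds;

dcl-s reply    char(512);
dcl-s replyLen packed(5:0);

request.payload = 'some request data';            // example payload
QSndDtaQ('REQUESTQ' : 'MYLIB' : 512 : request);   // enqueue the request
QRcvDtaQ(request.replyQ : request.replyQLib :
         replyLen : reply : 60);                  // wait up to 60 seconds

The 60-second wait is just an example; if it expires with no reply, QRCVDTAQ returns a data length of zero, which the requester can check to detect a timeout.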

This simple approach performs rather well and has been used since the days of CPF on the System/38 to build "server jobs" that service hundreds of requests from many users.

Regards,

Mark S. Waterbury

> Charles St-Laurent wrote:
Hi!
The situation is:
We have a big process that is used all day long by our sellers. This process
uses many queries within SQLRPGLE programs. Each time a seller launches this
process, it runs as a batch job. All the programs within this process
are service programs running in the QILE activation group. And there is a
night job that uses this process too.
Even though we run this process in QILE, when the batch job ends, all the
resources that have been allocated in the activation group are reclaimed,
because the activation group is tied to the job. What I want to do is
keep these database files open between calls from other users / jobs.
I know that it is possible to start a never-ending program, a kind of
server, that waits for requests. Once the program receives a request, it
processes the request and returns the result to the requester. The easy way
to do that is a data queue server. But a data queue server is a blocking
server: it can process only one request at a time. Another way is a TCP
server, with one parent process and many child processes. The parent
process waits for a request; once it receives one, it dispatches the
request to one of its child processes, and the child process returns
the result. Because the server is a never-ending job, all the resources
allocated in QILE will stay open until I stop the job, including ODPs.
Is there a simpler way to do what I want: sharing ODPs between calls of
my process, considering that each call can come from different
users / jobs?

Charles St-Laurent
Consultant Berco Technologies
Industrielle Alliance, Auto and Home Insurance
Telephone: 418 650-4600, ext. 3216
Toll-free: 1 800 463-4382
Fax: 418 650-4612
Toll-free: 1 877 650-4612
Email: charles.st-laurent@xxxxxxx
www.inalco.com/assurancegenerale
