


I didn't get a chance to respond to this earlier, so let me drop in a couple
of thoughts.

First, you can directly call an RPG program on an AS/400 via the toolbox.
The ProgramCall class is very good for this.  In fact, even when I use data
queues, I still wrap the data queue calls inside a simple RPG call
interface; it allows me to encapsulate the mechanics of the communications
and change them as need be without having to monkey with the Java code too
terribly much.  It allows me to hide and centralize things like library
names and session ID generation.
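A minimal sketch of that wrapper idea in Java, with every name hypothetical. The stub implementation here just fabricates a reply so the example is self-contained; a real version would sit behind the same interface and use the toolbox's ProgramCall (or DataQueue) classes, which is exactly what lets you hide the library names and session ID generation from callers:

```java
// Hypothetical request interface: callers see neither the library name,
// the data queue mechanics, nor how session IDs are generated.
interface RequestService {
    String submit(String request);
}

// Stand-in implementation for illustration only. A real version would
// build ProgramParameter arrays and invoke ProgramCall.run() against an
// RPG program such as /QSYS.LIB/APPLIB.LIB/REQSRV.PGM (names assumed).
class StubRequestService implements RequestService {
    private static long sessionCounter = 0;

    // Centralized session ID generation, hidden from callers.
    static synchronized String nextSessionId() {
        return String.format("S%08d", ++sessionCounter);
    }

    public String submit(String request) {
        String sessionId = nextSessionId();
        // Real code: write sessionId + request to the request data queue,
        // then wait on a reply queue keyed by sessionId.
        return sessionId + ":OK:" + request;
    }
}

public class WrapperDemo {
    public static void main(String[] args) {
        RequestService svc = new StubRequestService();
        System.out.println(svc.submit("CUSTINQ 00123"));
    }
}
```

Because the Java side only ever talks to the interface, you can swap data queues for program calls (or anything else) without touching the callers.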

As to the issue of asynchronous process death, it depends on the
application.  There are a number of possible options, all of which involve a
bit of effort.  If the request servicer is slaved to the requester (that is,
it processes a single request and then goes away), you can do some
relatively simple registration gymnastics: when you start the conversation,
the first thing the submitted job does is send a message identifying itself.
The requester stores that information.  If the requester subsequently gets a
timeout, it can then check the status of the server job.
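Here is a sketch of that handshake, with a BlockingQueue standing in for the reply data queue and the job name invented for the example. Real code would use the toolbox's DataQueue class, and the timeout branch would actually query the registered server job's status:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class RegistrationDemo {

    // The submitted job's first act: send a message identifying itself.
    static void register(BlockingQueue<String> replies, String jobId) {
        replies.offer("HELLO " + jobId);
    }

    // Requester side: read and remember who is serving us.
    static String readRegistration(BlockingQueue<String> replies) {
        String msg = replies.poll();
        return (msg != null && msg.startsWith("HELLO "))
                ? msg.substring("HELLO ".length()) : null;
    }

    // Wait for a reply; on timeout, fall back to a job status check.
    static String awaitReply(BlockingQueue<String> replies,
                             String jobId, long ms) {
        try {
            String reply = replies.poll(ms, TimeUnit.MILLISECONDS);
            return (reply != null) ? reply : "TIMEOUT: check job " + jobId;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "INTERRUPTED";
        }
    }

    public static void main(String[] args) {
        BlockingQueue<String> replies = new LinkedBlockingQueue<>();
        register(replies, "123456/QUSER/REQSRV");  // job name is hypothetical
        String jobId = readRegistration(replies);
        // No reply ever arrives in this model, so the timeout path fires.
        System.out.println(awaitReply(replies, jobId, 50));
    }
}
```

The point of the pattern is simply that a timeout is no longer ambiguous: the requester knows exactly which job to interrogate before deciding whether to wait longer, resubmit, or report a failure.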

A second option in this case is a routing step.  I haven't done much in this
area, but the way I understand it, you can add a routing step to the job
(maybe at the CLASS level?) and even if the job terminates abnormally, the
routing step will get executed.  This would send an abnormal termination
message back to the requester.  As I said, I'm not very versed in this
technique, but I'm looking into it.

For a multiple-request processor (one that sits in batch and waits on a data
queue), you're probably going to want to implement a router and a status
file.  In this case, you design a router that submits the jobs that process
requests.  If a processing job is already active, the router obviously
doesn't submit another.
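The routing decision itself is simple enough to sketch. The "status file" is modeled here as an in-memory Map and the submission as a counter; a real version would use a keyed physical file and SBMJOB, and the status values and job token below are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class RouterDemo {
    // Status file stand-in: one record per request type.
    // "ACTIVE" (assumed value) means a processing job is already out there.
    private final Map<String, String> statusFile = new HashMap<>();
    private int submitCount = 0;

    // Returns the job token the requester can use for later status checks.
    String route(String requestType) {
        String status = statusFile.get(requestType);
        if (!"ACTIVE".equals(status)) {
            submitCount++;                       // stands in for SBMJOB
            statusFile.put(requestType, "ACTIVE");
        }
        return requestType + "-JOB";             // hypothetical job token
    }

    int submissions() { return submitCount; }

    public static void main(String[] args) {
        RouterDemo router = new RouterDemo();
        router.route("CUSTINQ");
        router.route("CUSTINQ");   // already active: no second submit
        System.out.println("submissions=" + router.submissions());
    }
}
```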

In either case, the requester ends up with the server job's identification,
which it can then use to interrogate the status of the job on a timeout.  You
can also
use a termination routing step in this case, but the abnormal end processing
gets more complex.  Do you tell only the requester that was active at the
time of the termination, then submit a new server to process any queued
requests?  Or do you run through the data queue and notify ALL requesters
that the job has terminated abnormally?
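If you go the notify-everyone route, the abnormal-end handler has to sweep whatever is still sitting on the request queue and collect each requester's identity. A model of that sweep, assuming a "requesterId|payload" entry format (the format and names are inventions for the example; real code would read the data queue and send each requester a message):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class NotifyAllDemo {
    // On abnormal server end, drain the request queue and collect every
    // waiting requester so each can be told the job died.
    static List<String> drainRequesters(Queue<String> requestQueue) {
        List<String> notified = new ArrayList<>();
        String req;
        while ((req = requestQueue.poll()) != null) {
            // entry format assumed: "requesterId|payload"
            notified.add(req.split("\\|", 2)[0]);
        }
        return notified;
    }

    public static void main(String[] args) {
        Queue<String> q = new ArrayDeque<>(
                List.of("JOB1|inquiry", "JOB2|update"));
        System.out.println(drainRequesters(q));   // → [JOB1, JOB2]
    }
}
```

Note the trade-off this makes concrete: notifying everyone empties the queue, so those requests must be resubmitted, whereas notifying only the active requester leaves the queue intact for a replacement server.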

And of course, the issues go up in complexity as your requests get more
complex:

1. Single record inquiry (LEAST COMPLEX)
2. Multi-record inquiry
3. Single record update
4. Multi-record update (MOST COMPLEX)

As you get more complex, you start running into the traditional problems of
rollback and restart/recovery (can you say CPR?).

Joe

> -----Original Message-----
> From: David Gibbs
>
> I'm curious if anyone knows of a "Recommended" way of using data
> queues for
> inter system communication?
>
> Also, does anyone know of a way to handle "Process Death" on
> either side of
> the connection?  If I were using TCP, I would just look for a
> closed socket.
> With a dataqueue, the potential exists for the remote process
> (the one that
> feeds the data queue) to die and the local process (the one that eats the
> data queue) to just wait forever.





