On 7/4/06, Simon Coulter <shc@xxxxxxxxxxxxxxxxx> wrote:
I normally just bin appends from you but this is such rubbish I can't
stay silent.

On 04/07/2006, at 12:46 PM, Steve Richter wrote:

> You can make a strong case that data queues have no place in
> application systems. For programmers they are the system level
> equivalent of the computer language GOTO.

Complete rubbish. Data queues are a perfectly acceptable
application-level tool when used appropriately.

Even though I don't subscribe to the use of GOTO, it's not the presence
of a GOTO that's the issue--it's the lack of a COMEFROM. When I strike
a GOTO in code I know exactly where it's going to end up. However,
after the jump I often don't know how I got there.

Suit yourself. AS/400 programmers are the last remaining users of the
GOTO, in that prior to V5R3 you had to use it in CL.
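To illustrate: before the V5R3 CL enhancements (DOWHILE, DOUNTIL,
DOFOR, ITERATE/LEAVE) the only way to loop in CL was a label and a
GOTO. A trivial, made-up example:

             PGM
             DCL        VAR(&COUNT) TYPE(*DEC) LEN(3 0) VALUE(0)
 LOOP:       CHGVAR     VAR(&COUNT) VALUE(&COUNT + 1)
             /* real work would go here */
             IF         COND(&COUNT *LT 10) THEN(GOTO CMDLBL(LOOP))
             ENDPGM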


> They don't work well with the debugger.

Data queues have got nothing to do with the debugger. You might as well
say the same thing about data areas or database files. They can't be
debugged either, and therefore "don't work well". Utter nonsense!

You can debug the program sending the data and easily see what was sent
to the queue. You can debug the program receiving the data and easily
see what was received from the queue. You can easily see the contents
of the queue either by writing a utility to peek at the queue using the
QMHRDQM API or by wrapping a suitable CL program around the DMPOBJ
command.
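For example, a minimal CL wrapper around DMPOBJ might look something
like this (the queue and library are passed in; the names are only
placeholders):

             PGM        PARM(&DTAQ &LIB)
             DCL        VAR(&DTAQ) TYPE(*CHAR) LEN(10)
             DCL        VAR(&LIB)  TYPE(*CHAR) LEN(10)
             /* DMPOBJ spools the queue description and its entries */
             DMPOBJ     OBJ(&LIB/&DTAQ) OBJTYPE(*DTAQ)
             MONMSG     MSGID(CPF0000) +
                          EXEC(SNDPGMMSG MSG('Dump of data queue failed'))
             ENDPGM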

Applications send to data queues to trigger the execution of code that
processes the data queue entry. Where there is code there is the need
to debug that code. Here is the scenario: you are using the debugger
to step through your code. You come to a program call and decide
whether or not to step into the program. When the code gets to a
QSNDDTAQ you might want to step into the code that runs to process the
dtaq entry. But for a lot of practical reasons you can't.


> They don't throw exceptions, so errors go undetected.

Of course data queues don't throw exceptions--they're objects. But the
send and receive APIs do, so errors can be detected. If you mean that
it is possible to receive too little data off the queue then, yes,
that's possible, but that is no different from screwing up the
parameter interface between programs. If the caller passes 100 bytes
and the callee receives 10 bytes that's not "detected" either. Nor is
the reverse.
Similar things happen with SBMJOB and parameter passing as any casual
search of the archives will show. None of these situations is the fault
of the system, or the data queue, or the program, or the compiler, but
rather lies squarely within the purview of the programmer using the
tool.
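To make the point concrete: the basic QRCVDTAQ interface reports how
much data actually arrived, so a short entry can at least be detected.
A rough sketch in CL (queue name, library, and sizes are invented for
the example):

             PGM
             DCL        VAR(&QNAME) TYPE(*CHAR) LEN(10) VALUE('ORDERQ')
             DCL        VAR(&QLIB)  TYPE(*CHAR) LEN(10) VALUE('QGPL')
             DCL        VAR(&LEN)   TYPE(*DEC)  LEN(5 0) /* length received */
             DCL        VAR(&DATA)  TYPE(*CHAR) LEN(100) /* expect 100 bytes */
             DCL        VAR(&WAIT)  TYPE(*DEC)  LEN(5 0) VALUE(60)
             CALL       PGM(QRCVDTAQ) PARM(&QNAME &QLIB &LEN &DATA &WAIT)
             /* &LEN is 0 if the wait expired; anything else short is suspect */
             IF         COND(&LEN *NE 100) THEN(DO)
                SNDPGMMSG  MSG('Data queue entry is not the expected length')
             ENDDO
             ENDPGM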

Classify the dtaq however you want. In my thinking they are
functionally being used in place of a program call. Sending a
correctly formatted dtaq entry guards against only one source of
error - it is the equivalent of calling a program with the correct
arguments. What does the data queue application do when the customer
number is invalid? You might print an error report, but then you have
to think a lot about how to get that error report back to the user who
sent the entry. Much better to be able to send an exception back to
the code that put the entry on the queue - but you can't. Many times
the programmer ends up punting and ignoring the error.

If I cut myself while using a chisel is that the fault of the chisel,
the timber, the mallet? Or is it really my own stupidity and
carelessness at fault?

It's your mule-like insistence on using the wrong tool for the job.

Most of the Unix APIs don't throw exceptions either. They set return
codes--usually with insufficient description of the real problem. Does
that mean you would also say using stream files is a bad practice
because the APIs involved don't throw exceptions?

I would say stay away from interfaces and OSes that don't provide
integrated exception-handling support. Why use Unix when there is
.NET?


> They also violate the rule of isolating a user's job from the shaky
> code being run by another job.

How do you arrive at that nonsense? Although data queues can be used
within a job, they are far more often used to communicate between
jobs, so the so-called "shaky" code in one job cannot affect another.

If you mean that one job could write crap to the data queue and upset
the job receiving that data then, yes, that's true, but that's no
different from writing crap to a database file used by multiple jobs.
The same can happen with data areas, message queues, user spaces, user
queues, or any of the other shareable objects on the system. It can
even happen when passing parameters between programs.

In practice, data-queue-driven server jobs crash because of invalid
data queue entries. They also fail to start after the system backup.
Sometimes they end because the entry that tells the server job to end
has found its way onto the queue, or was there when the server job
started. None of these points of failure exists if the application
simply calls a program to perform the transaction.
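The typical server loop I am talking about looks something like this
(the names are made up; the shutdown entry and the blanket error
monitor are exactly the parts that tend to go wrong):

             PGM
             DCL        VAR(&QNAME) TYPE(*CHAR) LEN(10) VALUE('SRVQ')
             DCL        VAR(&QLIB)  TYPE(*CHAR) LEN(10) VALUE('QGPL')
             DCL        VAR(&LEN)   TYPE(*DEC)  LEN(5 0)
             DCL        VAR(&ENTRY) TYPE(*CHAR) LEN(100)
             DCL        VAR(&WAIT)  TYPE(*DEC)  LEN(5 0) VALUE(-1) /* wait forever */
             DCL        VAR(&DONE)  TYPE(*LGL)  VALUE('0')
             DOWHILE    COND(*NOT &DONE)
                CALL       PGM(QRCVDTAQ) PARM(&QNAME &QLIB &LEN &ENTRY &WAIT)
                IF         COND(%SST(&ENTRY 1 8) *EQ '*SHUTDWN') +
                             THEN(CHGVAR VAR(&DONE) VALUE('1'))
                ELSE       CMD(DO)
                   CALL       PGM(PROCESS) PARM(&ENTRY) /* stand-in worker pgm */
                   MONMSG     MSGID(CPF0000) +
                                EXEC(SNDPGMMSG MSG('Entry failed - skipped'))
                ENDDO
             ENDDO
             ENDPGM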


> Security-wise, consider this: how wise
> would it be to implement a personnel or accounts payable system using
> data queues? Where payment to a vendor was triggered by sending an
> entry to a data queue?

It makes perfect sense. The data queue is the means to trigger an
event. The event is to make a payment. The queue is not a replacement
for persistent storage such as a database file.

Anyone using queues should be aware that they are volatile, and should
build in an auditing mechanism and restart function that ensures
processing cannot be missed. Doing so is not particularly difficult
and should be part of
any design that uses queues. Anyone who uses the queue as the sole
container for the data--even for just a short time--is a fool. The
queue should contain either a copy of the relevant data (to avoid I/O
in the background job), or just enough information to enable the
background task to locate the relevant data from a persistent medium
such as a database file.
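In CL the "locator" style is as simple as putting a key on the queue
and letting the background job re-read the real data from the
database. A sketch, with the queue and field names invented for the
example:

             PGM        PARM(&INVNBR) /* key of the payment record */
             DCL        VAR(&INVNBR) TYPE(*CHAR) LEN(10)
             DCL        VAR(&QNAME)  TYPE(*CHAR) LEN(10) VALUE('PAYQ')
             DCL        VAR(&QLIB)   TYPE(*CHAR) LEN(10) VALUE('QGPL')
             DCL        VAR(&LEN)    TYPE(*DEC)  LEN(5 0) VALUE(10)
             /* the entry carries only the key; the payment data stays in the file */
             CALL       PGM(QSNDDTAQ) PARM(&QNAME &QLIB &LEN &INVNBR)
             ENDPGM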

The background job transaction does not run under the authority of the
user who initiated the transaction. There are ways to get around this
but it takes work and does not solve all the problems. A program call
is much better.


None of your objections stands up to even cursory scrutiny. Data
queues and user queues are the fastest and most resource-effective way
of communicating between jobs. A process that uses queues will
significantly out-perform a similar process using SBMJOB, message
queues, EOF delay, or any of the other alternatives. They can even
improve interactive response time by shifting appropriate work to the
background, such as printing, faxing, e-mailing, EDI, PDF creation,
etc., none of which needs, or even should, be done in an interactive
job.

Your comments on this topic have got me so annoyed I might even start
taking you to task over the P5 nonsense you've been spouting recently.

Do that. The two topics are actually linked. Background jobs are used
on the i5 because they are an efficient use of limited CPW. IBM prices
the system per CPW instead of per user. This encourages - even forces -
shops to write overly efficient applications with the minimum of
features and accessibility. RPG is very much like C in that very little
CPW is needed to run an RPG program. Modern language features like
virtual functions, OOP, and try ... finally blocks are judged too heavy
a use of CPW to be practical on a 1000-CPW system that is used by 50
users. The p5 has buckets of market-priced CPW and it has user-based
pricing.

-Steve
