I normally just bin appends from you but this is such rubbish I can't stay silent.

On 04/07/2006, at 12:46 PM, Steve Richter wrote:

You can make a strong case that data queues have no place in
application systems. For programmers they are the system level
equivalent of the computer language GOTO.

Complete rubbish. Data queues are a perfectly acceptable application-level tool when used appropriately.

Even though I don't subscribe to the use of GOTO, it's not the presence of a GOTO that's the issue--it's the lack of a COMEFROM. When I strike a GOTO in code I know exactly where it's going to end up. However, after the jump I often don't know how I got there.

They don't work well with the debugger.

Data queues have got nothing to do with the debugger. You might as well say the same thing about data areas or database files. They can't be debugged either, therefore they "don't work well". Utter nonsense!

You can debug the program sending the data and easily see what was sent to the queue. You can debug the program receiving the data and easily see what was received from the queue. You can easily see the contents of the queue by either writing a utility to peek at the queue using the QMHRDQM API or by wrapping a suitable CL program around the DMPOBJ command.

They don't throw exceptions, so errors go undetected.

Of course data queues don't throw exceptions--they're objects. But the send and receive APIs do, so errors can be detected. If you mean that it is possible to receive too little data off the queue then, yes, that's possible, but that is no different from screwing up the parameter interface between programs. If the caller passes 100 bytes and the callee receives 10 bytes, that's not "detected" either. Nor is the reverse. Similar things happen with SBMJOB and parameter passing, as any casual search of the archives will show. None of these situations is the fault of the system, or the data queue, or the program, or the compiler, but rather lies squarely within the purview of the programmer using the tool.
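To see why a length mismatch goes "undetected", here is a minimal Python sketch--a plain in-memory queue standing in for a data queue, nothing IBM i-specific--where the receiver declares a buffer shorter than the entry that was sent. Nothing raises; the extra bytes are simply lost:

```python
import queue

# Stand-in for a data queue: entries are raw byte strings.
dtaq = queue.Queue()

# Sender puts a 100-byte entry on the queue.
dtaq.put(b"x" * 100)

# Receiver takes the entry but only looks at the first 10 bytes,
# as if it had declared a 10-byte receive buffer. No error is
# raised; the remaining 90 bytes are silently ignored.
entry = dtaq.get()
received = entry[:10]

print(len(entry), len(received))  # prints: 100 10
```

Exactly the same silent truncation happens if a caller passes a 100-byte parameter and the callee declares it as 10 bytes--the tool did nothing wrong, the programmer did.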

If I cut myself while using a chisel is that the fault of the chisel, the timber, the mallet? Or is it really my own stupidity and carelessness at fault?

Most of the Unix APIs don't throw exceptions either. They set return codes--usually with insufficient description of the real problem. Does that mean you would also say using stream files is bad practice because the APIs involved don't throw exceptions?
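For the record, the Unix style is a return code plus errno, and the terse message that goes with it is often all you get. A small Python sketch (Python wraps the return code in OSError, but the underlying errno is the same information):

```python
import errno
import os

# Open a file that does not exist. The underlying open(2) call fails
# with a -1 return code and sets errno; Python surfaces that as OSError.
try:
    os.open("/no/such/dir/file.txt", os.O_RDONLY)
except OSError as e:
    code = e.errno
    # Prints the symbolic name (ENOENT) and the system's terse message.
    print(errno.errorcode[code], os.strerror(code))
```

"No such file or directory" tells you nothing about *which* path component was missing--that is the "insufficient description" in practice.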

They also violate the rule of isolating a user's job from the shaky
code being run by another job.

How do you arrive at that nonsense? Although data queues can be used within a job, they are far more often used to communicate between jobs, so the so-called "shaky" code in one job cannot affect another.

If you mean that one job could write crap to the data queue and upset the job receiving that data then true but that's no different from writing crap to a database file used by multiple jobs. The same can happen with data areas, message queues, user spaces, user queues, or any of the other shareable objects on the system. It can even happen when passing parameters between programs.

Security wise consider this: how wise
would it be to implement a personnel or accounts payable system using
data queues? Where payment to a vendor was triggered by sending an
entry to a data queue?

It makes perfect sense. The data queue is the means to trigger an event. The event is to make a payment. The queue is not a replacement for persistent storage such as a database file.

Anyone using queues should be aware that they are volatile, and should build in an auditing mechanism and restart function that ensures processing cannot be missed. Doing so is not particularly difficult and should be part of any design that uses queues. Anyone who uses the queue as the sole container for the data--even for just a short time--is a fool. The queue should contain either a copy of the relevant data (to avoid I/O in the background job), or just enough information to enable the background task to locate the relevant data from a persistent medium such as a database file.
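The pattern is simple enough to sketch. A hypothetical Python illustration (an in-memory dict stands in for the database file, `queue.Queue` for the data queue; all names are made up): the queue carries only the key, the database row carries the payload and a processed flag, and the restart function re-queues any row the lost queue never delivered:

```python
import queue

# Stand-in for a database file: key -> record with payload and audit flag.
db = {
    1: {"payload": "pay vendor A", "done": False},
    2: {"payload": "pay vendor B", "done": False},
}

work_q = queue.Queue()

def enqueue(key):
    """Send only the key; the persistent record stays in the database."""
    work_q.put(key)

def process_one():
    """Background job: locate the data from persistent storage and work on it."""
    key = work_q.get()
    record = db[key]
    # ... do the real work with record["payload"] here ...
    record["done"] = True  # audit trail: mark the row processed

def restart():
    """After a crash (queue contents lost), re-queue every unfinished row."""
    for key, record in db.items():
        if not record["done"]:
            enqueue(key)

# Normal flow: key 1 is sent and processed.
enqueue(1)
process_one()

# Simulate losing the queue: the entry for key 2 was never delivered.
# The restart scan finds it still pending and re-queues it.
restart()
process_one()
```

The queue is disposable; the database row is the record of truth. Lose the queue and the restart scan recovers everything.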

None of your objections stands up to even cursory scrutiny. Data queues and user queues are the fastest and most resource-effective way of communicating between jobs. A process that uses queues will significantly out-perform a similar process using SBMJOB, message queues, EOF delay, or any of the other alternatives. They can even improve interactive response time by shifting appropriate work to the background--printing, faxing, e-mailing, EDI, PDF creation, etc.--none of which needs, or even should, be done in an interactive job.
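The interactive-response argument can be sketched in a few lines. A hedged Python stand-in (a thread and `queue.Queue` playing the part of a background job fed by a data queue; the names are illustrative only): the "interactive" side returns to the user the moment the entry is put on the queue, while the slow work drains in the background:

```python
import queue
import threading

# Stand-in for a data queue feeding a background print/fax/e-mail job.
print_q = queue.Queue()
printed = []

def print_worker():
    """Background job: drain the queue until a None sentinel arrives."""
    while True:
        doc = print_q.get()
        if doc is None:
            break
        printed.append(doc)  # stand-in for the slow printing work

worker = threading.Thread(target=print_worker)
worker.start()

# The "interactive" job hands off the work and returns to the user
# immediately; it never waits for the printing to finish.
for doc in ["invoice-1", "invoice-2"]:
    print_q.put(doc)

# Shut down cleanly for the sake of the example.
print_q.put(None)
worker.join()
```

Compare that with spawning a job per request via SBMJOB: the queue-fed worker pays the job start-up cost once, not per entry, which is where the performance difference comes from.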

Your comments on this topic have got me so annoyed I might even start taking you to task over the P5 nonsense you've been spouting recently.

Regards,
Simon Coulter.
--------------------------------------------------------------------
   FlyByNight Software         AS/400 Technical Specialists

   http://www.flybynight.com.au/
   Phone: +61 3 9419 0175   Mobile: +61 0411 091 400        /"\
   Fax:   +61 3 9419 0175                                   \ /
                                                             X
                 ASCII Ribbon campaign against HTML E-Mail  / \
--------------------------------------------------------------------


