On 06/07/2006, at 3:02 AM, Steve Richter wrote:


They don't work well with the debugger.

Data queues have got nothing to do with the debugger. You might as well
say the same thing about data areas or database files. They can't be
"debugged" either, therefore they "don't work well". Utter nonsense!

You can debug the program sending the data and easily see what was sent
to the queue. You can debug the program receiving the data and easily
see what was received from the queue. You can easily see the contents
of the queue by either writing a utility to peek at the queue using the
QMHRDQM API or by wrapping a suitable CL program around the DMPOBJ
command.
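A minimal CL wrapper of that sort might look like this (a sketch only;
the program and parameter names are illustrative):

   PGM        PARM(&QNAME &QLIB)
      DCL        VAR(&QNAME) TYPE(*CHAR) LEN(10)
      DCL        VAR(&QLIB)  TYPE(*CHAR) LEN(10)
      /* DMPOBJ writes the queue contents to a spooled file */
      DMPOBJ     OBJ(&QLIB/&QNAME) OBJTYPE(*DTAQ)
   ENDPGM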

Applications send to data queues to trigger the execution of code that
processes the data queue entry. Where there is code there is the need
to debug that code. Here is the scenario: you are using the debugger
to step through your code. You come to a program call and decide
whether or not to step into the program. When the code gets to a
QSNDDTAQ you might want to step into the code that runs to process the
dtaq entry. But for a lot of practical reasons you can't.

If you want to debug the receiver then you shouldn't be starting in the sender.

It's easy enough during testing to debug the sender and ensure the correct output is sent to the queue. Then debug the receiver and ensure the correct input is received from the queue. Once the code gets to production there should be no need for debugging.

The fact that you want to start debugging in the sender and "magically" step into the receiver, which in all probability is in another job, is just laziness. It might not fit the way YOU want to work but it doesn't negate the use of data queues as a valid application level tool.



They don't throw exceptions so errors go undetected.

Of course data queues don't throw exceptions--they're objects. But the
send and receive APIs do, so errors can be detected. If you mean that it
is possible to receive too little data off the queue then, yes, that's
possible but that is no different from screwing up the parameter
interface between programs. If the caller passes 100 bytes and the
callee receives 10 bytes that's not "detected" either. Nor is the reverse.
Similar things happen with SBMJOB and parameter passing as any casual
search of the archives will show. None of these situations is the fault
of the system, or the data queue, or the program, or the compiler, but
rather lies squarely within the purview of the programmer using the
tool.
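To make that concrete: QRCVDTAQ returns the actual length of the entry
it removed, so a receiver can at least detect a short or oversized
entry instead of silently processing rubbish. A minimal sketch (the
queue and library names are illustrative):

   PGM
      DCL        VAR(&QNAME) TYPE(*CHAR) LEN(10) VALUE('ORDERQ')
      DCL        VAR(&QLIB)  TYPE(*CHAR) LEN(10) VALUE('MYLIB')
      DCL        VAR(&LEN)   TYPE(*DEC)  LEN(5 0)
      DCL        VAR(&DATA)  TYPE(*CHAR) LEN(100)
      DCL        VAR(&WAIT)  TYPE(*DEC)  LEN(5 0) VALUE(-1) /* wait forever */
      CALL       PGM(QRCVDTAQ) PARM(&QNAME &QLIB &LEN &DATA &WAIT)
      /* &LEN now holds the length of the entry actually received */
      IF         COND(&LEN *NE 100) THEN(DO)
         SNDPGMMSG  MSG('Data queue entry has unexpected length.') +
                      MSGTYPE(*DIAG)
      ENDDO
   ENDPGM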

Classify the dtaq however you want. In my thinking they are
functionally being used in place of a program call.

Then your thinking is flawed.

Sending a correctly formatted dtaq entry addresses just one source of
error. That is the equivalent of calling a program with the correct
arguments. What does the data queue application do when the customer
number is invalid?

It shouldn't get one. That sort of validation should be performed before the entry gets on the queue.

You might print an error report but then you have to think a
lot about how to get that error report back to the user who sent the
entry. Much better to be able to send an exception back to the code
that put the entry on the queue - but you can't.

Again, flawed thinking. Validation errors probably should be detected in the sending program. Processing errors might occur in the receiver and those will have to be handled but most, if not all, of these can be avoided by passing in good data.
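As a sketch of that division of labour (CHKCUST is a hypothetical
validation program; the queue and library names are illustrative):

   PGM        PARM(&CUSTNO)
      DCL        VAR(&CUSTNO) TYPE(*CHAR) LEN(7)
      DCL        VAR(&VALID)  TYPE(*CHAR) LEN(1)
      DCL        VAR(&LEN)    TYPE(*DEC)  LEN(5 0) VALUE(7)
      CALL       PGM(CHKCUST) PARM(&CUSTNO &VALID) /* validate first */
      IF         COND(&VALID *NE '1') THEN(DO)
         SNDPGMMSG  MSG('Invalid customer number; entry not queued.') +
                      MSGTYPE(*DIAG)
         RETURN
      ENDDO
      /* only validated data ever reaches the queue */
      CALL       PGM(QSNDDTAQ) PARM('ORDERQ    ' 'MYLIB     ' &LEN &CUSTNO)
   ENDPGM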

Many times the programmer ends up punting and ignores the error.

Now we're back to laziness again, or sloppiness. Error handling is a fundamental aspect of programming.


If I cut myself while using a chisel is that the fault of the chisel,
the timber, the mallet? Or is it really my own stupidity and
carelessness at fault?

It's your mule-like insistence on using the wrong tool for the job.

If I'm using the chisel in place of a saw then you're correct. However, if I'm cutting a mortise then the chisel is the perfect tool (unless I want to lose all my skills and use a router instead).


Most of the Unix APIs don't throw exceptions either. They set return
codes--usually with insufficient description of the real problem. Does
that mean you would also say using stream files is a bad practice
because the APIs involved don't throw exceptions?

I would say stay away from interfaces and OSes that don't provide
integrated exception handling support. Why use Unix when there is
.NET?

Then what are you doing here? Go join a .NET mailing list.

Besides, OS/400 DOES provide integrated exception handling support. That's the whole purpose behind the Message Handler component. CPF was one of the first commercial operating systems to offer the idea of an exception-based message handler and OS/400 has inherited that. That's about 30 years of exception handling.

The Unix interfaces on OS/400 have to behave like Unix interfaces on Unix systems which is why they use return codes instead of the more elegant exception model used by the rest of OS/400.
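The difference shows up in two lines of CL: a failure arrives as an
escape message that must be explicitly monitored, not as a return code
that can be silently ignored. For example:

   DLTF       FILE(MYLIB/WORKFILE)
   MONMSG     MSGID(CPF2105)  /* object not found - acceptable here */

Leave the MONMSG off and the escape message halts the program; there is
no way to "forget" to check the error as there is with a return code.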



They also violate the rule of isolating a user's job from the shaky
code being run by another job.

How do you arrive at that nonsense? Although data queues can be used
within a job, they are far more often used to communicate between jobs,
therefore the so-called "shaky" code in one job cannot affect another.

If you mean that one job could write crap to the data queue and upset
the job receiving that data then true but that's no different from
writing crap to a database file used by multiple jobs. The same can
happen with data areas, message queues, user spaces, user queues, or
any of the other shareable objects on the system. It can even happen
when passing parameters between programs.

In practice data queue driven server jobs crash because of invalid
data queue entries.

Only poorly written server jobs crash because of invalid data. Properly written ones log the error and continue.
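The skeleton of such a server is hardly rocket science. A minimal
sketch (PRCENTRY is a hypothetical worker program; the other names are
illustrative):

   PGM
      DCL        VAR(&QNAME) TYPE(*CHAR) LEN(10) VALUE('SRVQ')
      DCL        VAR(&QLIB)  TYPE(*CHAR) LEN(10) VALUE('MYLIB')
      DCL        VAR(&LEN)   TYPE(*DEC)  LEN(5 0)
      DCL        VAR(&DATA)  TYPE(*CHAR) LEN(100)
      DCL        VAR(&WAIT)  TYPE(*DEC)  LEN(5 0) VALUE(-1)
 LOOP:
      CALL       PGM(QRCVDTAQ) PARM(&QNAME &QLIB &LEN &DATA &WAIT)
      IF         COND(%SST(&DATA 1 8) *EQ '*SHUTDWN') THEN(RETURN)
      CALL       PGM(PRCENTRY) PARM(&DATA)
      MONMSG     MSGID(CPF0000 MCH0000) EXEC(DO) /* log and carry on */
         SNDPGMMSG  MSG('Entry failed; logged and skipped.') MSGTYPE(*DIAG)
      ENDDO
      GOTO       CMDLBL(LOOP)
   ENDPGM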

Properly written applications don't send invalid data to the queue. They validate beforehand and reject invalid data while they have a user who can correct it. If the sender is also a batch process then the sender logs the error for later recovery.

They also fail to start after the system backup.

Only in bad implementations. It's trivial to organise a restart facility for the server jobs and a monitor job to keep an eye on things if necessary.
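For instance, the system startup program (the one named in the
QSTRUPPGM system value) can simply resubmit the server after the
nightly backup or IPL. A sketch with illustrative names:

   SBMJOB     CMD(CALL PGM(MYLIB/DTQSRV)) JOB(DTQSRV) +
                JOBQ(QSYS/QSYSNOMAX)
   MONMSG     MSGID(CPF0000)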

Sometimes they end because the entry that tells the server job to end
has found its way onto the queue or was there when the server job
started.

Again, this indicates poor application design.

All of these points of failure don't exist if the application
simply calls a program to perform a transaction.

While that's true as far as it goes, in itself it is no reason to avoid data queues.



Security-wise, consider this: how wise
would it be to implement a personnel or accounts payable system using
data queues? Where payment to a vendor was triggered by sending an
entry to a data queue?

It makes perfect sense. The data queue is the means to trigger an
event. The event is to make a payment. The queue is not a replacement
for persistent storage such as a database file.

Anyone using queues should be aware they are volatile and build in an
auditing mechanism and restart function that ensures processing cannot
be missed. Doing so is not particularly difficult and should be part of
any design that uses queues. Anyone who uses the queue as the sole
container for the data--even for just a short time--is a fool. The
queue should contain either a copy of the relevant data (to avoid I/O
in the background job), or just enough information to enable the
background task to locate the relevant data from a persistent medium
such as a database file.
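A sketch of the second approach, where the entry carries only a request
type and the key needed to find the database row (names are
illustrative):

   PGM        PARM(&ORDNBR)
      DCL        VAR(&ORDNBR) TYPE(*CHAR) LEN(16)
      DCL        VAR(&ENTRY)  TYPE(*CHAR) LEN(26)
      DCL        VAR(&LEN)    TYPE(*DEC)  LEN(5 0) VALUE(26)
      CHGVAR     VAR(%SST(&ENTRY 1 10))  VALUE('*PAYMENT')  /* request type */
      CHGVAR     VAR(%SST(&ENTRY 11 16)) VALUE(&ORDNBR)     /* key to row   */
      /* the database row, not the queue entry, is the system of record */
      CALL       PGM(QSNDDTAQ) PARM('PAYQ      ' 'MYLIB     ' &LEN &ENTRY)
   ENDPGM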

The background job transaction does not run under the authority of the
user who initiated the transaction.  There are ways to get around this
but it is work and does not solve all the problems. Program call is
much better.

This is still an application design issue. It is perfectly possible for the background job to run under the same profile as the initiator, even without resorting to profile swapping or other such techniques.

Everything you've complained about so far simply smacks of poor application design. You're opting for a straightforward program-call interface purely to simplify your thought processes and programming effort. That's fine but don't use your limited ability to implement a proper data queue based solution as the reason that data queues should be avoided.



None of your objections stand up to even cursory scrutiny. Data queues
and user queues are the fastest and most resource-effective way of
communicating between jobs. A process that uses queues will
significantly out-perform a similar process using SBMJOB, message
queues, EOF delay, or any of the other alternatives. They can even
improve interactive response time by shifting appropriate work to the
background, such as printing, faxing, e-mailing, EDI, PDF creation,
etc., none of which needs, or even should, be done in an interactive job.

Your comments on this topic have got me so annoyed I might even start
taking you to task over the P5 nonsense you've been spouting recently.

Do that. The two topics are actually linked. Background jobs are used
on the i5 because they are an efficient use of limited CPW.

No. Background jobs are used because the work they do doesn't belong in the foreground job. This is a standard technique used in Unix systems too, and therefore on P5 systems. Processes spawn or fork other processes to accomplish work.

IBM prices the system per CPW instead of per user.

No. They price according to type of CPW. If your primary argument is that it's unfair to distinguish between interactive CPW and batch CPW, or that interactive CPW shouldn't be so expensive, then you'll get no disagreement from anyone on this list.


This encourages - indeed forces - shops
to write overly efficient applications with the minimum of features
and accessibility.

Rubbish. The reason current OS/400 applications have a minimum of features is three-fold:
1) Lazy programmers who continue to do things the way they've always done them
2) The block mode terminal interface
3) No decent alternative interface that is as easy and cost-effective to develop for

Efficiency is something that should be considered in all applications regardless of language. Efficiency is not achieved by dropping features but rather by implementing those features in an efficient manner using appropriate tools such as data queues.

RPG is very much like C in that very little CPW is
needed to run an RPG program.  Modern language features like virtual
functions, OOP and try ... finally blocks are judged too heavy a use
of CPW to be practical on a 1000 CPW system that is used by 50 users.

You could do this if you have 1000 Interactive CPW or if your users are clients consuming Batch CPW. Virtual functions add no value. OOP is useful and its principles can be applied even in non-OOP languages. Try ... finally can be implemented even in RPG and C using ILE condition handlers and language features.
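Even plain CL can get the flavour of finally-style cleanup, using a
program-level MONMSG to route every unhandled escape message through
the cleanup code. A sketch only (the object names are illustrative;
CEEHDLR-registered condition handlers give the full treatment in RPG
and C):

   PGM
      /* A MONMSG that precedes the first command monitors the whole   */
      /* program; its EXEC may only be a GOTO.                          */
      MONMSG     MSGID(CPF0000 MCH0000) EXEC(GOTO CMDLBL(CLEANUP))
      ALCOBJ     OBJ((MYLIB/CTLFILE *FILE *EXCL))
      /* ... body of the program ... */
 CLEANUP:
      DLCOBJ     OBJ((MYLIB/CTLFILE *FILE *EXCL))
      MONMSG     MSGID(CPF0000)  /* nothing allocated - nothing to release */
   ENDPGM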

However, none of this helps modernise an application. Neither OOP, nor virtual functions, nor try...finally will modernise an application. That will take a better user interface. The only choices you have at present are a thick client or a browser. Thick client programming is a pain and the browser is an awful terminal. No wonder people are sticking with 5250. Even shifting to a p5 won't help here.

If an appropriate user interface is used it doesn't matter whether the back end is written in RPG, C, Java, or .NET.

The p5 has buckets of market priced CPW and it has user based pricing.

p5 systems come in the same range of processing power as i5 systems. They both use the same processors. They both use the same hardware. Neither the hardware nor the operating system is priced by user. Some applications, such as DB2, might be priced by user. I see IBM no longer provide pricing for i5 systems on their web site (or at least I couldn't find any). My memory tells me that although i5 plus OS/400 is more expensive than p5 plus AIX it is not hugely more expensive by the time a p5 is properly configured into a comparable system. It is only Interactive CPW that skews the pricing completely out of whack and Interactive CPW is not necessary for the sort of applications you are describing.

Further, as we've seen discussed in this list and elsewhere, the p5 loses money and is subsidised by i5 sales. This fact alone makes any sort of comparison between i5 and p5 pricing misleading and erroneous.

Perhaps someone with access to i5 and p5 pricing can do a proper price comparison for comparable systems.

Regards,
Simon Coulter.
--------------------------------------------------------------------
   FlyByNight Software         AS/400 Technical Specialists

   http://www.flybynight.com.au/
   Phone: +61 3 9419 0175   Mobile: +61 0411 091 400        /"\
   Fax:   +61 3 9419 0175                                   \ /
                                                             X
                 ASCII Ribbon campaign against HTML E-Mail  / \
--------------------------------------------------------------------


