From the CRTOUTQ help text for the DTAQ parm:
------------
Specifies the name of the data queue associated with the output queue.
Entries are logged in the data queue when spooled files are in ready
(RDY) status on the output queue. A user program can determine when a
spooled file is available on an output queue by using the Receive Data
Queue API (QRCVDTAQ) to receive information from the data queue.

Each time a spooled file on the output queue reaches RDY status, an
entry is sent to the data queue. A spooled file can have several
changes in status (for example, RDY to held (HLD) to released (RLS)
to RDY again) before it is taken off the output queue. These status
changes result in an entry in the data queue each time the spooled
file goes to RDY status.

When the data queue is created using the Create Data Queue (CRTDTAQ)
command, the maximum message length (MAXLEN parameter) value should be
at least 128 and the sequence (SEQ parameter) value should be *FIFO or
*LIFO. More information about data queues on output queues is in the
Guide to Programming for Printing.
---------------------------------

I read this to mean that nothing will be logged to the DTAQ until the
spooled file's status is RDY. I'm not sure what happens if
OVRPRTF HOLD(*YES) is specified; I would assume the file would not be
logged to the DTAQ until it is released.

HTH
eric.delong@pmsi-services.com

______________________________ Reply Separator _________________________________
Subject: Re: Data Queues
Author: <MIDRANGE-L@midrange.com> at INET_WACO
Date: 9/23/98 8:00 AM

OS/400 does not write to a data queue when print files appear, AFAIK.
This is an issue within the utility that you have inherited, so you
will need to look at the details of that utility. I suspect that one
of two things is happening:

1) Your assumption that "RDY" status is necessary for this utility to
   recognize a new spool file is the issue, or
2) Whatever method the utility uses to query the output queue contents
   does not include spool entries that are not in "RDY" status, and a
   better query method may be available.

Having said that, I don't believe a spooled file can be moved while it
is still being created, so even if you could use an API to detect
spool entries in creation, you would still need a timed loop to test
for "RDY". Since you are faced with that type of test anyway, which
from what you stated the utility already does, could you change your
inherited utility to test more often and pick up the untimely dropped
entries?

Just a shot in the dark. Let's face it: you're talking about a
home-grown utility, not an OS function. And if we all raised our hands
when asked (be honest), some home-grown utilities work great 95% of
the time, and for triple the cost you can gain an extra 5% <g>.

James W. Kilgore
qappdsn@ibm.net

P.S. Somewhere I picked up a statistic that a specific-purpose program
may take X hours to write and an equivalent general-purpose program
will take 7X hours to write. This utility may fall into the
specific-purpose (circumstance) category.

RDJessen@aol.com wrote:
> I need some information on when data gets written to a data queue when a spool
> file is created. I am supporting a utility (written by a former employee of
> course) that reads a data queue to find newly created spool files. <<snip>>
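For reference, here is a minimal CL sketch of the arrangement the help
text describes: create a data queue, attach it to the output queue,
and sit on QRCVDTAQ waiting for entries. All object, library, and
label names (SPLFDTAQ, MYLIB, MYOUTQ, LOOP) are made-up placeholders,
and the processing step is only an outline; the field layout of the
128-byte entry is documented in the Guide to Programming for Printing
mentioned above.

             /* One-time setup: create the data queue and attach it to */
             /* the output queue (names are placeholders).             */
             CRTDTAQ    DTAQ(MYLIB/SPLFDTAQ) MAXLEN(128) SEQ(*FIFO)
             CHGOUTQ    OUTQ(MYLIB/MYOUTQ) DTAQ(MYLIB/SPLFDTAQ)

             /* Monitoring program sketch.                              */
             PGM
             DCL        VAR(&DTAQ)  TYPE(*CHAR) LEN(10) VALUE('SPLFDTAQ')
             DCL        VAR(&DTAQL) TYPE(*CHAR) LEN(10) VALUE('MYLIB')
             DCL        VAR(&LEN)   TYPE(*DEC)  LEN(5 0)
             DCL        VAR(&DATA)  TYPE(*CHAR) LEN(128)
             DCL        VAR(&WAIT)  TYPE(*DEC)  LEN(5 0) VALUE(-1)

             /* Wait forever (-1) for the next entry; an entry arrives  */
             /* each time a spooled file on the output queue reaches    */
             /* RDY status.                                             */
 LOOP:       CALL       PGM(QRCVDTAQ) PARM(&DTAQ &DTAQL &LEN &DATA &WAIT)

             /* &DATA now holds the 128-byte entry identifying the job, */
             /* spooled file, and output queue (see the Guide to        */
             /* Programming for Printing for the layout). Process it    */
             /* here, then go back and wait for the next one.           */
             GOTO       CMDLBL(LOOP)
             ENDPGM

Consistent with the help text and Eric's reading, a file created under
OVRPRTF HOLD(*YES) would presumably not produce an entry until it is
released and reaches RDY status.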