from: rob@xxxxxxxxx
subject: Re: Inconsistent writes to a data queue
Can you record which entries are not making it?
For example, your 'audit' file shows entries 1, 2, 3.
And what we are seeing in the data queue are entries like 2, 3.
The shadow file confirms that QSNDDTAQ was called, but nothing was written to the data queue, just to the shadow file.
RPG debugging confirmed that it is not a zero-length record.
The SharePoint programmer did not see the missing entries either.
When we started looking with DSPDTAQ, we could also see that some were missing.
Note that this environment uses lots of data queues to feed both our iSeries processes and our Microsoft data warehouse processes.
But this is the only one that mixes a vendor's package with stacks of APIs, our OPNQRYF shares, our trigger programs, and IBM's QSNDDTAQ.
The journal receiver shows 1, 2, 3 hitting the data queue. But it does not show the data, just that the object was modified.
I would find it really odd that it could record an entry to the journal receiver but not actually write the data to the data queue.
Yes, we are also perplexed by this intermittent failure and by what we saw in the journals (entry type ZC = Object Changed).
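If it helps to pull those up, the entries can be listed with something like this (assuming the ZC entries are coming from object auditing in the system audit journal rather than a journal on the queue itself):

  DSPJRN JRN(QSYS/QAUDJRN) ENTTYP(ZC)

A ZC entry only records that something changed the object; it does not carry the entry data, which would explain why the receiver shows the writes but not their contents.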
And how often do we write to this data queue?
We will probably just write to it 5 times a day. It is at the end of a manual data-entry process.
We are still testing the app here, but this problem has been driving us nuts for a couple of weeks.
We are still waiting to hear from IBM.
I have re-created the data queue to force each entry to disk as it is written, and it still happens:
Data queue . . . . . . . . . . . > CUQPASPN
Library . . . . . . . . . . . > GLCUSTFD
Type . . . . . . . . . . . . . . *STD *STD, *DDM
Maximum entry length . . . . . . > 4096 1-64512
Force to auxiliary storage . . . > *YES *NO, *YES
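For reference, that re-create as a single command would be something like:

  CRTDTAQ DTAQ(GLCUSTFD/CUQPASPN) TYPE(*STD) MAXLEN(4096) FORCE(*YES)

Note that FORCE(*YES) only forces each entry to auxiliary storage as it is sent, so entries survive a system failure; it does not change whether the entry gets onto the queue in the first place.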
I think it is a threading problem caused by work being done in the interactive job, so I have pulled the QSNDDTAQ formatting and writing out into a submitted job.
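In rough CL terms, the workaround is just (program and variable names made up for illustration):

  SBMJOB CMD(CALL PGM(MYLIB/SNDENTRY) PARM(&ENTRYDATA)) +
         JOB(SNDDTAQ) JOBQ(QBATCH)

where SNDENTRY is a batch program that formats the 85-byte entry and makes the QSNDDTAQ call, so the call no longer runs in the interactive job.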
If that does not fix it, I don't know what to do.
The other odd thing is that the data queue is configured for 4096 bytes, but we only put 85 bytes into each entry. (We might make the entry larger in the future, hence the large maximum entry length.)
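That combination should be fine by itself: the third parameter of QSNDDTAQ is the length of the data for this particular entry, and it only has to be less than or equal to the queue's maximum entry length. A minimal CL sketch of the call (variable names made up):

  PGM
    DCL VAR(&DTAQ) TYPE(*CHAR) LEN(10) VALUE('CUQPASPN')
    DCL VAR(&LIB)  TYPE(*CHAR) LEN(10) VALUE('GLCUSTFD')
    DCL VAR(&LEN)  TYPE(*DEC)  LEN(5 0) VALUE(85)
    DCL VAR(&DATA) TYPE(*CHAR) LEN(85)
    CHGVAR VAR(&DATA) VALUE('...')  /* build the 85-byte entry here */
    CALL PGM(QSNDDTAQ) PARM(&DTAQ &LIB &LEN &DATA)
  ENDPGM

One thing worth double-checking in debug is that the length parameter really is 85 and not 4096; QSNDDTAQ copies exactly the number of bytes you tell it to from the data parameter.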