Hi Joe,
We're running V5R4 and the DSPJRN help for the ENTDTALEN() parm states:

The system calculates the length of the entry specific data field to accommodate the longest entry specific data among all journal entries in the specified receiver range. The entry specific data field is a fixed-length character field. The minimum length of the field is 130 characters. If the length calculated by the system causes the record format length to exceed the maximum record length, a message is sent and the entry-specific data field is truncated.

If the length calculated by the system causes the record format length to exceed 32766 bytes, a diagnostic message is signaled and the entry specific data field is truncated.
My interpretation, FWIW, is that at least one journal entry somewhere within your receiver range has entry specific data of 32641 bytes or more, so the calculated field length was capped right at the limit.
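If it helps, the 32641 figure is consistent with truncation at the 32766-byte limit, assuming the fixed (non-JOESD) portion of the *TYPE1 outfile record is 125 bytes. That 125 is from memory, so check it with DSPFFD against your own outfile, but the arithmetic lines up:

```python
# Back-of-the-envelope check. ASSUMPTION: the fixed journal-header
# portion of the *TYPE1 (QJORDJE) outfile record is 125 bytes; verify
# with DSPFFD on your outfile before trusting this.
MAX_RCDFMT_LEN = 32766   # maximum record format length, per the help text
FIXED_PORTION = 125      # assumed size of the non-JOESD fields

max_joesd = MAX_RCDFMT_LEN - FIXED_PORTION
print(max_joesd)
```

That works out to exactly 32641, which suggests JOESD was capped at the maximum the record format allows rather than sized to a real entry.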
Gary
-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx
[mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Joe Pluta
Sent: Tuesday, March 15, 2011 11:16 AM
To: Midrange Systems Technical Discussion
Subject: DSPJRN OUTPUT(*OUTFILE) and resulting record length
A couple of parameters on the DSPJRN command (OUTFILFMT and ENTDTALEN)
control the size of the JOESD field in the output of the DSPJRN
command. JOESD is what actually contains the data in type R entries:
actual file I/O like writes and updates. I have a command that does a
DSPJRN to an output file with OUTFILFMT(*TYPE1) and ENTDTALEN(*CALC).
This has worked wonderfully up until now, but suddenly today I started
getting failures because the data length was too long. Something
started causing records with a JOESD of 32641 to be added. The real
downside is that DBU can no longer handle the records. For those
unfamiliar with DBU, when you use DBU to view the output file from
DSPJRN, it not only shows the journal fields, but also breaks up the
JOESD field based on the format of the journaled file. It's very, very
cool.
I can truncate the data to make sure that I don't get this error (I've
done that and it seems to work just fine), but I'd like to know what
caused it. Anybody have any idea how to tell which journal entry caused
the huge record length to be calculated?
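One thing I may try (untested, and the library, journal, and outfile names below are made up): dump the same receiver range with a small fixed ENTDTALEN so the command can't overflow, then sort the outfile on the JOENTL field, which I believe holds each entry's length. The longest entries should point at the culprit:

```
/* Dump with a fixed, small entry data length so the record format */
/* stays well under 32766 bytes. MYLIB/MYJRN and QTEMP/JRNOUT are  */
/* placeholder names.                                              */
DSPJRN JRN(MYLIB/MYJRN) RCVRNG(*CURCHAIN) OUTPUT(*OUTFILE) +
       OUTFILFMT(*TYPE1) OUTFILE(QTEMP/JRNOUT) ENTDTALEN(130)

-- Then, from STRSQL, look for the longest entries:
SELECT JOSEQN, JOCODE, JOENTT, JOOBJ, JOLIB, JOMBR, JOENTL
  FROM QTEMP/JRNOUT
 ORDER BY JOENTL DESC
```

If JOENTL turns out not to reflect the full entry length, the JOOBJ/JOLIB columns should at least narrow it down to the journaled file, and DSPFD on that file will show its record length.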
Joe