Scott,
Sorry, forgive my stupidity. Why would someone ever want to left-fill a
character field with zeros? I mean -- from a character field point of
view, the zeros don't mean anything. And you're not populating a
numeric field (or, if you are, there are MUCH better ways).
So... what's up? Is this some weird file format required by a customer?
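For the record, one of those "MUCH better ways" in later releases is %dec(),
which takes a character value straight into a numeric field with no zero-fill
needed first. A minimal sketch in modern free-form RPG; the names and sizes
here are made up:

**free
// Hypothetical conversion of character column data to a numeric field
dcl-s colDta varchar(32) inz('123.45');
dcl-s amount packed(15 : 5);

amount = %dec(colDta : 15 : 5);    // amount = 123.45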
If I followed the OP correctly, he is parsing a delimited text file, so the
"columns" of data hold values of varying length. For each "column", an
initial pass determines the minimum and maximum length of data actually
found in the file for that column. For numeric fields, it also tracks the
maximum number of decimals detected.
Then a second pass standardizes the data in each column to the maximum size
and decimals detected for that particular column during the first pass.
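My reading of that first pass, sketched in today's free-form RPG (which did
not exist back then); the names and sample data are made up:

**free
dcl-s vals varchar(32) dim(4);     // one "column" of raw values
dcl-s val varchar(32);
dcl-s minLen int(10) inz(32767);
dcl-s maxLen int(10) inz(0);
dcl-s maxDec int(10) inz(0);
dcl-s dot int(10);
dcl-s i int(10);

vals(1) = '7';
vals(2) = '123.45';
vals(3) = '55';
vals(4) = '0.250';

// Pass 1: track the shortest/longest value and the most decimals seen
for i = 1 to %elem(vals);
  val = %trim(vals(i));
  if %len(val) < minLen;
    minLen = %len(val);
  endif;
  if %len(val) > maxLen;
    maxLen = %len(val);
  endif;
  dot = %scan('.' : val);          // 0 when there is no decimal point
  if dot > 0 and %len(val) - dot > maxDec;
    maxDec = %len(val) - dot;
  endif;
endfor;
// Result here: minLen = 1, maxLen = 6, maxDec = 3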
There "coldta" is the contents of a column of data but the size can't be
determined at compile time as in:
D zeros s like(coldta) inz(*zeros)
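Since the width is only known at run time, the fill has to be built at run
time too. One workaround (again free-form, made-up names): carve the fill off
a constant that is at least as long as any column could ever be:

**free
dcl-c ZEROS const('00000000000000000000000000000000');
dcl-s coldta varchar(32);
dcl-s padded varchar(32);
dcl-s maxLen int(10);

maxLen = 8;                        // column width found in the first pass
coldta = '123.4';

// Pass 2: left-fill with zeros up to the column's maximum width
padded = %trim(coldta);
if %len(padded) < maxLen;
  padded = %subst(ZEROS : 1 : maxLen - %len(padded)) + padded;
endif;
// padded = '000123.4'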
That is also why the values needed by some of the other techniques proposed
aren't known at compile time. Back at V3R2, we had subprocedure support but
not the %editc() and similar BIFs. So among the various string handling
routines I wrote for a service program were some which tried to mimic the
%editc() and %editw() BIFs introduced in V3R7. However, since operational
descriptors were
not available for numeric fields, I had to pass the field's length and
decimal positions to my routine. It then used QECCVTEC to create the "edit
mask" and QECEDT to perform the actual editing of the data, which got
returned as a string.
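As a sketch of what those routines emulated: from V3R7 on, the 'X' edit code
gives the zero-filled character result in one line for numeric data (made-up
names again):

**free
dcl-s amount packed(7 : 2);
dcl-s edited char(7);

amount = 123.45;
// Edit code 'X' does no editing, so the leading zeros are kept
edited = %editc(amount : 'X');     // edited = '0012345'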
What I don't get here is why it is desirable to standardize the columns into
what are, in essence, fixed-width fields with leading or trailing zeros, up to
the maximum length of any data in that "column".
Doug