Hi Arthur,

So the PF you're writing to always uses zoned decimal output fields? Never packed or integer? Because padding with EBCDIC zeros wouldn't work well for packed or integer. You'll also need code to deal with negative numbers...

Personally, I wouldn't try to do all this numeric formatting manually. I'd load the numeric values into a very large RPG numeric field (say, 63P30 or something like that) using the %DEC() BIF to convert the data. Then, I'd use _LBCPYNV to copy that 63P30 value to the destination field. _LBCPYNV will happily take parameters for the size/decpos of the output data that can be changed at run time. It can output to zoned, packed, or integer with little difficulty.

And since it's built into the MI layer of the OS, it's very fast, and its results will always be consistent with the way the rest of the OS would read/write that type of number.
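
Roughly, the first half of that looks like the sketch below. It's not
tested, it assumes the raw column text is in the VARYING field ColDta
from earlier in the thread, bignum is just a work-field name, and the
_LBCPYNV prototype and call are left out (that part depends on the MI
attribute template, which is documented with the MI instructions):

     D bignum          s             63p 30

      /free
         // Convert the character column data into the oversized work
         // field.  %DEC()'s length/decpos have to be literals, so use
         // the maximum here and let the later copy shrink it to the
         // real size/decpos chosen at run time.
         monitor;
            bignum = %dec(coldta : 63 : 30);
         on-error;
            bignum = 0;     // non-numeric data in the .csv column
         endmon;

         // From here, _LBCPYNV -- bound with extproc('_LBCPYNV') --
         // copies bignum into the target field, using attribute
         // templates that carry the run-time size and decimal
         // positions.  See the MI documentation for the template
         // layout.
      /end-free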


Arthur Marino wrote:
Thank you, Doug and Scott. The one downside to asking this august group
a question is the ever present possibility that I'll expose my own
stupidity. But since you asked...

What I'm trying to do here is create a generic delimited file parser.
I've had one too many users tell me they have a s/sheet for me, and I
need to get it into a file on the 400 ('scuse me) so I can do something
useful with it.

I'm making 2 passes thru the data.

First it determines 'max' field attributes for every column in the .csv
file and then displays a subfile showing the 'DDS'. The user (me) can
unlock this data in order to make changes to generated field names,
lengths, decimal positions and COLHDGs. Lengths can be left alone or
made larger, decimals can be changed up or down. Then it builds the DDS
source member and creates the file. Then there's an override/open on the
program-described 'dummy' output file (reclen=4096) to the file just
created.
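
(Purely for illustration -- the names CSVDATA and DUMMY and the two
fields below are made up -- the generated DDS and the create/override
steps come out looking something like this:)

     A          R CSVREC
     A            CUSTNAME      25A         COLHDG('Customer' 'Name')
     A            QTYSHIP        7S 2       COLHDG('Qty' 'Shipped')

     CRTPF  FILE(QTEMP/CSVDATA) SRCFILE(QTEMP/QDDSSRC) SRCMBR(CSVDATA)
     OVRDBF FILE(DUMMY) TOFILE(QTEMP/CSVDATA)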

The 2nd pass reads the data for the sole purpose of building a string
that exactly mimics the DDS record layout. That's why the alpha fields
have to be padded with blanks, numeric fields have to be prefixed with
zeros, decimal positions have to be aligned, so when I do the exception
output of the string to the dummy, everything is kosher on disk.
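
For one numeric column, that boils down to something like the sketch
below (illustrative only: wid and dec are the width and decimals picked
in the first pass, zeros is the all-zeros field discussed earlier,
txt/intpart/decpart/point are work fields I've made up, outrec is the
record image being built, and negative signs are ignored here):

     D txt             s            256a   varying
     D intpart         s             64a   varying
     D decpart         s             64a   varying
     D point           s             10i 0

      /free
         txt = %trim(coldta);              // raw column text
         point = %scan('.' : txt);

         if point = 0;                     // no decimal point at all
            intpart = txt;
            decpart = '';
         elseif point = %len(txt);         // value like '123.'
            intpart = %subst(txt : 1 : point - 1);
            decpart = '';
         else;
            intpart = %subst(txt : 1 : point - 1);
            decpart = %subst(txt : point + 1);
         endif;

         // prefix the integer digits with zeros and suffix the decimal
         // digits with zeros so everything lines up with the zoned
         // field on disk (no decimal point is stored there); assumes
         // the user didn't shrink the width below the data
         if %len(intpart) < (wid - dec);
            intpart = %subst(zeros : 1 : wid - dec - %len(intpart))
                    + intpart;
         endif;
         if %len(decpart) > dec;           // user shrank the decimals
            %len(decpart) = dec;
         elseif %len(decpart) < dec;
            decpart = decpart + %subst(zeros : 1 : dec - %len(decpart));
         endif;

         outrec = outrec + intpart + decpart;
      /end-free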

Thanks again to all.

Arthur J. Marino
RockTenn Corporation
(631) 297-2276
-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx
[mailto:rpg400-l-bounces@xxxxxxxxxxxx] On Behalf Of Scott Klement
Sent: Thursday, March 26, 2009 1:52 AM
To: RPG programming on the IBM i / System i
Subject: Re: Using an Expression to control a FOR loop

Douglas Handy wrote:
Then a second pass standardizes the data in each column to the maximum
size and decimals detected for that particular column during the first
pass. There "coldta" is the contents of a column of data but the size
can't be determined at compile time as in:

D zeros s like(coldta) inz(*zeros)

I don't agree with you. The code I wrote will work perfectly well
despite the size being unknown at compile time. Think about it. The OP
defined a variable called ColDta that he stored the column data in. How
could he possibly do that if he doesn't know the size of the field? The
answer is simple: ColDta is defined as the MAXIMUM possible size of a
field. Since it's a VARYING field, he can set the length to be smaller.

My code, similarly, will work. It doesn't matter that you don't know
the field size at compile time, because I did the following:

coldta = %subst(zeros:1:cwdth(#col)-%len(coldta)) + coldta;

%subst() is a BIF that lets you take PART OF a string. When I do that,
I can control the length of the data I use in the expression.
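
For a concrete (made-up) case: say cwdth(#col) came out as 9 for this
column and ColDta currently holds '1234', so %len(coldta) is 4:

   coldta = %subst(zeros : 1 : 9 - 4) + coldta;
               // '00000' + '1234'  ->  '000001234'

The %subst() carves off exactly as many zeros as are needed to bring
the value up to the column width.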

So I was *not* assuming that he knew the size of the field at compile
time. Incredibly, I was able to glean that the size wasn't known from
the other 500 times it was stated in this thread.


It is also why the values aren't known at compile time for using some
of the other techniques proposed.

Yes, I get that. I would not have proposed those techniques. Unlike
certain others, I KNOW that *ZEROS can't be used as the first parm of
%SUBST(), and I KNOW that you can't use a variable for the length/decpos
of %DEC(). It's not rocket science after all. %DEC() controls the size
of an intermediate result. Think about it...

Surely you understand that an intermediate result is -- basically -- a
variable generated and used by the compiler, under the covers. In
order to change its length/decpos, the compiler would -- essentially --
have to *recompile* the program. Think about it.
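
A two-line illustration of the difference, reusing the names from this
thread (num is just a placeholder work field):

     D num             s             63p 30
      /free
         num = %dec(coldta : 63 : 30);            // fine: literals
         // num = %dec(coldta : cwdth(#col) : 2); // won't compile --
         //       the length/decpos must be known at compile time
      /end-free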

What I don't get here is why it is desirable to standardize the
columns into what in essence is fixed width sizes with leading or
trailing zeros, up to the maximum length of any data in that "column".


Yes, that was MY question as well. I can fully understand the idea of
loading a variable amount of data -- and I *did* take it into account
when I wrote my code, as I've hopefully clarified above.

What I don't understand is why you'd want to fill the silly thing with
zeros. What good does that do anyone?
