I have no idea what I "fixed" but it does compile. Obviously I did overlook
a different error earlier. Musta been something I was smoking, Chuck.
Jerry C. Adams
IBM i Programmer/Analyst
I don't feel we did wrong in taking this great country away from [the
Indians]. There were great numbers of people who needed new land and the
Indians were selfishly trying to keep it for themselves. - John Wayne
--
A&K Wholesale
Murfreesboro, TN
615-867-5070
-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx
[mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of CRPence
Sent: Thursday, March 24, 2011 1:00 PM
To: midrange-l@xxxxxxxxxxxx
Subject: Re: CL Field Size Workaround
On 3/24/11 10:02 AM, Jerry C. Adams wrote:
I am trying to write a CL program using the following:
DSPFD FILE(SOMELIB/*ALL) TYPE(*MBR) +
OUTPUT(*OUTFILE) FILEATR(*PF) +
OUTFILE(QTEMP/REORGS)
The program will not compile because some of the numeric fields
exceed 15.0.
Okay, that I understand. But is there a workaround? Heck, I don't
even need those fields.
The CPI0306 should not cause a compile failure, so perhaps something
else did? What release is being used? If the fields are unused, then
the compile should complete with only informational\diagnostic messages
noting their redefinition as *CHAR versus *DEC [or a *INT variant].
Since v5r4 there is the ability to overlay the storage of a program
variable over the variable declared for the field, if the field must be
referenced; i.e. STG(*DEFINED) DEFVAR(&MBRxxx 1).
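Just to illustrate the form (the field name &MBRxxx and the variable
&MBRDEC are placeholders, and the LEN and starting position would need
to match how the data actually lies in the *CHAR field):

  PGM
  DCLF FILE(QTEMP/REORGS)  /* oversized field &MBRxxx comes in as *CHAR */
  DCL VAR(&MBRDEC) TYPE(*DEC) LEN(15 0) +
        STG(*DEFINED) DEFVAR(&MBRxxx 1) /* decimal view over that storage */
  RCVF
  /* ... reference &MBRDEC rather than &MBRxxx ... */
  ENDPGM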
I am unaware of any DECxxx() parameter [like DECRESULT() on
RUNSQLSTM], any OPTION() parameter specification, or any new special
value for DCLBINFLD() like *DECMAX, that would override the old behavior
to enable larger decimal field variable declarations from DCLF in any
release; i.e. I am unaware of a means to avoid the CPI0306 and its
effect, such that the variable reflects the field's data
type\precision\scale.
Regards, Chuck