On 13 May 2013 09:59, Hiebert, Chris wrote:
On 13 May 2013 07:47, CRPence wrote:
On 13 May 2013 06:23, Florin Todor wrote:
<<SNIP>> my work-around was this:

if DS_HRESD01.D1CHGAMT < 0;
DS_DETD01 *= (-1);
endif;

and it worked. So, the subfield DS_HRESD01.D1CHGAMT is recognized
as a negative value; <<SNIP>>


While I was originally going to describe the effect of the
presentation by debug as "disturbing" [the term /interesting/
sounds far too forgiving of the debugger; just look at the
confusion it caused], I find the above claim about the effects
of the RPG comparison to be even more disturbing... if I
understand correctly.

Does that claim imply that the 12-byte string [with the
character b representing blanks] 'bbbbbb-79349', when stored
in a zoned decimal variable, will compare as less than zero?
Please tell me that is not so!

That is not so. "bbbbbb-79349" is recognized as 79349 when assigned
to a packed field.

:-) I actually knew that not to be so... and confirmed with an actual test on v5r3. I was just /calling out/ the claim. It seems the OP retracted that claim in the next post... Whew! :-)
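For anyone wanting to reproduce that test, here is a minimal free-form sketch [the overlay data structure and all names are mine, not from any of the original posts] that plants the bytes of 'bbbbbb-79349' under a zoned view of the same storage and then assigns that to packed:

**free
dcl-ds v qualified;
  raw char(12) pos(1);        // character view of the 12 bytes
  zon zoned(12: 0) pos(1);    // zoned view of the same storage
end-ds;
dcl-s pkd packed(12: 0);

v.raw = '      -79349';       // EBCDIC blank = x'40', '-' = x'60'
pkd = v.zon;                  // zone nibbles of non-sign digits ignored
dsply (%char(pkd));           // 79349, positive; no MCH1202 is signaled
return;

The sign comes only from the zone of the last byte (the F of x'F9', positive), so both the compare and the assignment see 79349.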

Even though the debugger shows the "-" sign, it doesn't actually
convert the value to a negative number when used in any calculation.

Whatever algorithm the debugger uses to show the value of the zoned decimal field is problematic, and IMO should be "corrected". The /normal/ means to present a numeric value as a character string has been to use the MI EDIT instruction with an edit mask. I verified that a Zoned BCD scalar variable defined with attributes 12S02, with that effectively invalid value in its storage, would be presented as the string "793.49" by the conventional /numeric editing/ feature. Whatever /home-grown/ version of /editing/ the debugger uses seems both to ignore that convention and to produce results inconsistent with almost everything else that does use the MI EDIT instruction.
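As an illustration of that verification [a sketch only; it assumes %editc follows the same conventional editing path, and reuses the overlay trick from the sketch above]:

**free
dcl-ds e qualified;
  raw char(12) pos(1);        // character view of the 12 bytes
  amt zoned(12: 2) pos(1);    // the 12S02 attributes described above
end-ds;

e.raw = x'40404040404060F7F9F3F4F9';  // six blanks, x'60', then 79349
dsply (%editc(e.amt: 'J'));           // presents 793.49, with no sign
return;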

I can only guess that the debugger is so desperate not to fail with an MCH1202 that it chooses to use its own editing feature. If so, then IMO a better design would have been to use the builtin numeric editing, and only if\after an error occurred would the faux-edited value be presented, along with some indication that the edited value is not an accurate representation *due to* the bad data condition. Had that been done, the value would have been properly edited, the effects of the edited value would match how the programming language treats the value... and, being notified of a bad value, one would know that the programming language would also receive a decimal data error.

What Barbara said was also "interesting". Since the hex value was
X'60', the zone-to-packed conversion ignored the X'6' portion and
only looked at the second nibble (X'#0' where the # is ignored).
This is what I found in my testing when the hex values were
X'40', which also passed a decimal assignment to packed from zoned.

Yes. It was only that "the debugger shows the zoned value that way" which was /disturbing/ to me.

I am intimately familiar with the effects, or lack thereof, of invalid data on the Binary Coded Decimal (BCD) data types as handled by the OS. I know that for the zoned BCD type, the /zone portion/ of the non-sign digits is simply ignored by the MI /numeric/ internals. However, the SQL has its own rules, which should diagnose that data as being in-error for any attempt to input\insert such invalid data into a TABLE. That the numeric instructions of the MI do not issue MCH1202 for such /bad data/ has long been considered acceptable [which, IIRC, was to match the effects provided by the S/36].

If the OS and the RPG both consider that the 12-byte hex string of value x'40404040404060F7F9F3F4F9' as stored in a 12S02 variable is the decimal value 793.49, then I would expect that the debugger should reveal in an EVAL of that variable, that the value of the variable was 793.49 *instead of* presenting what appears to be the decimal value -793.49. Similarly for the value x'40404040404061F7F9F3F4F9', why show some strange effect like /793.49 when the OS and the RPG both will treat the value of that decimal scalar as 1793.49; i.e. why not just show the decimal value 1793.49 when the variable is EVALed? Like any other EVAL, there is the option to :X to get the actual code points of the bytes that make up the data in the variable.
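Spelling out the byte arithmetic for those two strings may help; each zoned byte is shown as its zone and digit nibbles:

x'40404040404060F7F9F3F4F9' -> 4|0 4|0 4|0 4|0 4|0 4|0 6|0 F|7 F|9 F|3 F|4 F|9
x'40404040404061F7F9F3F4F9' -> 4|0 4|0 4|0 4|0 4|0 4|0 6|1 F|7 F|9 F|3 F|4 F|9

The numeric internals keep only the digit nibbles (000000079349 and 000000179349 respectively) plus the zone of the final byte (F, positive) as the sign; with the 12S02 attributes those are the values 793.49 and 1793.49.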

My suggestion assumes the field is an actual character field being
converted to numeric. It won't work if the field in the data
structure is left as Zoned.

Yep. As Barbara suggested, "just define DS_HRESD01.D1CHGAMT as character, so EVAL-CORR will skip it, and then handle it separately as Chris suggested." Using %dec(char_variable:12:0) * .01 seems quite sensible, to cast the character-string representation of an integer value into a value stored in a decimal variable.
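A minimal sketch of that cast, assuming the subfield has already been redefined as character per Barbara's suggestion [the names here are hypothetical]:

**free
dcl-s chgAmtChar char(12) inz('      -79349');  // the redefined subfield
dcl-s chgAmt packed(13: 2);

chgAmt = %dec(chgAmtChar: 12: 0) * 0.01;  // %dec honors the '-'; -793.49
dsply (%char(chgAmt));
return;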

