On 25-Jun-2010 07:04, rob@xxxxxxxxx wrote:
Ok, so if you want to limp along with layouts like (exaggerated)
...
SIZE DEC(3,0) If all 9's use SIZE1
...
SIZE1 DEC(5,0) If all 9's use SIZE2
...
SIZE2 DEC(7,0) If all 9's use SIZE3
...
SIZE3 DEC(9,0)

Not just exaggerated: that description misrepresents the resolution provided by IBM. ODOBSZ was deprecated [in the computing sense], having been replaced with the expression ODSIZU * ODBPUN [optionally divided by the appropriate value to show KB, MB, et al.] for determining [and presenting] the object size. Just because the field text for ODOBSZ reads "OBJECT SIZE: 9,999,999,999=USE ODSIZU*ODBPUN" does not mean that is the only or best way to code for the new function; there is only so much that can be said in those 50 bytes. Using the expression, there is no need for any special CASE\SELECT processing against the maximum decimal value of the old size field when updating code to handle larger values. The only change required is to use the new field definitions instead of ODOBSZ. The best code changes would simply drop all references to the old and since-defunct column name ODOBSZ, replacing them with the calculated size ODSIZU*ODBPUN.
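As a minimal sketch of what that looks like in practice [assuming the DSPOBJD outfile was written to QTEMP/OD, as in the example later in this message], the old column simply drops out in favor of the computed expression:

```sql
-- Sketch: true object size from the replacement columns,
-- instead of the deprecated ODOBSZ. No CASE/SELECT test for
-- the old 9,999,999,999 ceiling is needed anywhere.
SELECT ODLBNM, ODOBNM, ODOBTP,
       ODSIZU * ODBPUN             AS SIZE_BYTES,
       ODSIZU * ODBPUN / 1048576.0 AS SIZE_MB
  FROM QTEMP/OD
```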

You can by specifying not to check for record format level
identifiers (or by using sql instead), and just ignoring trailing
new fields. I understand how some people would be happy with
returning invalid data, like the size of an object is 999 when
it's higher than that, versus having their program error out.

If the definition of ODOBSZ had changed, only SQL usages would be able to function without recompiling. Ignoring the format change renders the vast majority of the columns unusable in row-level access; i.e. *all* fields trailing ODOBSZ would have to be ignored, not just any *new* trailing fields, because a change to the definition of ODOBSZ would shift every field after it. Plus, all existing programs would still need to be recompiled to function, even when ignoring the level check. A change from P(10,0) to P(32,0), for example, would have existing row-level access programs treating the first six bytes of the new seventeen bytes of packed data as a complete packed value. Those first six bytes would not be valid BCD data, so data mapping errors would result for every existing program, even *if* no later fields in the format were referenced by the program.

The current design approach is IMO correct, in that changes which are mostly non-disruptive can be accomplished by adding a new column to the end of the existing record format. That design allows new programs to be coded to get the required support, and allows old programs to be updated or simply to remain /functional/, even if potentially making decisions on garbage-in. However, that would be by their owners' choice, having decided not to update the programs that referenced the ODOBSZ field, knowing that the Memo To Users [MTU] had warned them the field definition was unable to represent accurate object sizes, so all code dependent on /accurate/ object size values should be changed to use the new computed result.

Only the idea of making command parameter(s) available to modify the output file results [or even an entirely new command] would seem a reasonable way to effect change in the output results. That same option could make the command capable of accepting something like XML, or another means of defining what data should appear and the storage\layout attributes for that data, such that the invocation itself defines a static\unchanging output record format definition, rather than the "model output file" defining that somewhat static record format definition.

And I do use DSPOBJD for quick and dirties. But for anything
imbedded I prefer APIs.

And I am not the only one who would like a change. Even if you'd
be satisfied with an additional date field instead of fixing the
other date field you still need to submit a DCR. Again, yes, you
could get around it by calling functions, like idate, to resolve
it, but you can't build an index over a function (can you?). This
can get into performance issues.

https://www-912.ibm.com/r_dir/ReqDesChange.nsf/Request_for_Design_Change?OpenForm


There are so many relatively simple solutions that users can employ to overcome the stupidity of that date format so poorly chosen by IBM. Just how much real value is there in each user not having to code their own resolution, versus the cost of IBM providing the change and of users then actually taking advantage of it in updated or new code? Even if the cost is low for IBM, the probability that most users would ever step up to make the changes is smaller than the probability of their finally making the change to stop using ODOBSZ. The full date data is already there in the existing feature, just in an ugly format, so any code changes in the OS to provide either the same data reformatted, or additional column(s) of the same data reformatted, seem of little benefit to just about anyone; especially since those with the ability to recognize that the date data can not be sorted without first being reformatted are already well poised to figure out the how-to.

If the data output by the command were not to change, then for access to the data from the CL command output via SQL, where ordering requires an ORDER BY [per the subject requirement], the additional SQL code to reformat the date is both required and likely somewhat trivial for whomever recognizes the need.
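For example, a sketch of that reformatting in the ORDER BY itself [assuming the outfile is in QTEMP/OD, that ODCCEN holds the century digit, and that ODCDAT is in MMDDYY form, mirroring the OPNQRYF expression shown later in this message]:

```sql
-- Sketch: collate DSPOBJD outfile rows by creation date,
-- rebuilding a sortable CYYMMDD key from the ugly native format.
SELECT ODLBNM, ODOBNM, ODOBTP, ODCCEN, ODCDAT
  FROM QTEMP/OD
 ORDER BY ODCCEN CONCAT
          SUBSTR(ODCDAT, 5, 2) CONCAT  -- YY
          SUBSTR(ODCDAT, 1, 4)         -- MMDD
```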

When access is via RLA, for which [a sort or] a keyed access path is required for collation, programs using row-level access have long had the option of [FMTDTA\SORT and] OPNQRYF with a shared ODP to create the derived key.

For the given scenario I would resolve the issue directly with the DSPOBJD output file, by redefining a field that is of sufficient length and compatible type, which is not of interest to my program. As such, perhaps:

DSPOBJD OBJ(MyLib/*ALL) OBJTYPE(*ALL) DETAIL(*FULL) +
          OUTPUT(*OUTFILE) OUTFILE(QTEMP/OD)
OVRDBF  FILE(QADSPOBJ) TOFILE(QTEMP/OD) SHARE(*YES)
OPNQRYF FILE((QTEMP/OD)) OPTION(*INP) FORMAT(*FILE) +
          KEYFLD((ODPPNM *ASCEND)) +
          MAPFLD((ODPPNM /* map CDat to CYYMMDD for ordering */ +
          'odccen *cat %sst(odcdat 5 2) *cat %sst(odcdat 1 4)'))


For the James Lampert's out there (and other long time users)
did the outfile for DSPOBJD always have ODDCEN, ODCCEN and other
century flags? Anyone have a V1R1 manual (or whenever DSPOBJD
came out)? I see that the list objects api came out in V1R3. I've
been on the machine since V1R2 and I can't remember and those
manuals have long been disposed of.


According to the design change rules for "model output files" [AKA OutFiles], the same rules that resulted in ODSIZU & ODBPUN being *added* to the record format to replace ODOBSZ rather than changing the definition of ODOBSZ, all of the fields preceding any newly added fields must have already existed on the prior release in the same order and with the same definition [or at least a compatible one, with almost no discernible or only corrective side effects; e.g. a change to the CCSID]. That should hold true with very few exceptions, and any exception should have been documented in a Memo To Users. IIRC the fields ODCRTU & ODCRTS were added in V1R3M0, so any prior fields had presumably existed in the same order and with the same data types on the prior release, just as the rules would require.

Regards, Chuck

