If you use a logical to update a physical and don't have all the fields
defined in the logical, I would guess that the database would initialize the
non-referenced fields on an add and leave any non-referenced fields
unchanged on an update--but I've never tried it. I don't know how intelligent
the initialization would be unless you defined your files with SQL.

I really don't see why you'd want to do it anyway. I can (kinda) see
reading the file using a logical that includes a subset of the fields, but
then you'd call the update program to do any updates, right? The update
program would then read the record for update, move the data, and then
write, so it could initialize any fields as needed. You're still better off
passing individual fields in your calling parameters instead of the
whole record.

I rarely define logical files with field subsets because you never know when
you'll need one of the fields you didn't include in the logical. I recently
worked on a system where the original programmers included only the fields
they needed when they wrote programs. They ended up with a LOT of logical
files with the same keys but different fields. It was a real mess.

About 15 years ago I worked on a system where the developer set all the
files to LVLCHK(*NO). It was a real nightmare, with garbage data in the
files. He did it to avoid recompiles.

I still think you're better off using SQL for record access, or just
recompiling the programs when you change file layouts. DSPPGMREF will show
you which programs use which files.
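For example, you can dump the cross-reference to an outfile and then query
it (a sketch only - the library, outfile, and file names here are made up,
and if memory serves the outfile fields are WHPNAM for the program and
WHFNAM for the referenced object):

```
DSPPGMREF PGM(MYLIB/*ALL) OUTPUT(*OUTFILE) +
          OUTFILE(MYLIB/PGMREFS)
```

Then something like SELECT WHPNAM FROM MYLIB/PGMREFS WHERE WHFNAM =
'CUSTMAST' lists every program in MYLIB that touches that file, so you know
exactly what to recompile after a layout change.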

Chuck Landress, PMP




On Thu, Jan 20, 2011 at 1:00 PM, <cobol400-l-request@xxxxxxxxxxxx> wrote:

Send COBOL400-L mailing list submissions to
cobol400-l@xxxxxxxxxxxx

To subscribe or unsubscribe via the World Wide Web, visit
http://lists.midrange.com/mailman/listinfo/cobol400-l
or, via email, send a message with subject or body 'help' to
cobol400-l-request@xxxxxxxxxxxx

You can reach the person managing the list at
cobol400-l-owner@xxxxxxxxxxxx

When replying, please edit your Subject line so it is more specific
than "Re: Contents of COBOL400-L digest..."


Today's Topics:

1. Re: Using one Update Program (Jeff Buening)
2. Re: Using one Update Program (Jon Paris)


----------------------------------------------------------------------

message: 1
date: Wed, 19 Jan 2011 16:47:34 -0500
from: Jeff Buening <JeffBuening@xxxxxxxxxxxxxxxxxxx>
subject: Re: [COBOL400-L] Using one Update Program

Thanks, Chuck and Jon, for the input. I have started using SQL for new
programs. Jon or Chuck, the only thing I am still wondering about is this:

2) Use logicals to ignore the additional fields

I have read many articles about using logicals to minimize compiles. So
let's say I define a logical file in a stand-alone program that does reads,
writes, rewrites, etc., and then I add a field to the physical file but
don't recompile the program that uses the logical. Since the program is
only dealing with a 100-byte storage allocation, rewriting or writing to
the physical file that is 120 bytes doesn't overwrite or create bad data in
the last 20 bytes, because the program doesn't know about or receive the 20
extra bytes from anywhere?


Thanks,

Jeff

I won't answer your points directly, Jeff - rather, I'll just try to
explain what you are seeing.

I had forgotten that you were using your UPDATEPRO program to read the file
before the update so ...

Let's assume that you pass the record area (say 100 bytes) as a parm to
UPDATEPRO. What is actually passed is a pointer - note that ZERO
information about length or anything else goes with it. In UPDATEPRO
(because it uses the new record format) that same record is (say) 120
bytes. The only thing the compiler knew about when it compiled UPDATEPRO
was the length it was given (120), so when it moves data to the parm it
moves 120 bytes. But back in the caller the storage allocation was only 100
bytes - so the next 20 bytes, which belonged to something else, just got
nuked. If you then change some data in that record in the caller and pass
it back to UPDATEPRO, it will still see the corrupted memory and will write
that to disk - hence your 24 value didn't change. In your current scenario
it may be that those 20 bytes don't "matter". But they will in another
program, and it may take some hapless programmer who comes after you a long
time to work it out.

The best example of this was something I saw while working on the COBOL
compiler team. A customer had a program that had been "working" for seven
years. They changed the value of one literal and recompiled. No test needed
- they had just changed a literal from 0.75 to 0.78. Meanwhile, back in
production, all hell broke loose. Why? For seven years the program had been
corrupting storage as described above. But the area of storage being
corrupted was the print buffer - and it was space-filled before each line
was built - so no harm, no foul. In those seven years, though, the compiler
had changed how it generated its storage definitions, and now it wasn't a
print buffer being corrupted but a collection of pointers used in program
calls. Made for an interesting debug scenario.

There are at least three options I can think of:

1) Recompile all affected objects when you change a file

2) Use logicals to ignore the additional fields

3) Use SQL

The one option you should _not_ use is the one you are using now - i.e.
lying to the compiler <grin>


Jon Paris



------------------------------

message: 2
date: Wed, 19 Jan 2011 16:58:10 -0500
from: Jon Paris <jon.paris@xxxxxxxxxxxxxx>
subject: Re: [COBOL400-L] Using one Update Program

It has been eons since I did this (I use options 1 and 3), but it should
apply the field's default value to any new records; on an update the field
should be left unchanged. Personally, I would specify a default value in
the DDS for all new fields. I just prefer explicit to implicit.
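For instance, a new field added to the physical file's DDS might carry an
explicit default via the DFT keyword - roughly like this (the field name,
length, and default value here are invented for illustration):

```
     A            NEWFLD        10A         TEXT('New field')
     A                                      DFT('NONE')
```

With something like that in place, an add done through an old logical
should put 'NONE' in NEWFLD rather than whatever the system's implicit
default happens to be.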


Jon Paris

www.Partner400.com
www.SystemiDeveloper.com



--
This is the COBOL Programming on the iSeries/AS400 (COBOL400-L) mailing
list
To post a message email: COBOL400-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/cobol400-l
or email: COBOL400-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/cobol400-l.



------------------------------




End of COBOL400-L Digest, Vol 9, Issue 6
****************************************

