On 2014-07-29, at 12:09 PM, "John R. Smith, Jr." <smith5646@xxxxxxxxx> wrote:

I have some questions about variable length fields in files created using
DDS.

When I learned about them, I was told that if there is a field with a max of
100 bytes and VARLEN(20), the system allocates 20 bytes in the record, and
if the value is longer than 20 bytes, it stores the additionally needed
bytes elsewhere. When the record is read, the OS knows to retrieve the 20
bytes from the record, concatenate any additional bytes from the
elsewhere storage, and return a 100-byte field. The net result is less
DASD tied up with extraneous spaces in the file, with no impact on the
developer. However, when I look at the file via DSPPFM or WRKLNK, I see
all 100 bytes even if all 100 are spaces.
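For illustration, the scenario described above might be declared in DDS roughly as follows (record and field names are hypothetical, and DDS is column-sensitive, so the positions here are approximate):

```
     A          R BIGREC
     A            NOTES        100A         VARLEN(20)
```

This defines NOTES as a variable-length character field with a maximum of 100 bytes and an allocated length of 20 bytes in the fixed portion of the record; values longer than 20 bytes spill into the overflow area.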

My questions are:
1) Is the OS smart enough to pad out bytes 21-100 when displaying via
WRKLNK and DSPPFM so I don't see the difference, or is my understanding of
how the DDS VARLEN works flawed?

Yes it is - your understanding is basically correct.

2) Assuming my VARLEN is guessed correctly and most records fit into the
allocated bytes, is there any noticeable impact with using VARLEN fields?

So much depends on how the file is used that this is really hard to answer. My guess is that, across the board, it has little impact. The embedded length avoids any overhead from (for example) scanning for hex zeros to determine the end of the field. You will also avoid the additional disk access - which is relatively very slow - if you've got the allocated length set right.

3) If my guess is incorrect and a big chunk of records require the
additional 21-100 bytes, does this have a huge performance impact?

This is in the category of "Doctor, my head hurts when I bang it with a hammer" - you can cure it very easily by not using the hammer. You need to make sure that the allocated length is set high enough to avoid the additional disk access. For some uses, 80% of the records "fitting" would be good enough; for others, 90% would be a better target. I've seen shops that went as low as 60% because they knew that the vast majority of access to that file was within that 60%.

You should not "guess" - at a minimum, you need to analyze your file and see what lengths are currently in use. Are 90% under xx bytes with the other 10% widely spread? And so on.
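The analysis suggested above can be sketched in a few lines of Python. This is a hypothetical illustration, not anything from the original thread: the sample lengths and the 90%/80% targets are assumptions, and in practice you would feed it the trimmed lengths of the actual field values from your file.

```python
import math

def allocated_length(lengths, target=0.90):
    """Smallest length L such that at least `target` of records fit in L bytes."""
    ordered = sorted(lengths)
    need = math.ceil(target * len(ordered))   # how many records must fit
    return ordered[need - 1]

# e.g. trimmed lengths sampled from a 100-byte field (assumed data)
sample = [5, 8, 12, 12, 15, 18, 18, 20, 22, 95]

print(allocated_length(sample, 0.90))   # 22 - 90% of records fit
print(allocated_length(sample, 0.80))   # 20 - 80% of records fit
```

A skewed distribution like this one (a handful of very long values pulling the maximum up to 95) is exactly the case where a modest allocated length pays off.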


Thanks.
John

--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.


Jon Paris

www.partner400.com
www.SystemiDeveloper.com






