Thanks, Neil

The trouble with trying to manage this stuff on the 400 is, we don't
generally know how data is really stored. E.g., there is a byte at the
beginning of _every_ record that includes, among other things, the record's
delete status. So if the boundary is 256, a record defined as 256 bytes is
actually 257 bytes long, and the cross-boundary problem arises.
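
To make that concrete: with a 256-byte boundary, a record declared as 256
bytes occupies 257 on disk, so the first record covers byte offsets 0
through 256 and already straddles the first boundary - and from then on
nearly every record spans two units and can cost two accesses instead of
one.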

And if there are variable-length fields (VARCHAR, LOB) or NULL-capable
fields, there is another structure that includes a null status table and
the address of the auxiliary space that holds the variable data beyond the
allocated length. The null table is 1 bit for each field in a record - up
to 1000 bytes, which at 8 bits per byte works out to the maximum of 8000
fields per record, IIRC.

Then there are cache misses - the L2 (?) cache has a cache line length of
128 bytes. If data is not found in a cache line, an I/O results. Cache
_size_ varies from processor to processor, but the cache line length has
remained the same for a long time. We can't control this much, if at all,
but it's always there. It _was_ material to some testing and predictive
activity I did - how long would a certain process take, based on file
format?
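
(Same boundary arithmetic, smaller scale: a 16-byte packed field that
happens to start 120 bytes into a 128-byte cache line spans two lines, so
touching it can cost two line fills instead of one.)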

IMO, one of the better things to do involves VARCHARs and the like. SQL
creates these with an allocated length of 0, by default. This means that
every access to that data involves 2 I/Os, because the data will never
actually be in the main record. So, use the ALLOCATE parameter to cover
the bulk of the cases - the usual recommendation is something like 80%.
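
For example, a minimal sketch in DB2 for i SQL - the table and column
names here are made up, and ALLOCATE(100) is just an illustrative figure:

    CREATE TABLE ORDERS (
      ORDER_ID INTEGER NOT NULL,
      -- first 100 bytes stay in the fixed portion of the record;
      -- only longer values spill to the auxiliary space
      NOTES    VARCHAR(2000) ALLOCATE(100)
    )

With that in place, any NOTES value of 100 bytes or less comes back with
the rest of the record in a single I/O.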

Of course, generic statement generators will probably NEVER use ALLOCATE -
alas!

Regards

Vern

At 12:19 AM 11/30/02 -0500, you wrote:
Data on disks is written in sectors - at one point it was 256 bytes/sector
- but larger sector sizes are multiples of 256. Therefore, in the old days,
accessing a record with a 256-byte length would require reading one
physical disk sector (a record length of 64 would yield 4 records per
sector, for access with no additional overhead). A record length of 257
would require reading 2 physical disk sectors to read one record, hence
much more inefficient.
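
In general, sectors read per record = ceiling(record length / sector
size), so record lengths that divide the sector size evenly (or are exact
multiples of it) avoid the split read entirely.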

...Neil

Vern Hamberg <vhamberg@centerfieldtechnology.com>

        To:      midrange-l@midrange.com
        cc:
        Subject: Re: Odd/Even packed numbers.

OK, Neil, I'm not even as much a codger as some of you - what was the
issue with 257 or 260 record lengths? Looks very close to 256.

Thanks, esp. as this will tell me how much better off I am today.  ;-)

Vern

