Hi, Mike:

I don't know about those other databases, but with DB2/400, I suggest you define your VARCHAR fields with a minimum (allocated) length large enough to contain the "average" or most frequently occurring strings. That way, most of the time, the data will fit right there, within the fixed portion of the record, in the space allocated for that field. Only when the data does not fit within that space will DB2/400 be forced to use the "overflow" technique. Note that, in that case, the entire "fixed" area goes unused (blanks or nulls) and the entire varchar string is stored in the overflow area for that record.

So by carefully choosing the minimum or default size to allocate for those fields, you can ensure that, 80% or 90% of the time, you will not incur any additional overhead.
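In SQL DDL on DB2 for i, the allocated length described above can be set with the ALLOCATE clause on a VARCHAR column (the library, table, and column names below are just illustrative):

```sql
-- VARCHAR(200) column with 40 bytes reserved in the fixed
-- portion of the record. Values of 40 bytes or less are stored
-- inline; only longer values spill into the overflow area.
CREATE TABLE mylib.customer (
  cust_id   INTEGER      NOT NULL,
  cust_note VARCHAR(200) ALLOCATE(40)
)
```

Choosing ALLOCATE at roughly the 80th-90th percentile of your actual data lengths gives the trade-off described above: variable-length storage savings for the long tail, with no overflow access for the common case.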

Does that "make sense"?

Regards,

Mark S. Waterbury

> Mike Cunningham wrote:
Not looking to start a war over which is better, just looking to update my knowledge.

When we create character fields in DB2/400 we usually set up fixed-length fields with no null support. If the character field is long (e.g. 100+ bytes) and we suspect the length of the data will vary greatly, we make it a variable-length field. The only time we use null-capable fields is for data we are importing from other systems where the file can have null values. In those cases the file with null field support is usually a work file used for the import, and then we move the data into non-null fields in the production files.

In other databases (like MS SQL) the standard looks like just the opposite: all character fields are variable length with null support unless you take extra steps to avoid it. My training (and it has been some years) said that variable-length fields are good for saving storage space but bad for overhead; the database has to do extra work to manage the variable length, tracking the actual number of bytes in use and managing the overflow areas when the data in a field changes from 10 characters to 1,000 and back to 10. Is it still true that variable-length fields are less efficient, and if so, why do other databases have that as the default? Or is this specific to the implementation of the database? Is DB2/400 more efficient with fixed length but MS SQL more efficient with variable length?


This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
