On 17-Apr-2015 12:30 -0500, rob wrote:
<<SNIP>>
So let's try this:
CREATE TABLE ROB.BUZZ06 ( TIMESTAMPCOL06 TIMESTAMP (06) )
-- Table BUZZ06 created in ROB.
CREATE TABLE ROB.BUZZ12 ( TIMESTAMPCOL12 TIMESTAMP (12) )
-- Table BUZZ12 created in ROB.
<<SNIP: statements inserting a total of 100,001 rows>>

DSPOBJD rob/buzz06

Storage information:
Size . . . . . . . . . . . . . . . . : 1359872

DSPOBJD rob/buzz12
Storage information:
Size . . . . . . . . . . . . . . . . : 1622016

A difference of 262,144 bytes / 100,001 rows ≈ 2.6 bytes per row
<<SNIP>>

For a more accurate comparison of database file sizes:

• Use the "Data space size" from Display File Description (DSPFD) rather than the Size from Display Object Description (DSPOBJD); the size reported by the latter includes much more than just the data space and therefore could skew the results, unless the total of all the other components is equivalent for both objects [which, in the given script, presumably is the case].
• Instead of performing INSERTs, for which unpredictable dynamic /extent/ allocations could also skew the results, use Initialize Physical File Member (INZPFM) to populate both tables with the same number of rows in a consistent way [see the sketch following this list]. Even so, the data space sizes could still be skewed by an amount approaching two times the minimal extent size; e.g. if the data space for the larger record length fills to the point of barely overflowing into a new extent while the data space for the smaller record length fills to just under the cap of its last existing extent, a maximal distortion occurs in the measurements.
• The larger the number of rows used for the test, the smaller the effect of the aforementioned potential distortions; that the compared tables each have only one column, and that the column being measured is of a non-varying data type, also helps.
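
For illustration, a minimal sketch of that revised methodology; the member names default to the table names, the row count of 1,000,000 is an arbitrary choice, and the exact wording of the DSPFD output can vary by release:

CREATE TABLE ROB.BUZZ06 ( TIMESTAMPCOL06 TIMESTAMP (06) )
CREATE TABLE ROB.BUZZ12 ( TIMESTAMPCOL12 TIMESTAMP (12) )

INZPFM FILE(ROB/BUZZ06) MBR(*FIRST) RECORDS(*DFT) TOTRCDS(1000000)
INZPFM FILE(ROB/BUZZ12) MBR(*FIRST) RECORDS(*DFT) TOTRCDS(1000000)

DSPFD FILE(ROB/BUZZ06) TYPE(*MBR)  /* compare the "Data space size" values */
DSPFD FILE(ROB/BUZZ12) TYPE(*MBR)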

Based on my most recent prior reply in this thread, I expect the results using that revised methodology [with reduced distortion] would show a number even closer to 3 bytes per row [rather than the ~2.6 bytes per row measured above]; i.e. the TIMESTAMP(12) values require 13 bytes whereas the TIMESTAMP(6) values require 10 bytes, thus a difference of 3 bytes per row.
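
An independent cross-check, assuming the QSYS2.SYSCOLUMNS catalog view is available on the release in use and that its STORAGE column reports the per-column storage requirement in bytes:

SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE, LENGTH, STORAGE
  FROM QSYS2.SYSCOLUMNS
 WHERE TABLE_SCHEMA = 'ROB'
   AND TABLE_NAME IN ('BUZZ06', 'BUZZ12')

If that assumption holds, the STORAGE values for the two columns should show the 10-byte versus 13-byte difference directly, without any dependence on extent allocations.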

FWiW: Obtaining the results for additional variations, perhaps for each of TIMESTAMP(0), TIMESTAMP(3), and TIMESTAMP(8), should reveal whether the implementation for database columns reflects the expression stgInBytes = 7 + ((P + 1) / 2) [integer division] for TIMESTAMP(P), or whether the storage-in-bytes might be defined some other way.
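
For reference, what that expression predicts can be tabulated directly with a quick query [integer division assumed, since both operands are integers]:

SELECT P, 7 + ((P + 1) / 2) AS STG_IN_BYTES
  FROM ( VALUES 0, 3, 6, 8, 12 ) AS PRECISIONS ( P )

That predicts 7, 9, 10, 11, and 13 bytes for precisions 0, 3, 6, 8, and 12 respectively, consistent with the 10-byte and 13-byte figures cited above for TIMESTAMP(6) and TIMESTAMP(12).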

