Why would the variable length and LOB data not be allocated
contiguously? They are both variable-length data; e.g. a 50MB LOB
column may hold only 15K of data.
I don't mean that each value's space is contiguous, but that the entire
block of "big" data is contiguous for a row. And when I refer to a 50MB
LOB, I mean that the actual data is 50MB, not the max. So, IF the
storage is contiguous, then you could have this type of structure, where
the pipes (|) are field delimiters and not actually stored in the row:
Segment1: ThisIsTheVarCharFieldData|ThisIsTheLOBDataThatIs50MB|ThisIsTheOtherLOBDataThatIs50MB
So, if you want to change the varchar data from ThisIsTheVarCharFieldData
to ThisIsTheVarCharFieldDataBigger, you not only have to update the
varchar space, you also need to copy the LOBs. Net effect: changing the
25-byte description to 31 bytes results in over 200MB of IO (100MB to
read the old LOB values, 100MB to write them somewhere new, and the 60
bytes of the varchar) instead of just 60 bytes of IO.
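
To put rough numbers on that, here's a quick back-of-envelope sketch in
Python (not any engine's actual code; the row is just the hypothetical
one above with two 50MB LOBs and a 25-byte varchar):

MB = 1024 * 1024

def contiguous_update_io(old_varchar_len, new_varchar_len, lob_sizes):
    # The whole variable-length block gets relocated, so every LOB byte
    # is read from the old location and written to the new one.
    read_bytes = old_varchar_len + sum(lob_sizes)
    write_bytes = new_varchar_len + sum(lob_sizes)
    return read_bytes + write_bytes

# 25-byte varchar growing to 31 bytes, alongside two 50MB LOB values.
io = contiguous_update_io(25, 31, [50 * MB, 50 * MB])
print(f"{io / MB:.1f} MB of IO to change 25 bytes to 31")  # ~200 MB
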
If you stored the 3 variable-length values in 3 segments, then you'd
have this on disk:
Segment1: ThisIsTheVarCharFieldData
Segment2: ThisIsTheLOBData
Segment3: ThisIsTheOtherLOBData
And you could change any one of the values without affecting the others.
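
To make that concrete, here's a tiny sketch of the three-segment idea
(purely a hypothetical layout, just to show that growing the varchar
never touches the LOB segments):

row = {
    "segment1": b"ThisIsTheVarCharFieldData",
    "segment2": b"ThisIsTheLOBData",       # stands in for ~50MB of LOB data
    "segment3": b"ThisIsTheOtherLOBData",  # stands in for ~50MB of LOB data
}
lobs_before = (row["segment2"], row["segment3"])

# Grow the varchar: only segment1 is rewritten.
row["segment1"] = b"ThisIsTheVarCharFieldDataBigger"

assert (row["segment2"], row["segment3"]) == lobs_before
print("varchar grew; the LOB segments were never read or written")
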
As for the predictive read-ahead process to avoid faulting: with LOBs
that can be extremely dangerous. If I'm reading a LOB, I know it's big
and I expect it to take a little time. What you're doing is saving me
that time; great. HOWEVER, you don't know if I'm going to read that LOB,
and in many cases I may not, but in order to save me some read time on
the LOB you've got to toss (in the case above) 200MB of other data out
of memory to make room for LOBs I'm never going to reference. Ouch. Or
even worse, if I update 10 of these rows in a SQL statement that touches
a varchar field, it's 2GB of IO and 1GB of memory flushing just to run
that statement. Ewwww...
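
Same back-of-envelope arithmetic for the 10-row case (same hypothetical
rows with two 50MB LOBs each, ignoring the handful of varchar bytes):

MB = 1024 * 1024
GB = 1024 * MB

rows_updated = 10
lob_bytes_per_row = 2 * 50 * MB                  # two 50MB LOB values

io_bytes = rows_updated * 2 * lob_bytes_per_row  # read old LOBs + write new copies
cache_churn = rows_updated * lob_bytes_per_row   # old LOBs dragged into the buffer cache

print(f"IO for the statement: ~{io_bytes / GB:.0f} GB")            # ~2 GB
print(f"Memory churned to hold LOBs: ~{cache_churn / GB:.0f} GB")  # ~1 GB
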
-Walden