I'm in the process of converting my trigger procedures over from the Big Array overlay method (a 32,767-element array of 1-byte elements) to one in which I use pointer math to find the offsets. The question that comes up is: what does data management put into the trigger buffer when the file record is really big? Say I have 32,767 one-byte fields in a single record. Does it write the 96 fixed bytes plus four variable sections of 32,767 bytes each (the before buffer, the before null map, the after buffer, and the after null map)?
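
For reference, here's how I picture the 96-byte fixed portion, sketched in C along the lines of the QSYSINC trigger buffer include. The field names are my own and the layout is from memory, so treat this as an assumption and check it against the include on your release:

/* Sketch of the 96-byte fixed trigger buffer header, patterned after
 * the QSYSINC include; assumes 4-byte ints and no compiler padding
 * (the fields happen to fall on natural boundaries anyway).        */
typedef struct {
    char file_name[10];        /* physical file name                */
    char library_name[10];     /* library                           */
    char member_name[10];      /* member                            */
    char trigger_event[1];     /* 1=insert, 2=delete, 3=update      */
    char trigger_time[1];      /* 1=after, 2=before                 */
    char commit_lock_level[1]; /* commitment control lock level     */
    char reserved_1[3];
    int  data_ccsid;           /* CCSID of the record data          */
    char reserved_2[8];
    int  old_rec_offset;       /* offset to before image            */
    int  old_rec_len;          /* length of before image            */
    int  old_null_map_offset;  /* offset to before null byte map    */
    int  old_null_map_len;     /* length of before null byte map    */
    int  new_rec_offset;       /* offset to after image             */
    int  new_rec_len;          /* length of after image             */
    int  new_null_map_offset;  /* offset to after null byte map     */
    int  new_null_map_len;     /* length of after null byte map     */
    char reserved_3[16];       /* pads the header out to 96 bytes   */
} Trigger_Buffer_t;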
If so, then the Big Array overlay method I've been using would not have worked anyway for big files like this, would it, since it depends on starting with a data structure and overlaying it with an array of only 32,767 bytes? And what will data management write to the buffer when we get to the really big fields (BLOBs, CLOBs, etc.)?
If I switch over to calculating pointers to the four variable areas from the trigger buffer's beginning address plus the offsets (plus 1), will that handle those really big fields? Or will the record in the buffer just contain some sort of pointer to that 2 MB image in the BLOB?
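
To make the second approach concrete, here is the pointer arithmetic I have in mind, sketched in C against the header layout above. In C the offsets are used as-is; the "plus 1" only applies in RPG, where %SUBST and array indexing start at 1. The function and variable names are mine, purely for illustration:

#include <string.h>

/* Sketch: resolve the four variable-length areas from the header
 * offsets; trgbuf is the address data management passes to the
 * trigger program, Trigger_Buffer_t is the header sketched above. */
static void locate_areas(const char *trgbuf)
{
    const Trigger_Buffer_t *hdr = (const Trigger_Buffer_t *)trgbuf;

    /* Each area sits at (buffer start + offset).  No overlay array
     * is involved, so nothing here caps the record at 32,767 bytes;
     * the limit is whatever data management actually writes.       */
    const char *before      = trgbuf + hdr->old_rec_offset;
    const char *before_null = trgbuf + hdr->old_null_map_offset;
    const char *after       = trgbuf + hdr->new_rec_offset;
    const char *after_null  = trgbuf + hdr->new_null_map_offset;

    /* Example: copy the after image using the length the header
     * reports, rather than a hard-coded record size.               */
    char local_copy[65536];
    if (hdr->new_rec_len > 0 && (size_t)hdr->new_rec_len <= sizeof local_copy)
        memcpy(local_copy, after, (size_t)hdr->new_rec_len);

    (void)before; (void)before_null; (void)after_null;
}

The open question, of course, is what new_rec_len reports and what actually sits at those offsets once LOB columns enter the picture.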