On 05-Aug-2016 14:03 -0500, Mark S Waterbury wrote:
<<SNIP>>
_METHOD A -- checksums or hash code_
1. when you first read the record (with no lock), compute a
checksum or "hash" of the contents of the entire record buffer.
2. present the data on-screen for the user to (possibly) update.
3. when the user has updated some fields, re-read the original
database record again, with record lock this time, into another
buffer and re-compute the check-sum or hash code. If the hash
or checksum is still the same then no one else has updated the
record in the meantime, so it is now "safe" to update that
record with the changes made by the user. (The update will
release the record lock automatically.)
4. if the check-sum or hash does not "match", you need to release
the record lock and inform the user that another person has
updated the same record -- present the (newly updated) data
again, and ask the user to re-enter the changes.
(... lather, rinse, repeat ... as needed until the update
is successful or the user presses F12=Cancel ...)
The above logic is flawed: while a changed checksum/hash conclusively shows that the record was changed, an unchanged checksum/hash cannot safely be taken to mean that _no changes_ were made. Only a comparison against a previously saved record-image can definitively detect changes; i.e. additional work is required for that latter case. Restated: an unchanged checksum/hash for a given value does not imply the original value is unchanged, but a changed checksum/hash does imply at least some change was made to the original value.
By its nature, a checksum/hash serves only as an indication that there is some /probability/ that a same-valued checksum/hash implies a like value of whatever was checksummed/hashed; two different record images can produce the same result. Probably not the best idea [for a business application] to effectively be placing bets that prior row-changes are not going to be lost/overlaid by the current update. A /better/ checksum/hash improves the odds, but only an actual byte-comparison of the record images can be definitive.
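Not from the original post, but to make the collision point concrete: with a deliberately weak checksum, two different record images can produce the same value. The Python sketch below uses a naive sum-of-bytes checksum and made-up field data; a stronger algorithm only lowers the odds of such a false match, it never eliminates them.

# naive 16-bit sum-of-bytes checksum, for illustration only
def naive_checksum(record: bytes) -> int:
    return sum(record) % 65536

before = b"QTY=12 BIN=34"   # record image as first read
after  = b"QTY=34 BIN=12"   # someone else swapped the two values meanwhile

print(naive_checksum(before) == naive_checksum(after))   # True  -- checksums match
print(before == after)                                    # False -- the data changed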
3a) If the hash or checksum of the re-retrieved record is still the same, it remains unknown whether anyone else might have updated the record in the meantime, so the entire re-retrieved record still needs to be compared with the original/saved copy of that record retrieved previously.
3b) If there is no mis-compare between the saved record-image and the re-retrieved record-image, then "it is now 'safe' to update that record with the changes made by the user. (The update will release the record lock automatically.)"
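A rough Python sketch of those revised steps (mine, not from the original post). The record data and the read routine are stand-ins for the physical file, and the actual record-lock I/O is only noted in comments.

import hashlib

stored_record = b"CUST=1001 NAME=SMITH  BAL=0150"   # stand-in for the row on disk

def read_record() -> bytes:
    """Stand-in for a database read; a real program would read (and later lock) the row."""
    return stored_record

# step 1: first read, no lock -- keep BOTH the hash and the full record image
original_image = read_record()
original_hash = hashlib.sha256(original_image).digest()

# step 2: user keys changes on screen (simulated here)
updated_image = b"CUST=1001 NAME=SMITH  BAL=0175"

# step 3: re-read, this time with a record lock in a real program
current_image = read_record()
if hashlib.sha256(current_image).digest() != original_hash:
    print("Hash differs: record was changed; redisplay and re-enter")    # step 4
elif current_image != original_image:                                    # step 3a
    print("Hash matched but bytes differ: record was changed; redisplay and re-enter")
else:                                                                    # step 3b
    stored_record = updated_image   # safe to update; the update releases the lock
    print("Update applied")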
_METHOD B -- a similar but simpler approach_
A simpler, low-cost variation on the above relies on adding a
single "UPDATEID" field to each file you want to update "lock-free"
-- it can be a binary integer or a packed numeric field; if binary,
you can use a 2-byte or 4-byte integer, or if packed, at least 5
digits. Proceed as follows:
* whenever a record is first added to the file, initialize this
UPDATEID field to zero. (Could be handled by an INSERT trigger)
* whenever any application program updates any record in this file,
you must ensure that the UPDATEID value is always incremented by 1
The algorithm for updating is similar to the above, with these
changes. Instead of computing a checksum, you just remember
the UPDATEID value ... and compare it, as follows:
1. when you first read the record (with no lock), save the UPDATEID
value for use later.
2. present the data on-screen for the user to possibly update.
3. when the user has updated some fields, re-read the original
record again, with a record lock this time, into another
buffer. If the UPDATEID is still the same as the saved
value, then no one else has updated the record in the
meantime, so it is now "safe" to update that record with the
changes made by the user. Increment the value of UPDATEID in
the record buffer about to be updated. (This update I/O will
also release the record lock automatically.) (An UPDATE
trigger could handle incrementing the UPDATEID field.)
4. if the UPDATEID does not "match", you need to release the
record lock and inform the user that the record has been
updated by someone else, and present the (newly updated) data
again, and ask the user to re-enter any changes. (... lather,
rinse, repeat ... as needed until the update is successful or
the user presses F12=Cancel ...)
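For what it is worth, a minimal sketch of METHOD B follows, written in Python against SQLite (standard library) so it runs as-is; the table, columns, and values are made up, and here the "compare the saved UPDATEID" rule is folded into the WHERE clause of the UPDATE itself rather than done under an explicit record lock -- the effect is the same: the update only happens if no one else got there first.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, qty INTEGER, updateid INTEGER)")
con.execute("INSERT INTO item VALUES (1, 12, 0)")          # UPDATEID initialized to zero
con.commit()

# step 1: first read, no lock -- remember the UPDATEID
qty, saved_updateid = con.execute(
    "SELECT qty, updateid FROM item WHERE id = 1").fetchone()

# steps 2/3: the user keys a change; update only if UPDATEID is unchanged,
# and increment it as part of the same statement
cur = con.execute(
    "UPDATE item SET qty = ?, updateid = updateid + 1 "
    "WHERE id = 1 AND updateid = ?",
    (qty + 5, saved_updateid))
con.commit()

# step 4: a row count of zero means someone else updated the record first
if cur.rowcount == 0:
    print("Record was updated by someone else; please re-enter the change")
else:
    print("Update applied")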
I have not had an opportunity to use it, but I believe the ROW CHANGE TIMESTAMP is meant to suffice for what is described above.
The main "down side" to METHOD B is that _everyone must follow_ these
rules_ to ensure that the UPDATEID is always incremented whenever a
record is updated. (The "increment" logic could be put in a trigger
to make that part easier.) But, you must still add logic in your
applications to save the UPDATEID on the first READ and compare it
again, just prior to any update ... -- if anyone forgets or ignores
these rules, your users will suffer "lost changes" like what happens
if you use no record locking at all.
With the updated value implemented in the database via a TRIGGER, or by using the ROW CHANGE TIMESTAMP, there should be fewer worries; the applications can still operate by referencing the data from an effective UPDATEID column, but no longer have to remember to update it, because the database [trigger] implements that part of the rule.
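A sketch of pushing that rule into the database itself, again in Python with SQLite so it is self-contained; on DB2 for i the equivalent would be an actual UPDATE trigger on the file, or the ROW CHANGE TIMESTAMP column. The table, trigger, and column names here are illustrative only.

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, qty INTEGER, "
            "updateid INTEGER NOT NULL DEFAULT 0)")

# the trigger bumps UPDATEID whenever an update left it untouched,
# so an application that "forgets" the rule is still covered
con.execute("""
    CREATE TRIGGER item_bump_updateid
    AFTER UPDATE ON item
    FOR EACH ROW
    WHEN NEW.updateid = OLD.updateid
    BEGIN
        UPDATE item SET updateid = OLD.updateid + 1 WHERE id = NEW.id;
    END""")

con.execute("INSERT INTO item (id, qty) VALUES (1, 12)")
con.execute("UPDATE item SET qty = 20 WHERE id = 1")   # application never touched UPDATEID
print(con.execute("SELECT updateid FROM item WHERE id = 1").fetchone()[0])   # prints 1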