Specifies the number of values of the identity column sequence that the
database manager preallocates and keeps in memory. The minimum value that
can be specified is 2, and the maximum is the largest value that can be
represented as an integer. The default is 20.
In certain situations, such as system failure, all cached identity column
values that have not been used in committed statements are lost, and thus,
will never be used. The value specified for the CACHE option is the maximum
number of identity column values that could be lost in these situations.
Hello,
For quite some time now, the SQL interface of IBM i has offered automatic
generation of a unique identity field value on INSERTs. I assume this is an
atomic operation, so concurrent INSERTs do not cause a duplicate key. Is
this correct? I also assume this facility copes with the peculiarities of
commitment control and still does not generate duplicate keys. Is this
assumption also correct?
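For reference, the kind of thing I mean is sketched below: a minimal
embedded SQL example in ILE C (compiled with something like CRTSQLCI).
The ORDERS table and its columns are invented for illustration, and the
CACHE attribute matches the documentation text quoted above.

    #include <stdio.h>

    EXEC SQL INCLUDE SQLCA;

    /* Assumed table, created elsewhere with an identity column, e.g.  */
    /*   CREATE TABLE ORDERS                                           */
    /*     (ID     INTEGER GENERATED ALWAYS AS IDENTITY                */
    /*             (START WITH 1 INCREMENT BY 1 CACHE 20),             */
    /*      CUSTNO INTEGER,                                            */
    /*      AMT    DECIMAL(9, 2))                                      */
    int main(void)
    {
        EXEC SQL BEGIN DECLARE SECTION;
        long newid = 0;
        EXEC SQL END DECLARE SECTION;

        /* The ID column is simply omitted from the column list; the   */
        /* database assigns the next identity value at INSERT time.    */
        EXEC SQL INSERT INTO ORDERS (CUSTNO, AMT)
                 VALUES (42, 19.99);
        if (sqlca.sqlcode != 0) {
            printf("insert failed, SQLCODE = %ld\n", sqlca.sqlcode);
            return 1;
        }

        /* Value assigned by the most recent INSERT in this job.       */
        EXEC SQL VALUES INT(IDENTITY_VAL_LOCAL()) INTO :newid;
        printf("assigned id: %ld\n", newid);
        return 0;
    }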
Now, assume a really old release of the OS (V4R5) on a very low-end
machine (a Model 150). There are no identity columns, and SQL is usually
outperformed by native I/O calls by an order of magnitude, so using SQL is
generally undesirable at best.
How were "atomic" updates to PFs handled back then? Was this even possible?
For use cases without commitment control, I currently tie an LF containing
just the identity field to the PF. Speaking in RPG lingo, I then do a SETGT
and a READP on this LF to obtain the highest ID value. If there is no BOF
condition, I add 1 to the obtained value for the next WRITE. Of course, the
same applies to the C record I/O API calls as well.
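In C, that part might look roughly like the sketch below. The file, library
and record names are made up, and I am going from memory on the recio
details, so treat it as an illustration rather than working code.

    #include <recio.h>

    /* Hypothetical layout: MYIDLF is keyed on ID and contains only    */
    /* that one field.                                                 */
    typedef struct { int id; } idlf_rec_t;

    int next_id(void)
    {
        _RFILE    *lf;
        _RIOFB    *fb;
        idlf_rec_t last;
        int        id = 1;                   /* default for empty file */

        lf = _Ropen("MYLIB/MYIDLF", "rr");   /* read-only, key order   */
        if (lf == NULL)
            return -1;                       /* open failed            */

        /* SETGT *HIVAL + READP equivalent: the last record in key     */
        /* order holds the highest ID.                                 */
        fb = _Rreadl(lf, &last, sizeof(last), __DFT);
        if (fb->num_bytes == sizeof(last))
            id = last.id + 1;

        _Rclose(lf);
        return id;
    }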
I was also considering putting a loop-until-no-error around the WRITE,
incrementing the ID value in each iteration to handle concurrent writes
gracefully.
This clearly isn't atomic, but it works fairly well for a single-user
machine like mine. ;-)
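Roughly, the loop I have in mind looks like this (again C record I/O,
names invented; the PF pointer would come from something like
_Ropen("MYLIB/MYPF", "ar"), and a real program would check errno or the
I/O feedback area to confirm the failure really is a duplicate key):

    /* Hypothetical PF record layout.                                  */
    typedef struct { int id; char text[50]; } mypf_rec_t;

    /* Write the record, bumping the ID and retrying when the WRITE    */
    /* fails, on the assumption that a concurrent job took the value.  */
    int write_with_retry(_RFILE *pf, mypf_rec_t *rec, int max_tries)
    {
        _RIOFB *fb;
        int     try;

        for (try = 0; try < max_tries; try++) {
            fb = _Rwrite(pf, rec, sizeof(*rec));
            if (fb->num_bytes == sizeof(*rec))
                return 0;            /* write succeeded                */
            rec->id++;               /* collision: take the next value */
        }
        return -1;                   /* gave up                        */
    }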
The interesting part starts when I want to use commitment control. Now
it's possible to have two WRITEs pending in the journal. Both uncommitted
records have obtained the same ID value from said LF. Committing the second
transaction to the PF would raise a duplicate key error. How do I recover
from this? As far as I understand, just incrementing the ID value and
writing again would not help, because the erroneous WRITE with the
duplicate key is still in the journal, waiting to be committed. Issuing a
ROLLBACK is undesirable, because it would throw away prior, successful
changes to PFs. I know that it's programmatically possible to remove
entries from the journal, but this feels like it would get messy pretty
quickly: how would I delete just my own erroneous WRITE from the journal
and no other entries?
What was best practice back in the day for handling the described cases?
Thanks!
:wq! PoC