One thing to realize is that an identity column isn't guaranteed to have a complete set of values.
That's why you should add either a key constraint (primary or unique key) on the identity column or a unique index over the identity column.
Duplicates only occur if you are using CYCLE and the sequence restarts numbering from the lowest value.
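
A minimal DDL sketch of that advice, with hypothetical table and column names that are not from this thread: the identity column carries a primary key constraint and NO CYCLE, so a duplicate value is rejected with an error instead of slipping in silently.

-- Hypothetical example: identity column protected by a primary key
-- constraint; NO CYCLE prevents restarting from the lowest value.
CREATE TABLE orders (
    order_id   INTEGER GENERATED ALWAYS AS IDENTITY
               (START WITH 1 INCREMENT BY 1 NO CYCLE),
    order_desc VARCHAR(50) NOT NULL,
    CONSTRAINT orders_pk PRIMARY KEY (order_id)
);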

Kind regards / Best regards

Birgitta Hauser
Modernization – Education – Consulting on IBM i
Database and Software Architect
IBM Champion since 2020

"Shoot for the moon, even if you miss, you'll land among the stars." (Les Brown)
"If you think education is expensive, try ignorance." (Derek Bok)
"What is worse than training your staff and losing them? Not training them and keeping them!"
"Train people well enough so they can leave, treat them well enough so they don't want to. " (Richard Branson)
"Learning is experience … everything else is only information!" (Albert Einstein)

-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of Charles Wilt
Sent: Monday, 5 February 2024 16:04
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxxxxxxxx>
Subject: Re: Atomic updates in the olden days

One thing to realize is that an identity column isn't guaranteed to have a complete set of values.

CACHE integer
Specifies the number of values of the identity column sequence that the database manager preallocates and keeps in memory. The minimum value that can be specified is 2, and the maximum is the largest value that can be represented as an integer. The default is 20.
In certain situations, such as system failure, all cached identity column values that have not been used in committed statements are lost, and thus, will never be used. The value specified for the CACHE option is the maximum number of identity column values that could be lost in these situations.


IIRC, this used to mention something about each job getting its own set of cached values.

But I don't have any reference before 7.1.
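
If those lost values matter, the cache can be reduced or switched off, at the cost of some insert performance; even then the numbering is not gap-free, because rolled-back inserts still consume values. A hedged sketch against the hypothetical table from above, assuming the identity attributes can be altered in place:

-- Hypothetical: keep no preallocated identity values in memory, so a
-- system failure cannot discard a cached range of numbers.
ALTER TABLE orders ALTER COLUMN order_id SET NO CACHE;
-- or keep a small cache (the documented minimum is 2):
-- ALTER TABLE orders ALTER COLUMN order_id SET CACHE 2;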

Charles


On Mon, Feb 5, 2024 at 3:07 AM Patrik Schindler <poc@xxxxxxxxxx> wrote:

Hello,

For quite some time now, the SQL interface of IBM i has offered
automatic generation of a unique identity field value on INSERTs. I
assume this is an atomic operation, so concurrent INSERTs do not cause
a duplicate key. Is this correct? I also assume this facility works
with the peculiarities of commitment control and still does not
generate duplicate keys. Is this assumption also correct?
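
As a purely illustrative sketch (hypothetical table, and taking the answers above at face value): the key is assigned by the database inside the INSERT itself, and the assigned value can be read back in the same statement, so concurrent jobs never have to agree on a number beforehand.

-- Hypothetical table with an identity key; two jobs can run this
-- concurrently and each receives its own generated order_id.
INSERT INTO orders (order_desc) VALUES ('first line');

-- One way to see which value was just assigned, in the same statement:
SELECT order_id
  FROM FINAL TABLE (INSERT INTO orders (order_desc) VALUES ('second line'));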

Now, assume a really old release of the OS (V4R5) on a very low-end
machine (a Model 150). There are no identity columns, and SQL is
usually outperformed by native I/O calls by an order of magnitude, so
using SQL is generally undesirable there.

How were "atomic" updates to PFs handled back then? Was this even possible?

For use cases without commitment control, I currently tie an LF with
just the identity field to the PF. Speaking in RPG lingo, I then do a
SETGT and a READP on this LF to obtain the highest ID value. If there
is no BOF condition, I add 1 to the obtained value for the next WRITE.
Of course, the same applies to the C record I/O API calls as well.
I was also considering putting a loop-until-no-error around the WRITE,
incrementing the ID value in each iteration to catch concurrent writes
in a graceful way.

This clearly isn't atomic but works fairly well for a single user
machine like mine. ;-)
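
Purely for illustration, the same read-the-highest-value-then-add-one idea can be sketched in SQL (hypothetical PF and column names; the code described above uses native I/O, and SQL is not attractive on this box anyway). It also makes the race visible: between reading MAX(id) and the write, another job can pick the same number, which is exactly what the loop-until-no-error retry is meant to catch.

-- Hypothetical sketch against a plain PF without an identity column.
-- The highest existing ID plus 1 is computed and written in one statement,
-- but a concurrent writer can still grab the same value in between, so a
-- unique key (and a retry on duplicate-key failure) is still required.
INSERT INTO ordpf (id, descr)
    SELECT COALESCE(MAX(id), 0) + 1, 'next line'
      FROM ordpf;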

The interesting part starts when I want to use commitment control. Now
it's possible to have two WRITEs pending in the journal. Both
uncommitted records have obtained the same ID value from said LF.
Committing the second transaction to the PF would throw a duplicate
key error. How would I recover from this? As far as I understand, just
incrementing the ID value and doing the WRITE again would not help,
because the erroneous WRITE with the duplicate key is still in the
journal, waiting to be committed. Issuing a ROLLBACK is undesirable,
because it would throw away successful prior changes to PFs. I know
that it's programmatically possible to remove entries from the
journal, but this feels like it would get messy pretty quickly: how
would I delete just my own erroneous WRITE from the journal and no
other entries?

What was best practice back in the day for handling the described cases?

Thanks!

:wq! PoC


