OK, having done a bit more reading and a bit more testing, I might concede
that the problem is of my own making, although I think a big warning about
Identity Columns in the DB2 manual might help.

Managing to create the same error via an SQL insert which did not specify
the ID made me realise that the problem is really about what the Database
Manager thinks the next available ID is.

My belief at this moment is that once you create a table with one of these
ID columns, you must not use CPYF to add records to it - only SQL inserts.
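
For what it's worth, the recovery I've found in the SQL reference is to
resynchronise the generator with ALTER TABLE ... RESTART WITH. A sketch
using my file and column names, and assuming the current maximum UniqueID
is 38:

    -- RESTART WITH needs a literal, so find the current high-water mark first:
    SELECT MAX(UniqueID) FROM CISYS/TRGCTL00;

    -- Then restart the generator just past it:
    ALTER TABLE CISYS/TRGCTL00
      ALTER COLUMN UniqueID RESTART WITH 39;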

In some of my test changes I created a test version of the file via "cpyf
fromlib/fromfile tolib/tofile *n *n *add crtfile(*yes)". This creates the
new file and copies the records from the original file, keeping the ID
values they already have. But the new file is primed to start generating
next numbers at 1 - a problem waiting to happen.
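
You can see this coming before the first failure. If I'm reading the
catalog documentation correctly, QSYS2/SYSPARTITIONSTAT exposes the next
value the Database Manager will generate ( the library and file names here
are the ones from the CPYF above ):

    -- What does the database think the next identity value is?
    SELECT TABLE_SCHEMA, TABLE_NAME, NEXT_IDENTITY_VALUE
      FROM QSYS2/SYSPARTITIONSTAT
     WHERE TABLE_SCHEMA = 'TOLIB'
       AND TABLE_NAME   = 'TOFILE';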

I tried creating the file first with my SQL script and then copying the
data in via CPYF. Same problem. The records keep the ID values from
before, but the new file is primed to start with an ID of 1.

I tried creating the file via create table lib2/file as ( select * from
lib1/file ) with data. This creates the new file and keeps the original
IDs, but in this case the ID column is no longer an identity column
( maybe there are other clauses I could have used to make it so ).
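
Looking at the CREATE TABLE syntax diagram, the clause I suspect I was
missing is one of the copy-options - untested on my side, but something
like:

    -- Untested sketch: ask for the identity attribute to be carried across.
    -- Even if this works, the generator may still need a RESTART WITH
    -- afterwards, as above.
    CREATE TABLE LIB2/FILE AS
      ( SELECT * FROM LIB1/FILE )
      WITH DATA
      INCLUDING IDENTITY COLUMN ATTRIBUTES;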

The thing that worked was to create the empty table using my SQL script and
then use an SQL insert, excluding the ID column on the insert. This
populates the table, starting with ID 1, and subsequent inserts work.
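
For the record, the shape of that insert, plus the variant the reference
describes for keeping the original IDs despite GENERATED ALWAYS ( the
column names other than the ID are placeholders ):

    -- Let the Database Manager generate fresh IDs ( what worked for me ):
    INSERT INTO LIB2/FILE ( COL1, COL2 )
      SELECT COL1, COL2 FROM LIB1/FILE;

    -- Or keep the original IDs with OVERRIDING SYSTEM VALUE ( untested by
    -- me ); the generator would still need a RESTART WITH afterwards:
    INSERT INTO LIB2/FILE OVERRIDING SYSTEM VALUE
      SELECT * FROM LIB1/FILE;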

Lastly, I tried using CRTDUPOBJ to create the file. This copies the IDs,
and I expected it to also duplicate the next ID value correctly. In my case
the next insert failed, and although that doesn't quite tie in with my
earlier tests, I think it is because I haven't yet recreated and fixed the
file I was doing the CRTDUPOBJ on, so I am not surprised it had a problem -
although it didn't fail on the UniqueID I would have expected.

So, I think the RPGLE RLA is probably OK, but the issue has been caused by
me using CPYF on the file, which I would typically do if I was changing a
file - more or less: copy the original file off, create the new file and
then copy the data back.
I have started using CREATE OR REPLACE TABLE more where appropriate.
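
That route keeps the identity attribute and, as I read the documentation,
ON REPLACE PRESERVE ROWS keeps the data in place through the change - a
sketch against my file ( remaining columns elided ):

    -- Untested sketch: re-issue the DDL and keep the existing rows.
    CREATE OR REPLACE TABLE CISYS/TRGCTL00
    ( Trigger_UniqueID for column UniqueID Integer not null
        generated always as identity ( start with 1 increment by 1 cycle )
        primary key
      -- ... remaining columns as in the original script ...
    )
    ON REPLACE PRESERVE ROWS;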

I can't help thinking that it's a problem waiting to happen that someone
will use CPYF to add or replace records in one of these files.
Marvellous.

best regards,
Craig

On 22 June 2018 at 09:52, Craig Richards <craig@xxxxxxxxxxxxxxxx> wrote:

So, this issue has reared its head again on another file.

I think I see the problem.

Here is what happened this morning.
I signed on and the first thing I did was select and try to copy a record
using my maintenance program.
It failed with a duplicate key.
I came out of the program and went back in. It continued to fail.
I did this 2 or 3 times.

I signed off and back on.
I tried the same thing again. It worked...

Here is the way the ID column is defined:

Trigger_UniqueID for column UniqueID Integer not null
  generated always as identity ( start with 1 increment by 1 cycle )
  primary key


Here's the error:

Message ID . . . . . . :   CPF5009       Severity . . . . . . . :   10
Message type . . . . . :   Diagnostic
Date sent  . . . . . . :   22/06/18      Time sent  . . . . . . :   08:27:31

Message . . . . :   Duplicate record key in member TRGCTL00.
Cause . . . . . :   The output or update operation to member number 1
  record number 0 format TRGCTL0R, for member TRGCTL00 file TRGCTL00 in
  library CISYS, failed. Member number 1 record number 3 format TRGCTL0R
  has the same record key as member number 1 record number 0 format
  TRGCTL0R. If the record number is zero, the duplicate record key occurred
  on an output operation.
Recovery  . . . :   Change either record key so that the keys are unique.
  Then try your request again.

Here is how the file looked at that point:
( There are gaps in the UniqueID numbers, as you would expect when records
get deleted )

REL_REC   UNIQUEID
------------------
      1          3
      2          4
      3          9
      4         10
      5         11
      6         12
      7         16
      8         17
      9         18
     10         19
     11         20
     12         21
     13         22
     14         23
     15         24
     16         25
     17         26
     18         27
     19         28
     20         29
     21         37
     22         38
     23          5
     24          8

So, record number 3 has a UniqueID of 9.
That's the one it's complaining about.

It seems like it's not a coincidence that the highest relative record
number in the file has a UniqueID of 8.
( The record with a UniqueID of 9 IS NOT the one I was copying )

As I mentioned, I tried again a few times, which failed, and then it
eventually worked.
The file now looks like this:

REL_REC   UNIQUEID
------------------
      1          3
      2          4
      3          9
      4         10
      5         11
      6         12
      7         16
      8         17
      9         18
     10         19
     11         20
     12         21
     13         22
     14         23
     15         24
     16         25
     17         26
     18         27
     19         28
     20         29
     21         37
     22         38
     23          5
     24          8
     25         13

So, my few attempted writes that failed with a duplicate key caused the
UniqueID to bump up through the failing 9, 10, 11 and 12 to the gap at 13,
which is when the write succeeded.

So now I would expect the next two writes, for UniqueID 14 and 15, to be OK
as those are gaps, but then 16 will fail.
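
If anyone wants to see the behaviour without touching a real file, here is
a hypothetical repro in QTEMP that mimics what I think CPYF left me with
( made-up table and data, but it matches what I observed ):

    -- Seed a table, supplying the IDs ourselves so the generator never moves:
    CREATE TABLE QTEMP/DEMO
    ( ID Integer not null
        generated always as identity ( start with 1 increment by 1 cycle )
        primary key,
      TXT Char(10) );

    INSERT INTO QTEMP/DEMO ( ID, TXT ) OVERRIDING SYSTEM VALUE
      VALUES ( 1, 'A' ), ( 2, 'B' ), ( 4, 'D' );      -- gap at 3

    -- Mimic a CPYF-recreated file whose generator is primed at 1:
    ALTER TABLE QTEMP/DEMO ALTER COLUMN ID RESTART WITH 1;

    -- Each failed insert still consumes a generated value:
    INSERT INTO QTEMP/DEMO ( TXT ) VALUES ( 'X' );  -- fails: generates 1
    INSERT INTO QTEMP/DEMO ( TXT ) VALUES ( 'Y' );  -- fails: generates 2
    INSERT INTO QTEMP/DEMO ( TXT ) VALUES ( 'Z' );  -- works: 3 is the gap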

To reiterate the environment: it's a service program supporting the
database access of this file for a maintenance subfile and detail program.
The only maintenance of this new file anywhere on the system is via this
new program.
The service program uses a cursor to build a page of subfile records, but
all updates of the file are done via RPGLE RLA.
The RPG program uses the UniqueID to retrieve a record for UPDATE but
never manipulates the value of the UniqueID.
The RPG program DOES NOT use data structures on the WRITE operation to
limit the list of fields specified ( e.g. to try to take the UniqueID out
of the equation ).

My understanding is that once you create an IDENTITY column, you don't
need to concern yourself with how it gets populated on writes to the table
via RLA or SQL.
The Database Manager should take care of that.

Does anyone disagree with this?
Does this seem like a bug or am I making incorrect assumptions or doing
something foolish? ( it wouldn't be the first time... )

I'm going to change my writes to this table to SQL inserts.
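
A side benefit of the SQL insert is that the generated key can be read back
in the same statement - something like this ( SOMECOL and its value are
placeholders for the real column list ):

    -- Insert and read back the generated UniqueID in one statement:
    SELECT UniqueID
      FROM FINAL TABLE
      ( INSERT INTO CISYS/TRGCTL00 ( SOMECOL )   -- SOMECOL is a placeholder
          VALUES ( 'whatever' ) );

    -- Or, immediately after a plain INSERT on the same connection:
    VALUES IDENTITY_VAL_LOCAL();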

best regards,
Craig



On 15 June 2018 at 18:12, Justin Taylor <JUSTIN@xxxxxxxxxxxxx> wrote:

I just meant whatever you're writing to the PF, whether a DS or just
individual variables.



-----Original Message-----
From: Craig Richards [mailto:craig@xxxxxxxxxxxxxxxx]
Sent: Thursday, June 14, 2018 4:22 PM
To: RPG programming on the IBM i (AS/400 and iSeries) <
rpg400-l@xxxxxxxxxxxx>
Subject: Re: Identity Columns

I'm not sure which buffer you mean, Justin.

But if you are talking about the memory location that the program uses
for the ID Column of the table - well that is never referenced by the
program.

If assignment of ID Column values was down to my program, the first ID
would write as zero and all subsequent ones would fail as duplicates.

The file is prefixed with Cnt_.

The ID column is called UniqueID.

The field Cnt_UniqueID is never referenced in the program.

Debugging the program, stopped on the WRITE: the file contains UniqueID
values of 1, 2, 3, 4, 5, 6, 7, 8 and 32.

The first test I try is to add a record, so the Service Program has not
yet accessed the table via RLA.
Well, it has to build a subfile page, but that bit is done in SQL.

The value of Cnt_UniqueID is 0 because no RLA has occurred yet in the
Service Program.

After the WRITE there is a new UniqueID of 46 on the file. I've been
adding and deleting a few records in testing, which would account for the
gap.
So the RLA WRITE ( or whatever goes on between my program and the
Database Manager ) has ignored the value of 0 which was in Cnt_UniqueID and
generated the new value of 46.

Second test: I update a record with UniqueID 5 and then go on to copy a
record to create a new record on the file.
This time, before the WRITE, there is a value of 5 in Cnt_UniqueID ( from
the RLA access for the previous update ). After the WRITE there is a new
UniqueID of 47 in the file.

So this all appears as I expected: for RLA writes to this table, the
values my program has for the ID field are inconsequential.
Hence my confusion about how a duplicate key error can occur if the
Database Manager is looking after it.

Anyway, I've tried adding / deleting / copying / updating records and
switching from one library to another ( after coming out of the program
which correctly closes everything down AND reclaims the named activation
group ) and now it's decided to behave itself.

I'll just have to wait and see if it appears again...
I like a good mystery /sigh

As usual - thanks all for your thoughts. It is very helpful to share
knowledge on these forums and I'm always grateful for the people who take
time to add their thoughts.
best regards,
Craig
