


Hi, Vern
 
Rather than using MATTOD I would recommend using MATMDATA with materialization option x'0004'.  MATTOD relies on local time which, in the case of daylight saving time, could potentially return a duplicate value when "falling back".  MATMDATA option x'0004' uses UTC time, which is not subject to DST.
 
GENUUID uses time as one of the inputs to the determination of the return value.  This time value should also be based on UTC.
 
*DTAARA access is indeed very fast.  I have, however, seen cases where high volumes of activity caused severe contention for a *DTAARA.  The same is true (and perhaps more so) for files.  There's fast and then there's super fast :)  Any of the various MI instructions (MATMDATA, GENUUID, CMPSWP) will tend to outperform an approach that runs through additional code paths involving the database or work control (which owns data areas).
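To illustrate the CMPSWP approach in a portable way, here is a minimal sketch in Python rather than RPG/MI. Python has no true hardware compare-and-swap, so a small guard lock stands in for the atomicity the CMPSWP instruction supplies; the point of the sketch is the optimistic read/compare/retry protocol itself, not the primitive.

```python
import threading

_guard = threading.Lock()  # stands in for the hardware atomicity CMPSWP provides

def compare_and_swap(cell, expected, new):
    # Model of the CMPSWP protocol: store `new` only if the cell still
    # holds `expected`, and report whether the swap happened.
    with _guard:
        if cell[0] == expected:
            cell[0] = new
            return True
        return False

def next_number(cell):
    # Optimistic retry loop: read the current value, attempt the swap,
    # and retry if another "job" got there first.
    while True:
        seen = cell[0]
        if compare_and_swap(cell, seen, seen + 1):
            return seen + 1

counter = [0]

def worker():
    for _ in range(1000):
        next_number(counter)

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter[0])  # 4000: no increment lost despite four concurrent workers
```

On the i the retry loop is the same shape, but CMPSWP itself provides the atomicity, so no lock (and no trip through database or work control) is needed.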
 
Bruce


Bruce
Bruce Vining Services
507-206-4178

--- On Wed, 11/19/08, Vern Hamberg <vhamberg@xxxxxxxxxxx> wrote:

From: Vern Hamberg <vhamberg@xxxxxxxxxxx>
Subject: Re: problem with duplicate records
To: "RPG programming on the AS400 / iSeries" <rpg400-l@xxxxxxxxxxxx>
Date: Wednesday, November 19, 2008, 6:29 AM

One can also use MATTOD and CVTCH to get a 16-character value that is
guaranteed to be unique by the system - if the documentation is right.
There are 16 values per character position, so the total possible is 16
to the 16th power (18,446,744,073,709,551,616) - however, not every
value is used - I don't know the practical limit, but it still has to be
considerable.

There is also the UUID - unique universal ID (universal unique?) - I
think the function is GETUUID but not sure - look it up in the API
finder. Another unique 16-character value, as I understand it.
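The MI instruction Bruce mentions is GENUUID. As a language-neutral illustration only (this is Python's standard library, not the MI instruction), a UUID is a 16-byte value whose generation scheme makes collisions vanishingly unlikely:

```python
import uuid

# Draw a batch of version-4 (random) UUIDs; each is a 16-byte value,
# conventionally displayed as 32 hex digits.
ids = [uuid.uuid4() for _ in range(10_000)]

assert all(len(u.bytes) == 16 for u in ids)   # always 16 bytes
assert len(set(ids)) == len(ids)              # no collisions in the batch
print(ids[0])
```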

These might be overkill - one can lock a data area, get the number,
increment it and save the value, update and release the data area - all
very fast.
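The lock/get/increment/update/release pattern Vern describes can be sketched as follows. This is Python, not RPG: the `DataArea` class and its lock are a hypothetical stand-in for a *DTAARA and the exclusive lock taken before reading and updating it.

```python
import threading

class DataArea:
    # Toy stand-in for a *DTAARA holding the next archive number; the lock
    # models the exclusive allocation taken before reading/updating it.
    def __init__(self, start=0):
        self._lock = threading.Lock()
        self._next = start

    def next_archive_number(self):
        with self._lock:  # lock, read, increment, update, release
            self._next += 1
            return self._next

da = DataArea()
issued = []

def job():
    for _ in range(500):
        issued.append(da.next_archive_number())

jobs = [threading.Thread(target=job) for _ in range(4)]
for j in jobs:
    j.start()
for j in jobs:
    j.join()
print(len(set(issued)))  # 2000: every number issued exactly once
```

Because the read and the update happen under one lock, no two jobs can ever be handed the same number.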

Bruce Vining wrote:
I'm a little confused about the scenario: the first note suggests an archive number, while the second suggests a test for duplication of client-calculated data. If this is simply a sequential archive number, though, several possible solutions exist.

If volume is low, and expected to remain low, the external procedure could
access a database record for update which contains the next available archive
number. The procedure uses this number as is, adds 1 to the value and then
updates the record with this new value for the next reader. A variant of this
would be to use a *DTAARA.

If volume is high, the external procedure could utilize a shared memory
location and use the Compare and Swap instruction CMPSWP. I discuss CMPSWP in
the article found at
http://www.mcpressonline.com/programming/cl/the-cl-corner-incrementing-a-numeric-value-across-jobs.html
The article is CL-oriented, but you shouldn't have much difficulty mapping it to RPG.

Other solutions also exist, such as SQL ROWID.

Bruce

Bruce
Bruce Vining Services
507-206-4178

--- On Wed, 11/19/08, David FOXWELL <David.FOXWELL@xxxxxxxxx> wrote:

From: David FOXWELL <David.FOXWELL@xxxxxxxxx>
Subject: RE: problem with duplicate records
To: "RPG programming on the AS400 / iSeries"
<rpg400-l@xxxxxxxxxxxx>
Date: Wednesday, November 19, 2008, 5:17 AM

Sorry, the contents of my file are misleading. The time isn't part of the reference number.

ArchiveFile
Product    Client    Ref      Time
Product1   Client1   RefNo1   10:41:57
Product2   Client2   RefNo1   10:41:57


Summing up: two jobs are writing to the same file, and one job is quicker than the other. The first job calculates the data to write, tests that it doesn't already exist, and writes it. In RPG, how should we prevent another job from doing the same and writing the same data before the first one?
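The failure mode here is the gap between the test and the write: both jobs can pass the "does it exist?" test before either writes. A minimal sketch in Python (not RPG; the set and lock are hypothetical stand-ins for the archive file and for whatever serializes access to it, such as a record lock or allocated object on the i) shows the fix: make the test and the write one indivisible step.

```python
import threading

archive = set()           # stands in for ArchiveFile, keyed on the reference number
archive_lock = threading.Lock()
results = []

def write_ref(ref):
    # The existence test and the write happen as one indivisible step,
    # so a second job attempting the same reference sees the first write.
    with archive_lock:
        if ref in archive:
            results.append(False)   # duplicate detected, nothing written
        else:
            archive.add(ref)
            results.append(True)    # this job won and wrote the record

jobs = [threading.Thread(target=write_ref, args=("RefNo1",)) for _ in range(2)]
for j in jobs:
    j.start()
for j in jobs:
    j.join()
print(sorted(results))  # exactly one job wrote RefNo1
```

In database terms the same guarantee comes from a unique key on the file plus handling the duplicate-key condition, rather than testing first and writing second.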


-----Original Message-----
From: rpg400-l-bounces@xxxxxxxxxxxx [mailto:rpg400-l-bounces@xxxxxxxxxxxx] On behalf of David FOXWELL
Sent: Wednesday, 19 November 2008 09:33
To: RPG programming on the AS400 / iSeries
Subject: problem with duplicate records

Hi,

Two users have managed to create the same archive reference number while
creating two different clients.

Like this :

ArchiveFile
Product1 Client1 RefNo1 10:41:57(time)
Product2 Client2 RefNo1 10:41:57

The program would have crashed if the problem had occurred with the same product. As it is, two users in different departments created the records at the same time.

The user enters the client details and hits the enter key.
The program writes the client to the client file, then calls an external procedure to find an unused archive number and write this number to the archive file.

The client files look like this
Product1_clients
Client1 created at 10:41:53

Product2_clients
Client2 created at 10:41:56

I'm assuming that the creation of client1 took so long that the creation of client2 caught up. The identical time in the ArchiveFile is just a coincidence.

I can think of ways to fix it, but I'd rather ask: what are we doing wrong?

Thanks.




