Charles,

As far as I can tell, UUID is supposed to be based on time and network
adapter (read MAC address), which is supposed to always be unique for any
network card. I know the AS/400 allows us to specify the network adapter
MAC address, but it never forgets the hardware address, so I ASSUME that
GENUUID would be grabbing the actual hardware MAC as opposed to the
user-assignable MAC address..... If not, then I see how two separate
servers *could* potentially generate the same value, but it still seems
unlikely.....

No, I didn't forget the old adage..... <g>

Eric DeLong
Sally Beauty Company
MIS-Project Manager (BSG)
940-898-7863 or ext. 1863

-----Original Message-----
From: CWilt@xxxxxxxxxxxx [mailto:CWilt@xxxxxxxxxxxx]
Sent: Monday, July 12, 2004 3:51 PM
To: midrange-l@xxxxxxxxxxxx
Subject: RE: Fastest way to get a unique identifier/tracking column changes

Just a quick FYI, but GENUUID may not guarantee a unique ID.

Check out this thread in the newsgroup:
http://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=9nu8eb.dt1.ln%40modula.bender-dv.de&rnum=1&prev=/groups%3Fhl%3Den%26lr%3D%26ie%3DUTF-8%26selm%3D9nu8eb.dt1.ln%2540modula.bender-dv.de

Seems as if duplicates can occur on a multi-processor system when multiple
processes are using it at the same time. Unless it turned out to be a bug
that was corrected.

HTH,
Charles

> -----Original Message-----
> From: Kevin Mohondro [mailto:kevin.mohondro@xxxxxxxxxxxxxxx]
> Sent: Monday, July 12, 2004 1:29 PM
> To: 'Midrange Systems Technical Discussion'
> Subject: RE: Fastest way to get a unique identifier/tracking column changes
>
>
> You could get a UUID (Universally Unique IDentifier). This will give
> you a unique ID that you could use.
>
> D getUUID         PR                  ExtProc('_GENUUID')
> D  UUID_DS                        *   Value
>
> D UUID_DS         DS
> D  BytPrv                       10u 0 Inz(%size(UUID_DS))
> D  BytAvl                       10u 0
> D  Hold                          8a   Inz(*allx'00')
> D  UUID                         16a
>
> C                   CallP     getUUID(%addr(UUID_DS))
> C                   Eval      Key = UUID
>
> -Kevin
> -=-=-=-=-=-=-=-=-
> Kevin R Mohondro
> Programmer/Analyst
> Ashworth, Inc
>
> -----Original Message-----
> From: Reeve Fritchman [mailto:reeve.fritchman@xxxxxxxxxx]
> Sent: Monday, July 12, 2004 7:22 AM
> To: 'Midrange Systems Technical Discussion'
> Subject: Fastest way to get a unique identifier/tracking column changes
>
>
> I'm designing a new system with a requirement for detailed tracking of,
> and inquiry into, column-level changes. I've decided to build a single
> file with before and after values, etc. for all the tables by using
> triggers. Some of the tables have complex keys (order number/SKU/shipper
> location/consignee location/release number), and I don't want to burden
> my historical tracking file with a nasty key structure to support
> inquiry into the details of the changes. I'm not going to track added
> records or date-of-last-change timestamps in the tables; the majority of
> the changes will be on a limited number of columns (of the status and
> date nature).
>
> My design is to assign every row an "entity number"; the entity number
> would be like a record serial number, would be unique on a system-wide
> basis, and would be the key to the historical tracking table. When a
> user wants to see the details of the changes to a specific row, the
> row's entity number would allow simple access to the tracking file.
> Using SQL's AS IDENTITY with the table name could work to provide a key
> to a specific record.
>
> The challenge is to determine a way to get the entity number quickly.
> Having a control file is okay but probably limiting performance-wise;
> another possibility is a journaled data area. Is there a system API
> providing a guaranteed unique sequential number? Or is there a better
> approach for tracking column-level changes?
>
> Thanks,
>
> Reeve
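For readers piecing the thread together: if the entity number only needs to
be unique and quick to obtain, the database itself can hand it out, which is
where Reeve's AS IDENTITY idea leads. The following is only a rough sketch,
assuming DB2 UDB for iSeries V5R2 or later (when identity columns became
available); every table, column, and trigger name in it is invented for
illustration, and the trigger syntax may need adjusting for a particular
release.

  -- Base table: the identity column assigns the entity number as rows
  -- are inserted, with no control file or data area to lock.
  CREATE TABLE ORDHDR (
    ENTITY_NBR   BIGINT        NOT NULL
                 GENERATED ALWAYS AS IDENTITY (START WITH 1, INCREMENT BY 1),
    ORDER_NBR    DECIMAL(9, 0) NOT NULL,
    ORDER_STATUS CHAR(2)       NOT NULL,
    PRIMARY KEY (ENTITY_NBR)
  );

  -- Single historical tracking file, keyed by entity number, holding
  -- before and after values for the columns being watched.
  CREATE TABLE TRKHIST (
    ENTITY_NBR  BIGINT      NOT NULL,
    TABLE_NAME  VARCHAR(10) NOT NULL,
    COLUMN_NAME VARCHAR(30) NOT NULL,
    OLD_VALUE   VARCHAR(64),
    NEW_VALUE   VARCHAR(64),
    CHANGED_AT  TIMESTAMP   NOT NULL
  );

  -- One tracking row per change to the column of interest.  (NULL
  -- handling is omitted; ORDER_STATUS is defined NOT NULL above.)
  CREATE TRIGGER ORDHDR_STS_TRG
    AFTER UPDATE OF ORDER_STATUS ON ORDHDR
    REFERENCING OLD AS O NEW AS N
    FOR EACH ROW MODE DB2ROW
    WHEN (O.ORDER_STATUS <> N.ORDER_STATUS)
      INSERT INTO TRKHIST
        (ENTITY_NBR, TABLE_NAME, COLUMN_NAME, OLD_VALUE, NEW_VALUE, CHANGED_AT)
      VALUES
        (O.ENTITY_NBR, 'ORDHDR', 'ORDER_STATUS',
         O.ORDER_STATUS, N.ORDER_STATUS, CURRENT TIMESTAMP);

  -- If the application needs the value just assigned in this job:
  SELECT IDENTITY_VAL_LOCAL() FROM SYSIBM.SYSDUMMY1;

Note that an identity column is only unique within its own table; for an
entity number that is unique on a system-wide basis, as Reeve describes,
every base table would have to draw from one shared source (for example a
single SEQUENCE object on releases that support it), or the tracking key
would need to carry the table name alongside the entity number, as above.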
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.