

  • Subject: Re: duplicate record Id's in multi user environment
  • From: "Peter Dow" <pcdow@xxxxxxxxxxxxxxx>
  • Date: Mon, 16 Oct 2000 17:09:43 -0700

Hi Booth,

The date, time and timestamp fields are not as large as DSPPFM would
lead you to believe. Given the following file:

File: ADATE       Test date, time & timestamp data types
Libr: CC#PGMPCD   Fmt: TESTREC
Mbr : ADATE                                    Scan:
 Seq Pg K Field      T  Len Fd Dd  From   To Text
   5  1   DATE1      L   10           1   10 DATE
  10  1   TIME1      T    8          11   18 TIME
  15  1   TSTMP      Z   26          19   44 TIMESTAMP

the SQL HEX function shows that the lengths are not actually what they seem:

....+....1....+....2....+....3....+....4....+...
DATE1     HEX ( DATE1 )
05/14/97    00256497        actual length 4 bytes

TIME1     HEX ( TIME1 )
15:02:45     150245         actual length 3 bytes

TSTMP                       HEX ( TSTMP )
1997-05-14-15.02.45.000000  00256497150245000000  actual length 10 bytes

At some level they're playing tricks, and it's apparently a pretty low
level, since even DSPPFM shows the formatted date, time and timestamp.
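
For the curious, those HEX values can be decoded by hand. Here is a minimal
Python sketch; reading the 4-byte date as an astronomical Julian Day Number
is inferred from the values above, not taken from IBM documentation:

    from datetime import date, timedelta

    # 0x00256497 = 2450583 decimal, the Julian Day Number for 1997-05-14.
    def decode_date(hexstr):
        jdn = int(hexstr, 16)
        return date(2000, 1, 1) + timedelta(days=jdn - 2451545)  # JDN 2451545 = 2000-01-01

    # The 3-byte time is just the digits hhmmss, packed two per byte.
    def decode_time(hexstr):
        return "%s:%s:%s" % (hexstr[0:2], hexstr[2:4], hexstr[4:6])

    # The 10-byte timestamp is the 4-byte date, the 3-byte time, and three
    # more packed-digit bytes of microseconds.
    def decode_timestamp(hexstr):
        return "%s-%s.%s" % (decode_date(hexstr[:8]),
                             decode_time(hexstr[8:14]).replace(":", "."),
                             hexstr[14:20])

    print(decode_date("00256497"))                   # 1997-05-14
    print(decode_time("150245"))                     # 15:02:45
    print(decode_timestamp("00256497150245000000"))  # 1997-05-14-15.02.45.000000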

Another way to prove it is to create a file with a single date field and
initialize it with 1000 records. Subtract the overhead (the size of the file
with no records) and you'll see that it comes out to about 4 bytes per
record. I say "about" because I think there are some other housekeeping bits
or bytes in there, like the deleted-record flag.
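
In Python, the arithmetic of that experiment looks like this; both byte
counts below are invented for illustration, only the method is real:

    # Hypothetical sizes from DSPFD before and after INZPFM of 1000 records.
    size_empty  = 45056     # file created, no records
    size_loaded = 49280     # same file holding 1000 one-date-field records

    print((size_loaded - size_empty) / 1000.0)  # about 4.2 bytes per record:
                                                # 4 for the date plus the
                                                # housekeeping bits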

Regards,
Peter Dow
Dow Software Services, Inc.
909 425-0194 voice
909 425-0196 fax




----- Original Message -----
From: <booth@martinvt.com>
To: <RPG400-L@midrange.com>
Sent: Monday, October 16, 2000 2:52 PM
Subject: Re: duplicate record Id's in multi user environment


> If your ID field is going to have the sort of excessive size-to-need ratio
> that a timestamp provides, then let's just have a large numeric field. A
> random number generator can create random numbers as needed, and a chain
> can check whether each one already exists.  The performance hit would be
> slight, because the odds are that only 1 in a dozen or more attempts would
> need a second try, and almost none would need a third.  There is no rule
> that says ID numbers have to be in ascending order and match arrival
> sequence, is there?
>
> _______________________
> Booth Martin
> Booth@MartinVT.com
> http://www.MartinVT.com
> _______________________
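
Booth's scheme is easy to sketch. In the Python below, a set() stands in for
the CHAIN against the file's keyed access path; the 9-digit ID space and all
names are illustrative:

    import random

    ID_SPACE = 10**9     # a 9-digit numeric key, say
    existing = set()     # stand-in for the keyed physical file

    def next_id():
        while True:
            candidate = random.randrange(ID_SPACE)
            if candidate not in existing:  # the CHAIN; not found means usable
                existing.add(candidate)    # a real multi-user job should write
                return candidate           # under a unique key and retry on a
                                           # duplicate-key error instead

    print("%09d" % next_id())

While the file occupies less than a tenth of the ID space, fewer than one
pick in ten collides, which matches Booth's one-in-a-dozen estimate.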
>
>
>
>
> rob@dekko.com
> Sent by: owner-rpg400-l@midrange.com
> 10/16/2000 12:53 PM
> Please respond to RPG400-L
>
>
>         To:     RPG400-L@midrange.com
>         cc:
>         Subject:        Re: duplicate record Id's in multi user environment
>
>
> And your trigger program can have a time delay on the receive from the
> data queue.  If it times out, it can run the batch process to fill the
> queue again.
>
> Why wait until 999999999?  If you use hex, shouldn't it go from
> 000000009 to 00000000A, and so forth?  Or you could roll your own and
> use all the letters instead of stopping at F.  Regular hex may be easier
> because the hex math already exists.
>
>
> Rob Berendt
>
> ==================
> Remember the Cole!
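
Both of those rollover ideas are a few lines in most languages. A Python
sketch, where the 9-character width and the base-36 alphabet are just
examples:

    import string

    DIGITS36 = string.digits + string.ascii_uppercase  # 0-9, then A-Z

    def to_base36(n, width=9):
        out = ""
        while n:
            n, r = divmod(n, 36)
            out = DIGITS36[r] + out
        return out.rjust(width, "0")

    print("%09X" % (9 + 1))      # regular hex:        00000000A
    print(to_base36(9 + 1))      # all the letters:    00000000A
    print(to_base36(36**9 - 1))  # last 9-char value:  ZZZZZZZZZ

A 9-character key holds 10^9 values in decimal, about 6.9 x 10^10 in hex,
and about 1.0 x 10^14 in base 36.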
>
>
>
> Jim Langston <jimlangston@conexfreight.com>
> Sent by: owner-rpg400-l@midrange.com
> 10/16/00 10:51 AM
> Please respond to RPG400-L
>
>         To:     RPG400-L@midrange.com
>         cc:
>         Subject:        Re: duplicate record Id's in multi user environment
>
> Hmm... this sounds like an idea.  If the batch job keeps the data queue
> filled with, say, at least 1000 entries, it shouldn't be a problem.  A
> program could grab an entry and never use it (the job bombs out, the user
> turns off the tube, etc.), but that's fine, since we are not really
> looking for sequential numbers, just unique ones.
>
> Of course, the key fields in the data files should be declared as
> character and not numeric (no math is ever going to be performed on
> them), which gives us a big advantage: once you hit 999999999 you can
> start over at, say, A000000001.
>
> And even then, a trigger program can be used to pull the number from the
> data queue, so you have only one place to make changes.
>
> Regards,
>
> Jim Langston
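
The feeder scheme, with Rob's timed-receive twist, might look like the
sketch below. Python's queue.Queue stands in for a data queue (QSNDDTAQ and
QRCVDTAQ on the AS/400); the names and the 1000-entry refill are
illustrative:

    import queue

    dtaq = queue.Queue()  # stand-in for the *DTAQ
    next_seq = 1          # the feeder's high-water mark; a real job would
                          # persist this somewhere safe

    def refill(count=1000):
        """The batch feeder: push the next block of sequential numbers."""
        global next_seq
        for _ in range(count):
            dtaq.put("%09d" % next_seq)
            next_seq += 1

    def get_id(wait=5):
        """What the trigger program would do on each insert."""
        try:
            return dtaq.get(timeout=wait)  # receive with a time delay
        except queue.Empty:
            refill()                       # timed out: run the batch refill
            return dtaq.get()

    refill()
    print(get_id())  # 000000001; an entry grabbed by a job that then bombs
                     # out just leaves a harmless gap in the sequence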
>
> Steve Brazzell wrote:
> >
> > Seth,
> >
> > Remarkable, but I was just thinking the same thing.  Also, if an
> > increment and send were done after each receive from the queue, even
> > the batch feeder job wouldn't be needed.  Of course, that brings up the
> > problem of a program somehow failing between the receive and the send,
> > but it's still an interesting idea, imo.
> >
> > Steve Brazzell
> > Innovatum, Inc.
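
Steve's self-feeding variant can be sketched the same way: the queue holds
exactly one entry, the last number handed out, so the receive itself
serializes the consumers. Again queue.Queue is only a stand-in:

    import queue

    dtaq = queue.Queue()
    dtaq.put(0)  # seed the queue exactly once

    def next_number():
        n = dtaq.get() + 1  # receive; blocks until the entry is back
        # A job failing right here never re-sends the entry, and every other
        # consumer waits forever: the failure window Steve points out.
        dtaq.put(n)         # send the successor back for the next taker
        return n

    print(next_number())  # 1
    print(next_number())  # 2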
> >
> > -----Original Message-----
> > From: owner-rpg400-l@midrange.com [mailto:owner-rpg400-l@midrange.com] On
> > Behalf Of Seth.D.Shields@blum.com
> > Sent: Monday, October 16, 2000 1:45 PM
> > To: RPG400-L@midrange.com
> > Subject: Re: Re[2]: duplicate record Id's in multi user environment
> >
> > Just an idea: how about a batch job feeding sequential order numbers to
> > a data queue?  When a job needs an order number, it just pulls the next
> > one from the queue.
> >
> > Regards,
> > Seth Shields
> > Julius Blum, Inc

+---
| This is the RPG/400 Mailing List!
| To submit a new message, send your mail to RPG400-L@midrange.com.
| To subscribe to this list send email to RPG400-L-SUB@midrange.com.
| To unsubscribe from this list send email to RPG400-L-UNSUB@midrange.com.
| Questions should be directed to the list owner/operator: david@midrange.com
+---
