

  • Subject: RE: Re[2]: duplicate record Id's in multi user environment
  • From: "Steve Brazzell" <steve@xxxxxxxxxxxx>
  • Date: Mon, 16 Oct 2000 10:15:41 -0000
  • Importance: Normal

Seth,

Remarkable, but I was just thinking the same thing.  Also, if an increment
and send were done after each receive from the queue, even the batch
feeder job wouldn't be needed.  Of course, that brings up the problem of a
program somehow failing between the receive and send, but it's still an
interesting idea, imo.

Steve Brazzell
Innovatum, Inc.
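
For illustration, here is a minimal sketch of that receive/increment/send
idea, with Python's queue.Queue standing in for an AS/400 data queue (the
seed value and function name are made up for the example, not actual
OS/400 APIs).  The blocking receive is what serializes the callers:

    import queue

    order_numbers = queue.Queue()
    order_numbers.put(1000)      # seed the queue with the first order number

    def next_order_number(q):
        # Receive the current number; concurrent callers block here,
        # so exactly one job holds the number at any moment.
        n = q.get()
        # Send the increment back for the next caller.  If this job
        # dies between the get() and the put(), the counter is lost -
        # the failure window mentioned above.
        q.put(n + 1)
        return n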


-----Original Message-----
From: owner-rpg400-l@midrange.com [mailto:owner-rpg400-l@midrange.com]
On Behalf Of Seth.D.Shields@blum.com
Sent: Monday, October 16, 2000 1:45 PM
To: RPG400-L@midrange.com
Subject: Re: Re[2]: duplicate record Id's in multi user environment



Just an idea.  How about a batch job feeding sequential Order #'s to a data
queue?  When a job needs an Order number, it just pulls
the next one from the queue.

Regards,
Seth Shields
Julius Blum, Inc







                      "Peter Dow"         To: <RPG400-L@midrange.com>
                      <pcdow@MailAndN     cc:
                      ews.com>            Subject:    Re: Re[2]: duplicate
record
                      Sent by:                Id's in multi user environment
                      owner-rpg400-l@
                      midrange.com


                      10/15/00 06:02
                      PM
                      Please respond
                      to RPG400-L






Hi Richard,

Is there some way to tell the system you don't require the actual physical
disk write?  Just leave the data area in memory and update it there until
the volume slows down?  For a file there's the force write ratio, but afaik
that only applies to a single program/job. Is there any way to do something
similar for a data area that would apply globally?

Perhaps the solution for such a high-volume situation would be a server
program that simply keeps a variable in memory, returns it to a caller,
increments it, then handles the next caller without having to write
anything to disk until the volume slows down.

Regards,
Peter Dow
Dow Software Services, Inc.
909 425-0194 voice
909 425-0196 fax
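
As a rough sketch of that server-program idea (Python threading stands in
for the single server job, and the flush interval is an assumption for the
example, not anything the system provides), the counter lives in memory
and is written out only occasionally:

    import threading

    class NumberServer:
        def __init__(self, start, persist, flush_every=100):
            self._next = start
            self._lock = threading.Lock()
            self._persist = persist        # callable that writes the value out
            self._flush_every = flush_every

        def take(self):
            with self._lock:
                n = self._next
                self._next += 1
                # Touch disk only every Nth number instead of on every
                # call.  After a crash, restart at the last persisted
                # value plus flush_every: gaps in the sequence, but no
                # duplicates.
                if self._next % self._flush_every == 0:
                    self._persist(self._next)
                return n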



----- Original Message -----
From: "Richard Jackson" <richardjackson@richardjackson.net>
To: <RPG400-L@midrange.com>
Sent: Sunday, October 15, 2000 1:48 PM
Subject: RE: Re[2]: duplicate record Id's in multi user environment


> My concern is not that two people can read the data area at the same time.
> My concern is that writing the data area requires a synchronous disk
> write.  If a physical IO is required, I think that my timings are correct -
> less than 1 millisecond for a read or write, between 3 and 4.5
> milliseconds for average latency depending on rotation speed.
>
> A read cache can reduce the read time and a fast write cache connected to
> the disk drive will reduce the synchronous physical write time to a small
> value but, under certain circumstances, a large machine can overwhelm the
> caches.  Then, when the load is the greatest, the technique doesn't keep
> up and the write time for the data area goes from sub-1 millisecond to
> between 3 and 15 milliseconds.  That really chokes up the system.
>
> I understand that each user waits for the previous one to release the data
> area.  In fact, that is the problem.  If enough users wait in line for the
> data area, you will be able to measure the response time.  If it gets
> really bad, you will be able to see the jobs slow down.  Last January, I
> had a measurement of a single physical IO operation that took 8 seconds
> to complete.  The data was captured on a 12-way machine using SMTRACE.
>
> If all of this is true, how can you see 400 per second?  Blocking, write
> cache, mirrored or unprotected disk.  Since you are incrementing the value
> in the program, not using the data area, you aren't forcing a synchronous
> write for each record.  Try it again with force write ratio equal to 1 and
> tell me what happens.
>
> I am not trying to dissuade people from using this technique; it will work
> just great most of the time.  I offer a mild warning that it can bite you
> under certain pretty rare circumstances.  If people don't know about the
> teeth, they can spend a lot of diagnostic time looking at the wrong thing.
> The key symptoms for this one are high disk queuing times while CPU
> utilization and job throughput drop.  You will also see very noticeable
> lock wait times on the data area (lock report).  The drive where the data
> area resides gets very busy - wrkdsksts shows one very busy drive.
> SMTRACE shows a large number of physical writes for the data area.  This
> can be aggravated if the writes also require a seek.
>
> Richard Jackson
> mailto:richardjackson@richardjackson.net
> http://www.richardjacksonltd.com
> Voice: 1 (303) 808-8058
> Fax:   1 (303) 663-4325
>
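
A back-of-the-envelope check of the timings above shows why the data area
becomes the ceiling: updates serialize behind the lock, so throughput can
never exceed one over the time per synchronous write.  A quick sketch,
using the figures quoted in the message:

    # Ceiling = 1 / (time per synchronous write), since updates serialize.
    for write_ms in (1.0, 4.5, 15.0):
        print(f"{write_ms:4.1f} ms per write -> at most "
              f"{1000 / write_ms:6.1f} updates/second")

That is roughly 1,000 per second at 1 ms, 222 at 4.5 ms, and 66 at 15 ms -
which is why a sustained 400 per second suggests the writes were not each
reaching the disk.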
> -|-----Original Message-----
> -|From: owner-rpg400-l@midrange.com [mailto:owner-rpg400-l@midrange.com]
> -|On Behalf Of Eric N. Wilson
> -|Sent: Sunday, October 15, 2000 11:21 AM
> -|To: Richard Jackson
> -|Subject: Re[2]: duplicate record Id's in multi user environment
> -|
> -|
> -|Hello Richard,
> -|
> -|Sunday, October 15, 2000, 8:15:08 AM, you wrote:
> -|
> -|> Ladies and gentlemen:
> -|
> -|> Please beware of this technique in a very high-volume situation - it
> -|> will work most of the time but it is a problem in high volume.
> -|
> -|Richard, when you use the technique as I described, with locking the
> -|data area etc., you should not have any trouble in a high-volume
> -|situation. This is because only one job is going to be able to lock
> -|the data area, and the changed data is flushed to disk before the
> -|lock is released. Then the next job that is waiting on the lock
> -|will be able to acquire the lock and do the same... Each time, the
> -|trigger flushes the data area and explicitly unlocks it.
> -|
> -|Or at least that is my understanding (I think this has been the
> -|behavior for as long as data areas have existed). I have been using
> -|this technique for quite some time, and I have added more than 400
> -|records per second in some tests of a very simple trigger that just
> -|auto-incremented the integer key of the table.
> -|
> -|Thanks
> -|Eric
> -|
> -|
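
For reference, a minimal sketch of that lock/flush/unlock sequence, with a
flat file plus os.fsync standing in for *LOCK and the forced write on a
data area (the path is illustrative, and the file is assumed to already
hold a starting number):

    import os
    import threading

    _lock = threading.Lock()

    def next_key(path="counter.dat"):
        # Only one job can hold the "data area" at a time, as with *LOCK.
        with _lock:
            with open(path, "r+") as f:
                n = int(f.read()) + 1
                f.seek(0)
                f.write(str(n))
                f.truncate()
                f.flush()
                # This is the synchronous disk write Richard describes;
                # the fsync is what caps throughput under heavy load.
                os.fsync(f.fileno())
            return n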
>


+---
| This is the RPG/400 Mailing List!
| To submit a new message, send your mail to RPG400-L@midrange.com.
| To subscribe to this list send email to RPG400-L-SUB@midrange.com.
| To unsubscribe from this list send email to RPG400-L-UNSUB@midrange.com.
| Questions should be directed to the list owner/operator: david@midrange.com
+---
