
  • Subject: RE: Re[2]: duplicate record Id's in multi user environment
  • From: "Richard Jackson" <richardjackson@xxxxxxxxxxxxxxxxxx>
  • Date: Sun, 15 Oct 2000 22:41:00 -0600
  • Importance: Normal

I am concerned that someone might think that the problem I am describing
affects 10 users - it positively is not a problem for 10 users.

There is a small risk.  If the system fails and you haven't written the data
area to disk lately, you may not have the last value properly recorded.
There are solutions, but that is the risk.

Remember, this is only a problem when the volume is near the device capacity
limit for a single disk arm.

I believe that the system is using the Ensure Object (ENSOBJ) MI
instruction.  That is how I would do it.  If that instruction were not
issued, the write would not occur.  I do not know if RPG is issuing that
instruction or if it is part of the Change Data Area (CHGDTAARA) command
processing program.  I suspect the latter, but perhaps Barbara Morris or
Hans Boldt can confirm that RPG is not doing it.
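
For reference, the basic data-area technique the thread is discussing
looks roughly like this in fixed-form ILE RPG.  This is only a sketch;
the NEXTID data area, MYLIB library, and field names are illustrative,
not from the thread:

      * Assumes the data area was created with something like:
      *   CRTDTAARA DTAARA(MYLIB/NEXTID) TYPE(*DEC) LEN(9 0)
     D NextId          S              9P 0 DTAARA('NEXTID')
      *
      * IN *LOCK reads the data area and holds its lock, so every other
      * job serializes here until OUT writes the value back and unlocks.
      * That OUT is the synchronous disk write being discussed.
     C     *LOCK         IN        NextId
     C                   EVAL      NextId = NextId + 1
     C                   OUT       NextId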

If you did your own version using a user space, you could keep a counter in
the user space.  Every time you pulled a new unique value from the user
space, you could bump the counter.  When the counter reached 10, you could
force it to disk.  That technique would reduce the physical disk I/O by
about a factor of 10.  You could probably figure out some technique for
tracking the rate and writing to disk less frequently when the user space
is being hit very hard; I would have to think about how to do that.  It
would make the amount of error indeterminate.

The ideal solution to that problem (I think!) is to make the trigger
program the only way to get a next number.  When the trigger starts for
the first time, it checks for the last key in the file and resets the
value of the data area based on the file.
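
As a rough sketch of that reseeding step - assuming fixed-form ILE RPG, a
physical file ORDERS whose single integer key ORDERID is the last ID
assigned, and the NEXTID data area from above (all names illustrative):

     FOrders    IF   E           K DISK
     D NextId          S              9P 0 DTAARA('NEXTID')
     D FirstCall       S               N   INZ(*ON)
      *
      * On the trigger's first call, reseed the data area from the file:
      * position past the highest key, read back one record, and store
      * that key as the starting point for new IDs.
     C                   IF        FirstCall = *ON
     C     *HIVAL        SETGT     Orders
     C                   READP     Orders
     C     *LOCK         IN        NextId
     C                   IF        NOT %EOF(Orders)
     C                   EVAL      NextId = OrderId
     C                   ENDIF
     C                   OUT       NextId
     C                   EVAL      FirstCall = *OFF
     C                   ENDIF

With that recovery in place, losing an unforced data area update in a
crash only costs a reseed on the trigger's next first call, so you do not
have to force every single increment to disk.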

Richard Jackson
mailto:richardjackson@richardjackson.net
http://www.richardjacksonltd.com
Voice: 1 (303) 808-8058
Fax:   1 (303) 663-4325

-|-----Original Message-----
-|From: owner-rpg400-l@midrange.com [mailto:owner-rpg400-l@midrange.com]
-|On Behalf Of Peter Dow
-|Sent: Sunday, October 15, 2000 4:02 PM
-|To: RPG400-L@midrange.com
-|Subject: Re: Re[2]: duplicate record Id's in multi user environment
-|
-|
-|Hi Richard,
-|
-|Is there some way to tell the system you don't require the actual physical
-|disk write?  Just leave the data area in memory and update it there until
-|the volume slows down?  For a file there's the force write ratio, but
-|AFAIK that only applies to a single program/job.  Is there any way to do
-|something similar for a data area that would apply globally?
-|
-|Perhaps the solution for such a high-volume situation would be a server
-|program that simply updates a variable and returns it to a caller,
-|increments it, then handles the next caller without having to write
-|anything to disk until the volume slows down.
-|
-|Regards,
-|Peter Dow
-|Dow Software Services, Inc.
-|909 425-0194 voice
-|909 425-0196 fax
-|
-|
-|
-|----- Original Message -----
-|From: "Richard Jackson" <richardjackson@richardjackson.net>
-|To: <RPG400-L@midrange.com>
-|Sent: Sunday, October 15, 2000 1:48 PM
-|Subject: RE: Re[2]: duplicate record Id's in multi user environment
-|
-|
-|> My concern is not that two people can read the data area at the same
-|> time.  My concern is that writing the data area requires a synchronous
-|> disk write.  If a physical I/O is required, I think that my timings are
-|> correct - less than 1 millisecond for a read or write, between 3 and
-|> 4.5 milliseconds for average latency depending on rotation speed.
-|>
-|> A read cache can reduce the read time, and a fast write cache connected
-|> to the disk drive will reduce the synchronous physical write time to a
-|> small value but, under certain circumstances, a large machine can
-|> overwhelm the caches.  Then, when the load is the greatest, the
-|> technique doesn't keep up and the write time for the data area goes
-|> from under 1 millisecond to between 3 and 15 milliseconds.  That really
-|> chokes up the system.
-|>
-|> I understand that each user waits for the previous one to release the
-|> data area.  In fact, that is the problem.  If enough users wait in line
-|> for the data area, you will be able to measure the response time.  If
-|> it gets really bad, you will be able to see the jobs slow down.  Last
-|> January, I had a measurement of a single physical I/O operation that
-|> took 8 seconds to complete.  The data was captured on a 12-way machine
-|> using SMTRACE.
-|>
-|> If all of this is true, how can you see 400 per second?  Blocking,
-|> write cache, mirrored or unprotected disk.  Since you are incrementing
-|> the value in the program, not using the data area, you aren't forcing a
-|> synchronous write for each record.  Try it again with the force write
-|> ratio equal to 1 and tell me what happens.
-|>
-|> I am not trying to dissuade people from using this technique; it will
-|> work just great most of the time.  I offer a mild warning that it can
-|> bite you under certain pretty rare circumstances.  If people don't know
-|> about the teeth, they can spend a lot of diagnostic time looking at the
-|> wrong thing.  The key symptoms for this one are high disk queuing times
-|> while CPU utilization and job throughput drop.  You will also see very
-|> noticeable lock wait times on the data area (lock report).  The drive
-|> where the data area resides gets very busy - WRKDSKSTS shows one very
-|> busy drive.  SMTRACE shows a large number of physical writes for the
-|> data area.  This can be aggravated if the writes also require a seek.
-|>
-|> Richard Jackson
-|> mailto:richardjackson@richardjackson.net
-|> http://www.richardjacksonltd.com
-|> Voice: 1 (303) 808-8058
-|> Fax:   1 (303) 663-4325
-|>
-|> -|-----Original Message-----
-|> -|From: owner-rpg400-l@midrange.com [mailto:owner-rpg400-l@midrange.com]
-|> -|On Behalf Of Eric N. Wilson
-|> -|Sent: Sunday, October 15, 2000 11:21 AM
-|> -|To: Richard Jackson
-|> -|Subject: Re[2]: duplicate record Id's in multi user environment
-|> -|
-|> -|
-|> -|Hello Richard,
-|> -|
-|> -|Sunday, October 15, 2000, 8:15:08 AM, you wrote:
-|> -|
-|> -|> Ladies and gentlemen:
-|> -|
-|> -|> Please beware of this technique in a very high-volume situation -
-|> -|> it will work most of the time but it is a problem in high volume.
-|> -|
-|> -|Richard, when you use the technique as I described, with locking the
-|> -|data area etc., you should not have any trouble in a high-volume
-|> -|situation.  This is because only one job is going to be able to lock
-|> -|the data area, and the changed data is flushed to disk prior to the
-|> -|lock being released.  Then the next job that is waiting on the lock
-|> -|will be able to acquire the lock and do the same...  Each time, the
-|> -|trigger flushes the data area and explicitly unlocks it.
-|> -|
-|> -|Or at least that is my understanding (I think this has been the
-|> -|behavior for as long as data areas have existed).  I have been using
-|> -|this technique for quite some time, and I have added more than 400
-|> -|records per second in some tests of a very simple trigger that just
-|> -|auto-incremented the integer key of the table.
-|> -|
-|> -|Thanks
-|> -|Eric

+---
| This is the RPG/400 Mailing List!
| To submit a new message, send your mail to RPG400-L@midrange.com.
| To subscribe to this list send email to RPG400-L-SUB@midrange.com.
| To unsubscribe from this list send email to RPG400-L-UNSUB@midrange.com.
| Questions should be directed to the list owner/operator: david@midrange.com
+---
