Joe,

On Friday, 9 July 2004 16:22, Joe Pluta wrote:
> > From: Dieter Bender
> >
> > breaking it down, we have 270 complex transactions/second on the load
> > boundary
> > of the system (multiprocessor with 7 cpus in the partition)
>
> With each of these transactions being 50 updates, that's about 13,500
> updates per second.  Of course, that's with seven CPU's, and I don't
> quite know how to take that into account, but it's a great baseline.
>
> I just ran a simple program that updates 100,000 records in a file.
> Nowhere near as complex as what you're doing, of course, but I just
> wanted a quick check.  On my little model 270 single-CPU processor (with
> 370CPW), I managed to update those records in 15 seconds.  That's a
> little better than 6500 updates per second.  So we're in the same
> ballpark.

If we run a single process, we get about double that number of transactions
per second. The number above is the maximum the machine is able to do: at that
point we use 100% CPU, and it is very important to have fast DASD! The
transactions are rather complicated (a lot of work is done besides simple
inserts and aggregation) and they conflict heavily with each other, because we
have one million concurrent inserts into one file and even drawing the keys
became a problem. The point I want to make clear is that we (project leader
and architect) decided to use journaling and commitment control for a process
that has to be optimized for speed. We had the same discussion as on this list
with nearly all of the programmers, who said we should not use this approach -
we did it nevertheless and we succeeded. There is not one person left who says
commit is too slow!

One point for this decision was that we can't tolerate inconsistent data. We
must keep three years of data, and a complete rebuild would take about
1000 hours, more than 40 days!!! And I'm absolutely sure that bringing another
approach up to this level of stability would be slower than this. And last but
not least: I'm a lazy programmer and I hate writing code to do work that is
already done. <grin>

BTW: maybe you have to revisit your perspective on performance. A traditional
approach with record level access would use fewer resources, but it would be
slower!!! I have seen another data warehouse installation, built on a leading
AS/400 software package for this - they used /38 FormatData, needed less than
2% CPU, and one step of the work took more than 6 hours. A test that created
an index with the parallel database feature used 100% CPU on 4 processors and
finished the comparable work in 40 minutes.

>
> This sounds like a great place to do pure performance checking.  Once I
> get the bulletin board up and running, I'll extend the test suite and
> then we'll see what happens.
>
> > (BTW:
> > these problems were solved by changing ORDER BY clauses of processes,
> > impossible with record level access, very easy with embedded SQL).
>
> I'm not sure what this statement means, Dieter.  Please provide an
> example, and tell me why I couldn't just order my RLA updates the same
> way.

Changing the order in which records are processed with RLA requires changing
the program logic; doing it with SQL, you can change it right at the start of
the process itself, simply by changing the ORDER BY clause. BTW, we use
dynamic SQL with prepared statements (too slow, many people would say) and we
don't use binding at compile time (too slow, many people would say again). So
we are very flexible: we can customize the process at startup time as we want.
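[A minimal sketch of this pattern, added for illustration only - it is not
from the original thread. It uses Java/JDBC for compactness (Dieter's actual
code is RPG with embedded SQL), and the table and column names SALES_TX,
TX_ID, STORE_ID, TX_TS, AMOUNT as well as the connection URL are invented.
The point it shows is that the ORDER BY is decided at startup and the
statement is prepared once, then executed repeatedly.]

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class OrderedFeed {
    public static void main(String[] args) throws Exception {
        // The processing order is a startup parameter, not program logic.
        String orderBy = args.length > 0 ? args[0] : "STORE_ID, TX_TS";

        try (Connection con = DriverManager.getConnection(
                "jdbc:db2://myhost/MYDB", "user", "password")) {

            // Statement text is assembled and prepared once at startup,
            // then executed repeatedly with different parameter values.
            // (In real code the ORDER BY fragment should be validated
            // against a fixed list before being concatenated in.)
            String sql = "SELECT TX_ID, STORE_ID, AMOUNT FROM SALES_TX "
                       + "WHERE TX_TS >= ? ORDER BY " + orderBy;
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setTimestamp(1, Timestamp.valueOf("2004-07-01 00:00:00"));
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        process(rs.getLong("TX_ID"),
                                rs.getBigDecimal("AMOUNT"));
                    }
                }
            }
        }
    }

    static void process(long txId, BigDecimal amount) {
        // placeholder for the real per-record work
    }
}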

>
<snip>
>
> So in an order, you would update all the A/R and everything, right? 

Normally that would be a logical unit of work, yes.

>  And
> in your experience this does not lead to bottlenecks.  One issue though:
> from how you talk, it's as if you take a batch file of a million updates
> and run it through the machine at one time.  How long has it been since
> those records were actually entered?  The reason I ask is that I can see
> ways to tune a batch update process.  Does the same hold true when you
> have 1000 users entering online orders simultaneously?
>

I don't see any problem that you wouldn't also have without commit. 1000 users
wouldn't produce a workload comparable to the above. We process the complete
transaction workload of more than 1000 users of the largest German
do-it-yourself-store chain - a whole day's worth in one hour - with
transactions that are more complex than those during order entry.
An interactive workload would be slower and would have less contention.

> > If you cancel the job with *immed, within a transaction, the system
> > recognizes in the step of completion of the job that an uncommitted
> > transaction was interrupted and does an automatic rollback, in other
> > words: goes back to the last completed transaction.
> > If the endjob *immed occurred by pulling off the power plug of the
> > AS/400 (or another catastrophe) this will be done at the next IPL.
>
> This works when the transaction boundary includes EVERY record in the
> unit of work.  You're saying that's how you do it (and I wonder if the
> reason you can do that is because of the batch nature of your process -
> does it change when you're doing wholly interactive updates?).
>
> What I was trying to say was: If you have more than one transaction per
> unit of work and the system crashes between those transactions, then you
> in effect have the same problems as if you had no commitment control,
> right?  Because what you would in effect have is a partially updated
> unit of work.

I have used unit of work and transaction as two words for the same thing. CC
ensures that the complete transaction behaves like a single database
operation. In your example: if I make the complete order entry one
transaction, the complete order with all its records is written to the
database, or none of it is. In my program I just start my transaction and make
my updates; if I succeed with all operations I end the transaction with a
commit, and in the case of an error I might decide to issue a rollback, so
that none of the updates is written.
In the rare case that the program ends without committing the last started
transaction (the cancel scenario), the system simply performs a rollback.
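[Again only an illustrative sketch, not from the original thread: the same
commit/rollback pattern expressed in Java/JDBC terms, since that is compact.
ORDER_HDR and ORDER_DTL are invented table names. Either every row of the
order is committed together, or the rollback leaves the database untouched.]

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class OrderEntry {

    // Writes one complete order as a single unit of work.
    static void writeOrder(Connection con, long orderId, long[] itemIds)
            throws SQLException {
        con.setAutoCommit(false);   // transaction starts with the first change
        try (PreparedStatement hdr = con.prepareStatement(
                 "INSERT INTO ORDER_HDR (ORDER_ID) VALUES (?)");
             PreparedStatement dtl = con.prepareStatement(
                 "INSERT INTO ORDER_DTL (ORDER_ID, ITEM_ID) VALUES (?, ?)")) {

            hdr.setLong(1, orderId);
            hdr.executeUpdate();

            for (long itemId : itemIds) {
                dtl.setLong(1, orderId);
                dtl.setLong(2, itemId);
                dtl.executeUpdate();
            }

            con.commit();           // all rows become visible together
        } catch (SQLException e) {
            con.rollback();         // on error, none of the rows is written
            throw e;
        }
    }
}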
Dieter Bender

