The first point to clarify is the balance of the new system compared to your current workload, especially the I/O controllers, disk arms, and disk space.

I have embedded further comments in your post.

D*B


My change involves reading and updating a couple of records after each
transaction is processed. Currently it processes about 35,000 to 50,000
transactions per hour depending on system usage (the transaction processing
is complex involving scores of I/O operations, though I am not sure how
many) and the new system is supposed to be at least twice as powerful (in
RAM and processor technology).


Right now you have 10 to 15 complex transactions per second (equating to more than 10,000 elementary database operations per transaction), so 4 additional reads per transaction wouldn't hurt, provided there is an access path.
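A quick back-of-envelope sketch of the rates quoted above (the hourly figures come from the thread; the per-second numbers are derived):

```python
# Back-of-envelope check of the transaction rates discussed in the thread.
tx_per_hour_low, tx_per_hour_high = 35_000, 50_000

tx_per_sec_low = tx_per_hour_low / 3600    # ~9.7 transactions per second
tx_per_sec_high = tx_per_hour_high / 3600  # ~13.9 transactions per second

# Cost of the proposed change: 4 extra keyed I/Os per transaction.
extra_io_per_sec = 4 * tx_per_sec_high     # ~56 extra I/Os per second at peak

print(round(tx_per_sec_low, 1), round(tx_per_sec_high, 1), round(extra_io_per_sec))
```

At roughly 56 extra keyed I/Os per second, the added reads are tiny compared with the elementary operations the transactions already perform, which is the point being made above.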


I have another program, which I was told reads a file, writes the records
to 2 different work files based on whether one of the fields is positive or
negative, then reads both files to compare and delete matching positive
and negative records. The remaining records are then read again and
written to 2 different files. When this program was run on the current
system with about 1.5 million input records, it finished processing and
created the 2 final files, with about 300K records, in under 3 minutes.
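A minimal sketch of the matching logic described above. The record layout and the matching rule are assumptions for illustration; the thread does not say how the real program matches records, so this pairs a positive and a negative record when they share a key:

```python
# Sketch: split records by sign, cancel matching positive/negative pairs,
# and keep what's left. Assumes each record is (key, amount); the actual
# matching rule in the original program is not stated in the thread.
from collections import defaultdict, deque

def cancel_matching(records):
    positives = defaultdict(deque)   # work file 1: amount >= 0
    negatives = defaultdict(deque)   # work file 2: amount < 0
    for key, amount in records:
        (positives if amount >= 0 else negatives)[key].append(amount)

    remaining_pos, remaining_neg = [], []
    for key in set(positives) | set(negatives):
        pos, neg = positives[key], negatives[key]
        # "delete" matched pairs: each positive cancels one negative
        while pos and neg:
            pos.popleft()
            neg.popleft()
        remaining_pos += [(key, a) for a in pos]
        remaining_neg += [(key, a) for a in neg]
    return remaining_pos, remaining_neg

pos, neg = cancel_matching([("A", 10), ("A", -10), ("B", 5), ("C", -7)])
print(pos, neg)  # [('B', 5)] [('C', -7)]
```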

So it did 1.5 million reads, 1.5 million writes, then 1.5 million reads
again, about 1.2 million deletes, and 0.3 million writes, all in under 3
minutes? Is that possible? Or do I need to check for myself what this
program is actually doing? (I took someone else's word for it.)

Here we have 6 million elementary database operations in 3 minutes, which equates to about 33,000 per second. Not bad, but with the blocking that is likely involved (writing the work files, reading sequentially) and well-sized hardware, it is plausible.
Adding 4 I/O operations per transaction here could have a real influence, up to doubling the workload: 4 extra operations for each of 1.5 million transactions is another 6 million operations.
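The arithmetic behind those two claims, spelled out (all inputs are figures from the thread):

```python
# Sanity-check of the throughput figures quoted above.
reads_pass_1 = 1_500_000
writes_workfiles = 1_500_000
reads_pass_2 = 1_500_000
deletes = 1_200_000
writes_final = 300_000

total_ops = reads_pass_1 + writes_workfiles + reads_pass_2 + deletes + writes_final
ops_per_sec = total_ops / 180          # the run finished in about 3 minutes

# Proposed change: 4 extra I/Os for each of the ~1.5M transactions,
# which is as many operations as the whole job does today.
extra_ops = 4 * 1_500_000

print(total_ops, round(ops_per_sec), extra_ops)  # 6000000 33333 6000000
```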

If you are using SQL, the database monitor would help a lot in seeing what's actually happening in the database!


The files in above program had only about 20 fields. Does number of fields
or number of records in a file have an impact on I/O over the file?

The number of fields is not a critical factor; the size of the record will have a little influence (you would need more memory and would get more synchronous read operations).
The number of records? It depends on your access method (record level or SQL? sequential or keyed?). If a full table scan is needed, or an access path has to be built on the fly, it will have a dramatic influence. For recent hardware my experience is that DB2/400 scales very well up to some hundreds of millions of records; above that, the influence of the number of records has to be evaluated.
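A toy model of why the access path matters so much (this is a simplified illustration, not DB2/400 internals): a keyed lookup probes an index, while a full scan must touch every record, so the scan's cost grows with the number of records in the file.

```python
# Toy illustration: keyed access vs. a full scan. A dict plays the role
# of a keyed access path (index); the list plays the role of the table.
import timeit

table = [(i, f"row{i}") for i in range(200_000)]
index = {key: row for key, row in table}   # the "access path"

def full_scan(key):
    # touches every record until it finds the key
    for k, row in table:
        if k == key:
            return row

def keyed(key):
    # single index probe
    return index.get(key)

# Look up a key near the end of the table, 10 times each.
scan_t = timeit.timeit(lambda: full_scan(199_999), number=10)
keyed_t = timeit.timeit(lambda: keyed(199_999), number=10)
print(scan_t > keyed_t)  # True: the scan is orders of magnitude slower
```

The same asymmetry is why 4 extra *keyed* reads per transaction are cheap, while 4 extra reads that each trigger a table scan would not be.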


Given the new system will be faster, should my 4 extra I/Os have
any significant impact?

On Tue, Jan 18, 2011 at 4:56 PM, Nathan Andelin <nandelin@xxxxxxxxx> wrote:

Unless you designed your batch job to run multiple parallel threads, it
will probably just use 1 of the CPUs, even on a 16 CPU server. It's known
as CPU affinity. So your question probably comes down to how much faster
the new CPUs are.

-Nathan.

----- Original Message ----
From: Vinay Gavankar <vinaygav@xxxxxxxxx>
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Sent: Tue, January 18, 2011 2:34:48 PM
Subject: Database I/O throughput speeds

Hi,

Can any one give me some ball-park figures for number of I/O operations
(chain, update, write, delete) that can be achieved (per second or per
minute) in an RPG program running in batch mode (priority 50) on a high end
newer model? I am not sure what model my company is going for, but it is
supposed to have 16 CPUs (if I heard it right). I realize that the
throughput may vary based on the other usage of the system, but I would
have
something to go by.

I am trying to gauge the impact of adding a few I/O operations to every
transaction when processing a batch of 1-2 million transactions. Our
development machine has nowhere near that power, so running a sample
program
on it would not tell me much.

Thanks
Vinay
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.








This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
