


> From: Uma Balasubramanian
>
> I forgot to mention in my earlier mail that, apart from the select criteria
> for the PNO range (so that multiple jobs can be submitted in parallel),
> there is also another selection criterion which excludes some of the
> records from the file ... hence not all records of the primary file are
> being processed (as rightly questioned by Martin Booth earlier: "You are
> processing 100% of them anyway, right?").
>
> I'm really breaking my head to get a performance improvement in this
> program.

Anything is possible, given enough time and money.

You say that you break the run up into several ranges by PNO, and you also
have another selection in your logical view that removes some of the records
from processing.  Is this selection constant (that is, is the same selection
applied every time)?  If so, there is a possible, VERY radical change that
can speed up this specific run.

Change your database.  Create nine different physical files for each of your
normal physicals, with each file having a certain range of the PNO.  Change
your I/O logic systemwide to write new records to the correct physical based
on PNO, and to use a JOIN logical for the reads.  This will handle the
segregation by PNO and allow you to use a straight matching-record read to
process the records.
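
Purely as a sketch of the idea (the file names, fields, and PNO ranges below
are all made up, and on your box this might end up as DDS rather than SQL),
splitting by PNO range and joining for the reads could look something like
this:

   -- One physical per PNO range; nine of these, each covering its own range
   CREATE TABLE ORDHDR1 AS
     (SELECT * FROM ORDHDR WHERE PNO BETWEEN 000001 AND 099999)
     WITH DATA;

   -- Reads go through a join over the range-specific file and its detail file
   CREATE VIEW ORDJOIN1 AS
     SELECT H.PNO, H.STATUS, D.AMOUNT
       FROM ORDHDR1 H
       JOIN ORDDTL D ON D.PNO = H.PNO;

Your write programs would then pick the target file by PNO range instead of
writing everything to the single physical.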

If the additional selection criterion you mention is constant, you can use
that to create another file that contains only those records that don't
match the second criterion.  You'd then have to include this tenth file in
all your join logicals.
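
Again only as a rough sketch, with the STATUS test standing in for whatever
that second selection actually is:

   -- Tenth file: the records the second selection would exclude
   CREATE TABLE ORDHDRX AS
     (SELECT * FROM ORDHDR WHERE NOT (STATUS = 'A'))
     WITH DATA;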

I realize this is a radical change, and has consequences elsewhere in the
system.  But I'm trying to use it as an example of thinking creatively to
solve your problem.  In this case, I chose a specific goal: removing the
logical views from the read.  This is what I came up with.

Another possibility is a shadow file.  Your files aren't that large, only a
few million records.  When you do your run, are all records updated?  Or are
certain PNOs skipped after a few calculations?  If the latter case is true,
you could create shadow files with just a few fields (specifically only the
fields used to determine the process/no-process decision).  Process those
files, and if it is determined that a certain PNO needs processing, pass
that PNO to a second program that actually updates the files.
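
To make the shadow-file idea concrete, here is a hedged sketch; PNO plus two
made-up decision fields (STATUS and BALANCE) stand in for whatever actually
drives your process/no-process test:

   -- Slim shadow file: just the key and the decision fields
   CREATE TABLE ORDSHDW AS
     (SELECT PNO, STATUS, BALANCE FROM ORDHDR)
     WITH DATA;

   -- First pass reads only this narrow file to find the PNOs worth updating
   SELECT PNO
     FROM ORDSHDW
    WHERE STATUS = 'A' AND BALANCE <> 0;

The second program then touches the full-width files only for the PNOs that
pass the test.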

Finally, and I'm amazed I'm the one suggesting this, what about SQL?  While
I don't have any idea how it could be done, I'm constantly told that SQL
performs so much better for mass updates under the covers.  Can this update
be translated into SQL to let the database do its magic?  By putting the
decision-making process at a lower level, you might get performance
improvements.
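
I can only offer a guess at what that would look like; this sketch assumes
the same made-up PNO/STATUS/BALANCE fields and folds the selection and the
calculation into a single statement:

   -- Let the database apply the selection and the update in one pass
   UPDATE ORDHDR
      SET BALANCE = BALANCE * 1.05
    WHERE PNO BETWEEN 000001 AND 099999
      AND STATUS = 'A';

One statement per PNO range could still be submitted as separate jobs if you
want to keep the parallelism.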

Joe

P.S. Going back to bed.  Too much Christmas cheer.  My tummy woke me up, and
being the weird individual I am, typing a long technical email helps
<smile>.


