The recommendations from IBM are as follows:

* SQL would be a better choice for long-running batch processes because it
automatically uses blocking operations for read-only cursors, and index and
table scans are parallel enabled through the Symmetric Multiprocessing (SMP)
feature of OS/400.

* Using  SQL for mass file updates minimizes I/O since only rows matching
the local selection criteria are read for update. In addition, DB2 UDB for
iSeries is capable of generating an updateable join. 

* The SQL blocked FETCH and INSERT support provides an additional layer of
blocking over and above the physical and logical layers.  This is application
layer blocking.  SQL automatically blocks read and write operations at the
maximum blocking factor available for that release (currently a 128 KB buffer
vs. 4-8 KB for DDS).  The programmer need not be concerned about OVRDBF
calculations.  In essence, one FETCH or INSERT statement can read or write a
maximum of 32,767 rows in one operation from the program to the Open Data
Path (ODP).  Native programs must issue multiple reads or writes to move data
to and from the ODP.  (A sketch of a blocked FETCH follows this list.)

The long-range proposal is to move to a three-tiered approach where the
database is decoupled from the application and is accessed through a set of
I/O modules.  Various aspects of the existing database would be duplicated
using views and indexes, as I understand it.
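
As I understand it, that duplication would amount to defining SQL views (and
SQL indexes) over the existing physical files, which the I/O modules would
then work against.  A rough sketch, with ORDHDR and every column name invented
purely for the example:

-- View: a stable interface for the I/O modules over an existing
-- physical file (names here are placeholders, not our actual files).
CREATE VIEW ORDERS_OPEN AS
  SELECT ORDNBR, CUSTNO, ORDDATE, ORDAMT
    FROM ORDHDR
   WHERE ORDSTS = 'O';

-- Index: fills the role a keyed DDS logical file fills today.
CREATE INDEX ORDHDR_IX1
    ON ORDHDR (CUSTNO, ORDDATE);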

Out of curiosity I created two programs to read through a physical file,
performing a math calculation and date conversion on each record read.  The
first program was written using the READ op-code inside a DO loop; the second
was written using a blocked SQL FETCH inside a DO loop.  Running on an 890
under V5R2, I could see no significant difference in speed after processing
over 200 million records from a physical file.
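
For what it's worth, the two test loops were essentially of the shapes shown
in the blocked-FETCH example above and in the sketch below.  The sketch is
illustrative only: TESTPF is a placeholder name and the per-record work is
elided, so this is not the actual benchmark code.

**free
// Native READ loop over a physical file (assumes the record format
// name differs from the file name, as RPG requires).
dcl-f testpf usage(*input);

read testpf;
dow not %eof(testpf);
  // ... math calculation and date conversion on the record ...
  read testpf;
enddo;
*inlr = *on;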

Right now I am just trying to gather information and evaluate some of the
suggestions.  I would really appreciate hearing of anyone else's
experiences.

Rick
