Carsten Flensburg wrote:
Hello Antonio,
Since I wrote the original example, I've had an opportunity to write another
one that's probably a bit closer to what you want to achieve. It'll update
one or more fields in an existing file record based on its relative record
number. Let me know if you want a copy.
Carsten, thanks for your offer.
When I read your original example I saw it was meant to be a "Service
Program", I guess to be used occasionally, and it gathered info on the
file's DDS fields through the APIs.
My case was quite different. I might be receiving around a thousand EDI-type
messages quite often. They are just thousands of lines containing nothing
but "INSERT INTO ..." statements for many different files.
Each "INSERT" has a different number of field/value pairs, usually in the
range of about 8 to 10; never more than 30.
So I took a different approach. It has some limitations, but as discussed
with the customer, they will not affect us.
My program is almost ready. I've done some small tests and it works. And
it seems to be very much faster than doing the INSERTs through ODBC as
intended in the original (3rd-party) application.
In summary, the 3rd-party application will now store the thousand
"INSERT" lines as a file in the IFS, then use RMTCMD to call my program.
The number of files being appended to is a reasonable number (say,
around 100) and their field structure is fairly stable. So I've built
(with a different program) a file with the field structure for all of
them, one record per file, so access to each file's field structure is
just one CHAIN. Very fast. The customer is aware that if he ever changes
any of the file structures, he'll have to regenerate this "descriptive
file" of fields.
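Just to show the idea, here it is as a rough Python sketch (only an
illustration; my real code is RPG CHAINing a keyed physical file, and all
the names below, like FieldDesc or CUSTFILE, are made up):

from dataclasses import dataclass

@dataclass
class FieldDesc:
    name: str       # field name as it appears in the INSERT statement
    ftype: str      # 'A' = character, 'S' = zoned decimal
    length: int     # declared field length
    decimals: int   # decimal positions (0 for character fields)

# One entry per target file, keyed by file name -- the analogue of the
# "descriptive file" the per-file programs CHAIN to.
FIELD_STRUCTURES: dict[str, list[FieldDesc]] = {
    "CUSTFILE": [                       # hypothetical file, for illustration only
        FieldDesc("CUSTNO", "S", 7, 0),
        FieldDesc("CUSTNAME", "A", 30, 0),
        FieldDesc("BALANCE", "S", 11, 2),
    ],
    # ... one entry for each of the ~100 files being appended to
}

def get_field_structure(file_name: str) -> list[FieldDesc]:
    # Keyed lookup by file name, like a single CHAIN on the descriptive file.
    return FIELD_STRUCTURES[file_name.upper()]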
When RMTCMD calls my application pgm, it copies the "INSERT" lines from
the IFS to a big work file (CPYFRMSTMF), then starts reading it
sequentially. For each "INSERT INTO FileXXX" line it calls a different
program depending on the file being appended to. These pgms are called
just once and kept open for the rest of the process, until all INSERT
lines have been processed. Then they're all closed. (A rough sketch of
this dispatch loop follows the list below.) Each of these programs is a
copy of one "sample" pgm, with just 4 lines of code changed:
- the F spec for the file name
- the DS with the external name
- the write of the record format name
- the KFLD value used to CHAIN to the field-structure file
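Here is that dispatch loop in rough Python terms (again only a sketch:
the real program is RPG reading the CPYFRMSTMF work file and calling the
pre-opened per-file programs; the regex and helper names are invented,
and the value split assumes no commas inside quoted literals):

import re
from typing import Callable, Dict

INSERT_RE = re.compile(
    r"INSERT\s+INTO\s+(\w+)\s*\(([^)]*)\)\s*VALUES\s*\((.*)\)\s*;?\s*$",
    re.IGNORECASE,
)

def parse_insert(line: str) -> tuple[str, list[str], list[str]]:
    # Split one INSERT line into (file name, field names, raw values).
    m = INSERT_RE.match(line.strip())
    if not m:
        raise ValueError(f"not an INSERT statement: {line!r}")
    file_name = m.group(1).upper()
    fields = [f.strip().upper() for f in m.group(2).split(",")]
    values = [v.strip() for v in m.group(3).split(",")]  # naive: no commas in quotes
    return file_name, fields, values

def process_work_file(lines, handlers: Dict[str, Callable[[dict], None]]) -> None:
    # Read the work file sequentially; hand each line to its per-file handler,
    # which stays "open" (cached) for the whole run, like the RPG programs do.
    for line in lines:
        file_name, fields, values = parse_insert(line)
        handlers[file_name](dict(zip(fields, values)))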
In our case, only character and zoned decimal fields are used, so I did
not have to deal with packed, binary or date fields, though these could
be added easily.
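For what it's worth, this is roughly how those two field types get
formatted (again a made-up Python sketch, not my RPG code; the sign
handling follows the usual zoned-decimal convention and the helper names
are invented):

def to_fixed_char(value: str, length: int) -> str:
    # Strip the quotes from the literal and pad/truncate to the declared length.
    return value.strip().strip("'")[:length].ljust(length)

def to_zoned_decimal(value: str, length: int, decimals: int) -> bytes:
    # Encode a numeric literal as EBCDIC zoned decimal: each digit is 0xF0 | digit,
    # and the zone of the last byte carries the sign (0xF positive, 0xD negative).
    v = value.strip()
    negative = v.startswith("-")
    whole, _, frac = v.lstrip("+-").partition(".")
    frac = (frac + "0" * decimals)[:decimals]             # scale to declared decimals
    digits = (whole + frac).rjust(length, "0")[-length:]  # extra high-order digits truncate
    raw = bytearray(0xF0 | int(d) for d in digits)
    raw[-1] = (0xD0 if negative else 0xF0) | (raw[-1] & 0x0F)
    return bytes(raw)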
It seems that about 3,000 INSERTs that could take at least 50 seconds
through ODBC (depending on system load, it could go up to even 4 or 5
minutes...) now take about 10 seconds... So it's a much better time.
It still has to undergo further testing, but the customer is quite happy
with the initial results.
Thanks to all who helped with their ideas.
--
Antonio Fernandez-Vicenti
afvaiv@xxxxxxxxxx