


Chuck,

We have to build one CIF record from these incoming records. This is also only an example; the actual elements can go up to 1,600.

Option 1: Directly read the data from the IFS and reformat it in RPGLE (see the sketch after this list).

Option 2: Copy from the IFS --> read all 39 million records to arrive at the roughly 720K records we are expecting, which seems inefficient with the current approach --> generate the required DDS file data.

Option 3: Copy from the IFS --> output directly by reading the file into the DDS file; the copy from the IFS takes a long time.
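For Option 1, here is a minimal sketch in free-form RPGLE of reading the stream file directly, line by line, through the C runtime stdio functions [the path and all names are mine, purely for illustration; these functions are callable from RPG via their _C_IFS_ entry points]:

**free
ctl-opt dftactgrp(*no) bnddir('QC2LE');

// prototypes for the ILE C runtime IFS stdio functions
dcl-pr openFile pointer extproc('_C_IFS_fopen');
  path pointer value options(*string);
  mode pointer value options(*string);
end-pr;
dcl-pr readLine pointer extproc('_C_IFS_fgets');
  buffer pointer value;
  size int(10) value;
  file pointer value;
end-pr;
dcl-pr closeFile int(10) extproc('_C_IFS_fclose');
  file pointer value;
end-pr;

dcl-s fp pointer;
dcl-s buf char(4096);
dcl-s line varchar(4096);

// 'r' opens for read; a CCSID/translation mode option may be needed,
// depending on how the stream file was produced
fp = openFile('/home/ASHWANI/cif.txt': 'r');
if fp = *null;
  // handle the open failure
  *inlr = *on;
  return;
endif;

dow readLine(%addr(buf): %size(buf): fp) <> *null;
  line = %trimr(%str(%addr(buf)): x'25' + x'0D');  // strip LF/CR
  // ... parse the ^CIF(id,component)="..." line here ...
enddo;

closeFile(fp);
*inlr = *on;

Each line is handled as it is read, so no copy step [CPYFRMIMPF or CPY] is needed at all.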

* Are the components consistently numbered and always one through sixteen? Ashwani: No, they can vary.
* So component-2 has one element having the value '1XX-50-0XXX' as an apostrophe-delimited string, and similarly component-3 has four elements, with elem-1 having the value 'XXXXXXX', elem-2 the value '', elem-3 the value '', and elem-4 the value 'XXXX'? Ashwani: No, that's a single record for us.

* Each component-# has a known\expected number of elements; in the OP, one [the primary] was expected to have five elements, but the original example showed three of five, whereas the posted\attached data file shows only one element for component-1. So is the problem general, in that any component-# might have included fewer than the expected number of elements? Ashwani: Correct, that was just an example.


^CIF(1000001,1)="XXXXXXXXXXXX,XXXXX"
^CIF(1000001,2)="1XX-50-0XXX"
^CIF(1000001,3)="XXXXXXX|||XXXX"
^CIF(1000001,5)="XXXXXXXXX|NJ|XXX|US|84 XXXXX|||||0"
^CIF(1000001,6)="XXXXXX|NJ|07436|US|84 XXXX"
^CIF(1000001,10)="41573|F||0|2|0|O||||||||||||0"
^CIF(1000001,11)=""
^CIF(1000001,12)="XXXXXXX||||||XXXXXXXXXX.COM"
^CIF(1000001,13)="XXXXXXX|1||||0|0|0|0||0|||0||1|||0"
^CIF(1000001,14)="58573|||||63572|0|0"
^CIF(1000001,15)="|||||PETHERC 00|||US||0"
^CIF(1000001,16)=""

So we are to presume, then, that the /flat file/ is a Stream File (STMF) and not a Database Physical File (PF) [as either program-described data or perhaps a source file]? If the data were already in a database [flat] file, then the opportunity to also use SQL might override the implication that directly reading the stream file with parse+write would be better.

Apparently we have an entity, described by the identifier 1000001, that has some sixteen different components, each with an expected\fixed number of elements, and that entity, described as a pipe-delimited data record, is effectively the concatenation of all of these components [aka sub-records]?
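If the lines really do look like the posted sample, the parse itself is simple string work. A minimal sketch [variable names are mine], intended to run inside the read loop sketched earlier; note the split deliberately keeps empty elements, which %SPLIT would discard:

dcl-s cifId packed(9: 0);
dcl-s compNo int(10);
dcl-s value varchar(4096);
dcl-s elems varchar(256) dim(1600);   // elements can number up to 1600
dcl-s nbrElems int(10);
dcl-s p1 int(10);
dcl-s p2 int(10);

// line holds e.g. ^CIF(1000001,5)="XXXXXXXXX|NJ|XXX|US|84 XXXXX|||||0"
p1 = %scan('(': line);
p2 = %scan(',': line);
cifId = %dec(%subst(line: p1 + 1: p2 - p1 - 1): 9: 0);
p1 = %scan(')': line);
compNo = %int(%subst(line: p2 + 1: p1 - p2 - 1));
if %len(line) > p1 + 3;                        // text between the quotes
  value = %subst(line: p1 + 3: %len(line) - p1 - 3);
else;
  value = '';                                  // e.g. ^CIF(1000001,11)=""
endif;

// split on '|', keeping empty elements
nbrElems = 0;
p1 = 1;
dow p1 <= %len(value);
  p2 = %scan('|': value: p1);
  nbrElems += 1;
  if p2 = 0;                                   // last element
    elems(nbrElems) = %subst(value: p1);
    leave;
  elseif p2 > p1;
    elems(nbrElems) = %subst(value: p1: p2 - p1);
  else;
    elems(nbrElems) = '';                      // empty between delimiters
  endif;
  p1 = p2 + 1;
enddo;
// a trailing '|' [an empty final element] would need one extra check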

Some of those answers might be worth knowing anyway, but I ask only because the SQL might be able to perform some of the work, to limit the amount of work in a program that parses the data; i.e. that is likely pertinent and\or valuable only if the record-data is in a database file rather than in a stream file, because obviously copying the data from stream to database solely to allow the SQL is not going to perform as well as just reading the stream data directly.
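For illustration only [FLATPF and RCDDTA are assumed names, and the predicate is a placeholder]: were the raw lines already in a flat database file, an embedded SQL cursor could pre-filter the 39 million records down to just the wanted ones before any RPG parsing:

dcl-s rcd char(2048);

// the WHERE predicate here is purely illustrative
exec sql DECLARE cifCsr CURSOR FOR
  SELECT rcddta FROM flatpf
  WHERE rcddta LIKE '^CIF(%';
exec sql OPEN cifCsr;
exec sql FETCH cifCsr INTO :rcd;
dow sqlcode = 0;
  // ... parse rcd as sketched above ...
  exec sql FETCH cifCsr INTO :rcd;
enddo;
exec sql CLOSE cifCsr;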



After I bring all these records into one record, I have one more operation to do: move this data to a DDS file. Hence I need at least the number of fields to match for each record, else I would not be able to copy to the DDS file.

If performance is critical, then do not spend any time reformatting the existing data into another copy before parsing the elements [i.e. do not combine the components\sub-records to make one pipe-delimited record]; just generate the database record(s) and write the results directly to the database file.
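In RPG terms that can be as simple as a natively written output file [CIFPF and its record format CIFR are assumed names]:

dcl-f CIFPF usage(*output);           // the DDS-described target file

// ... populate the format's fields from the parsed elements ...
write CIFR;

Each parsed entity becomes one WRITE; nothing is ever re-concatenated into a pipe-delimited string.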


So all these 16 records will make up a single record in my new file, and then I will do the copy to the DDS file.


Specifically, I would build up multiple database rows, whereby each database row is built directly from the multiple data-records [each primary-record and its sub-records], thus skipping the rework of the pipe-delimited data, and then do a multi-row INSERT using embedded SQL [purely dynamic SQL, or extended dynamic SQL using parameter markers, are not good choices by contrast to the embedded multiple-row INSERT].
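A minimal sketch of that blocked INSERT [the table and column layout are assumed; the source member compiles as SQLRPGLE]: the parsed rows accumulate in a host data-structure array and are written with a single statement:

dcl-ds row qualified dim(1000);       // one subfield per target column
  cifId packed(9: 0);
  name varchar(50);
  taxId varchar(11);
  // ... and so on, matching the target table layout ...
end-ds;
dcl-s nbrRows int(10);

// ... fill row(1) through row(nbrRows) while parsing ...

exec sql INSERT INTO cifpf :nbrRows ROWS VALUES(:row);
if sqlcode < 0;
  // handle the failure; SQLERRD(3) reports how many rows were inserted
endif;

Blocks of a few hundred to a few thousand rows per statement typically amortize the per-statement overhead well.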

--
Regards, Chuck


