On 7/17/2014 3:50 PM, Richard Reeve wrote:
Henrik,

The data will look more like:
01|"FILE DESCRIPTION"|20140717|"HOSPF001"|" "|" "|" "|" "|
10|123456789|123456789|111223333|"LAST NAME"|"FIRST NAME"|"MIDDLE NAME"|"SUFFIX"|"ALIAS LST NAME"|"NICKNAME"|
11|19640202|"M"|"ADDRESS LN 1"|"ADDRESS LN 2"|"CITY"|"ST"|"ZIP CODE"|"COUNTRY"
12|" t "|"CHECKING"|"BANK"|"ACCOUNT"|"C"|
.......and so on, with many more record types, each with its own use and layout.

The purpose of this exercise is that my client is moving from one
vendor's software package to another, and the target package needs
this extract.

Basically, what I'm doing is reading multiple files in a DB2 database
(using RPG) and building these output records as I go. Where I am getting
tripped up is:
1. Should I just create data structures for each record type and
write to a file defined with a record length of, say, 400 (the longest of
the record types plus 50 for the delimiters I'll be adding), or would it
be wiser to create actual files for each record type (adding employee #
to each as a hidden field so that I can properly order the extract file)?

2. I am confused as to how to get the pipe delimiters into the extract
file efficiently while removing trailing blanks.

You could pretty much use Scott or Jon's article and do it all in one go
- input file to delimited IFS file - if you could find a way to properly
sequence the input files.

My guess is that you can't easily order the input files. If so, then
I'd define that work file a little bit differently than the way you're
thinking. Imagine DDS with just the key field(s) and then a giant text
field. The key fields would contain everything you'd need in order to
properly sequence the results.
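
To make that concrete, here is a rough cut of what that work file could
look like in DDS. Every name in it (EXTWRKF, EXTWRKR, EMPNO and so on) is
a placeholder I made up, and the 400-byte text field just echoes the
record length you mentioned:

     A          R EXTWRKR
     A            EMPNO          9S 0       TEXT('Employee number')
     A            RECTYP         2S 0       TEXT('Record type: 01, 10, 11, ...')
     A            SEQNO          5S 0       TEXT('Sequence within record type')
     A            TXTDTA       400A         TEXT('Finished delimited text')
     A          K EMPNO
     A          K RECTYP
     A          K SEQNO

The exact key fields depend on how the target package wants the record
types ordered, which is something only your spec can answer.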

Step 1 would be to read each of the input files record by record, load
the key fields in the work file and then use code that Scott posted to
convert the individual fields into the single text field, trimmed,
escaped and delimited.
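
Purely as a sketch (every file and field name here is invented - EMPMAST,
EMPID and friends - and the real thing would loop over all of your input
files and record types), step 1 could look something like this in
free-form RPG:

**free
// Step 1 sketch: read one input file and build a pipe-delimited
// work-file record per row.  EMPMAST, EMPID, SSN, LSTNAM, FSTNAM
// and MIDNAM are stand-ins for the real input file and fields.

dcl-f EMPMAST usage(*input);             // one of the DB2 input files
dcl-f EXTWRKF usage(*output);            // the keyed work file sketched above

read EMPMAST;
dow not %eof(EMPMAST);

   // the key fields are what drive the final sequencing
   EMPNO  = EMPID;
   RECTYP = 10;
   SEQNO  = 1;

   // %trim drops trailing blanks from character fields; numerics go
   // through %char() (or %editc() if they need editing)
   TXTDTA = '10|'
          + %char(SSN) + '|'
          + '"' + %trim(LSTNAM) + '"|'
          + '"' + %trim(FSTNAM) + '"|'
          + '"' + %trim(MIDNAM) + '"|';

   write EXTWRKR;

   read EMPMAST;
enddo;

*inlr = *on;

Escaping any quotes or pipes that happen to live inside the data is the
piece Scott's posted code covers and this sketch skips. TXTDTA is a fixed
400A field, so it still carries trailing blanks in the work file; step 2
trims them off on the way to the IFS.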

If you wanted to, you could use the QUSLFLD API to list the fields in
each file and set up automatic code to deal with each one, whether you
need to edit numeric fields, trim character fields, etc. Carsten wrote
an article on that:
http://iprodeveloper.com/rpg-programming/apis-example-list-fields-quslfld
This has a bit of a learning curve.
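
If you go that route, the plumbing is roughly the three APIs below
(prototypes only; the parameter names are mine, the object names are
placeholders, and the FLDL0100 entry layout you would walk afterwards is
what Carsten's article spells out):

**free
// QUSLFLD sketch: create a user space, ask the API to fill it with
// the field list, then retrieve the entries.  'FLDLIST   QTEMP',
// 'EMPMAST   *LIBL' and 'EMPMASTR' are placeholder object names.

dcl-pr QUSCRTUS extpgm('QUSCRTUS');      // create user space
   usrSpc    char(20) const;
   extAttr   char(10) const;
   initSize  int(10)  const;
   initVal   char(1)  const;
   pubAut    char(10) const;
   text      char(50) const;
end-pr;

dcl-pr QUSLFLD extpgm('QUSLFLD');        // list fields into the user space
   usrSpc    char(20) const;
   format    char(8)  const;             // 'FLDL0100'
   qualFile  char(20) const;             // file name + library
   rcdFmt    char(10) const;
   override  char(1)  const;             // '1' = honor overrides
end-pr;

dcl-pr QUSRTVUS extpgm('QUSRTVUS');      // read data back out of it
   usrSpc    char(20) const;
   startPos  int(10)  const;
   dataLen   int(10)  const;
   receiver  char(65535) options(*varsize);
end-pr;

QUSCRTUS('FLDLIST   QTEMP' : 'QUSLFLD' : 65535 : x'00' : '*ALL' : 'Field list');
QUSLFLD('FLDLIST   QTEMP' : 'FLDL0100' : 'EMPMAST   *LIBL' : 'EMPMASTR' : '1');
// ...then QUSRTVUS to pick up the generic list header and loop
// through the FLDL0100 entries, one per field in the record format.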

Step 2 would be to write an RPG program that reads the work file by key
and writes only the text field out to the IFS. Scott's IFS APIs can
help, as might Jon's RPGOA article. Either one is easy to use and will
not impose a huge learning curve. Basically just READ the table and
write() the IFS file.
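
And step 2, again only as a sketch: it leans on the Unix-type
open()/write()/close() APIs from Scott's tutorial territory, and the
O_xxx constant values and the path are things to double-check against
QSYSINC (or Scott's copy member) rather than trust from memory:

**free
ctl-opt dftactgrp(*no) actgrp(*new);

// Step 2 sketch: read the work file back in key order and stream
// each line to the IFS.  The path is a placeholder.

dcl-f EXTWRKF keyed usage(*input);       // work file keyed EMPNO/RECTYP/SEQNO

dcl-pr ifsOpen int(10) extproc('open');
   path   pointer value options(*string);
   oflag  int(10) value;
   mode   uns(10) value options(*nopass);
   ccsid  uns(10) value options(*nopass);
end-pr;
dcl-pr ifsWrite int(10) extproc('write');
   fd     int(10) value;
   buf    pointer value;
   len    uns(10) value;
end-pr;
dcl-pr ifsClose int(10) extproc('close');
   fd     int(10) value;
end-pr;

dcl-c O_WRONLY  2;
dcl-c O_CREAT   8;
dcl-c O_TRUNC   64;
dcl-c CRLF      x'0d25';                 // EBCDIC CR + LF

dcl-s fd      int(10);
dcl-s outLine varchar(402);

fd = ifsOpen('/home/extract/employees.txt'
           : O_WRONLY + O_CREAT + O_TRUNC : 511);
if fd < 0;
   // check errno and bail out; Scott's tutorial shows how
endif;

read EXTWRKF;                            // key order = the extract order
dow not %eof(EXTWRKF);
   outLine = %trimr(TXTDTA) + CRLF;      // trailing blanks go away here
   ifsWrite(fd : %addr(outLine : *data) : %len(outLine));
   read EXTWRKF;
enddo;

ifsClose(fd);
*inlr = *on;

The CCSID side of it (tagging the stream file and converting out of
EBCDIC) is also in Scott's material; this sketch leaves the data in the
job CCSID.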

I don't miss dealing with home grown EDI one bit.
--buck
