Hi Buck,

Glad to see you're getting into this!  I agree it's probably better to
separate the file I/O and the parsing.  As Doug pointed out, with this
separation, you could decide how to parse a record after you'd read it.

My thoughts on keeping it simple were based on the fact that simple parsing
won't handle a .pdf or a similar file (.xls?) with a quite complex
internal structure.  PDFs, in fact, expect to be read from the end of the
file towards the beginning, and include a table for randomly accessing
objects within the file.  By having a READ into a variable-length string,
with no record delimiters specified, I could read the whole stream at once
and do any manipulation on the string.
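To make the "read from the end" point concrete, here's a sketch in Python
(since the RPG opcodes above are hypothetical anyway).  A well-formed PDF
ends with a startxref keyword followed by the byte offset of the
cross-reference table -- so the natural first step is to slurp the whole
stream and scan backwards.  The function name and fake data are mine, just
for illustration:

```python
# Sketch: find a PDF's cross-reference table offset by scanning from the end.
# A conforming PDF trailer ends with:  startxref\n<byte offset>\n%%EOF

def xref_offset(pdf_bytes: bytes) -> int:
    """Return the byte offset of the xref table, found near the end of file."""
    tail = pdf_bytes[-1024:]                  # trailer sits in the last bytes
    idx = tail.rfind(b"startxref")
    if idx < 0:
        raise ValueError("no startxref found - not a well-formed PDF")
    # the first token after 'startxref' is the decimal offset
    return int(tail[idx + len(b"startxref"):].split()[0])

# minimal fake trailer for demonstration
fake = b"%PDF-1.4\n...objects...\nstartxref\n12345\n%%EOF\n"
print(xref_offset(fake))   # -> 12345
```

A line-at-a-time READ with delimiters couldn't do this; a whole-stream read
into a variable-length string can.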

But for simple files like a .csv, a READ with CR/LF/CRLF delimiters, and a
.csv decode (like your

    eval columnDS = %decode(inputBuffer:EOMSalesCSVTemplate)

example, but simpler --

    eval EOMSalesFields = %decode(inputBuffer:'CSV')

-- that is, the 'CSV' decode would be more generic than your
EOMSalesCSVTemplate, more along the lines of Doug's AWK pattern matching;
EOMSalesFields would be a data structure with the fields in it.  Interesting
point, though -- how would this syntax handle a decoding error, e.g. no room
in the destination field for the extracted data?  Or a decimal data error?
Maybe it should be

    'csv'    decode(e)    inpBuffer    EOMSalesFlds

but of course that doesn't lend itself well to those who prefer free format.
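To pin down what the (e)-style error handling might mean in practice, here's
a sketch in Python (the RPG %decode is hypothetical, so the field names,
sizes, and the error flag here are all my own stand-ins).  The idea: a
generic CSV decode fills a data structure, and instead of halting on a bad
field it sets an indicator the caller can test:

```python
import csv
from dataclasses import dataclass
from decimal import Decimal, InvalidOperation
from io import StringIO

# Stand-in for the EOMSalesFields data structure from the discussion;
# field names and sizes are invented for illustration.
@dataclass
class EOMSalesFields:
    region: str
    units: int
    amount: Decimal

def decode_csv(input_buffer: str):
    """Decode one CSV record; return (record, error_flag) like the (e) extender."""
    fields = next(csv.reader(StringIO(input_buffer)))
    try:
        rec = EOMSalesFields(region=fields[0][:10],   # truncate to field length
                             units=int(fields[1]),    # may raise on bad numeric
                             amount=Decimal(fields[2]))
        return rec, False
    except (IndexError, ValueError, InvalidOperation):
        return None, True     # decimal data error, missing column, etc.

good, err1 = decode_csv("NORTHEAST,42,1234.56")
bad, err2 = decode_csv("NORTHEAST,forty-two,1234.56")   # numeric decode fails
```

Here err1 comes back off and err2 comes back on -- the same test-the-indicator
pattern the (e) extender gives you, without tying the decode to one template.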

And don't forget keywords on the F-spec for CCSID.
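Since the F-spec CCSID keyword would do the conversion for you, here's what
it's standing in for, sketched with Python's codecs (Python again purely for
illustration; cp037 is the common US EBCDIC CCSID 37):

```python
# Sketch: the conversion a CCSID keyword would handle.  An IFS stream file
# typically holds ASCII/UTF-8 data, while RPG character fields default to
# the job's EBCDIC CCSID (e.g. 37).  Read the bytes without converting and
# you get garbage; decode with the right CCSID and you get the text.

ebcdic_bytes = "HELLO,WORLD".encode("cp037")   # simulate EBCDIC stream data
assert ebcdic_bytes != b"HELLO,WORLD"          # byte values differ from ASCII

text = ebcdic_bytes.decode("cp037")            # convert on the way in
```

With the keyword on the F-spec, every READ would hand you `text`, not
`ebcdic_bytes`.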

Rick Baird said:
>>It's not an all or none proposition is it?

Buck Calabro said:
>Not at all; see the above quote about "just the most common."  But that's
>the general idea; translate from CSV into my native DB2 table layout.  As
>far as knowing what our files are going to look like, my environment is
>radically different from yours.  My bean counters are constantly tinkering
>with their spreadsheets, and consequently the layout of the CSV file.

It seems to me that with either method -- APIs and a lot of parsing, or
"integrated" file I/O and parsing -- you'd have to be changing your programs
and/or file formats a lot.  Is that the case?



[From Peter]
>It's amazing the amount of resistance to a new
>idea I'm seeing <g>.

[Buck's response]
I voted YES to native IFS file support in the last poll.  I just want to see
the mechanism.  Since this is a theoretical discussion I figured I'd throw
out some things to theorise upon...

[Peter again]
It just seemed to have a _touch_ of sarcasm in it... but you had some good
points too.  I just got off the phone with a colleague who could actually use
something like UNSTRING for a current project that has nothing to do with
the IFS.  So separation of church and state -- oops, I mean file I/O and
parsing -- would be a good thing.

Regards,
Peter Dow
Dow Software Services, Inc.
909 793-9050 voice
909 522-3214 cellular
909 793-4480 fax






This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].

Operating expenses for this site are earned using the Amazon Associate program and Google Adsense.