>(no disrespect intended ;-)

And none found.  I'm grinning widely as I type!

>whoa there big fella, stop right there!   no one's
>asking for even a fraction of this stuff.

Hm?

"Now imagine you want to read a .csv file with
 lots of numbers in them, e.g.
 123,456.78,23,104568.20-,134.4343,134234,08797,0789.99

 Would you rather use the API's or something like
 the following:

    d Data1        ds
    d  Nbrs                   31p 9     dim(20)

    c            read    XYZ        Data1

"Having the native I/O routines handle the data
 parsing and conversion for us would go a long
 way towards answering your 2nd concern about
 handling the data once it's been read."

and
"Besides using a things like read-delimited
 into an externally described DS..."

and
"a. Read delimited -- read the input stream
    into a separate DS subfield, using EVAL
    rules, up to user-specified delimiter."

Sounds like a universal translator to me.  Of course, I've been accused of
being a Trekkie before, so maybe I have my oeuvres mixed up!  :-)

>We would really need nothing more than what's
>specified on the CPYFRMSTMF and CPYTOSTMF commands.
>and don't tell me to 'just use the commands', they
>are unwieldy, and tough to decifer errors from,
>plus why would anyone use the api's if the commands
>were better?

Er... if we really need nothing more than the commands, but the commands
need to be better, then aren't we asking for something more than the
commands?
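
For anyone who hasn't fought with them, here's roughly what the command
route looks like from inside an RPG program today (the library, file and
path names are all made up):

     d qcmdexc         pr                  extpgm('QCMDEXC')
     d  cmd                        3000a   const options(*varsize)
     d  cmdLen                       15p 5 const

     d cmdStr          s           3000a

      /free
        // copy the stream file into a physical file member, then read
        // that member with ordinary native I/O
        cmdStr = 'CPYFRMSTMF FROMSTMF(''/home/acct/sales.csv'') '
               + 'TOMBR(''/QSYS.LIB/MYLIB.LIB/SALES.FILE/SALES.MBR'') '
               + 'MBROPT(*REPLACE)';
        qcmdexc(cmdStr: %len(%trimr(cmdStr)));
      /end-free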

>No need for updating a file. (read in entire file
>and rewrite it if that's what you're after).

"You might want another parm to tell
 whether to open it as input, output or update."

>as far as processing multiple different types of
>files, no one wants that either - we know what
>our files are going to look like, and will work all
>of that out in testing.  Even if you did want to
>process an excel file, parse the damn thing once
>you get the data in your program, not in the read
>bif or opcode.

"It doesn't have to handle every possible way
 to read/write a stream file, just the most common."

and
"Bean counters and other such like these [csv files]
 a lot. With OS400 data."

>It's not an all or none proposition is it?

Not at all; see the above quote about "just the most common."  But that's
the general idea: translate from CSV into my native DB2 table layout.  As
far as knowing what our files are going to look like, my environment is
radically different from yours.  My bean counters are constantly tinkering
with their spreadsheets, and consequently with the layout of the CSV file.

[From Peter]
>It's amazing the amount of resistance to a new
>idea I'm seeing <g>.

I voted YES to native IFS file support in the last poll.  I just want to see
the mechanism.  Since this is a theoretical discussion I figured I'd throw
out some things to theorise upon...

>Nor do I believe it is very radical --
>very little syntax change, and some
>new F-spec keywords.

It's not the syntax issue that I'm worried about.  I use /free, C and APIs.
Daily.  (Except today <grin>!)  I work with vendor-supplied stream files
every day.  Every day.  Prototyping open() and read() is simple and easy, no
matter what company I move to.  Using open() and read() is equally simple,
no matter what company I move to.  Just like I re-do my subfile template to
match a new employer, I will no doubt re-do my stream file prototype.  Not
really an issue for me personally, but I DO understand the power of official
IBM backing.
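
For the record, the prototypes I keep re-doing look more or less like
this.  These are the Unix-type IFS APIs; the optional mode and codepage
parameters, and the O_* flag constants, come straight from the API
documentation, so treat it as a sketch rather than gospel:

     d open            pr            10i 0 extproc('open')
     d  path                           *   value options(*string)
     d  oflag                        10i 0 value
     d  mode                         10u 0 value options(*nopass)
     d  codepage                     10u 0 value options(*nopass)

     d read            pr            10i 0 extproc('read')
     d  fildes                       10i 0 value
     d  buffer                         *   value
     d  nbytes                       10u 0 value

     d close           pr            10i 0 extproc('close')
     d  fildes                       10i 0 value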

The hard part is understanding that stream of bytes once I've got it into my
program.  Hence the implicit requirement for a runtime decoder of some kind.
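
To show what I mean, here's the by-hand version of pulling just the
first number out of a record like the one quoted earlier.  Field names
are invented, empty fields and an empty record still need their own
checks, and you'd want to verify how %dec on your release treats
trailing minus signs and embedded commas before trusting it:

     d record          s           1000a   varying
     d field           s             64a   varying
     d amount          s             31p 9
     d comma           s             10i 0

      /free
        // record holds one line from the stream file,
        // e.g. '123,456.78,23,104568.20-,...'
        comma = %scan(',': record);
        if comma = 0;
          comma = %len(record) + 1;  // no comma: whole record is one field
        endif;
        field = %subst(record: 1: comma - 1);

        // %dec sends an escape message for anything it doesn't like,
        // so wrap it in a monitor group
        monitor;
          amount = %dec(field: 31: 9);
        on-error;
          amount = 0;
        endmon;
      /end-free

Multiply that by twenty columns per record and you can see where the
time goes.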

>My first response is that I'm not looking
>for a universal file transfer, I'm looking
>for a simple implementation that would
>handle most cases -- comma-separated-values
>(.csv) files was the closest to a decoder
>that I suggested -- and be the basis for
>more complex stream file processing.

Fair enough, and granted that I went overboard in my trip through space, but
what will happen in real life?  That's why the far-fetched speculation.  I'm
in favour of getting as many wishes as possible on the table so that the
requirements folks can see how full their plate really is.  Think about it:
Toronto gives us CSV decoding and then gets blasted for ignoring XML.  Or
tab-delimited files...

>If it's too traumatic to combine file
>i/o and parsing (i.e. reading into a
>data structure), how about an opcode
>like COBOL's UNSTRING?

hee hee... Touché!  I deserve that :-)

It isn't traumatic to have a ReadAndDecodeCSVFileIntoDS operation code
(catchy name, eh?  Came up with it all by myself!)  But I think you've made
a good point when you mention UNSTRING.  (I'll be wandering over there
now...)  It's not really the I/O operation part of this that needs help;
it's the string handling that's hard.

When I said YES to native IFS support, what I wanted was an option on the
CRTRPGMOD command to include BNDDIR(QC2LE) and bring in a stdio prototype so
that everybody could use IBM's names and IBM's prototypes for the standard
I/O functions.  I completely agree that having the IBM imprimatur on
something gives it huge legitimacy AND makes it more portable from employer
to employer.  Absolutely agree.
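
Today that means an H-spec and a copy member in every shop, something
along these lines (the member name is just my own convention):

     h bnddir('QC2LE')

      * shop-written prototypes for fopen(), fgets(), fclose() and
      * friends live in this member
      /copy qrpglesrc,stdio_h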

I would truly, really, madly love to see eval columnDS = %decode(inputBuffer:
EOMSalesCSVTemplate).  From IBM, for the above reasons.  I would also love
to see the reverse (%encode).  That's the mechanism I would use for simple
CSV formatting.  A programmer-supplied template, like an edit word.  That way
the programmer can be in (dynamic) charge of the data and not feel limited
by the compiler.
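
Failing a real %decode, the nearest thing I can picture is a
shop-written procedure with the same shape: buffer in, template in,
data structure out.  Everything below (the procedure name, the template
syntax, the field names) is invented purely to illustrate the idea:

     d decodeCsv       pr            10i 0
     d  buffer                    32767a   const varying
     d  template                    256a   const varying
     d  target                         *   value

     d columnDS        ds                  qualified
     d  custNo                        8p 0
     d  invoice                       9p 0
     d  amount                       11p 2

     d inputBuffer     s          32767a   varying

      /free
        // the template plays the edit-word role: it names the delimiter
        // and the order and type of each column
        if decodeCsv(inputBuffer: ',: P(8,0) P(9,0) P(11,2)':
                     %addr(columnDS)) = 0;
          // columnDS.custNo, .invoice and .amount now hold the values
        endif;
      /end-free

The point isn't that particular template syntax; it's that the caller,
not the compiler, decides at run time what the buffer means.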

Great thread!
  --buck

