Hi Mike,

Long ago (in a distant galaxy, far far away) I wrote an example for this mailing list called gettok(). Link to archive:
http://archive.midrange.com/rpg400-l/200408/msg00372.html

With that subprocedure, you could do something like this:

x = 0;
myds.field1 = gettok(line: '|': x);
myds.field2 = gettok(line: '|': x);
myds.field3 = gettok(line: '|': x);

and so forth...
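If you don't want to dig through the archive, the idea is roughly this; just a minimal sketch in the newer free-form syntax, with made-up names and field sizes (the archived post has the real thing):

dcl-proc gettok;
   dcl-pi *n varchar(1024);
      string varchar(32767) const;
      delim  char(1) const;
      pos    int(10);
   end-pi;

   dcl-s start int(10);
   dcl-s found int(10);

   // resume just past the last delimiter found
   start = pos + 1;

   // past the end of the string means no more tokens
   if start > %len(string);
      return '';
   endif;

   // find the next delimiter; if there isn't one,
   // the token runs to the end of the string
   found = %scan(delim: string: start);
   if found = 0;
      pos = %len(string);
      return %subst(string: start);
   endif;

   pos = found;

   // two delimiters in a row means an empty field
   if found = start;
      return '';
   endif;

   return %subst(string: start: found - start);
end-proc;

The position parameter is passed by reference, which is why you can chain the calls one after the other; each call picks up where the previous one left off.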

Presumably the data in 'line' would be loaded from a stream file, perhaps using fopen()/fgets()/fclose() to process the file.
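If it helps, the calls would look something like this. The prototypes map straight onto the C runtime functions; the path and buffer size are things you'd adjust for your own file, and I'm glossing over ASCII/EBCDIC translation here:

// prototypes for the C runtime stream functions
dcl-pr fopen pointer extproc('fopen');
   path pointer value options(*string);
   mode pointer value options(*string);
end-pr;

dcl-pr fgets pointer extproc('fgets');
   buf    pointer value;
   size   int(10) value;
   stream pointer value;
end-pr;

dcl-pr fclose int(10) extproc('fclose');
   stream pointer value;
end-pr;

dcl-s fp   pointer;
dcl-s buf  char(4096);
dcl-s line varchar(4096);
dcl-s x    int(10);

fp = fopen('/tmp/mikes-file.txt': 'r');    // path is made up
if fp <> *null;
   // fgets() reads one line and null-terminates the buffer
   dow fgets(%addr(buf): %size(buf): fp) <> *null;
      line = %str(%addr(buf));
      // (you may need to strip a trailing CR/LF and handle
      //  CCSID translation, depending on the file)
      x = 0;
      myds.field1 = gettok(line: '|': x);
      // ...one gettok() call per subfield, as above
   enddo;
   fclose(fp);
endif;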

Unfortunately, there really isn't any way to do this so that the whole DS is populated in one shot. To do that, you'd need a compiler or precompiler: RPG has no run-time reflection, so a subprocedure can't discover the subfield names, types, and lengths of whatever DS it's handed.

Technically, this is a pipe-delimited file (not CSV, which stands for "Comma-Separated Values").

On 5/10/2011 8:43 AM, Mike Wills wrote:
I have a document I need to parse where each row is a different
format. The document is pipe "|" delimited. Is it possible to write a
more generic subprocedure to take each row and a data structure and
fill in the data structure? What I am thinking is iterating through
each field in the data structure and filling it with the corresponding
field from the CSV line that I parse. Is this even possible?

