|
>It seems to me that with either method (API's and
>a lot of parsing) or "integrated" file i/o and
>parsing, you'd have to be changing your programs
>and/or file formats a lot. Is that the case?

Not much. The work comes in finding out the new rules. I never get to
throw out the old rules, alas. I haven't yet perfected a foolproof
table-driven pattern matching algorithm that performs quickly enough to
deal with a few million records a pop, so there's code involved for each
record type. The decoder stores the decoded messages in a dynamic array
and passes back the message count:

    mainline
        read block
        eval count = decode(block: messages: recordType)
        for i = 1 to count
            process messages(i)
        end loop

The decode procedure is nothing more than a traffic cop, directing
control to the appropriate decoding routine; there's one routine for
each record type. In my case, the I/O really is trivial, and the vast
bulk of the processing goes to decoding the messages.

The beauty of doing it this way is that my decoding is device
independent. It doesn't matter whether the raw data comes from the IFS,
a CD-ROM, or a tape drive (yes, I have to deal with all of these). The
I/O bit is all that changes; the decoding service program doesn't care
one whit.

Cheers!
  --buck
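For readers who want to see the shape of the "traffic cop" pattern, here is a
minimal sketch in Python (not the original RPG/ILE environment). The decode
routine inspects each record's type and dispatches to the routine registered
for that type; all names here (decode_type_a, DECODERS, the first-byte type
code) are illustrative assumptions, not details from the post.

```python
def decode_type_a(raw):
    # Hypothetical per-record-type routine: parse a type-'A' record.
    return {"type": "A", "payload": raw[1:].strip()}

def decode_type_b(raw):
    # Hypothetical routine for type-'B' records.
    return {"type": "B", "payload": raw[1:].strip()}

# One routine per record type; supporting a new type means writing one
# routine and adding one entry here -- the mainline never changes.
DECODERS = {"A": decode_type_a, "B": decode_type_b}

def decode(block):
    """Traffic cop: route each raw record to its type's decoder and
    collect the decoded messages (the 'dynamic array' of the post)."""
    messages = []
    for raw in block:
        record_type = raw[0]           # assume the first byte is the type
        routine = DECODERS.get(record_type)
        if routine is None:
            continue                   # unknown record type: skip it
        messages.append(routine(raw))
    return messages

# Mainline: only the way 'block' is read depends on the device (IFS,
# CD-ROM, tape); the decoding service doesn't care where the bytes came from.
block = ["Ahello", "Bworld", "Aagain"]
for msg in decode(block):
    print(msg["type"], msg["payload"])
```

The key design point, as in the post, is that the I/O layer and the decoding
layer meet only at the raw block, so swapping the data source touches no
decoding code.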
This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].