
Gary, what you are asking for has been available for years at
www.think400.dk/downloads.htm under my name, Alan Campin. The utility is
called COMPILE and does exactly what you want. Let me know if you
have questions.

On Fri, Dec 26, 2014 at 9:47 AM, Gary Kuznitz <docfxit@xxxxxxxxxxxx> wrote:

Hi Rob,

Thank you very much for spending the time to spell all this out to me.

Comments below...

On 26 Dec 2014 at 10:21, rob@xxxxxxxxx (via midrange-l@xxxxxxxxxxxx)
commented about Re: How to find the number of records in an IFS file:

You are trying to convert
from an IFS file
to a DB file.

Yes

Therefore the IFS file (by that I assume you mean a stream file outside
of the QSYS.LIB file system) already has its "number of records"
determined.

Yes.

So you must be trying to set the number of records on the DB2 file.
Then, yes, that would be a simple OVRDBF, CHGPF, or doing the CRTPF or
CREATE TABLE differently.

Let me reread this...
OK, so on the DB2 file you do not want to use *NOMAX, but you want to set
the maximum to some count of the stream file. And stream files have no
concept of 'records'. However (after doing a bunch of research), let me
try this.

I could set it to *NOMAX when I create the file manually. My intent in
finding out the number of records in the stream file, and setting the DB2
file to accept that number of records, was so it would be accomplished
programmatically. I have nothing against setting it to *NOMAX.
I was using OVRDBF, which doesn't allow *NOMAX. I can use CHGPF
to set the file to *NOMAX and call it a day.

Ideally it would be great to have a compile system in place where I could:
1. Place compile statements in the source member, so I wouldn't have to
remember how to compile a program.
2. Have a compile system that would read those source statements and run
the compile as it was originally programmed.
I have wanted something like this for many years and have never taken the
time to do it.

STRSQL
CREATE TABLE ROB/TEST (MYCHAR CHAR (10 ) NOT NULL WITH DEFAULT)
Table TEST created in ROB.
INSERT INTO ROB/TEST VALUES('ROW 1')
1 rows inserted in TEST in ROB.
INSERT INTO ROB/TEST VALUES('ROW 2')
1 rows inserted in TEST in ROB.
INSERT INTO ROB/TEST VALUES('ROW 3')
1 rows inserted in TEST in ROB.

CPYTOIMPF FROMFILE(ROB/TEST) TOSTMF('/rob/test.txt')
STMFCCSID(*PCASCII)
RCDDLM(*CRLF) STRDLM(*NONE)
DSPF '/ROB/TEST.TXT'
....+....1....+....2....+....3.
************Beginning of data*
ROW 1
ROW 2
ROW 3
************End of Data*******

QSH

cat /rob/test.txt
ROW 1
ROW 2
ROW 3
$
cat /rob/test.txt | wc -l
3
$

So now I see you are looking to retrieve that '3'.
So, when I run
qsh cmd('cat /rob/test.txt | wc -l')
in an interactive session the following displays on the screen:
3
Press ENTER to end terminal session.

When I run
qsh cmd('cat /rob/test.txt | wc -l')
in a batch session:
SBMJOB CMD(QSH CMD('cat /rob/test.txt | wc -l'))
Job 420557/ROB/QDFTJOBD submitted to job queue QBATCH
WRKJOB 420557/ROB/QDFTJOBD
4. Work with spooled files
I get a spooled file with one row displaying 3 in it.

I tried a few things:
qsh cmd('cat /rob/test.txt | wc -l > /qsys.lib/qtemp.lib/mydtaara.dtaara')
Command ended normally with exit status 2.
qsh cmd('cat /rob/test.txt | wc -l > /qsys.lib/qtemp.lib/mytable.file')
Command ended normally with exit status 2.
qsh cmd('cat /rob/test.txt | wc -l > /rob/RecordCount.txt')
Command ended normally with exit status 0.
So directing the output to a data area or a DB2 file wasn't going to
happen. Directing it to a stream file was OK, though.
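Another route worth noting: instead of redirecting into a QSYS.LIB object, the count can be captured directly in a shell variable with command substitution. A minimal sketch in POSIX shell (QSH is largely POSIX-compatible), using hypothetical /tmp paths rather than the actual /rob paths:

```shell
# Build a three-line sample file (hypothetical path, for illustration).
printf 'ROW 1\nROW 2\nROW 3\n' > /tmp/test.txt

# "wc -l < file" reads stdin, so only the number is printed (no filename),
# and the extra cat process isn't needed.
count=$(wc -l < /tmp/test.txt)
echo "$count"

# The count can still be written to a stream file for a CL program
# to pick up later.
echo "$count" > /tmp/RecordCount.txt
```

Note that some wc implementations pad the number with leading spaces, so treat it as an integer rather than comparing it as an exact string.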

Which leaves you a couple of choices.
- Run in batch and use CPYSPLF.
- Direct to a stream file and read that

It seems like using CPYSPLF would be very simple if I knew how to direct
it to a stream file. I know that with CPYSPLF the output can be a stream
file on the IFS. In this case I would need the input to be a stream file,
/rob/RecordCount.txt, so I could direct it to a DB2 file and read it
into the CLP.

or do a CPYFRMSTMF on it.
- Skip CPYFRMIMPF altogether and use stream file APIs (like the Scott
Klement tutorial) to read the stream file and write it to a DB2 file. You
can either set the DB2 file to *NOMAX and stop your read loop if you read
too many records and just don't want to process the rest, or continue to
use a maximum and condition the write so that if the file is full you
abort the read loop.
- Continue using your existing process and error trap it.

I don't know why the phobia against *NOMAX.
None. I didn't think of that. It's been many years since I needed to
process many files with millions of records. Thank you for bringing this
to light.

It would be understandable if you were actually taking the WC output and
saying if it's over x then don't just do an OVRDBF but instead do...
But if you are just going to set it to whatever the size of the stream
file is then why not use *NOMAX?

I will.

That's really great of you to list out my options.

Thank you very much,

Gary


Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko
Dept 1600
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com





From: "Gary Kuznitz " <docfxit@xxxxxxxxxxxx>
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Date: 12/26/2014 02:27 AM
Subject: How to find the number of records in an IFS file
Sent by: "MIDRANGE-L" <midrange-l-bounces@xxxxxxxxxxxx>



I have a CLP that converts an IFS file to a DB file.

I would like to do an OVRDBF for number of records on the output file to
make sure it doesn't exceed its limits.

There are millions of records in the files.
I have tried this, but I think it only works on System i files (not IFS):
DCLF FILE(QAFDMBR)
DSPFD FILE(&INPUTFILE) TYPE(*MBR) OUTPUT(*OUTFILE) +
FILEATR(*ALL) OUTFILE(QTEMP/QAFDMBR)
OVRDBF FILE(QAFDMBR) TOFILE(QTEMP/QAFDMBR) POSITION(*START)

RCVF
/* The number of records is in &MBNRCD */
OVRDBF FILE(&OUTPUTFILE) SEQONLY(*YES &MBNRCD)

I have tried this, but I don't understand where the number of records
ends up:
QSH CMD('cat /home/Payroll/ATU_PAYROLL_DATA_FY14_372_398.txt | wc -l')

I have never used QSH before.
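To the question of where the number "ends up": wc writes its count to standard output, which in an interactive QSH session is the terminal. Redirecting standard output captures the number in a file instead. A minimal POSIX-shell sketch, with hypothetical /tmp file names standing in for the actual payroll path:

```shell
# Hypothetical two-line sample standing in for the payroll data file.
printf 'line 1\nline 2\n' > /tmp/payroll_demo.txt

# wc -l writes the count to stdout; interactively, that is the screen.
wc -l < /tmp/payroll_demo.txt

# Redirect stdout to capture the number in a stream file instead.
wc -l < /tmp/payroll_demo.txt > /tmp/linecount.txt
```

Reading from stdin (`<`) rather than naming the file as an operand keeps the filename out of the output, leaving only the number.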

Thanks,

Gary Kuznitz
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing
list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.







