Yesterday, I reran the extracts and splits. The code was identical, just a
new run. This time everything worked correctly.

This is a pretty busy machine, and my suspicion is that something goes
wrong when the system is under load. I have seen this in the past with
buffering; back then I would change the force-write ratio to 1 and force the
data to disk on every write/update.
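
For reference, the force-write ratio can be set permanently with CHGPF, or
overridden for the current job with OVRDBF; a minimal sketch, with
hypothetical object names:

CHGPF FILE(MYLIB/MYFILE) FRCRATIO(1)   /* every write/update forced to disk */
OVRDBF FILE(MYFILE) FRCRATIO(1)        /* same effect, current job only */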

This seems to be a similar issue.


As the process worked this weekend, I am not going to switch to CPYF.

Darryl.

On Sat, Apr 18, 2015 at 6:45 PM, CRPence <crpbottle@xxxxxxxxx> wrote:

On 18-Apr-2015 08:53 -0500, Darryl Freinkel wrote:

I did a check on the big file. It has 229,386 records. The maximum
RRN for this file is also 229,386, which means there are no deleted
records in the file; DSPFD likewise shows no deleted records.
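
For reference, that check can be run as a single query; a minimal sketch,
using the FROM_LIB/FROM_FILE placeholders from the script below. If the two
values match, the file contains no deleted records:

select count(*) as active_rows, max(rrn(A)) as max_rrn
from FROM_LIB/FROM_FILE as A
;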


If the issue persists against that specific [unchanged] copy of
the_big_file, i.e. if the following requests insert anything other than
exactly 10,000 records into each [but the last] of the smaller files being
created, then that inconsistent result would suggest an apparently solid
recreate of the issue; and an issue that behaves as described is
conspicuously a defect.

create table TO_LIB/CPYF_0001 as
( select * from FROM_LIB/FROM_FILE as A
where rrn(A) between 0000000001 and 0000010000
) with data
;
create table TO_LIB/CPYF_0002 as
( select * from FROM_LIB/FROM_FILE as A
where rrn(A) between 0000010001 and 0000020000
) with data
;
[...]
create table TO_LIB/CPYF_0022 as
( select * from FROM_LIB/FROM_FILE as A
where rrn(A) between 0000210001 and 0000220000
) with data
;
create table TO_LIB/CPYF_0023 as
( select * from FROM_LIB/FROM_FILE as A
where rrn(A) between 0000220001 and 0000230000
) with data
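
If the recreate is attempted, a count over each result table confirms whether
exactly 10,000 rows landed in each; a minimal sketch for the first slice
(CPYF_0001 through CPYF_0022 should each hold 10,000 rows, and CPYF_0023 the
remaining 9,386 of the 229,386):

select count(*) from TO_LIB/CPYF_0001
;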



It is concerning that the creation of the smaller files is
skipping records in the file.


Agreed. Although if the problem does not occur with the current version
of the_big_file, yet has occurred against prior iterations of the processing
in which the scripted CREATE TABLE requests were preceded by a DELETE FROM
the_big_file rather than a CLRPFM of the_big_file, then I suspect the issue
arises from situations in which the fast-delete was not eligible, so the SQL
left many deleted records in the_big_file; i.e. as has been noted repeatedly,
if the_big_file has any deleted records, then with the use of RRN [instead of
an OLAP query with ROW_NUMBER()] the results should be expected to be
unpredictable.
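
For illustration, a split keyed on ROW_NUMBER() instead of RRN() might look
like the following; a minimal sketch reusing the placeholder names from the
script above, with SPLIT_0001 as a hypothetical target name. Note the extra
SEQ column is carried into the result table unless columns are listed
explicitly:

create table TO_LIB/SPLIT_0001 as
( select * from
  ( -- even when deleted records leave gaps in RRN(), ROW_NUMBER() assigns
    -- consecutive values 1..n to the active rows, so each slice is exact
    select A.*, row_number() over (order by rrn(A)) as seq
    from FROM_LIB/FROM_FILE as A
  ) as T
where T.seq between 0000000001 and 0000010000
) with data
;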


I will have to test splitting the file with CPYF, using relative
record numbers on the command.


The Copy File (CPYF) command offers the Copy From Record Number (FROMRCD)
and Copy To Record Number (TORCD) parameters as the equivalent of the BETWEEN
predicate on RRN() in the queries. An alternate variation is available with
the Number Of Records To Copy (NBRRCDS) parameter: specify NBRRCDS(10000)
with each successively larger FROMRCD value [instead of a TORCD
specification].
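
In CL, the two variations might look like this; a minimal sketch reusing the
placeholder names from the SQL script, with CRTFILE(*YES) assumed so the
target files need not already exist:

/* First slice: records 1-10000, bounded by FROMRCD/TORCD */
CPYF FROMFILE(FROM_LIB/FROM_FILE) TOFILE(TO_LIB/CPYF_0001) +
     CRTFILE(*YES) FROMRCD(1) TORCD(10000)

/* Second slice: same effect via NBRRCDS instead of TORCD */
CPYF FROMFILE(FROM_LIB/FROM_FILE) TOFILE(TO_LIB/CPYF_0002) +
     CRTFILE(*YES) FROMRCD(10001) NBRRCDS(10000)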


--
Regards, Chuck
