


Okay, here's what I have "working" so far. Many of the suggestions I have
received describe exactly what I already have, so I apologize for the lack of
a clear explanation up front.

Here is what the process currently does:

1.  Divide the number of records by the number of "parallel jobs" requested
(the file has REUSEDLT(*YES) so I'm ignoring the possibility of large gaps
due to deleted records, except that the CPYF for the last "chunk" goes to
*END rather than the record count)
2.  Remove all dependent logical files
3.  Submit a CPYF using FROMRCD(xxxxxxxxxx) TORCD(yyyyyyyyyy) FMTOPT(*MAP
*DROP), which will copy the record range in parallel with all the other jobs.
 This CPYF has an OVRDBF SEQONLY(*YES zzz) before it.
4.  Monitor for the completion of all the submitted jobs
5.  Then submit parallel rebuilds of all the LFs
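The per-chunk job in step 3 can be sketched in CL. This is a minimal sketch only: the object names (BIGLIB/BIGPF, BIGLIB/NEWPF), the program name COPYCHUNK, and the blocking value 100 are all hypothetical placeholders, not anything from this thread.

```cl
/* COPYCHUNK: one parallel "chunk" copy (step 3 above).          */
/* All object names here are hypothetical placeholders.          */
PGM        PARM(&FROMRCD &TORCD)

/* Numeric literals passed via SBMJOB CMD(CALL ... PARM(...))    */
/* arrive as packed (15 5), so declare the parameters to match.  */
DCL        VAR(&FROMRCD) TYPE(*DEC) LEN(15 5)
DCL        VAR(&TORCD)   TYPE(*DEC) LEN(15 5)

/* The SEQONLY(*YES zzz) override mentioned in the post:         */
/* sequential-only output with a large block of records.         */
OVRDBF     FILE(NEWPF) TOFILE(BIGLIB/NEWPF) SEQONLY(*YES 100)

/* Copy only this job's record range, mapping fields by name     */
/* and dropping any that do not exist in the target format.      */
CPYF       FROMFILE(BIGLIB/BIGPF) TOFILE(BIGLIB/NEWPF) +
             MBROPT(*ADD) FROMRCD(&FROMRCD) TORCD(&TORCD) +
             FMTOPT(*MAP *DROP)

DLTOVR     FILE(NEWPF)
ENDPGM
```

Each chunk would then be launched with something like SBMJOB CMD(CALL PGM(COPYCHUNK) PARM(1 250000)) JOB(COPY01); the final chunk needs a variant that specifies TORCD(*END) rather than a record number, as described in step 1.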

It appears to me that CPYF FMTOPT(*MAP *DROP) is not the fastest critter out
there.  Is there a dynamic way to get the effect of FMTOPT(*MAP *DROP) other
than CPYF?  I know COBOL has "MOVE CORRESPONDING", but that would require
compiling a specific version of the program for each format, which I want to
avoid if possible.

Just as one other data point, the record format is HUGE - over 1800 bytes.
(an inherited situation, but I have to live with it for now, at least)

Thanks for all your replies so far!
Michael

> Good idea to have a primary key on the physical file.  You won't need to
> worry about keeping your unique keys unique, as one person warned you.
>
> If you decide to create a read old / write new HLL program to copy the
> data from the old into the new then you could drop the K in the F spec and
> that might help blocking.  Using a primary file (if you are using RPG)
> will help the system do the blocking automatically for you.
>
> However, if the bulk of your access to this file is by the primary key,
> then this might not be a bad time to sort it.  Therefore you might want to
> read it by key and write it to the new file.  This should help your
> performance in the long run (a penalty might be incurred during the
> conversion).  Or read the old data by the logical which is most important
> to your business, performance-wise (provided the select/omits are not
> prohibitive).
>
> Rob Berendt



This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work.