Hi, John,

If you could send me just the DDS source for the PF and the LF in question, I can recreate the files here on a test system and try to investigate.

I suppose, to do it correctly, I need to see what the DDS for the PF looked like before the 'conversion,' so I can create that version of the PF, then create the LF over it, and then attempt the same CHGPF, to see if I can detect what may be going on.

Or, in this case, since you know this particular LF is "problematic," you could just issue DLTF to delete that LF before or after the CHGPF, and then issue CRTLF to recreate it from the DDS source.  That way, the access path should be rebuilt afresh, and I think you should be "back in business."
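
A minimal sketch of that sequence, using hypothetical library, file, and source member names (substitute your own):

    DLTF  FILE(MYLIB/MYLF)
    CHGPF FILE(MYLIB/MYPF) SRCFILE(MYLIB/QDDSSRC) SRCMBR(MYPF)
    CRTLF FILE(MYLIB/MYLF) SRCFILE(MYLIB/QDDSSRC) SRCMBR(MYLF)

Since CRTLF builds the access path as part of creating the file, the first program to open the LF afterwards should not have to wait on a rebuild.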

This has always been a very touchy area of the whole CHGPF process, where IBM somehow compares the new DDS source version with the existing PF definition, and then dynamically determines the equivalent SQL ALTER TABLE statement(s) to issue, "under the covers."  Sadly, IBM never documented any of this, and does not show us those SQL statements.
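
Just to illustrate the idea (purely hypothetical names and changes -- the real statements are never shown to us), widening a decimal field and adding a character field in the DDS might correspond to something like:

    ALTER TABLE MYLIB/MYPF
      ALTER COLUMN CUSTNO SET DATA TYPE DECIMAL(9, 0)
      ADD COLUMN ORDSTS CHAR(1) NOT NULL DEFAULT ' '

The point is only that CHGPF derives that sort of change from the DDS differences, not that these are the exact statements it runs.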

I hope that helps somewhat?

All the best,

Mark S. Waterbury

On Tuesday, March 9, 2021, 3:10:18 PM EST, smith5646midrange@xxxxxxxxx <smith5646midrange@xxxxxxxxx> wrote:

I’m resurrecting this old post because I have a problem.

We have rewritten our conversion to use CHGPF where we can.  Today someone tried to DBU a logical file built over a huge physical (182M records) and it took 15 minutes (this is a severely restricted machine) and it said something about rebuilding the index. 

I thought CHGPF updated the indexes while it was running (excluding the mismatched ones we discussed earlier).  We can't use CHGPF if it is going to delay normal processing the first time something tries to use the index.

Is there something I missed in all of this?



From: Mark Waterbury <mark.s.waterbury@xxxxxxxxxxxxx>
Sent: Thursday, January 28, 2021 12:14 PM
To: smith5646midrange@xxxxxxxxx
Subject: Re: Converting large amount of data [PRIVATE REPLY]



John,



Wow -- it appears that at some point IBM significantly "dumbed down" the documentation for the CHGPF command.  :-/  I could swear it used to contain far more detail about the process.



See also:

    https://code400.com/forum/forum/iseries-programming-languages/dds/8925-chgpf



and

    https://www.itjungle.com/2006/10/04/fhg100406-story03/



and

    https://www.itjungle.com/2005/07/13/fhg071305-story01/





Hope that helps,   



Mark S. Waterbury



On Thursday, January 28, 2021, 11:57:13 AM EST, smith5646midrange@xxxxxxxxx <smith5646midrange@xxxxxxxxx> wrote:





What I have found through testing is that if you define the fields in the LF, regardless of the record format, CHGPF does not update the field sizes in the LF when it runs.
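
If it helps, one quick way to verify that on a test copy is to capture the LF's field descriptions before and after the change and compare the lengths -- a rough sketch with hypothetical names:

    DSPFFD FILE(MYLIB/MYLF) OUTPUT(*OUTFILE) OUTFILE(QTEMP/LFBEFORE)
    CHGPF  FILE(MYLIB/MYPF) SRCFILE(MYLIB/QDDSSRC)
    DSPFFD FILE(MYLIB/MYLF) OUTPUT(*OUTFILE) OUTFILE(QTEMP/LFAFTER)

Comparing the field lengths in the two outfiles shows whether the LF really picked up the new sizes.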

As for your RTFM comment...I DID read the documentation for the CHGPF command (https://www.ibm.com/support/knowledgecenter/ssw_ibm_i_73/cl/chgpf.htm).  I did not see anything on that page, or in any of its links, about "edge cases".  If you can point me to where this is called out in the doc, I will be more than happy to re-read it and see how I missed it.


-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of Mark Waterbury
Sent: Thursday, January 28, 2021 11:21 AM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxxxxxxxx>
Subject: Re: Converting large amount of data

John,
It has ALWAYS worked this way, ever since CHGPF was first introduced in V3R2, IIRC.
This is a case of "RTFM" -- you need to read the documentation for the CHGPF command to understand the "edge cases," the situations it handles in a particular way that may not always be the way YOU wanted.  Then you need to deal with those cases yourself.
Hope that helps,
Mark S. Waterbury

On Thursday, January 28, 2021, 11:16:17 AM EST, smith5646midrange@xxxxxxxxx <smith5646midrange@xxxxxxxxx> wrote:

I just recreated his example on our V7.3 machine and the problem still exists.  Not even a report of any type to tell you that you might have a problem with the second logical. 

What a potential disaster waiting to happen!!!!!

-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of Rob Berendt
Sent: Thursday, January 28, 2021 10:39 AM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxxxxxxxx>
Subject: RE: Converting large amount of data

If you do need to do some digging, I'm pretty sure the guy who wrote that article would not be using any of those display-to-an-outfile commands unless he just had to.  He is on the SQL services bandwagon big time.
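
For what it's worth, a rough sketch of what that SQL-services style of digging might look like, with hypothetical library and file names:

    SELECT SYSTEM_COLUMN_NAME, DATA_TYPE, LENGTH, NUMERIC_SCALE
      FROM QSYS2.SYSCOLUMNS
     WHERE SYSTEM_TABLE_SCHEMA = 'MYLIB'
       AND SYSTEM_TABLE_NAME = 'MYLF'
     ORDER BY ORDINAL_POSITION

Running that against the LF before and after the CHGPF would show whether the column definitions actually changed.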

Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko Dept 1600
Mail to:  7310 Innovation Blvd, Suite 104
          Ft. Wayne, IN 46818
Ship to:  7310 Innovation Blvd, Dock 9C
          Ft. Wayne, IN 46818
http://www.dekko.com

-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of smith5646midrange@xxxxxxxxx
Sent: Thursday, January 28, 2021 10:24 AM
To: 'Midrange Systems Technical Discussion' <midrange-l@xxxxxxxxxxxxxxxxxx>
Subject: RE: Converting large amount of data

So in our conversion meeting yesterday, the CHGPF was discussed.  Later, someone forwarded an OLD article to me from 2013 about a problem with CHGPF relating to PF and LF record formats not matching (link below).  At the bottom of the article, it says the article was written for V7.1.  We are running V7.3.  Does anyone know if this is still an issue, or do I need to do some digging to find out if any record formats do not match?

https://www.rpgpgm.com/2013/06/chgpf-theres-quirk-that-can-bite-your.html

-----Original Message-----
From: smith5646midrange@xxxxxxxxx <smith5646midrange@xxxxxxxxx>
Sent: Friday, January 22, 2021 1:43 PM
To: 'Midrange Systems Technical Discussion' <midrange-l@xxxxxxxxxxxxxxxxxx>
Subject: RE: Converting large amount of data

CHGPF never entered my mind even though I do use it occasionally.

For the files being converted with CPYF, we are doing a *MAP *DROP.
Anything that could not be handled via *MAP *DROP (e.g., an alpha field changing from 2A to 3A and needing a leading zero added) is being converted via a custom program.
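
A minimal sketch of that kind of custom conversion in SQL, with hypothetical library, file, and field names (here the 2A column ITEMCD becomes 3A with a leading zero; every other column would be listed explicitly as well):

    INSERT INTO NEWLIB.MYFILE (CUSTNO, ITEMCD, QTY)
      SELECT CUSTNO,
             '0' CONCAT ITEMCD,
             QTY
        FROM OLDLIB.MYFILE

The same mapping could just as easily be done in a small RPG program; the point is the per-field handling that *MAP *DROP cannot do.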

Do you know that CHGPF is quicker than CPYF?  I suppose that might depend on how many fields are changing.


-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of Rob Berendt
Sent: Friday, January 22, 2021 1:17 PM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxxxxxxxx>
Subject: RE: Converting large amount of data

Good point.  There are a number of people who never knew that many years ago IBM allowed you to do the following to modify and add columns:

    CHGPF FILE(MYFILE) SRCFILE(MYLIB/QDDSSRC)

They only remember the early releases, which required you to copy from one file to another to modify or add columns.

Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko Dept 1600
Mail to:  7310 Innovation Blvd, Suite 104
          Ft. Wayne, IN 46818
Ship to:  7310 Innovation Blvd, Dock 9C
          Ft. Wayne, IN 46818
http://www.dekko.com


-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxxxxxxxx> On Behalf Of x y
Sent: Friday, January 22, 2021 1:06 PM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxxxxxxxx>
Subject: Re: Converting large amount of data

Copy data into PF's without logicals, SQL indexes, or constraints (referential integrity and such).

In CPYF, use FROMRCD(1) and COMPRESS(*NO) to try to force the system to do a block copy.  You mentioned "new format", so you'll probably use FMTOPT(*MAP *DROP); don't use it unless you need to.
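
For instance, a sketch with hypothetical names (drop FMTOPT(*MAP *DROP) when the formats already match, so the fast copy can kick in):

    CPYF FROMFILE(OLDLIB/MYFILE) TOFILE(NEWLIB/MYFILE) MBROPT(*REPLACE)
         FROMRCD(1) COMPRESS(*NO) FMTOPT(*MAP *DROP)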

WRT "new format", remember you can change your DDS source (multiple columns at once) and then execute CHGPF (including the DDS source file and member
names) and the system will change the format for you; it's the equivalent of SQL's ALTER TABLE.  *My experience suggests the logicals are magically maintained through this process; this can be a major time-saving benefit*.
So, if you need to add new columns, resize others, change the text, edit codes, or column headings, CHGPF is the way to go.  I've been midranging since the days of the System/3 and using CHGPF to change the layout of a file is IMO one of the top five enhancements ever.

On Fri, Jan 22, 2021 at 8:27 AM <smith5646midrange@xxxxxxxxx> wrote:

We have some files that are pretty big that we are going to be
converting to a new format and I'm looking for some info on converting
the data.



I am not looking for answers about splitting the work into threads or anything
like that.  We are converting over 700 files, so threads will not help me;
they will just delay other files from being converted.



That said, I have one specific question that I am looking for an answer to.



Is it more efficient to

1.  Create the new physical files and their logical files, and then convert the data

or

2.  Create the new physical files, convert the data, and then add the logical files?



I know the machine has 3 CPUs with the ability to pull 3 more from the pool.

I forget the memory, but it is a pretty decent amount.

There are 12 I/O channels.



If there is something else you need to know to give me an answer,
please ask.
