On 05-Apr-2012 07:14, Stone, Joel wrote:
CPYF has no support? Please see my next response - I don't
understand!!
CRPence on Wednesday, April 04, 2012 6:15 PM
On 04-Apr-2012 14:51, Stone, Joel wrote:
I'm trying to write a CL routine to promote a file with fields
renamed AND several fields added into the middle of the record
format.
I am looking for an example data conversion program. If you have
one you could share to get me started, that would be great!
1) CPYF *NOCHK to copy the data
2) CPYF *MAP to move the data into the proper fields
<<SNIP>>
Irrespective of CMS, CPYF has no support to effect the described
changes; i.e. not without either significant data loss or multiple
passes of DDL and data-copy activity. <<SNIP>>
That response [On 05-Apr-2012 07:32, Stone, Joel wrote:] is copied
below for reference, as if it were part of the message quoted for my
reply, with the text quoted above left for context. My reply follows
the quoted text below:
re: [Charles Wilt on Thursday, April 05, 2012 8:37 AM wrote:]
"CL is a poor choice for your needs..."
Why is a very simple CL with CPYF a poor choice? It will do the
job WITHOUT naming fields, and could be used on ANY file conversion
- as opposed to SQL, which would need specific fields named and is
NOT reusable for other files??
Step 1:
To copy data with ONLY field name changes (no new fields, no field
length changes): CPYF FMTOPT(*NOCHK) from SOURCELIB to the QTEMP
version of the file with the field name changes.
Step 2:
CPYF FMTOPT(*MAP *DROP) from QTEMP to the TARGETLIB version of the
file with the field name changes AND the new fields inserted.
Please explain why this simple, reusable method would be a poor
choice??
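A minimal CL sketch of the two-step approach quoted above, using
hypothetical names (SOURCELIB/MYFILE, TARGETLIB/MYFILE); the QTEMP copy
is assumed to exist already with the renamed fields but the old layout:

    PGM
      /* Pass 1: copy into the renamed-field version of the file.       */
      /* FMTOPT(*NOCHK) skips the record-format level check, so this is */
      /* only safe when the layouts are byte-identical apart from the   */
      /* field names.                                                   */
      CPYF FROMFILE(SOURCELIB/MYFILE) TOFILE(QTEMP/MYFILE) +
           MBROPT(*REPLACE) FMTOPT(*NOCHK)
      /* Pass 2: copy by field name into the target that has the new    */
      /* fields; *MAP matches fields by name, *DROP discards from-file  */
      /* fields that have no match in the target format.                */
      CPYF FROMFILE(QTEMP/MYFILE) TOFILE(TARGETLIB/MYFILE) +
           MBROPT(*REPLACE) FMTOPT(*MAP *DROP)
    ENDPGM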
As I had alluded, in further clarification of "has no support": the
use of CPYF is effective only with *multiple* passes of the data, using
intermediate definition(s) [i.e. DDL] for the TABLE. As well, there are
caveats for the use of FMTOPT(*NOCHK), for which the generic and thus
"reusable" capability of that feature of the Copy File utility may leave
one wanting in any scenario involving more than the simplest record
formats. The CPYF utility has other limitations as well; e.g. it does
not participate in isolation. There are also idiosyncrasies that must be
understood, for which the inference of /generic/ may haunt if the
effects are unknown or misunderstood; e.g. FROMRCD(*START) would be
discouraged in such scenarios, yet that is the parameter default. And
as with any *CMD, every parameter that both can be specified and can
have its default changed should be specified explicitly, to avoid
unexpected behavior from changed defaults if the command runs somewhere
that strict consistency with the test environment is not ensured.
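A sketch of that last point, again with hypothetical names: a CPYF
invocation with the relevant parameters spelled out rather than left to
shipped defaults, which a site may have altered with CHGCMDDFT:

    CPYF FROMFILE(QTEMP/MYFILE) TOFILE(TARGETLIB/MYFILE) +
         FROMMBR(*FIRST) TOMBR(*FIRST) MBROPT(*REPLACE) +
         FMTOPT(*MAP *DROP) CRTFILE(*NO) ERRLVL(0)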
FWiW I actually consider using SQL the better choice, specifically
because the request is not generic. If I want to upgrade a database
file, I do not want to merely /assume/ everything is going well, as
would be the general effect with CPYF FMTOPT(*NOCHK) and CPYF
FMTOPT(*MAP *DROP). For example, I would want the upgrade to diagnose
an error for unexpectedly missing column names or a wrong number of
columns. I would rather not have to code something to interrogate the
nature of the existing file to be upgraded [a presumed-failure coding
style; i.e. verify that the upcoming request should complete without
error, though error handling is still apropos]; I would rather just let
the SQL diagnose any failed assumptions when it interrogates the
existing file against the requirements of the statement being executed
[a presumed-success coding style; i.e. do not pre-verify the assumptions
of the upcoming request, which will itself diagnose any difficulties,
and for which error handling is required].
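A minimal sketch of that SQL-based upgrade, run from CL via the RUNSQL
command, with hypothetical library, file, and column names; the columns
added to the target are assumed to allow nulls or to have defaults:

    PGM
      /* Only the named columns are copied; a missing or misspelled     */
      /* column in either file fails with a diagnostic instead of being */
      /* silently dropped or mis-mapped.                                */
      RUNSQL SQL('INSERT INTO TARGETLIB.MYFILE (CUSTNO, CUSTNAME) +
                  SELECT OLDCUSTNO, OLDCUSTNAME FROM SOURCELIB.MYFILE') +
             COMMIT(*NONE) NAMING(*SQL)
    ENDPGM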
Regards, Chuck