


Don,

How are your LPARs connected, AnyNet or Enterprise Extender (EE)?

Check your WRKCFGL on all LPARs, and also confirm option 10 (Work with TCP/IP host table entries) in CFGTCP.

Display Configuration List PENCOR05
09/06/13 09:05:38
Configuration list . . . . . . . . : QAPPNRMT
Configuration list type . . . . . : *APPNRMT
Text . . . . . . . . . . . . . . . :

----------------------------------------- APPN Remote Locations ------------------------------------------
                                  Remote     Control                                        Local     Pre-
Remote     Remote       Local     Control    Point      Secure   Single    Number of        Control   established
Location   Network ID   Location  Point      Net ID     Loc      Session   Conversations    Point     Session
PENCOR06   APPN         PENCOR05  PENCOR06   APPN       *YES     *NO       10               *NO       *NO
PENCOR07   APPN         PENCOR05  PENCOR07   APPN       *YES     *NO       10               *NO       *NO
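
If you want to check from a command line rather than paging through the displays, these are roughly the commands involved on each LPAR (all are standard commands; adjust to taste):

    DSPNETA                       /* local location name, ALWANYNET, etc.            */
    WRKCFGL    CFGL(QAPPNRMT)     /* the APPN remote location entries shown above    */
    CFGTCP                        /* option 10 = Work with TCP/IP host table entries */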

Wanted to add a few notes.
We do a full SAVWA of all libraries every night on the source Prod LPAR. This makes the refresh process simpler; there is no need for any extra saves. The restore to DEV or R&D confirms three things:
1) Verifies that you have a good backup
2) Tests your application restore procedure
3) Refreshes your R&D or DEV environment

Years back I used SAVRSTLIB, but as volume gets large this can get slow; we were up to 10 to 15 hours. Save to tape and restore from tape will be much faster, but you need a tape library that is sharable via a fiber switch. Keep in mind that the SAVRSTLIB and SAVRSTOBJ commands actually save to a save file; the save file is then sent using ObjectConnect and then restored. If you have a large library, you will need enough temporary space for the temporary save file.
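
For reference, a minimal example of the single-command save/restore described above (library and location names are hypothetical):

    SAVRSTLIB  LIB(PRODLIB) RMTLOCNAME(DEVSYS)

One command does the save, the ObjectConnect transfer, and the restore on the target LPAR.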

Thanks
Paul


-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Don Wereschuk
Sent: Friday, September 06, 2013 9:02 AM
To: Midrange Systems Technical Discussion
Subject: RE: Data Transfer

Rob: I did have a look at this article, and thanks for the link. However, the issue here is not trying to maintain a current copy of the data for disaster recovery purposes. We have iTera to do this and it's working quite well. All I want is a copy of today's data to use for testing and investigation purposes tomorrow. Our production, with a few minor exceptions, is shut down between 2:00 am and 6:00 am every day (and most weekends), so this would be the ideal time to take a copy of the day's data for use in our development environment. This also aids in investigating any problems that the production department had the day before.

Saving and restoring the data doesn't seem to be the main concern. I've done a SAVOBJ on the data I need, which took approx. 0.5 hrs. I then transferred about 17 GB to the other system, and the restore of this data took about 5 min., which means that a restore of the full 85 GB should take about 0.5 hrs. Using FTP, the transfer of 17 GB of data took approx. 3 hrs, which means transferring all 85-100 GB of data would take approx. 20 hrs, which is useless to me.

It appears that SAVRSTOBJ is the best solution (thanks to all who suggested this). I tried to run a test and got an error on the transfer:

SAVRSTOBJ OBJ(A*) LIB(MYLIB) RMTLOCNAME(MYLOCNAME) OBJTYPE(*FILE) SAVACT(*LIB)
ACCPTH(*YES) MBROPT(*ALL) ALWOBJDIF(*ALL) RSTLIB(SPSL)

Route to specified location not found.

An error occurred during the SAVRSTOBJ operation.

We changed the location name to MYLOCNAME using CHGNETA, but this didn't seem to help. Does anyone know if I have to IPL for this to take effect, or is there something else I have to do here?
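
For reference, the attribute change being described is of the form (the parameter shown and the value are my assumption here):

    CHGNETA    LCLLOCNAME(MYLOCNAME)

and the current network attributes can be reviewed with DSPNETA.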

Thanks to the list for all the suggestions and links. I have been investigating them all and am learning a lot, which might not help me in this instance but I imagine will come in handy in the future.


******************************************
Don Wereschuk
ISD - Programmer/Analyst
Simcoe Parts Service Inc.
Phone: 705-435-7814 Ex: 302
Fax: 705-435-5029
mailto:dwereschuk@xxxxxxxxxxxxxxx
******************************************
If I'd have done what I said I should have done, then I'd be sitting here saying I should have done what I did do.



-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Brian Piotrowski
Sent: Friday, September 06, 2013 8:24 AM
To: Midrange Systems Technical Discussion
Subject: RE: Data Transfer

Oops, I overlooked that second paragraph. Thanks, Rob.

/b;

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of rob@xxxxxxxxx
Sent: Thursday, September 05, 2013 3:35 PM
To: Midrange Systems Technical Discussion
Subject: RE: Data Transfer

how about the second paragraph then?


Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko
Dept 1600
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com





From: Brian Piotrowski <bpiotrowski@xxxxxxxxxxxxxxx>
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>,
Date: 09/05/2013 02:58 PM
Subject: RE: Data Transfer
Sent by: midrange-l-bounces@xxxxxxxxxxxx



We're already running iTera on our Prod -> DRP machine, and I doubt Senior Management would have an appetite for adding another node into the mix (at a cost), so we're basically looking for either a freeware / IBM supplied / homegrown solution.

/b;

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of rob@xxxxxxxxx
Sent: Thursday, September 05, 2013 1:45 PM
To: Midrange Systems Technical Discussion
Subject: RE: Data Transfer

Mimix? Not only do they have a HA product but they also have a replicator product.

Roll your own: Google "high availability on a shoestring", or something like that, by Larry Youngren.


Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko
Dept 1600
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com





From: Brian Piotrowski <bpiotrowski@xxxxxxxxxxxxxxx>
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>,
Date: 09/05/2013 01:15 PM
Subject: RE: Data Transfer
Sent by: midrange-l-bounces@xxxxxxxxxxxx



The other option we were considering is some type of incremental backup.
I'd sooner just back up whatever file changes occurred over the course of
the day as opposed to backing up every record as there are some records
that haven't changed in months.

Does the SAVRST function allow for incremental backups, or should we be
looking at some other type of command (or 3rd party backup tools) to
perform that function?

Thanks!

/b;

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of rob@xxxxxxxxx
Sent: Thursday, September 05, 2013 1:12 PM
To: Midrange Systems Technical Discussion
Subject: RE: Data Transfer

That's a good point, Gary. However, when you have 10,000 files in your production environment that you want to replicate over to development, it gets a bit hairy.


Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko
Dept 1600
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com





From: "Monnier, Gary" <Gary.Monnier@xxxxxxxxx>
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>,
Date: 09/05/2013 12:44 PM
Subject: RE: Data Transfer
Sent by: midrange-l-bounces@xxxxxxxxxxxx



Don,

If all you are interested in is copying the data from production to development, and you already have the physical and logical files on the development system, try using a combination of DDM and CPYF. This approach has some advantages. First, you don't have to lock the production objects while saving them. Second, disk space usage is reduced on both systems, since you don't need large save files on either system. Third, you can customize data extraction.

It is relatively easy to put together a CL program to automate the copies.

o Run DSPFD FILE(yourlib/*ALL) TYPE(*MBRLIST) OUTPUT(*OUTFILE)
FILEATR(*PF) OUTFILE(QTEMP/yourfile)
o Read through the output file in QTEMP. For each entry submit a job to
copy data from production to development.
SBMJOB CMD(CALL PGM(MYPGM) PARM(&MLLIB &MLFILE &MLNAME))
JOB(&MLNAME) JOBQ(QUSRNOMAX)

MYPGM does the following.

o Creates a DDM file pointing to &MLLIB/&MLFILE(&MLNAME) on the
production system.
o Runs CPYF FROMFILE(DDMFile) TOFILE(&MLLIB/&MLFILE) FROMMBR(*FIRST)
TOMBR(&MLNAME)

You can substitute a conversion program for MYPGM if the developers have a file change.
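
To make the pieces concrete, here is a minimal CL sketch of the driver and of MYPGM. The outfile name MBRLIST, the library DEVLIB, the DDM file DDMWRK, and the remote location PRODSYS are placeholders, and error handling is kept to a bare minimum:

    /* Driver (run on the development system): list the members and      */
    /* submit one copy job per member.  DCLF uses the DSPFD model file   */
    /* QAFDMBRL so the MLLIB/MLFILE/MLNAME fields are available.         */
    PGM
      DCLF       FILE(QAFDMBRL)
      DSPFD      FILE(DEVLIB/*ALL) TYPE(*MBRLIST) OUTPUT(*OUTFILE) +
                   FILEATR(*PF) OUTFILE(QTEMP/MBRLIST)
      OVRDBF     FILE(QAFDMBRL) TOFILE(QTEMP/MBRLIST)
    READ: RCVF
      MONMSG     MSGID(CPF0864) EXEC(GOTO CMDLBL(DONE))  /* end of file */
      SBMJOB     CMD(CALL PGM(MYPGM) PARM(&MLLIB &MLFILE &MLNAME)) +
                   JOB(&MLNAME) JOBQ(QUSRNOMAX)
      GOTO       CMDLBL(READ)
    DONE: DLTOVR FILE(QAFDMBRL)
    ENDPGM

    /* MYPGM: pull one member from production over DDM and replace the   */
    /* development copy.                                                  */
    PGM        PARM(&MLLIB &MLFILE &MLNAME)
      DCL        VAR(&MLLIB)  TYPE(*CHAR) LEN(10)
      DCL        VAR(&MLFILE) TYPE(*CHAR) LEN(10)
      DCL        VAR(&MLNAME) TYPE(*CHAR) LEN(10)
      CRTDDMF    FILE(QTEMP/DDMWRK) RMTFILE(&MLLIB/&MLFILE) +
                   RMTLOCNAME(PRODSYS)
      CPYF       FROMFILE(QTEMP/DDMWRK) TOFILE(&MLLIB/&MLFILE) +
                   FROMMBR(&MLNAME) TOMBR(&MLNAME) MBROPT(*REPLACE)
      DLTF       FILE(QTEMP/DDMWRK)
      MONMSG     MSGID(CPF0000)
    ENDPGM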

Have fun.

-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Don Wereschuk
Sent: Thursday, September 05, 2013 7:59 AM
To: MIDRANGE-L@xxxxxxxxxxxx
Subject: Data Transfer

Hi All: I'm having a problem transferring current data from a production box to our development box. I've tried using FTP but I'm having issues with this. First I did a SAVF on the library, but it seems there is a limit on the amount of data that can be transferred at a time. I believe I can change the limit, but that's not my main concern. I split the save into multiple save files and only saved PFs and LFs, but I still end up with approx. 100 GB of data to transfer. My problem is that in using FTP it is taking about 3 hrs. to FTP 16 GB of data. This would result in a time of almost 20 hrs. to transfer the 100 GB. Is there a faster way to do this, or is using a tape backup and physically delivering it to the other machine the fastest way? I only have a window of approx. 4-5 hrs. Any and all suggestions are appreciated.
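
For context, the manual process described above boils down to something like this on the production side (all library, file, and system names here are hypothetical):

    /* Save just the physical and logical files into a save file */
    CRTSAVF    FILE(QGPL/DAYSAVF)
    SAVOBJ     OBJ(*ALL) LIB(PRODLIB) OBJTYPE(*FILE) +
                 DEV(*SAVF) SAVF(QGPL/DAYSAVF)

    /* Then send it to the development box in binary mode, e.g. interactively: */
    FTP RMTSYS(DEVSYS)
        binary
        put QGPL/DAYSAVF QGPL/DAYSAVF
        quit

The save file is then restored on the development box with RSTOBJ.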

TIA

******************************************
Don Wereschuk
ISD - Programmer/Analyst
Simcoe Parts Service Inc.
Phone: 705-435-7814 Ex: 302
Fax: 705-435-5029
mailto:dwereschuk@xxxxxxxxxxxxxxx
******************************************
If I'd have done what I said I should have done, then I'd be sitting here
saying I should have done what I did do.