Aaron,
In theory, yes, you could: that would keep the PTF images in one place
which could be used by many partitions. However, you first must be
careful that you're getting all the PTFs, because by default SNDPTFORD
won't download PTFs you already have. Thus attempting to use those
images on another system or partition may fail with unsatisfied
dependencies. Another issue is that if you let the system name the
files, you don't know when you can delete them.
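If you do go that route, SNDPTFORD can be told to re-download PTFs that are already on the ordering system. A sketch (SF99702, the 7.2 database group, is just an example; check the command prompts on your release):

SNDPTFORD PTFID((SF99702))
          DELIVERY(*ANY)
          REORDER(*YES)

REORDER(*YES) is what forces the order even when the PTFs are already installed locally, so the downloaded set is complete for other partitions.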
Rob,
You tried what I first tried and, of course, itnoworkie. The problem
here is that you must have the images in a place where the image
catalog can properly access them EVEN IF the system is in restricted
state (i.e., where all communications are shut down.)
So what I do instead is use Fix Central to download each group, as it
changes, into the correct directory for that release, such as
/ibmi/v7r2/ptf. The files are named by their group and level, so DB19
for the database group at level 19. When DB20 comes out I order that
and can then delete the DB19 files. Because I do NOT tell Fix Central
to query my system for existing PTFs, it creates full groups with every
pre-requisite and co-requisite. Yes, they are bigger, but I only need
one set this way.
Now, how does one access this set, since Rob's way fails? Fortunately,
way back in the annals of unsupported releases, IBM already included
support for such a thing, and it's really pretty cool and trivially
simple to the most casual of observers.
1) Order and download the images into the shared place as I describe above.
2) Create the image catalog on that system as you would for any system.
3) When you verify it (you ALWAYS verify it first, right?), enter *YES
for the NFS share (NFSSHR) parameter. This creates the VOLUME_LIST
file in the image catalog directory and populates it.
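For reference, steps 2 and 3 on the host might look something like this (a sketch; the catalog name PTF72 and device OPTVRT01 are names I made up, and on some releases you may need the catalog loaded in a local virtual optical device before verifying):

CRTIMGCLG IMGCLG(PTF72) DIR('/ibmi/v7r2/ptf') CRTDIR(*NO) ADDVRTVOL(*DIR)
LODIMGCLG IMGCLG(PTF72) DEV(OPTVRT01)
VFYIMGCLG IMGCLG(PTF72) TYPE(*PTF) SORT(*YES) NFSSHR(*YES)

ADDVRTVOL(*DIR) picks up the image files already sitting in the directory so you don't add them one at a time.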
4) Ensure the VOLUME_LIST file has CCSID 819. Failure to do this may
drive you batty. For example:
CHGATR '/ibmi/v7r2/ptf/VOLUME_LIST' ATR(*CCSID) VALUE(819)
5) Give the public read and execute authority to the directory tree:
CHGAUT OBJ('/ibmi/') USER(*PUBLIC) DTAAUT(*RX) SUBTREE(*ALL)
6) Share the directory read-only, much as Rob did below. For example:
CHGNFSEXP OPTIONS('-i -o ro') DIR('/ibmi/v7r2/ptf')
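Also make sure the NFS server jobs are actually running on the host, as Rob did in his message below:

STRNFSSVR SERVER(*ALL)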
The PTFs are now available to any system, but you do need to set up
access to them, and that's where Rob and I failed initially: we used
straight-up NFS. (Actually I also tried /QFileSvr.400 and /QNTC as
well. I'm a slow learner.)
This requires two bits to accomplish:
1) Assign a service tools interface to the partition. That interface
must be on the same network as, or be able to reach, the network that
contains the share above. This is done in Service Tools (STRSST): take
option 8, then F13. Select an available Ethernet port (one that is NOT
being used by IBM i, and yes, it may be a virtual interface) and assign
it an IP address and gateway. Save it, and remember to activate it as
well.
Ignore the warning that "Selected resource is not full LAN console
capable" as you're not using the interface for that. Side note: If you
already have LAN console on this partition then this step is already
completed!
Note that you MUST be able to PING from the NFS host partition to this
service tools address. Until that works, DO NOT proceed, as you will
not get far.
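From the host partition, that check is just (1.2.3.4 standing in for the service tools address you assigned):

PING RMTSYS(*INTNETADR) INTNETADR('1.2.3.4')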
2) Create a network based virtual optical drive.
CRTDEVOPT DEVD(OPTNET01)
RSRCNAME(*VRT)
TYPE(*RSRCNAME)
LCLINTNETA('*SRVLAN')
RMTINTNETA('1.2.3.4')
NETIMGDIR('/ibmi/v7r2/ptf')
TEXT('PTF Images on NFS Hosting Partition')
Where:
1.2.3.4 is the IP address of the server where the CHGNFSEXP was performed.
'/ibmi/v7r2/ptf' matches the export *EXACTLY*. Case Is SeNSitIve!
Now vary on the virtual optical device, and if all is well, WRKOPTVOL
will show the first image in the catalog as available. Use this device
in GO PTF option 8 or INSPTF; it will roll through all the images in
the VOLUME_LIST file.
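For example, using the OPTNET01 device name from above:

VRYCFG CFGOBJ(OPTNET01) CFGTYPE(*DEV) STATUS(*ON)
WRKOPTVOL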
A few notes.
1) After use, VARY OFF the OPTNET01 device. This is important because
the device only reads the VOLUME_LIST file at vary-on time, so you
want a fresh vary-on each time you need it.
2) No Jumbo Frames. Itnoworkie.
3) The image catalog on the NFS host is not used directly. Its only
purpose is to help you validate that the images are all there and
valid, and to create the VOLUME_LIST file. You do not need it mounted
in a virtual optical drive for the clients to use the share.
4) Performance is vastly superior this way compared to copying all
those images to a partition and using them there. First, you save the
copy; second, the disk space; and third, the I/O load is split:
reading on the host and updating IBM i on the target.
5) Any number of partitions can access these images at the same time.
Each is only reading via NFS the various image files.
6) If you DO replace DB19 with DB20, you do not need to redo the
entire image catalog on the host. Simply edit the VOLUME_LIST file,
change DB19_1.bin to DB20_1.bin, etc., and you are good to go.
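That edit can be done right on the host, for example:

EDTF STMF('/ibmi/v7r2/ptf/VOLUME_LIST')

Then vary the client's OPTNET01 device off and back on so it re-reads the list (per note 1 above).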
No, Rob, you can't write to such an image catalog; I tried it and got
the raspberries. Read only. Way ahead of you there.... :-(
And there you have it. Piece of cake.
Merry Christmas!!
- Larry "DrFranken" Bolhuis
www.Frankeni.com
www.iDevCloud.com - Personal Development IBM i timeshare service.
www.iInTheCloud.com - Commercial IBM i Cloud Hosting.
On 12/23/2015 1:50 PM, rob@xxxxxxxxx wrote:
On a new lpar,
which happens to have a fully loaded
/fixes/V7
/fixes/CUME
I ran
EXPORTFS OPTIONS('-I -O') DIR('/fixes')
STRNFSSVR SERVER(*ALL)
On a different lpar I ran:
MOUNT TYPE(*NFS) MFS('domtest:/fixes') MNTOVRDIR('/domtest/fixes')
OPTIONS('rw,suid,retry=5,rsize=32768,wsize=32768,timeo=20,retrans=5,acregmin=30,acregmax=60,acdirmin=30,acdirmax=60,soft,async,sec=sys,vers=3:2,nocache')
I can do
WRKLNK '/domtest/fixes/cume/*'
and see stuff. But when I try
CRTIMGCLG IMGCLG(DOMTESTPTF) DIR('/domtest/fixes/cume') CRTDIR(*NO)
I get
CPD4F06 - Unable to determine absolute path name.
Recovery . . . : Make sure the directory resides in a file system
supported by the image catalog commands.
I'm suspecting that mounts are not supported...
Rob Berendt