Rob,
The best documentation I found is TR7 sg247858.pdf, section 7.4.1, "Disk virtualization".
I'm a bit confused by this statement:

"More than one VSCSI pair can exist for the same client partition in this environment. To minimize the performance impact on the host partition, the VSCSI connection is used to send I/O requests, but not for the actual transfer of data. Using the capability of the POWER Hypervisor for Logical Remote Direct Memory Access (LRDMA), data is transferred directly from the physical adapter that is assigned to the host partition to a buffer in memory of the client partition."
Further down:

"For performance reasons, you might consider creating multiple storage spaces that are associated with multiple NWSDs. The rule of thumb is 6-8 storage spaces for each client partition. This setup implies that you are also creating multiple sets of VSCSI adapter pairs between the hosting partition and the client partition. Associate each hosting partition's server VSCSI adapter with a separate NWSD by referencing the VSCSI adapter's resource name in the NWSD, and then link storage spaces to the NWSDs. This action supplies multiple disk arms for the client partition to use."
If I'm reading this correctly, it's not the multiple NWSSTG that add the performance benefit, but the multiple NWSDs.
Is this correct?
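To make that concrete, here is a rough CL sketch of what the redbook seems to describe, assuming two server VSCSI adapters already created in the HMC for the client. The resource names, NWSD/NWSSTG names, and sizes are made up, and the CRTNWSD parameters are trimmed to the essentials; check the defaults on your release:

  /* One NWSD per server VSCSI adapter                                  */
  /* (resource names come from WRKHDWRSC TYPE(*CMN); CTL01/CTL02 are    */
  /*  placeholders)                                                     */
  CRTNWSD NWSD(GUEST1A) RSRCNAME(CTL01) TYPE(*GUEST *OPSYS) ONLINE(*NO)
  CRTNWSD NWSD(GUEST1B) RSRCNAME(CTL02) TYPE(*GUEST *OPSYS) ONLINE(*NO)

  /* Spread the storage spaces across the NWSDs                         */
  CRTNWSSTG NWSSTG(GUEST1A01) NWSSIZE(100000) FORMAT(*OPEN)
  ADDNWSSTGL NWSSTG(GUEST1A01) NWSD(GUEST1A) ACCESS(*UPDATE)
  CRTNWSSTG NWSSTG(GUEST1B01) NWSSIZE(100000) FORMAT(*OPEN)
  ADDNWSSTGL NWSSTG(GUEST1B01) NWSD(GUEST1B) ACCESS(*UPDATE)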
Paul
-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx
[mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of rob@xxxxxxxxx
Sent: Saturday, April 26, 2014 7:59 PM
To: Midrange Systems Technical Discussion
Subject: RE: Distribute IBM i on available disk units Was: 70GB load
source and 5 other drives
Each NWSD has an entry in the host partition's configuration in the HMC: one server SCSI entry for each client.
Now, I have this one machine that has a HUGE, busy guest. That guest has 2 or 3 server SCSI entries, and each server SCSI entry has a matching client SCSI entry. Why the multiples? Because there's a limit to the number of NWSSTG items per NWSD.
WRKNWSD

  Network
  Server    Text
  MAIL1     MAIL1, Domino Guest lpar
  MAIL1D1   MAIL1, More disk drives
  MAIL1TO   MAIL1, Tape and Optical

All three of these point to just one LPAR.
WRKNWSSTG

  MAIL1D101  MAIL1D1    1  *DYN  *UPDATE
  MAIL1D102  MAIL1D1    2  *DYN  *UPDATE
  MAIL1001   MAIL1      1  *DYN  *UPDATE
  MAIL1002   MAIL1      2  *DYN  *UPDATE
  MAIL1003   MAIL1      3  *DYN  *UPDATE
  MAIL1004   MAIL1      4  *DYN  *UPDATE
  MAIL1005   MAIL1      5  *DYN  *UPDATE
  MAIL1006   MAIL1      6  *DYN  *UPDATE
  MAIL1007   MAIL1      7  *DYN  *UPDATE
  MAIL1008   MAIL1      8  *DYN  *UPDATE
  MAIL1009   MAIL1      9  *DYN  *UPDATE
  MAIL1010   MAIL1     10  *DYN  *UPDATE
  MAIL1011   MAIL1     11  *DYN  *UPDATE
  MAIL1012   MAIL1     12  *DYN  *UPDATE
  MAIL1013   MAIL1     13  *DYN  *UPDATE
  MAIL1014   MAIL1     14  *DYN  *UPDATE
  MAIL1015   MAIL1     15  *DYN  *UPDATE
  MAIL1016   MAIL1     16  *DYN  *UPDATE
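So if that guest needed another disk, the new storage space would get created and linked to one of the other NWSDs for that LPAR, something like this (name and size are made up, and the exact per-NWSD link limit depends on the release):

  CRTNWSSTG NWSSTG(MAIL1D103) NWSSIZE(100000) FORMAT(*OPEN)
  ADDNWSSTGL NWSSTG(MAIL1D103) NWSD(MAIL1D1) ACCESS(*UPDATE)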
Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko Dept 1600
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com
From: "Steinmetz, Paul" <PSteinmetz@xxxxxxxxxx>
To: "'Midrange Systems Technical Discussion'"
<midrange-l@xxxxxxxxxxxx>
Date: 04/26/2014 07:47 PM
Subject: RE: Distribute IBM i on available disk units Was: 70GB
load source and 5 other drives
Sent by: midrange-l-bounces@xxxxxxxxxxxx
Rob,
At this time, I'm only using the host and guest LPARs for creating and testing DSLO images.
I'm the only user on these LPARs, so there are no performance issues.
I'm keeping it simple: 1 NWSD.
However, things can change overnight, so I want to have a good understanding of the pros and cons of multiple NWSDs.
When you added additional NWSDs, did you also have to make any HMC config changes?
Paul
-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [
mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of rob@xxxxxxxxx
Sent: Saturday, April 26, 2014 7:29 PM
To: Midrange Systems Technical Discussion
Subject: RE: Distribute IBM i on available disk units Was: 70GB load
source and 5 other drives
Depends on the guest.
One has ~8 Domino servers. Each of those servers is clustered to at least two other servers.
One has only 1 Domino server, and it sits in our DMZ as another cluster member of our DMZ Domino servers.
One has 1 Domino server and a full set of Infor's software for testing.
Neither has a whale of a lot of active users.
We went with guesting so that, with scattering, each LPAR gets 20 arms.
That, and if we add more we do not have to add more RAID controllers.
This is the first time the biggest one is guested. It used to be the
host. Now we've gone to a dedicated host lpar.
The others used to have just one storage space and seemed rather slow.
Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko Dept 1600
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com
From: "Steinmetz, Paul" <PSteinmetz@xxxxxxxxxx>
To: "'Midrange Systems Technical Discussion'"
<midrange-l@xxxxxxxxxxxx>
Date: 04/26/2014 07:14 PM
Subject: RE: Distribute IBM i on available disk units Was: 70GB
load source and 5 other drives
Sent by: midrange-l-bounces@xxxxxxxxxxxx
Rob,
I'm not sure if you previously stated it: what is this guest LPAR used for?
Paul.
-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx [
mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of rob@xxxxxxxxx
Sent: Saturday, April 26, 2014 7:00 PM
To: Midrange Systems Technical Discussion
Subject: Re: Distribute IBM i on available disk units Was: 70GB load
source and 5 other drives
I'm digging what you're saying there. I'm no stranger to TRCASPBAL.
Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko Dept 1600
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com
From: Sue Baker <sue.baker@xxxxxxxxxx>
To: midrange-l@xxxxxxxxxxxx
Date: 04/26/2014 02:46 PM
Subject: Re: Distribute IBM i on available disk units Was: 70GB
load source and 5 other drives
Sent by: midrange-l-bounces@xxxxxxxxxxxx
rob@xxxxxxxxx wrote on Sat, 26 Apr 2014 15:58:16 GMT:
At which point do you run STRASPBAL TYPE(*ENDALC) on the load source?
1: After "Task 8: Restore the operating system", beginning with "Task 1: Starting to restore the operating system"?
2: In the middle of task 8 above? If so, at which step?
When?
If i is hosting i, I think I would elect to run with no drives in *ENDALC status until I actually see some sort of performance issue where high I/Os to the load source can be clearly identified as the culprit.
At that time, I would run STRASPBAL *ENDALC for UNIT(1), followed by STRASPBAL *MOVDTA.
Another option would be to run TRCASPBAL followed by STRASPBAL *USAGE to rearrange "cold" data, in the hope that the hot load source becomes warm.
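In CL terms, a rough sketch of those two approaches (the time limits are placeholders; adjust them for your own window):

  /* Option 1: stop new allocations to the load source, then move data off it */
  STRASPBAL TYPE(*ENDALC) UNIT(1)
  STRASPBAL TYPE(*MOVDTA) TIMLMT(*NOMAX)

  /* Option 2: trace ASP activity, then rebalance by usage (hot/cold data)    */
  TRCASPBAL SET(*ON) ASP(1) TIMLMT(480)      /* e.g. trace during a busy 8 hours */
  STRASPBAL TYPE(*USAGE) ASP(1) TIMLMT(*NOMAX)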