Paul, I don't know any details, just heard the rumor. I'm guessing they
just stole the code from ATape and VIOS and they plan to run it in PASE.
But that is a sheer guess on my part.
--
Jim Oberholtzer
Agile Technology Architects
-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxx> On Behalf Of Steinmetz,
Paul
Sent: Wednesday, December 19, 2018 10:12 AM
To: 'Midrange Systems Technical Discussion' <midrange-l@xxxxxxxxxxxx>
Subject: RE: NWSD and NWSSTG recommendations
< I've heard a rumor that IBM intends to virtualize tape libraries in IBM i
now, augmenting the device virtualization. When that happens, about a third
of my VIOS installations will cease to be needed.
I can't wait.
So then you would only need one FC card to the tape library, or VTL.
I'm curious how this will work.
Paul
-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Jim
Oberholtzer
Sent: Wednesday, December 19, 2018 11:06 AM
To: 'Midrange Systems Technical Discussion'
Subject: RE: NWSD and NWSSTG recommendations
Yeah, we do that occasionally too. VIOS tape and Ethernet support is
excellent.
That may be changing soon too. I've heard a rumor that IBM intends to
virtualize tape libraries in IBM i now, augmenting the device
virtualization. When that happens, about a third of my VIOS installations
will cease to be needed.
--
Jim Oberholtzer
Agile Technology Architects
-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxx> On Behalf Of Rob Berendt
Sent: Wednesday, December 19, 2018 10:00 AM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Subject: RE: NWSD and NWSSTG recommendations
Things can get interesting in the guesting world. One guest can have more
than one host. For example, disk could be hosted from IBM i, while tape and
Ethernet could be hosted from VIOS. We do this.
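From the client side, the mixed hosting just shows up as separate virtual
resources. A quick sketch of how we eyeball what came from where on the
guest (mapping a resource back to a particular host still means checking
the adapter locations):

   WRKHDWRSC TYPE(*STG)  /* storage resources - vSCSI disk served by the IBM i host */
   WRKHDWRSC TYPE(*CMN)  /* communications resources - virtual Ethernet through VIOS */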
Rob Berendt
--
IBM Certified System Administrator - IBM i 6.1
Group Dekko Dept 1600
Mail to: 2505 Dekko Drive
Garrett, IN 46738
Ship to: Dock 108
6928N 400E
Kendallville, IN 46755
http://www.dekko.com
From: "Jim Oberholtzer" <midrangel@xxxxxxxxxxxxxxxxx>
To: "'Midrange Systems Technical Discussion'"
<midrange-l@xxxxxxxxxxxx>
Date: 12/19/2018 10:32 AM
Subject: RE: NWSD and NWSSTG recommendations
Sent by: "MIDRANGE-L" <midrange-l-bounces@xxxxxxxxxxxx>
Kevin:
Are you keeping VIOS and using vSCSI support, or going completely to IBM i
hosting IBM i?
IBM i cannot host anything if it is virtualized as well.
If VIOS, then the rules are very different than for IBM i as a hosting
environment. Frankly, VIOS is awful at vSCSI virtualization, so if that's
all VIOS is doing, I would far prefer an IBM i host. (IBM i can host IBM i,
Linux, and AIX.)
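For reference, IBM i hosting IBM i hangs the guest off a *GUEST network
server description on the host side. A minimal sketch; the name, resource,
and partition number here are placeholders, and exact parameters vary a bit
by release:

   CRTNWSD NWSD(DEVHOST01) RSRCNAME(CTL05) +
           TYPE(*GUEST *OPSYS) ONLINE(*NO) PTNNBR(3) +
           TEXT('Host NWSD for dev IBM i client')
   /* RSRCNAME is the virtual SCSI server adapter (CTLxx) that   */
   /* serves the client; PTNNBR is the client partition number   */
   /* from the HMC.                                              */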
--
Jim Oberholtzer
Agile Technology Architects
-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxx> On Behalf Of Kevin
Bucknum
Sent: Wednesday, December 19, 2018 9:08 AM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Subject: RE: NWSD and NWSSTG recommendations
Is there a guide somewhere on setting this up? And maybe a best practices
document? We are working on getting our next box now. Our dev partition,
which is 2 mirrored 283 GB drives hosted on VIOS (and slow as hell!), is
going to become about 500 GB in an i-on-i situation. I'd like to optimize
that disk speed, as that is the only really bad thing about that partition
now.
-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Jim
Oberholtzer
Sent: Wednesday, December 19, 2018 8:53 AM
To: 'Midrange Systems Technical Discussion'
Subject: RE: NWSD and NWSSTG recommendations
200 GB drives in the quantity you are proposing should work well.
The limit on the number of NWSSTG units an NWSD instance supports was
originally 16 and was raised later; I'm not sure what the limit is now, but
I've always found performance is best when we put no more than 12-13 drives
per NWSD instance. It also gives you the ability to add drives later and
spread them out vs. putting all the new drives in one place.
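For anyone following along, each of those drives is one CRTNWSSTG plus one
ADDNWSSTGL on the host. A rough sketch for a single 200 GB arm (the names
here are made up):

   CRTNWSSTG NWSSTG(DEVSTG01) NWSSIZE(204800) FORMAT(*OPEN) +
             TEXT('200 GB arm for DEVHOST01')
   /* NWSSIZE is in megabytes, so 204800 = 200 GB. FORMAT(*OPEN)  */
   /* is the format used for hosted (guest) partitions.           */
   ADDNWSSTGL NWSSTG(DEVSTG01) NWSD(DEVHOST01)
   /* Repeat per drive, staying around the 12-13 links per NWSD   */
   /* noted above.                                                */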
Watch memory in the machine pool and *BASE carefully. That's where quite a
bit of this virtualization is done; you'll be surprised at how much memory
it takes. Don't short it.
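A couple of displays to keep an eye on that while the guest is under load:

   WRKSYSSTS ASTLVL(*INTERMED)  /* watch faulting in the machine pool and *BASE */
   WRKSHRPOOL                   /* review/adjust shared pool sizes and paging option */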
--
Jim Oberholtzer
Agile Technology Architects
-----Original Message-----
From: MIDRANGE-L <midrange-l-bounces@xxxxxxxxxxxx> On Behalf Of Steinmetz,
Paul
Sent: Wednesday, December 19, 2018 8:38 AM
To: 'Midrange Systems Technical Discussion' <midrange-l@xxxxxxxxxxxx>
Subject: NWSD and NWSSTG recommendations
New P9, 9009-42A, V7R3.
I will be virtualizing about 20 TB of SSD to a client R&D LPAR.
The entire R&D LPAR will be virtualized.
My plan was 10 NWSDs, each consisting of five 200 GB NWSSTGs.
This would give me 50 arms of about 200 GB each; one would be the load
source.
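Once the NWSDs are created and the storage spaces linked, activation per
NWSD would be something like this (names are placeholders):

   VRYCFG CFGOBJ(RNDNWS01) CFGTYPE(*NWS) STATUS(*ON)
   WRKCFGSTS CFGTYPE(*NWS) CFGD(RNDNWS*)  /* confirm all ten vary on cleanly */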
Some rules and guidelines from previous posts:
The more arms, the better.
For performance: at least 6 NWSSTG per system; max 16 (32?) NWSSTG per NWSD.
Consistently sized, and at least 70 GB??
The goal is optimal performance on both the host and the client LPARs.
1) Does anyone have any history on the optimal number of NWSD?
2) Can too many NWSD cause a performance issue?
Also, I will be repeating this process for 2 additional sandbox LPARs, each
about 5 TB.
When complete, the host LPAR will have about 20 NWSDs.
Any thoughts from the group?
Thank You
_____
Paul Steinmetz
IBM i Systems Administrator
Pencor Services, Inc.
462 Delaware Ave
Palmerton Pa 18071
610-826-9117 work
610-826-9188 fax
610-349-0913 cell
610-377-6012 home
psteinmetz@xxxxxxxxxx
http://www.pencor.com/
Kevin Bucknum
Senior Programmer Analyst
MEDDATA / MEDTRON
120 Innwood Drive
Covington LA 70433
Local: 985-893-2550
Toll Free: 877-893-2550
https://www.medtronsoftware.com
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list.
To post a message, email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options, visit:
https://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at
https://archive.midrange.com/midrange-l.
Please contact support@xxxxxxxxxxxx for any subscription related questions.
Help support midrange.com by shopping at amazon.com with our affiliate link:
https://amazon.midrange.com
As an Amazon Associate we earn from qualifying purchases.