There are numerous ways to achieve what you want.
Since the company I work for is not yet a BP, I don't have access to the
eConfig tool, so I have no idea on pricing (not even list prices).

But you should compare:

1) IBM i hosting IBM i, with the maintenance headaches you already
mentioned (integrated controllers not being concurrent and the like) and
the single point of failure (the hosting partition)
2) Dual VIOS to external storage without NPIV (be it a V3700/V5000 or V7000)
3) Dual VIOS to external storage WITH NPIV (requires NPIV-capable switches)

What you spend on one side you save on the other. For example, a dual-VIOS
approach might need as little as one HBA per VIOS to handle all traffic
(IIRC, with NPIV you CAN mix tape and disk traffic, since they will be
segregated onto their own virtual adapters and zones), but you have to pay
for the switches and the storage. External storage also brings its own
administration issues, but it's usually fully concurrent for
maintenance... Also, the V7000's Easy Tier might not be as impressive
under IBM i, but it still accelerates a lot of workloads, so a combination
of SSD + spinny disk on the V7000 might be comparable to the all-SSD
environment you are planning under IBM i. (Don't take my word for it; it's
the BP's job to find your performance needs and meet them.)
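To give a concrete picture, the NPIV side of that setup is only a handful
of commands on each VIOS. This is a rough sketch from memory - the adapter
names (vfchost0, fcs0) are illustrative placeholders, so check your own
lsnports/lsmap output and verify the syntax against your VIOS level:

```
$ lsnports                              # list NPIV-capable physical FC ports
$ vfcmap -vadapter vfchost0 -fcp fcs0   # map a virtual FC server adapter to fcs0 (e.g. disk)
$ vfcmap -vadapter vfchost1 -fcp fcs0   # second virtual adapter on the same port (e.g. tape)
$ lsmap -all -npiv                      # verify the virtual-to-physical mappings
```

You'd repeat this on the second VIOS for the redundant paths; each virtual
adapter gets its own WWPNs, which is what lets you zone disk and tape
traffic separately on the switch.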

Also, as the Doc mentioned, Power8 currently has no PCIe expansion drawer
available; the only expansion in the System Planning Tool is the EXP24S,
which I assume is for use with dual PCIe controllers (and you don't have
that many slots on the Power8).

I would recommend downloading the System Planning Tool, since it's a good
way to play with the possibilities.

Best Regards,

Roberto

On Tue, Dec 9, 2014 at 12:55 PM, Jim Oberholtzer <
midrangel@xxxxxxxxxxxxxxxxx> wrote:

Paul,

The design work that would get you to where you want to go will not be
trivial; your environment tends to be more complex than most. If your
business partner does not have folks who can give you a satisfactory
design as part of the sale, you need another partner.

One thought on how to have SSDs for production and "spinny" disk for the
balance is to use iASPs and completely host your environment. Put the SSDs
in one iASP and create all the network storage for that partition there.
Then all the spinny disk can be used for the balance of the partitions.
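In CL terms, hosting a partition's storage out of the SSD iASP looks
roughly like the following. This is a sketch, not a full procedure - the
object names, size, resource name, and ASP device name are all
placeholders, and the real NWSD also needs a matching virtual SCSI adapter
and partition configuration on the HMC:

```
CRTNWSSTG  NWSSTG(PRODDISK1) NWSSIZE(100000) ASPDEV(SSDIASP)  /* storage space in the SSD iASP */
CRTNWSD    NWSD(PRODHOST) RSRCNAME(CTL01) TYPE(*GUEST)        /* NWSD for the hosted IBM i guest */
ADDNWSSTGL NWSSTG(PRODDISK1) NWSD(PRODHOST)                   /* link the storage space to the NWSD */
VRYCFG     CFGOBJ(PRODHOST) CFGTYPE(*NWS) STATUS(*ON)         /* vary on to present the disk to the guest */
```

Storage spaces for the remaining partitions would be created the same way,
just pointing at the spinny-disk ASP instead.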

More memory is always better than less. Since you'll be adding a PRIMARY
partition to host everything, make sure you count on that extra bit of
resource.

I know you're not inclined to do VIOS, but it really is the best solution
for tape virtualization. Ethernet virtualization is fine on IBM i or VIOS.
A dedicated Fibre Channel card for each partition (don't forget the
PRIMARY!!!!!) is the other option.

--
Jim Oberholtzer
Chief Technical Architect
Agile Technology Architects


-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of
DrFranken
Sent: Tuesday, December 09, 2014 9:18 AM
To: Midrange Systems Technical Discussion
Subject: Re: Power8 S824 config

Paul,

Your questions remind me of a sign:
Silly Answers: $0
Answers which require thought: $10
Correct Answers: $100
:-)

Seriously, this is enough stuff to require significant thought and likely
additional information. Together you might get enough information from
folks like Jim and Pete and Roberto and others, but be prepared to do some
'assembly' of the responses!

A few thoughts straight up.
1) The PCIe drawer does not hold disks, only cards. Worse, it's not
supported on the 824, and it's not expected to be, so far as I know.
2) The internal RAID cards on the 824 are bigger, better, and faster than
the #5913. Those have since been replaced with better cards as well (#ESA3).
3) The internal DVD can be virtualized so you don't need another one.
4) If you are doing all internal disk then IBM i 7.2 as your host is the
correct answer. VIOS is *NOT* the correct answer there.

- Larry "DrFranken" Bolhuis

www.frankeni.com
www.iDevCloud.com
www.iInTheCloud.com

On 12/9/2014 10:05 AM, Steinmetz, Paul wrote:

I'm getting a quote for a Power8 S824 config, comparing numbers to
renewing maint on current P7 740 E6C.

I realize I may not be able to configure my 4 LPARs as I do currently,
with dedicated hardware for each.
For the R&D LPAR, it will be hard to cost-justify 100% SSD, since the
majority of this is cold data.
I'm considering one LPAR for all the hardware, then hosting the 4 LPARs.
I haven't ruled out VIOS, but I don't think I have the number of LPARs to
justify it, and I don't want the VIOS issues recently posted.

Current Production LPAR:
- 100% SSD: 18x 1794 177 GB; paired 5913s, 2 RAID5 arrays, no hot spare,
2.85 TB
- 5767 1Gb Ethernet UTP 2-Port
- 5735 8 Gigabit PCI Express Dual Port Fibre Channel Adapter, to LTO5
3573 tape library via FC switch
- 2893 PCIe 2-Line WAN w/Modem

Current R&D LPAR:
- 100% 10K: 24x 1962 571 GB; paired 5913s, 2 RAID5 arrays, no hot spare,
12.57 TB
- 5767 1Gb Ethernet UTP 2-Port
- 5735 8 Gigabit PCI Express Dual Port Fibre Channel Adapter, to LTO5
3573 tape library via FC switch

Current Upgrade LPAR:
- 100% 10K: 8-1916 GB; integrated paired 57CB, 1 RAID5 array, no hot
spare, 4 TB

Current DSLO LPAR: hosted from the UPGRADE LPAR.

I'm considering:
EPXE - 2x S824 6-core (3.89 GHz)
EJOP - storage backplane option 3, with the optional EJTM SSD cage
256 GB memory, possibly 512 GB

Questions, notes, and issues:
1) Production has high demand on I/O; I must configure this for max
performance.
2) What is the largest SFF-3 SSD drive (max 18) that can be placed in the
CEC with the EJ08 storage backplane option 3?
3) What is the largest 1.8-inch SSD (max 8) that can be placed in the
EJTM SSD cage?
4) The internal SATA DVD-RAM (#5771) is controlled by an internal SAS
controller, so it cannot be moved via DLPAR. Do I need a second DVD-RAM?
5) Is the Power8 EXP24S the same as the Power7 EXP24S? I currently have
2 EXP24S (5887-1) from the Power7 - should I consider moving one or both,
with DASD, to the Power8? One is 100% SSD, the other 10K.
6) One negative with using the integrated SAS controllers - how much
weight should I put on this? Should I consider using a 5913 or other card
instead? From the SAS bays and storage backplane options documentation:
"Unlike the hot-plug PCIe slots and SAS bays, concurrent maintenance is
not available for the integrated SAS controllers. Scheduled downtime is
required if a service action is required for these integrated resources."
7) For tape, staying with FC, I'm considering upgrading the 3573 from
LTO5 to LTO6, or to the soon-to-be-announced LTO7. I will consider other
options.
8) For tape, if I go the hosted route, I would need only one FC adapter -
is that correct? Can it be used by all 4 LPARs simultaneously? If not,
will I need multiple adapters?
9) Should I consider the soon-to-be-announced PCIe expansion drawer as
part of this config, replacing my current 2 5877s?

Thank You
_____
Paul Steinmetz
IBM i Systems Administrator

Pencor Services, Inc.
462 Delaware Ave
Palmerton Pa 18071

610-826-9117 work
610-826-9188 fax
610-349-0913 cell
610-377-6012 home

psteinmetz@xxxxxxxxxx
http://www.pencor.com/






--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx To subscribe,
unsubscribe,
or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx Before posting, please take a
moment to review the archives at http://archive.midrange.com/midrange-l.




