Pete,

A bus is where the I/O cards are located and grouped. Typically there are
between 3 and 5 PCI-X slots per bus (or PCI slots on older servers). To add
additional cards we're forced to add additional buses. Each bus has a
connection to the system I/O board, which lets the cards on that bus talk to
the CPU, memory, etc. To keep I/O performance at its peak you don't want to
overload a single bus; that's why a server will have multiple buses. A very
high-end SCSI card with enough disk drives attached can push the limits of a
single bus. That's also why there are buses of different speeds, like the
133 MHz PCI-X bus or the 266 MHz DDR PCI-X 2.0 bus.

The nice thing about the newer PCI-e (Express) slots is that each card slot
has its own connection to the I/O system board (no shared bus), so you can
feed a TON of data through a single card on the newer PCI-e systems. The
catch with PCI-e cards is their size: they come in x1, x2, x4, x8, and x16,
which is how many "lanes" the card has for transferring data. Think of lanes
as an interstate: with only one lane you can only "safely" move so many cars
before the lane gets full. You add lanes to increase throughput, or you
increase the clock speed (MHz) to raise the speed limit so that cars can
travel to their destination faster.

The biggest thing is not to overload a single bus; that's why IBM assigns
certain cards a "weight". A given bus can only hold so much "weight" before
it becomes saturated, so as a rule you don't put very high-I/O-load cards on
the same bus. Take, for example, two 4 Gb fibre cards connecting to SAN
storage (two for redundancy): you would install only one fibre card per bus.
Placing the cards on two different buses keeps either bus from being
overloaded, and it also gives you redundancy in the case of a failed bus.
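
To make the lane arithmetic concrete, here's a minimal Python sketch of the
throughput math. The per-lane number is the nominal first-generation PCI-e
figure (roughly 250 MB/s per lane, per direction) and the PCI-X figure is
just clock rate times bus width; treat everything here as illustrative, not
IBM-specific.

    # Back-of-envelope bus bandwidth arithmetic (illustrative numbers only).

    PCIE_GEN1_MB_PER_LANE = 250  # ~250 MB/s per lane, per direction (PCI-e 1.x)

    def pcie_bandwidth(lanes):
        """Peak one-direction bandwidth of a PCI-e slot with `lanes` lanes."""
        return lanes * PCIE_GEN1_MB_PER_LANE

    def pcix_bandwidth(mhz, width_bits=64):
        """Peak bandwidth of a shared PCI-X bus: clock rate times bus width."""
        return mhz * (width_bits // 8)  # MB/s

    # A 133 MHz, 64-bit PCI-X bus tops out around 1 GB/s, shared by every
    # card on that bus ...
    print(pcix_bandwidth(133))          # ~1064 MB/s total for the whole bus
    # ... while each PCI-e slot gets its own dedicated point-to-point link.
    for lanes in (1, 2, 4, 8, 16):
        print(f"x{lanes}: {pcie_bandwidth(lanes)} MB/s dedicated")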

An IOP can only support a certain "weight" before it becomes saturated and
you need additional IOPs to support further IOAs. Usually an IOP can support
only one disk connection, whether it's a fibre card or a SCSI card. One IOP
can usually support a couple of network cards, or two fibre cards that are
connected to tape drives, etc.
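
The "weight budget" idea can be sketched the same way. The capacity and
per-card weights below are invented for illustration (IBM publishes the real
values in its card placement documentation); the point is just that one disk
IOA nearly fills an IOP by itself, while lighter cards can share one.

    # Hypothetical "weight budget" check: do these IOAs fit under one IOP?
    # All numbers here are made up for illustration.

    IOP_CAPACITY = 100

    IOA_WEIGHTS = {
        "fibre (disk)": 90,   # a disk connection nearly fills an IOP alone
        "ethernet": 30,
        "fibre (tape)": 45,
    }

    def fits_on_one_iop(cards):
        """True if the combined weight stays within the IOP's capacity."""
        return sum(IOA_WEIGHTS[c] for c in cards) <= IOP_CAPACITY

    print(fits_on_one_iop(["fibre (disk)"]))                  # True
    print(fits_on_one_iop(["ethernet", "ethernet"]))          # True: two NICs
    print(fits_on_one_iop(["fibre (tape)", "fibre (tape)"]))  # True: two tape fibres
    print(fits_on_one_iop(["fibre (disk)", "ethernet"]))      # False: saturated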

Good luck with this bit of trivia,
Bill Epperson Jr.
Systems Communications Analyst
Memorial Health System
(719) 365-8831





From: Pete Helgren <Pete@xxxxxxxxxx>
Sent by: midrange-l-bounces@xxxxxxxxxxxx
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Date: 03/11/2009 11:50
Subject: Re: Disk upgrade with no downtime - possible? Follow up question
Please respond to: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>






My former question (in response to Kirk) is still open, but it may be related
to this one:

What are the requirements for having more than one parity set in an ASP? It
*looks* like the same IOA can have multiple parity sets, but I can't tell
what the "bus" requirements are. Reading the documentation, it says that each
"bus" can support a parity set (I think), but I am not sure what a "bus" is
(unless it is the large four-wheeled type that carries people). It may mean
that each IOA has multiple connectors for a set of disks. Specifically, what
do I need to look for in the config to make sure that I *can* have multiple
parity sets on the LPAR?

The reason I am asking is that I would like to make use of the five 70GB
drives in the development LPAR and, since I cannot add them to the existing
parity set, perhaps I can create two parity sets?

Pete


Pete Helgren wrote:
How about two parity sets? One with five 4326s and one with five 4327s? I
know I couldn't do them piecemeal, but if I did a save (option 21), then
rebuilt the RAID sets with the larger drives and then restored, would that
work?
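
(As a rough sizing check on the two-set idea: a parity set gives up about one
drive's worth of capacity to parity, so with the drive sizes quoted in this
thread the arithmetic works out as in this hypothetical sketch.)

    # Rough usable capacity of the two proposed parity sets, using the drive
    # sizes from this thread. One drive's worth of parity per set is an
    # approximation of how device parity protection spreads parity.

    def usable_gb(drives, size_gb):
        return (drives - 1) * size_gb

    set_4326 = usable_gb(5, 30)   # five 30GB 4326 drives -> 120 GB usable
    set_4327 = usable_gb(5, 70)   # five 70GB 4327 drives -> 280 GB usable
    print(set_4326, set_4327, set_4326 + set_4327)   # 120 280 400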

Sounds like I can't do it without taking the LPAR offline, but that was plan
"B" anyway. My next issue would be: can I use the five 70GB drives in this
LPAR, or do I now need to replace all 10 drives to make use of the larger
drives?

Pete


Kirk Goins wrote:

Pete Helgren wrote:


Seems like I have either asked this or seen it asked before, but I am
planning to upgrade 5 out of the 10 disks allocated to an LPAR which is
currently at 86%. My goal is to replace the drives while the partition is
active, so the question is: can I replace the drives one at a time and just
let RAID-5 rebuild them?

The Info Center for V5R4 says this:

"A disk unit that is running with device parity protection can be
exchanged only if it has failed. A disk unit running with device parity
protection cannot be replaced with a non-configured disk even if it has
failed."

So it sounds like I *can't* do it.

Any pointers here? Seems like I should be able to just replace each disk one
at a time and let it rebuild before replacing the next one, but having only
replaced truly failed drives, I am in uncharted territory.

The existing drives are model 4326 type 072 (30GB), and I plan to replace
them with model 4327 (70GB) drives. V5R4M0 on a 9406-520.

Pete







Pete, this isn't going to work. #1: All drives in a RAID set must be the same
size, so you can't 'upgrade' this way. #2: At least in the past, when doing
concurrent maintenance drive swaps, I've had problems with the LIC/OS getting
confused if I pulled a drive of one size and inserted another.






