Hello Pete,

The following is what I find works best for me in a small environment with no SLAs whatsoever, plus my strong background in virtualization with VMware ESXi. Others may disagree, or even question whether my VMware skills carry over to an IBM LPAR environment.


On 19.01.2021 at 03:49, Pete Helgren <pete@xxxxxxxxxx> wrote:

1) Processors: I gotta admit, this always threw me, even with the JS12 and VIOS. I am assuming that I want one dedicated processor for IBM i which it is licensed for. Correct?

The HMC CPU thing is a bit more complicated. You can assign one or more CPUs (cores) to be dedicated to an LPAR (usually a waste of resources), or use shared mode. Dedicated processors lessen the threat of being affected by Meltdown.

https://en.wikipedia.org/wiki/Meltdown_(security_vulnerability)

In shared mode, the first section has a minimum (fractional!), a desired, and a maximum number of CPUs (cores) to be assigned to an LPAR. So you can give an LPAR a minimum of 0.1 CPUs, a desired amount of 0.5, and a maximum of 1.

I never bothered to learn what this maximum setting is all about. My guess is that you can't add more than that amount with dynamic partitioning (adding resources while the LPAR is running). The other settings prevent the start of an LPAR if less CPU is available than configured for that LPAR.
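
Written down as a toy check, that's roughly how I picture the three values working (plain Python, purely illustrative and based only on my understanding above, not on how the hypervisor actually behaves):

    def cpu_at_activation(available, minimum, desired):
        # Toy model: activation fails if less than the minimum is free;
        # otherwise the LPAR gets up to its desired entitlement.
        if available < minimum:
            raise RuntimeError("not enough CPU free to activate this LPAR")
        return min(desired, available)

    print(cpu_at_activation(available=0.8, minimum=0.1, desired=0.5))  # 0.5
    print(cpu_at_activation(available=0.3, minimum=0.1, desired=0.5))  # 0.3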

For the guest OS, there's no such thing as 1/10 of a CPU. So, for the virtual processor values, I always round the three numbers I entered before up to the next whole number. Examples: 0.1 CPUs => 1 vCPU. 0.5 CPUs => 1 vCPU. 1.5 CPUs => 2 vCPUs.
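
As a tiny worked example of that rounding (Python, just to show the arithmetic; the function name is mine):

    import math

    def virtual_processors(entitled_cpus):
        # Round a fractional CPU entitlement up to the number of vCPUs
        # the guest OS will see: 0.1 -> 1, 0.5 -> 1, 1.5 -> 2.
        return math.ceil(entitled_cpus)

    for cpus in (0.1, 0.5, 1.5):
        print(cpus, "CPUs =>", virtual_processors(cpus), "vCPU(s)")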

I always enable the uncapped setting: if you don't, an LPAR can never use more processing capacity than it's configured with, even if the other CPUs (cores) are idle. That also seems like a waste of resources to me, so I allow uncapped for every LPAR. I use the weight value to let an LPAR preferentially get more CPU when it needs it: even if all LPARs want their maximum vCPU at the same time, the LPARs with a higher weight get a relatively higher priority.
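
Roughly how I picture the weight working, as a simplified sketch (Python; the LPAR names and weight values are made up, and the real hypervisor scheduling is of course more involved):

    def share_spare_capacity(spare_cpus, weights):
        # Spread idle processor capacity over competing uncapped LPARs
        # in proportion to their uncapped weight ("relative priority").
        total = sum(weights.values())
        return {lpar: spare_cpus * w / total for lpar, w in weights.items()}

    # Hypothetical: 2 spare cores, three LPARs all wanting more CPU.
    print(share_spare_capacity(2.0, {"IBMI": 128, "LINUX1": 64, "LINUX2": 64}))
    # -> {'IBMI': 1.0, 'LINUX1': 0.5, 'LINUX2': 0.5}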

2) Any advantage to shared memory vs dedicated?

It's the same as with CPUs. If you dedicate memory to an LPAR, it cannot be used by others. I recommend always going with "shared".

3) On the I/O page: Is there any reason I *wouldn't *want to include all the physical resources like Gigabit Adapter and the two SAS RAID adapters?

Depends. If you can't answer that yourself, then the default answer is "no, there isn't". :-)

Again the plan here is to carve out a couple of Linux LPARS after I free up some resources.

Since all the resources Linux needs are provided virtually by the hypervisor, you normally don't need to dedicate anything here.

4) As I ponder creating this new profile, how will this affect the existing resources on the IBM i?

It will have fewer resources, of course. But you can minimize the impact, especially in terms of CPU. See above.

In terms of memory: you'll have less cache, so I/O will increase. How much depends on your particular workload.

Most notably, the Ethernet adapters that currently exist. Do they lose the underlying resources since I am now creating a new virtual resource for it? Just wondering, when this guy IPLs, whether it will be accessible....

That's something I've not entirely grasped. My hardware works with LHEA (Logical Host Ethernet Adapter), but I've been told this has been superseded. So maybe others can provide more information on this particular topic.

I've also read that there is (or has been?) the possibility of bridging a virtual NIC and a physical NIC for an LPAR. After I learned that this bridging relies on the CPU of the hosting LPAR shoveling packets, I refrained from digging deeper into this topic.

That's only the first few dumb questions.... I noticed that the SCSI adapters have a "Server" or "Client" option (I'll assume SERVER although the document I am looking at says server and then later on talks about creating a client SCSI....).

The hypervisor "bridges" physical SCSI resources such as optical and tape drives, as well as emulated SCSI DASD (NWSSTG) and even virtual optical drives, to the client LPARs. To accomplish this, a communication channel has to be established: a matching pair of server and client SCSI adapters. I always set the adapter IDs to the same value as the LPAR ID, so it's easier to see what belongs to what.
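
The "matching pair" idea, sketched in Python (the LPAR names, field names, and the ID convention are all mine; the only real point is that server and client adapter reference each other, and that I reuse the client's LPAR ID as the adapter ID):

    def vscsi_pair(host_lpar, client_lpar, client_lpar_id):
        # One server/client virtual SCSI adapter pair per client LPAR,
        # both using the client's LPAR ID as adapter ID for readability.
        return {
            "server": {"on": host_lpar, "adapter_id": client_lpar_id,
                       "remote_lpar": client_lpar, "remote_adapter_id": client_lpar_id},
            "client": {"on": client_lpar, "adapter_id": client_lpar_id,
                       "remote_lpar": host_lpar, "remote_adapter_id": client_lpar_id},
        }

    print(vscsi_pair("IBMI", "LINUX1", 2))  # hypothetical names and LPAR ID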

And, the checkbox next to this adapter is required for partition activation?

If an LPAR cannot access its IPL source, it can't work. It can still work if it can't access, for example, a dedicated SCSI adapter with an external tape drive (because another LPAR is currently running and claiming that resource).

The more interesting part is the tagged I/O setting. You are *required* to point the IPL and alternate IPL sources to the *same* client SCSI adapter, at least with IBM i as the guest. I set it that way for Linux LPARs too, because it can't hurt. (This was my most confusing experience, since older machines had different devices for primary and alternate IPL sources.)
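
That constraint, written down as a one-line check so I don't forget it (Python, purely illustrative; the slot names are made up):

    def tagged_io_ok(load_source_adapter, alternate_restart_adapter):
        # IBM i guest: IPL (load source) and alternate IPL devices must
        # be tagged on the *same* client SCSI adapter.
        return load_source_adapter == alternate_restart_adapter

    print(tagged_io_ok("client-vscsi-slot-2", "client-vscsi-slot-2"))  # True, fine
    print(tagged_io_ok("client-vscsi-slot-2", "client-vscsi-slot-3"))  # False, don't do this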

More dumb questions later.....

I’m waiting. :-)

Honestly, when I started with LPAR stuff two years ago, the settings in the HMC were also highly confusing to me. But with time, experimentation, and reading the documentation, confusion gets replaced by knowledge. :-) So, I know how you feel.

:wq! PoC

