On Fri, Dec 5, 2008 at 5:13 PM, McKown, John
<John.Mckown@xxxxxxxxxxxxxxxxx> wrote:
A sysplex is a way for z/OS and subsystems to "talk" to each other to
coordinate changes and access to resources. For example, RACF is the
security system on the z. I have a single database which contains all
users and all access definitions for all systems. Every LPAR can safely
and directly do I/O to that database on DASD because they "talk" via
sysplex communications to "enqueue" access as needed for updating.
I can't imagine sharing data between Development/QA and Production.
How does that work?
Very well <grin>. This may be something only terrible people do. Like to
debug a problem with a report which only reads production data, a test
version of the program is written and run against the actual production
data. It is considered "bad practice." But we have a LOT of stuff which
is "bad practice."
A new system might offer a good chance to get rid of the "bad" practices. <grin>
Normally on the i, you'd simply grab a copy of the production data for
use on the Dev LPAR.
Before SOX, it wouldn't be unusual to debug a program directly on the
Pdn LPAR, or to compile a test program into a different library on Pdn
that accessed the Pdn data.
The idea behind an LPAR is that it's logically a separate machine.
Certain hardware devices can be moved between LPARs, but for all other
intents and purposes an LPAR is just like a physically separate box.
Same with the z, but physical hardware such as DASD devices can be
updated safely by multiple LPARs concurrently. Individual files cannot
be, but z/OS has methods that can ensure that need not happen (OK, the
programmer can get around it, but woe unto him who does it!).
Theoretically, from a user program, there's no such thing as a DASD
device. The architecture of the i hides all that below the MI. In
fact originally, all DASD was in a single auxiliary storage pool
(ASP). Real world, it turned out that it was useful to be able to
control which disks certain objects were placed on. For instance, in
the past it was quite helpful to place journal receivers in their own
storage pool (i.e., a set of disks). So IBM started allowing additional
ASPs to be created; along with providing ways to explicitly place user
level objects into those other ASPs.
I'm going to ASSuME that it is more like UNIX. The DASD is not shared,
but a program on one system can do a database call to read or maybe even
update a database on a different system. I will also assume that the
SPOOL is not sharable. Yes, we do that. Users on the development LPAR
can watch jobs on the Production system as they run and view their SPOOL
files both as the jobs run and after the fact.
Generally, you are correct that DASD isn't shared. Independent Auxiliary
Storage Pools (IASPs) are an exception. IASPs allow you to have two
libraries (schemas) on the system with the same name, each in a
different database. Even after additional ASPs were supported, the i
had a single DB and everything was in that one DB. IASP came out, if
I recall correctly, originally as an HA feature to allow an ASP to be
moved between systems. Then IBM started supporting multiple DBs via
IASPs. This allowed a company that had purchased another company
running the same software to move the other company onto the same box
without needing an LPAR. Generally, it was useful when, for instance,
you wanted to keep the order entry and inventory DBs separate for each
company but consolidate accounting.
The thing to remember about IASPs is that there's only one system. So
there's only one copy of the OS. With LPARs, you've got multiple
copies of the OS which is nice if you're testing IBM fixes or new
versions of the OS.
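As a rough sketch of how a job reaches one of those IASP databases: on IBM i, a job attaches itself to an IASP's namespace with the SETASPGRP CL command, after which unqualified library names resolve within that ASP group. The names below are hypothetical, not from any real configuration:

```
/* Hypothetical: COMPANY2 is an imagined IASP (ASP group) name,     */
/* and ORDENT/POST an imagined library/program in that namespace.  */
SETASPGRP ASPGRP(COMPANY2)      /* join this job to the IASP       */
CALL PGM(ORDENT/POST)           /* ORDENT now resolves inside the  */
                                /* COMPANY2 database, not *SYSBAS  */
```

This is why two same-named libraries can coexist: which one a job sees depends on which ASP group the job has set.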
You are correct that a program on one system can read/update a DB file
on another. You'd use either DDM files with RPG/COBOL record-level
I/O, or the SQL CONNECT TO statement with a program that uses embedded
SQL. While you could use this functionality to communicate between
Dev/Qa/Pdn LPARs, I'd say the normal usage is between multiple
systems in the same environment. For instance, say you've got
production machines at local branch offices along with a central
production machine at the corporate office.
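A minimal sketch of the embedded-SQL route, assuming a relational database directory entry for the remote system already exists (added with ADDRDBDIRE or viewed with WRKRDBDIRE). PDNSYS, ORDERS, and DETAIL are hypothetical names for illustration:

```
-- Hypothetical: PDNSYS is an RDB directory entry pointing at the
-- production system; ORDERS/DETAIL is an imagined remote table.
EXEC SQL CONNECT TO PDNSYS;
EXEC SQL SELECT COUNT(*) INTO :rowCount
         FROM ORDERS.DETAIL;       -- read runs on the remote system
EXEC SQL DISCONNECT PDNSYS;
```

The DDM alternative works at the record level instead: a local DDM file (CRTDDMF) points at the remote file, and the RPG or COBOL program opens it like any local file.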
As far as viewing production jobs and/or spool files: developers
generally have a user ID for the production system that allows them to
sign on and do such activities. In the twinax terminal days, you'd
simply start a "pass-thru" session from development to production.
Now with PC emulation, you'd usually just have a session open to
production.
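For the twinax-era route mentioned above, the pass-thru was just a CL command; the location name here is hypothetical:

```
/* Hypothetical: PDN is an imagined remote location name for the   */
/* production system. Sign on there, then browse its spool files.  */
STRPASTHR RMTLOCNAME(PDN)
```

Once signed on, WRKSPLF and WRKACTJOB on the remote session show the production spool files and jobs directly.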
I myself have a single icon I click that starts 4 sessions: two to
Dev, one to Qa, and one to Pdn. If needed, it's a couple of clicks to
start additional sessions to any of the systems.
Charles