On 07 Jun 2013 01:23, Birgitta Hauser wrote:
<<SNIP>>
> Yesterday my manager came up: the tables (and dependent objects) for
> this application must be moved into a different schema.
Any particular reason(s) cited for this apparent folly?
> Unfortunately both schemas contain a lot of other objects, so saving
> the library and restoring it with a different name is not possible.
There are parameters that make save\restore generally manageable to
effect such a "move". SAVLIB followed by RSTOBJ is probably the easiest,
though it requires good organizing\planning if the restore is not as
simple as RSTOBJ OBJ(*ALL) OBJTYPE(*FILE) OPTION(*NEW) MBROPT(*ALL)
PVTAUT(*YES). Note: journaling is an issue that needs careful thought to
handle properly in conjunction with the /move/ activity; best to avoid
mucking up the current journal(s) with the effects of moving and testing.
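A minimal sketch of that approach, with placeholder library and save
file names (OLDLIB, NEWLIB, MOVSAVF):

  CRTSAVF FILE(QGPL/MOVSAVF)
  SAVLIB  LIB(OLDLIB) DEV(*SAVF) SAVF(QGPL/MOVSAVF)
  RSTOBJ  OBJ(*ALL) SAVLIB(OLDLIB) DEV(*SAVF) SAVF(QGPL/MOVSAVF)
            RSTLIB(NEWLIB) OBJTYPE(*FILE)
            OPTION(*NEW) MBROPT(*ALL) PVTAUT(*YES)
  /* journaling does not follow the files into NEWLIB; start   */
  /* journaling there, or deliberately omit it while testing   */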
... and unfortunately the second schema is also in the library list
but AFTER the first one where my tables are located currently.
I was confused as to why this mattered... but reading further, I figured
it out. I guess the implication is that *LIBL is presumed for all
processing? I propose an alternate test method below, if *LIBL is expected.
What's the easiest way to get those tables moved?
If everything [esp. triggers] uses *LIBL exclusively, or any qualified
names refer only to libraries other than the first [and the second
too?], then MOVOBJ or SAVOBJ\SAVLIB with RSTOBJ would probably be
sufficient. The Move Object (MOVOBJ) is ideal in that case; the
journaling remains unaffected.
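For example, a minimal sketch for one table, with placeholder names:

  MOVOBJ OBJ(OLDLIB/MYTABLE) OBJTYPE(*FILE) TOLIB(NEWLIB)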
> What I've done so far is to generate an SQL script containing all
> the creation statements for my tables and dependent objects,
This is always my preference for such projects; mostly just to get
/fresh/ copies of the objects rather than carrying forward definitional
errors [even if they are not /yet/ manifest as errors] effected in past
CREATE and ALTER activity. With good change management (CM) controls,
this [re]creation into a new library is easy. It should be just as easy
using the retrieved SQL, which in effect creates what should have
existed via the CM; albeit I personally have a difficult time trusting
that generated SQL will match what I had myself established... though it
could be more thorough than scripts that were poorly managed and
maintained under some failed CM strategy.
> Then after the objects are created, either copy or restore the data
> from the members in the old library to the members in the new library.
CPYF FROMMBR(*ALL) TOMBR(*FROMMBR) MBROPT(*ADD) FMTOPT(*NONE), INSERT
with OVERRIDING SYSTEM VALUE, or RSTOBJ OPTION(*OLD) MBROPT(*OLD)
ALWOBJDIF(*FILELVL). The big issue with the former options is ordering
the requests [parent rows before dependent rows], so again, good
organizing\planning is required. With the latter, the database restore
should be responsible for making things happen properly.
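A minimal sketch of the first two options, with placeholder names:

  CPYF FROMFILE(OLDLIB/MYTABLE) TOFILE(NEWLIB/MYTABLE)
       FROMMBR(*ALL) TOMBR(*FROMMBR) MBROPT(*ADD) FMTOPT(*NONE)

  INSERT INTO NEWLIB.MYTABLE
    OVERRIDING SYSTEM VALUE
    SELECT * FROM OLDLIB.MYTABLE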
> in the sequence I need to create the objects (Tables first with
> identity columns GENERATED BY DEFAULT, without constraints and
> triggers, then copy data, change identity columns back to GENERATED
> ALWAYS and RESTART WITH the next Id, then creating all constraints,
> views, indexes). I'm currently testing whether all objects will be
> generated and copied correctly.
With both CPYF and INSERT\OVERRIDING, the ALTER ... RESTART WITH will
still be required :-( but there should be no reason to create the
columns modified and then ALTER them afterward to the corrected\desired
attributes? Again, the RSTOBJ puts the onus on the DB to do the right
thing; no messing with the identity columns nor any ALTERs.
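If the alters are needed, a hedged sketch (MYTABLE and ID are
placeholders; RESTART WITH takes a constant, so the statement is
typically built dynamically from SELECT MAX(ID) + 1):

  ALTER TABLE NEWLIB.MYTABLE
    ALTER COLUMN ID SET GENERATED ALWAYS;
  ALTER TABLE NEWLIB.MYTABLE
    ALTER COLUMN ID RESTART WITH 1001;   -- stands in for MAX(ID) + 1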
> ... and finally I'll rename all existing objects before deleting
> them after having tested it.
I was at first confused about how any RENAME\RNMOBJ would be required
in the [testing] scenario. Then, thinking more... is the implication
instead that the process is the following?
1> create into schemaNew
2> copy from schemaOld into schemaNew
3> rename existing objects in schemaOld
4> test against schemaNew [per objects in schemaOld having been renamed]
5> if test=successful then delete renamed objects
5.1> else reverse the rename of existing objects in schemaOld
5.2> effect corrections in scripting+process, and start over
I would suggest instead that the testing be done into a third schema,
placed before schemaOld in the library list. I would avoid the renames.
They are expensive and complex processing for the *DBXREF, and complex
for DB recovery processing as well [though better under isolation via
the SQL RENAME]. Those renames would have to be repeated after each
iteration of a failed test if resetting to the original\old is required.
The choice of a third schema for test is not /exactly/ the same as
testing the creates directly into the second schema, but I think that
would be far better than scripting the renames for both directions.
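A minimal sketch of that placement, assuming a placeholder schema name
SCHEMATST and the job-level library list:

  CRTLIB LIB(SCHEMATST)  /* or CREATE SCHEMA */
  ADDLIBLE LIB(SCHEMATST) POSITION(*BEFORE SCHEMAOLD)
  /* ... create and populate the objects, run the tests ... */
  RMVLIBLE LIB(SCHEMATST)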
> Is there a better way?
My preference: Run the creation script, save the data from the old,
restore just the data to the new, delete the old objects.
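In outline, with placeholder names; RUNSQLSTM runs the generated script,
and the RSTOBJ restores only the data into the newly created files:

  RUNSQLSTM SRCFILE(MYLIB/QSQLSRC) SRCMBR(CREATES) COMMIT(*NONE)
  CRTSAVF FILE(QGPL/DTASAVF)
  SAVOBJ OBJ(*ALL) LIB(OLDLIB) DEV(*SAVF) SAVF(QGPL/DTASAVF)
         OBJTYPE(*FILE)
  RSTOBJ OBJ(*ALL) SAVLIB(OLDLIB) DEV(*SAVF) SAVF(QGPL/DTASAVF)
         RSTLIB(NEWLIB) OPTION(*OLD) MBROPT(*OLD) ALWOBJDIF(*FILELVL)
  /* after verifying NEWLIB, delete the old objects */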
> BTW according to my manager this morning:
> 1. "It is normal to move production data from one schema to another
> one" (maybe OK for testing, but NOT for production data!)
It happens more often than it should. Typically a reflection of poor
planning, but sometimes due to legitimate requirements; e.g. the library
name matches that of a 3rd-party product added to the system.
2. "There are thousands of applications outside that work well
without Identities, Referential Integrity, triggers and commitment
control." "Why are you so arrogant to think you know it better!"
All SQL database literature will validate the value of those. Albeit I
find few valid use cases for identity columns, except those that
effectively reduce a very /large key/ to prevent massive duplication of
data in the indexes and normalized tables; i.e. I see most implemented
on every table irrespective of apparent need.
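For illustration, a hedged sketch of that one valid use case, with
invented names: a compact identity surrogate stands in for a wide
natural key that would otherwise be repeated in every index and
dependent table:

  CREATE TABLE CUSTOMER (
    CUSTOMER_ID  INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    COUNTRY_CODE CHAR(2)     NOT NULL,
    TAX_REGION   VARCHAR(10) NOT NULL,
    ACCOUNT_NAME VARCHAR(60) NOT NULL,
    UNIQUE (COUNTRY_CODE, TAX_REGION, ACCOUNT_NAME) -- wide natural key
  );
  CREATE TABLE CUSTOMER_ORDER (
    ORDER_ID    INTEGER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    CUSTOMER_ID INTEGER NOT NULL REFERENCES CUSTOMER -- 4 bytes, not ~72
  );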
> Also (according to my manager): "Those techniques are useless
> especially because error messages are sent that a user cannot
> understand."
Hardly useless, if they save many hours of programming and give peace
of mind about the integrity of the data against activity from outside
the application. But as for the errors... if the developers are not
presenting the users with an appropriate error in response to the
database errors, that is a problem with the application, not a problem
with having implemented those techniques themselves.
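As a sketch of that point, assuming SQL PL and invented names: trap the
database condition and surface a message the user can act on, rather
than the raw SQLSTATE:

  -- assumes a table like:
  --   CREATE TABLE CUSTOMER_EMAIL (EMAIL VARCHAR(100) NOT NULL PRIMARY KEY)
  CREATE OR REPLACE PROCEDURE ADD_EMAIL (IN P_EMAIL VARCHAR(100))
  BEGIN
    DECLARE EXIT HANDLER FOR SQLSTATE '23505'  -- duplicate key
      SIGNAL SQLSTATE '75001'
        SET MESSAGE_TEXT = 'That e-mail address is already registered.';
    INSERT INTO CUSTOMER_EMAIL (EMAIL) VALUES (P_EMAIL);
  END;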