<Paul>
Maybe I should provide a bit more background on the reason for this question
(and the one about overriding the system value).
The problem is that we need to change the structure of a huge (> 100 million
records) table while operating in a 24/7 environment; the change can be
anything from extending, adding, dropping, or renaming fields up to complex
conversions.
In fact we even have multiple tables like that (which all need to go in at
the same time), so the effort is multiplied again. This means that we can't
simply end all jobs, change the layout of the tables (which would take many
hours), promote the software and continue to work, as that is downtime that
should be avoided.
The idea (which is not mine) is therefore to replicate the existing database
table to a shadow version until everything is in sync (without any impact on
the source tables), find a downtime window, move the tables into the
production environment, promote the software and continue working. This
method should limit the downtime to minutes instead of many hours.
While there are some tools that offer this, they have certain limitations,
so I was looking into whether I could write a next-gen version of them. The
recent questions are related to one technique I have in mind, but there are
others as well (each with its own pros and cons).
The goal is also to find a solution that works with all the capabilities a
current database has (unique keys, constraints, identity columns, null
values, ...).
</Paul>
First of all, I agree with your scenario of replication before switch-over
to minimize downtime. The fastest approach would be to replicate all data to
another library, shut down, reach a sync point, rename the original library,
rename the shadow library, and restart.
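As a minimal sketch of the switch-over itself, assuming a hypothetical table
CUSTOMER in PRODLIB and a fully synchronized shadow CUSTOMER_NEW built in the
same library with the new layout (the library-level swap described above
would be done with RNMOBJ OBJTYPE(*LIB) in CL instead):

   -- run inside the planned downtime window, after all jobs are quiesced
   -- and the apply process has reached the sync point
   RENAME TABLE PRODLIB.CUSTOMER TO CUSTOMER_OLD;      -- keep for fallback
   RENAME TABLE PRODLIB.CUSTOMER_NEW TO CUSTOMER;      -- promote the shadow

After the renames, the software built against the new layout can be promoted
and the jobs restarted; CUSTOMER_OLD can be dropped once the new version has
proven itself.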
As an additional point of redesign, I would add a complete view layer: none
of the redesigned tables should be accessed directly by your applications
(maybe some DDS logical files would be needed, and maybe some programs would
need a change?).
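A minimal sketch of such a view layer, with all names hypothetical, for the
variant where the redesigned physical table permanently carries a new name
(here CUSTOMER_T) and a view with the old name presents the columns the
existing applications expect, e.g. a new real DATE column re-exposed in the
old numeric YYYYMMDD form:

   CREATE VIEW PRODLIB.CUSTOMER AS
     SELECT CUSTNO,
            CUSTNAME,
            -- present the new DATE column in the old 8-digit numeric form
            DEC(YEAR(ENTRY_DATE) * 10000
                + MONTH(ENTRY_DATE) * 100
                + DAY(ENTRY_DATE), 8, 0) AS ENTDAT
     FROM PRODLIB.CUSTOMER_T;

Programs that only read would not notice the change; programs that write
through the old name would need either an INSTEAD OF trigger on the view or
a program change.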
To fulfill all referential constraints, the sync process has to be based on
the journal, so that the shadow sees the same sequence of changes. It could
be started from a savepoint and synchronized by journal entries (using
APYJRNCHG); after that, an apply process keeps the shadow in sync with the
running production. The apply process for the journal images would have to
do the translation from the old layout to the new one, and it couldn't be
one generic process for all tables (although maybe the modules, or parts of
them, could be generated). I don't see any need to preserve the same RRNs;
maybe the sequence is important, because some programs work FIFO or LIFO or
whatever (but even this would be possible - did you test the snippet from my
last posting?).
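As a rough, hypothetical illustration of that table-specific translation
(names invented, and shown here as the initial bulk load rather than the
journal-driven apply itself), the old-to-new column mapping is the part that
cannot be generic:

   -- initial load of the shadow: the same old-layout -> new-layout mapping
   -- the apply process would repeat for every journal after-image
   INSERT INTO PRODLIB.CUSTOMER_NEW (CUSTNO, CUSTNAME, ENTRY_DATE)
     SELECT CUSTNO,
            CUSTNAME,
            -- old 8-digit numeric date becomes a real DATE column
            DATE(TIMESTAMP_FORMAT(DIGITS(ENTDAT), 'YYYYMMDD'))
     FROM PRODLIB.CUSTOMER;

The journal-driven part would then read the entries (for example with the
QSYS2.DISPLAY_JOURNAL table function on current releases) and apply the same
expressions to each changed row.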
D*B