We have multiple servers running Trend.  We followed the Trend
installation instructions; however, the smconf.nsf had the same replica ID
on all three Domino partitions on this one iSeries, which, in our cluster,
immediately generated several replication and save conflicts.  The support
people all but called me a liar when I said I had done nothing special to
start replication.  They insisted that I must have set up some special
program documents or something, and claimed that clustering wouldn't do
this.  My response was that if the databases didn't share the same replica
ID, then it wouldn't matter if I did a push, a pull, or told it to jack off.
Support said that was an excellent observation.  They had me delete the
smconf from all three partitions and rebuild it.  This time the copies did
not have the same replica ID.
I told them that I want this fixed, because this is just our test 400 and I
want to go to production soon; the solution I was given is only a
workaround.  The support people said they would talk to development.

I looked at the joblog from the install.  They have an interesting method
of creating the smconf.nsf.  They copy
/QSYS.LIB/TRENDMICRO.LIB/DATABASES.FILE/SMCONF.MBR
to
/myserver/notes/data/smconf.nsf

In case you are IFS illiterate, try this command:
DSPPFM FILE(TRENDMICRO/DATABASES) MBR(SMCONF)


Gee, do you suppose they have an RPG bug in the code that was supposed to
update the replica ID in that member? :-)
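For anyone who hasn't hit this before, here is a minimal sketch of the failure mode.  The file names and the 16-byte ID layout are hypothetical (a real Notes replica ID lives in the NSF header, not at offset 0), but the point stands: a byte-for-byte copy of a template preserves whatever identifier is embedded in it, so every partition ends up with the same replica ID unless the installer rewrites it after the copy.

```python
import shutil
import uuid

def copy_template(template_path, dest_path):
    """What the install joblog shows: a plain byte-for-byte copy.

    Any identifier embedded in the template is preserved, so three
    partitions installed this way all share the same replica ID.
    """
    shutil.copyfile(template_path, dest_path)

def copy_with_new_id(template_path, dest_path):
    """What the installer presumably should do: copy, then stamp a
    fresh ID into the copy.  Here the 'replica ID' is pretended to be
    the first 16 bytes of the file -- purely for illustration.
    """
    with open(template_path, "rb") as f:
        data = f.read()
    new_id = uuid.uuid4().hex.encode()[:16]  # fresh 16-byte identifier
    with open(dest_path, "wb") as f:
        f.write(new_id + data[16:])
```

With `copy_template`, every destination file starts with the template's ID; with `copy_with_new_id`, each copy gets a distinct one, which is what keeps a Domino cluster from treating independent databases as replicas of each other.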

Rob Berendt
--
"They that can give up essential liberty to obtain a little temporary
safety deserve neither liberty nor safety."
Benjamin Franklin


This mailing list archive is Copyright 1997-2019 by midrange.com and David Gibbs as a compilation work.