Some posts to BPCS_L get no significant replies.
Some get good stuff ... thanks ... lots of good ideas here.
I also checked the BPCS_L archives for INV900 and ITH, such as
http://archive.midrange.com/bpcs-l/200203/msg00008.html
and found references to the 99,999 problem (the ceiling on the number of
history records for one item), and lots of other stuff, most of which I had
known already but which had fallen out of my brain until these posts
reminded me.
I found the record LASTCLOS in our system parameters file. It contains:
  File Key:         LASTCLOS
  Record ID:        LC
  Data (first 20):  2008082820031231
I interpret this as the last year-end close being 2003-12-31 and the last
month-end close being 2008-08-28 ... as if, instead of keying in
082804
for the date, I had keyed in
082808
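If anyone wants to pull those two dates apart the same way, something like
this does it (just a sketch ... SYSPARM, PRMKEY, and PRMDATA are made-up
names for the system parameters file and its key and data fields, since the
real names depend on your BPCS release):

  SELECT SUBSTR(PRMDATA, 1, 8) AS LAST_MONTH_END,  -- 20080828
         SUBSTR(PRMDATA, 9, 8) AS LAST_YEAR_END    -- 20031231
    FROM SYSPARM
   WHERE PRMKEY = 'LASTCLOS'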
I think I have found my smoking gun, and BPCS_L was a big help this time.
While looking at the SSARUN03 documentation, the source code, and GO CMDREF,
I got a list of the files that INV900 messes with, and I checked the record
counts there.
They are all consistent with what I would expect to be there, except for
ITH of course.
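Checking the counts is just a matter of running something like this for
each file on the list (a sketch ... I am assuming the BPCS data library is
named BPCSF, which may not match yours):

  SELECT 'ITH' AS FILE_NAME, COUNT(*) AS RECORD_COUNT FROM BPCSF.ITH
  UNION ALL
  SELECT 'IIM', COUNT(*) FROM BPCSF.IIM
  -- add one UNION ALL line per file that INV900 touches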
I still do not know if there is any problem with the data in those other
files; there are only so many things I can check before I run out of daily
wake time.
Using SQL, I got a count of how many records in ITH do not have a match to
IIM ... history on items that have been deleted = 144,149.
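That kind of count can be had with something along these lines (just a
sketch ... I am assuming the item number fields are THPROD in ITH and IPROD
in IIM, plus the same BPCSF library as above, so check your own field
reference before trusting it):

  SELECT COUNT(*) AS ORPHAN_HISTORY
    FROM BPCSF.ITH H
   WHERE NOT EXISTS (SELECT 1
                       FROM BPCSF.IIM M
                      WHERE M.IPROD = H.THPROD)  -- no matching item master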
I have some stuff I put on automatic some time ago to help me monitor file
sizes as needed.
That info pretty much confirms for me that, going into Monday's
transactions, THE ONLY records in ITH were those on the deleted IIM items.
Currently I am thinking in terms of ignoring that problem during this
repair effort.
I have to get ITH restored from the Friday 8/27 backup, and I have to get
that combined with the ITH that is now on the system.
The history on the deleted records will be doubled up, but dealing with
that is for another day, when I will write some SQL to remove the ITH
records that do not have a match to IIM ... and I suspect in that case the
sequence numbers won't make a bit of difference.
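When that day comes, the cleanup could look something like this (a sketch
only, with the same assumed names as above ... and I would rerun the count
and take a fresh save of ITH before letting a DELETE loose on it):

  DELETE FROM BPCSF.ITH
   WHERE THPROD NOT IN (SELECT IPROD
                          FROM BPCSF.IIM)  -- history rows whose item master is gone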
I am hoping our BPCS tech support has already written a program to fix ITH
vs. IIM sequence numbering, because this problem cannot possibly be unique
to us. Yes, I could write it, but between researching what is needed,
figuring out how to test it, etc., there are times it makes more sense to
get help, and my boss has authorized me to call tech support for help with
this problem.
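In the meantime, a query like this would at least show how big the sequence
clash is once the two copies of ITH are combined (again just a sketch;
THSEQ is only a guess at the per-item sequence field name, so treat it as a
placeholder):

  SELECT THPROD, THSEQ, COUNT(*) AS COPIES
    FROM BPCSF.ITH
   GROUP BY THPROD, THSEQ
  HAVING COUNT(*) > 1  -- same item and sequence number appearing more than once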
I normally work nights and sleep days, but tonight I am going to try to
break off early (like not much later than midnight), take a break from all
the other stuff I want to check out, get to sleep early, and set my alarm,
so I can be awake to call our tech support Wednesday during normal business
hours.
My boss changes from time to time, and we operate with the rules imposed by
the prior bosses until there is some reason to review what we are doing and
why.
Two bosses ago, the decision was made to stop saving the ITH records that
get wiped out by the INV900 purge, so the last date in YTH is from the
2003-01-31 EOM.
It was never explained to me WHY this decision was made.
I suspect it might be related to all the problems we were having back then
with bad tape media, and the fact that the IBM tape drive died 2 days
before an EOM, and we had to get IBM service to replace the tape drive.
My feeling based on this latest episode is that we would probably be better
off going back to the old way, any time we are not having that kind of tape
problem, which we have not had in years.
However, what are the odds of using that rope to hang ourselves, and where
are all the places this can go wrong? Perhaps other safety measures would
be more prudent. There are a ton of end-of-month steps, all controlled by
what we key into the date field ... how are we to know if any one of them
was done wrong, and what damage would come from that? I think my focus
needs to be on blocking that risk, not on reacting to the latest mess.
We also had a problem several years ago with our UPS going flaky, bringing
the 400 down faster than OS/400 could make proper logs and causing some
BPCS files to get corrupted in BPCS terms (the data was there, but BPCS
acted like the file was empty) ... that episode taught us that if a 400
file is corrupted, backup/400 won't work on that file ... I have now run
enough backup/400 since the problem was discovered to verify that this is
not a case of corrupted files.
In addition to what I said earlier, I also think it smart to make sure the
inventory posts to GL are caught up before I start messing with this file.
-
Al Macintyre http://www.ryze.com/go/Al9Mac
Find BPCS Documentation Suppliers
http://radio.weblogs.com/0107846/stories/2002/11/08/bpcsDocSources.html
BPCS/400 Computer Janitor at http://www.globalwiretechnologies.com/