Vern,
I have been following this. Some things here I had never had to use
before, so it was educational. Glad you got it figured out and shared it.
The system we used to use at the hospital originally had two update
processes. Then, with a faster machine, that was increased to five.
Different tasks were assigned to a specific thread.
One task sent patient information to a system used to dispense medication.
It started getting erratic when the threads were increased. The original
code for the interface was developed in-house, and the guy who wrote it
counted on things happening in a nice orderly sequence - which they did
when there were only two threads. Information needed for the medication
system was not yet in the database, because the process that wrote it
would get backed up. The issue was given to me to resolve. It took only a
few lines of code to retry the information retrieval after a short wait.
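A minimal sketch of that retry-after-a-wait pattern, in Python just to illustrate the logic (the original fix was not Python, and all the names here are made up): keep re-trying the lookup a few times, sleeping between attempts, so a slower writer has time to commit the row.

```python
import time

def fetch_with_retry(lookup, key, attempts=3, delay_seconds=0.5):
    """Retry a lookup that can fail transiently because a slower
    writer thread has not committed the row yet."""
    for _ in range(attempts):
        record = lookup(key)       # returns None when the row isn't there yet
        if record is not None:
            return record
        time.sleep(delay_seconds)  # give the writer time to catch up
    return None                    # still missing after all attempts
```

The important design point is bounding the attempts - an unbounded retry loop would hang the interface if the row never arrives.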
Details can be missed so easily. It was nice to be able to solve a vexing
erratic issue. I miss those days.
John McKee
On Sun, Jan 26, 2014 at 3:24 PM, Vernon Hamberg <vhamberg@xxxxxxxxxxxxxxx> wrote:
Hi Mark
Maintenance is *IMMED - we'd wondered about that.
A UNIQUE key isn't easily used here - the "duplication" isn't a
single-record kind of thing, it's a set of related items.
The good news is - it now works - it's solved!!
Let's see if I can make sense of this - and the fix came out of writing
my last post - my new "thought".
We have a process that takes data uploaded from handheld devices in the
field to production tables on our iSeries - yeah, it's a 550. (Just
being a beard-tweaker for a minute.)
Through an unfortunate set of circumstances, some sets of related
records are being duplicated in a findings table.
I wrote a program that found these duplicates, deleted them, and copied
them into a log file.
Because the findings file is large, I didn't want to run through the
whole thing all the time - besides, this cleanup has to run several
times in each upload cycle, due to near-real-time reporting requirements.
So I use a data area that has the timestamp of the last run of the
cleanup program.
That worked nicely when called on its own. But it didn't catch
everything when called repeatedly from the upload-processing program. In
fact, it caught hardly any.
So I started wondering about delays in flushing data out to disk - and
tried a couple suggestions from this list.
FEOD didn't work - and neither did changing to USROPN. That became a big
clue that something else was going on.
And it was - every time the cleanup ran, the data area was updated.
Unfortunately, the last-run timestamp was now more recent than almost
everything in the upload.
When run stand-alone, that wasn't an issue, but when run in a way to
process the set just written, it didn't fit the original design.
So I changed it to update the data area with the actual upload values of
what was just processed, and ba-da-bing!!
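The essence of that fix can be sketched in a few lines of Python (hypothetical names - the real code is RPG with a data area): advance the watermark to the newest upload timestamp actually processed, never to the time the cleanup ran.

```python
def run_cleanup(records, last_processed_ts):
    """Process only records newer than the watermark, then advance the
    watermark to the newest upload timestamp actually seen - not to
    'now', which would silently skip the batch on the next run."""
    batch = [r for r in records if r["upload_ts"] > last_processed_ts]
    for r in batch:
        pass  # duplicate detection / deletion would go here
    if batch:
        last_processed_ts = max(r["upload_ts"] for r in batch)
    return last_processed_ts
```

Stamping the watermark with the wall clock is the bug described above: the cleanup's own run time is newer than almost everything in the upload, so the next pass sees nothing.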
That's a long story - and I'm sticking to it!! I just hope it is
interesting and informative enough about the process. This one really
stumped me for a while!
I do appreciate the presence of this list and all of you on it - just
asking the question gave me a place to start again, and that's a good
thing about y'all, that you do listen to whoever posts here.
On good days, of course!! Fingers pointing back at self on that one!
Regards
Vern
On 1/26/2014 12:08 PM, Mark S Waterbury wrote:
Hi, Vern:
Do those logical files over this physical file have "Access path
maintenance" set to MAINT(*IMMED) or *DLY or *REBLD?
Also, why not create a UNIQUE keyed access path over the PF (or add a
unique constraint) to prevent such duplicates from ever getting inserted
in the first place?
HTH,
Mark S. Waterbury
> On 1/26/2014 11:40 AM, Vernon Hamberg wrote:
Tried FEOD both with and without the N extender - it didn't help.
I also just tried USROPN with open/close around the writing subroutine.
Didn't work.
The file in question is a logical file - is there a difference in how
FEOD affects an LF vs a PF?
My next attempt is to use the PF as output-only - leave the LF for its
use to check for and maybe delete under certain conditions. Then use
USROPN on the PF.
This is most curious. The call to my cleanup program is within the
program that is writing to this file. The cleanup uses a different LF
(with S/O) to walk through what was just written, based on a pair of
numeric date/time fields.
Hmm, just had a thought - I have to be sure that my date/time is not too
recent - that the upload date & time might not be earlier than the last
run of my cleanup.
OK - having y'all for a blank wall is always helpful - will let you know
what's up - because it seems the FEOD should have worked.
Vern
On 1/24/2014 3:48 PM, Jon Paris wrote:
You need FEOD(N) to be sure of catching anything written by the
programs that update the file - but using FEOD should fix it. For sure use
the (N) extender. FEOD without it is a performance pig. There may also be
an impact from different blocking in the two programs.
On 2014-01-24, at 4:16 PM, Vernon Hamberg <vhamberg@xxxxxxxxxxxxxxx> wrote:
Y'all
I haven't had to deal with quirky IO issues in a long time, so here goes
with my problem - hope someone has seen this behavior.
I have a program that reads through a file (let's call it VHFILE) from a
certain position and is checking for duplicates of a type - not
important - I'm using a state machine to handle this.
When this program is called on its own, it finds the duplicates.
When it is called from a program that just wrote to the file in
question, it rarely finds anything - but it does sometimes catch 1 or 2
instances of the problem.
It seems that this new program doesn't see the data that was just written.
VHFILE has 3 logicals - and VHFILE is quite large - one of what we call
our big files.
VHFILE - arrival - 51 million
VHFILEA - 51 million entries - 1.5GB
VHFILEB - 51 million entries - 1.8GB
VHFILEC - 10 million entries (S/O) - 420MB
OK, here's more info on the call stack -
1. CLPGMA - OPM
calls
2. CLPGMB - OPM with simple OVRDBF on file in question
calls
3. RPGPGMA - ILE - *CALLER - uses VHFILEA as UF A (update + add) and has
deletes and writes
calls (after all writes are done - the number varies)
4. CLPGMC - ILE - *DFTACTGRP
calls
5. RPGPGMB - ILE - QILE - uses VHFILEC as UF (update only) and has only
deletes
There is no need of sharing ODPs.
I recognize the "flaky" activation group structure, but I don't think it
should matter - in fact, it might help, to isolate opening VHFILEC from
the open of VHFILEA.
No commitment control in place - there is iTera for replication or whatever.
So -
1. Is FEOD a way to overcome this?
2. And is the N extender useful? RPG Reference says it can perform
better - unwritten records in a block are written to the database,
although not necessarily to disk (non-volatile storage).
Or some other idea.
Thanks
Vern
--
Jon Paris
www.partner400.com
www.SystemiDeveloper.com

This is the RPG programming on the IBM i (AS/400 and iSeries) (RPG400-L)
mailing list
To post a message email: RPG400-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/rpg400-l
or email: RPG400-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/rpg400-l.