Note: we aren't actually updating the records in the file; we're just passing
them to a web service.
Suppose we have a data area holding the key of the last record handed out.
When a job is ready to process a record, it reads and locks the data area,
does a SETGT on the file using that key (which positions just past it),
READs the next record, writes the key of the record it found back out to
the data area, and releases the lock.
That would (I hope) guarantee that, regardless of the number of jobs
working on the file and the number of records in it, each record gets
processed exactly once.
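The scheme above can be sketched in Python as an analogy (not IBM i code): a `threading.Lock` stands in for the data-area lock, a sorted list of keys stands in for the keyed file, and `bisect_right` plays the role of SETGT, positioning just past the stored key so the next read fetches the following record. The names `data_area`, `next_record`, and `worker` are hypothetical, chosen for this sketch.

```python
import bisect
import threading

records = list(range(1, 101))   # keyed "file": 100 records in key order
data_area = {"key": 0}          # last key handed out; 0 = before the first record
lock = threading.Lock()         # stands in for the data-area lock
processed = []                  # records "passed to the web service"

def next_record():
    """Claim the next unprocessed record, or None at end of file."""
    with lock:                                        # read and lock the data area
        i = bisect.bisect_right(records, data_area["key"])  # SETGT: position past stored key
        if i == len(records):
            return None                               # end of file
        key = records[i]                              # READ the next record
        data_area["key"] = key                        # write its key back to the data area
        return key                                    # lock released on exit

def worker():
    while (key := next_record()) is not None:
        processed.append(key)                         # "process" the claimed record

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every record was claimed exactly once, no matter how the jobs interleaved.
assert sorted(processed) == records
```

Because the claim (position, read, write back) happens entirely under the lock, no two workers can ever hand out the same key, which is what gives the exactly-once property.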
Re: Exclusive locks at the record level?
This mailing list archive is Copyright 1997-2020 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact