Well, as I said earlier:

Writing 10100 good records to a database + 1000 failed duplicates takes 40 seconds.
Writing 10100 good records to a database + 2000 failed duplicates takes 60 seconds.
from which we can infer that a failed write of a duplicate record costs 20 milliseconds, but a write-and-read-back of a good record only costs 2 milliseconds.
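Spelling that inference out (a quick back-of-the-envelope sketch in Python, using only the record counts and timings quoted above):

# Database method: two runs, same 10100 good records,
# differing only in the number of failed duplicate writes.
good = 10100
dups_a, secs_a = 1000, 40.0
dups_b, secs_b = 2000, 60.0

# The extra 1000 duplicates account for the extra 20 seconds.
ms_per_dup = (secs_b - secs_a) / (dups_b - dups_a) * 1000          # 20.0 ms per failed duplicate write
# Remove the duplicate cost from the first run to isolate the good-record cost.
ms_per_good = (secs_a - dups_a * ms_per_dup / 1000) / good * 1000  # ~2.0 ms per write-and-read-back

print(ms_per_dup, ms_per_good)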
Now, with the *USRIDX + *DTAQ method:

Writing 10100 good records to a data queue, after filtering out 2000 duplicates by writing everything to a *USRIDX first, takes 32 seconds.
Writing 10100 good records to a data queue, after filtering out 3000 duplicates by writing everything to a *USRIDX first, takes 37 seconds.
from which we can infer that catching a duplicate with a *USRIDX costs only 5 milliseconds, that the combination of a *USRIDX write, a *DTAQ write, and a *DTAQ read costs roughly the same 2 milliseconds as the database write, and also that the database might not be our worst bottleneck after all.
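The same arithmetic applied to the *USRIDX + *DTAQ timings quoted above (again just a sketch):

# *USRIDX + *DTAQ method: two runs, same 10100 good records,
# differing only in the number of duplicates filtered out.
good = 10100
dups_a, secs_a = 2000, 32.0
dups_b, secs_b = 3000, 37.0

ms_per_dup = (secs_b - secs_a) / (dups_b - dups_a) * 1000          # 5.0 ms per duplicate caught by the *USRIDX
ms_per_good = (secs_a - dups_a * ms_per_dup / 1000) / good * 1000  # ~2.2 ms per good record
# The ~2.2 ms covers the *USRIDX write + *DTAQ write + *DTAQ read,
# which is close to the ~2.0 ms per good record of the database method.

print(ms_per_dup, ms_per_good)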
-- JHHL