... performance and its myths.
Reading about DataQ performance compared to sockets, or DTAQ versus database
files, or database with or without journaling, or RLA versus SQL, you can
find very different recommendations, but no reference information or real
benchmarks. The world isn't as simple as many people think. The
performance bottleneck for data queues is on queue reads by many clients,
where the handover has to be synchronized. I don't know what's faster, data
queue synchronization or database synchronization, in my release, in the
current release, or in an upcoming release. But I know very well what's fast
enough for me.
We've used the central key file approach in a massively parallel load
process in a BI environment, and with some blocking (fetching 20 keys at
once and handing them out one by one within the job) we had unlimited
scalability. We adjusted the parallelism until all resources were used, to
get maximal throughput, and we reached about a million complex transactions
per hour, equivalent to about 30 million inserts and updates (most of them
inserts). BTW: all files were journaled and all jobs used commitment
control.
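The blocking idea above can be sketched in a few lines. This is a minimal
illustration in Python (not the original IBM i RPG/SQL setup), assuming a
hypothetical BlockedKeyDispenser standing in for the central key file: each
worker takes the lock once per block of 20 keys instead of once per key, so
contention on the shared dispenser drops by a factor of the block size while
each job still consumes its keys one by one.

```python
import threading

class BlockedKeyDispenser:
    """Stand-in for a central key file: hands out keys in blocks so
    that synchronized access happens once per block, not per key."""
    BLOCK_SIZE = 20

    def __init__(self, total_keys):
        self._lock = threading.Lock()
        self._next = 0
        self._total = total_keys

    def fetch_block(self):
        # One synchronized access covers BLOCK_SIZE keys.
        with self._lock:
            start = self._next
            end = min(start + self.BLOCK_SIZE, self._total)
            self._next = end
        return list(range(start, end))

def worker(dispenser, processed):
    while True:
        block = dispenser.fetch_block()
        if not block:          # dispenser exhausted, job ends
            return
        for key in block:      # keys are consumed one by one inside the job
            processed.append(key)   # placeholder for the real load work

dispenser = BlockedKeyDispenser(total_keys=1000)
processed = []
threads = [threading.Thread(target=worker, args=(dispenser, processed))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Every key is handed out exactly once across all parallel jobs.
assert sorted(processed) == list(range(1000))
```

Because the lock is held only long enough to advance a counter, adding more
workers scales until the processing itself saturates the machine, which
matches the "adjust parallelism until all resources are used" approach
described above.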
This mailing list archive is Copyright 1997-2019 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact