On Thu, Sep 5, 2013 at 12:55 PM, Nathan Andelin <nandelin@xxxxxxxxx> wrote:
> > Again, nits, but how much 'big data' involves creating a million row table?
> Valid point. My benchmark was patterned after one that was referenced
> in a document at http://www.memsql.com/ which claims to have the
> "world's fastest database". They were achieving 200,000 row inserts per
> second under Linux using 8 cores and 320 GB RAM.
> I achieved 500,000 row inserts per second under IBM i using 2 cores
> and 32 GB RAM.
Interesting. I'm not pro or con IBM here, but I just wanted to
mention that MemSQL doesn't claim to have the world's fastest row
insertion. I'm not saying MemSQL is fast or slow, but what it's
really designed to do is (1) live entirely in memory and (2) be as
scalable as possible, in particular taking advantage of as many cores
as are available.
Also, the larger goal for MemSQL is specifically to do fast
*analytics*. I'm not completely sure what that means in practice, but
it could well be a different usage pattern from what typical business
databases endure on a daily basis.
If you look at MemSQL's own "performance estimator" on their site,
you see that even as cores increase dramatically, insert speed
doesn't seem to increase very much. But "scan" speed (supposedly)
increases linearly with cores.
And the memory doesn't have any effect at all (in that estimator) on
any kind of performance. Presumably, that just limits the amount of
data you can hold.
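That pattern (scans scaling with cores, inserts plateauing) is what you'd expect if scans parallelize almost perfectly while inserts spend much of their time in work that can't be split across cores (locking, log writes, and the like). A rough sketch using Amdahl's law makes the shape of those curves visible; note the serial fractions below are made-up illustrative numbers, not anything measured from MemSQL:

```python
def amdahl_speedup(cores, serial_fraction):
    """Amdahl's law: the speedup attainable with `cores` workers when
    `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Hypothetical workloads (fractions are illustrative assumptions):
# a scan is nearly embarrassingly parallel; inserts serialize on
# shared structures, so a large fraction of their work stays serial.
for cores in (1, 2, 4, 8, 16, 32):
    scan = amdahl_speedup(cores, serial_fraction=0.02)
    insert = amdahl_speedup(cores, serial_fraction=0.60)
    print(f"{cores:2d} cores: scan x{scan:5.1f}, insert x{insert:4.2f}")
```

With those assumed fractions, scan speedup tracks the core count closely, while insert speedup flattens out well below 2x no matter how many cores you add, which is roughly the behavior the estimator shows.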