From: John Yeung
(1) live entirely in memory
It's marketing; it gives them a point of distinction from traditional DBMS products. They do admit, however, that the size of a MemSQL database is constrained by the amount of RAM on the server. That should limit their market, in-memory performance notwithstanding.
(2) be as scalable as possible, in particular taking advantage of as many
cores as are available.
That's a valid point, one that addresses Big Data.
Also, the larger goal for MemSQL is specifically to do fast
Other vendors use the term "business intelligence", but it takes on new meaning and it's more of a challenge when you're processing billions of records.
If you look at MemSQL's own "performance estimator" ...
I'm glad you posted a reference to that. It's an interesting tool that gets people thinking about what it might take to work with billions of records. Do we have anyone interested (besides me) in writing programs to benchmark "inserts" and "scans" against traditional databases such as MS SQL Server, IBM i DB, or MySQL?
Say someone writes a benchmark to insert rows into a table and it tops out at a rate of, say, 1K rows per second, while MemSQL claims 210K rows per second. Then do the same for "scans" or "reads". What if your throughput rates are, say, 200 to 2,000 times slower? How might you prepare for Big Data?
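For anyone who wants to try it, here's a minimal sketch of such a benchmark in Python. SQLite (from the standard library) stands in for a traditional DBMS here just so the sketch is self-contained; to benchmark MS SQL Server, DB2, or MySQL you'd swap the connection for the appropriate driver. The table name and row counts are arbitrary, and real numbers will vary wildly with hardware, batching, and commit strategy.

```python
# Hypothetical insert/scan throughput benchmark. SQLite stands in for a
# traditional DBMS; replace the connection with your own driver to test a
# real server. Numbers depend heavily on batching and commit strategy.
import sqlite3
import time

def bench_inserts(conn, n_rows=10_000):
    """Insert n_rows one at a time; return rows per second."""
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS bench (id INTEGER, payload TEXT)")
    start = time.perf_counter()
    for i in range(n_rows):
        cur.execute("INSERT INTO bench VALUES (?, ?)", (i, f"row-{i}"))
    conn.commit()  # one commit at the end; per-row commits would be far slower
    elapsed = time.perf_counter() - start
    return n_rows / elapsed

def bench_scan(conn):
    """Full-table scan; return (row_count, elapsed_seconds)."""
    start = time.perf_counter()
    count = conn.execute("SELECT COUNT(*) FROM bench").fetchone()[0]
    return count, time.perf_counter() - start

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    rate = bench_inserts(conn, 10_000)
    rows, secs = bench_scan(conn)
    print(f"inserts: {rate:,.0f} rows/sec; scanned {rows:,} rows in {secs:.4f}s")
```

Dividing your measured rate into a vendor's claimed rate gives the kind of multiplier discussed above; the interesting part is running the same script against several engines on the same hardware.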