


The company my brother works for uses MySQL as its primary DB. They now have 32 "shards" - that is, the database is broken into 32 pieces, each on a dual-processor, 4-core pizza-box server with 8 SSDs and 32GB of memory. And of course they have two of these setups, replicated with a 90-second delay, so mistakes caught quickly can be stopped from reaching the backup server. (Yes, this has happened!)
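For readers unfamiliar with sharding, the core idea is just deterministic routing: every record key maps to exactly one of the N database pieces, so reads and writes for that key always hit the same server. A minimal sketch in Python (the function name, key format, and use of MD5 are my assumptions for illustration, not details of their actual setup):

```python
# Hypothetical shard-routing sketch: map a key to one of 32 shards.
# Any stable hash works; MD5 is used here only because it spreads
# keys evenly and is available in the standard library.
import hashlib

NUM_SHARDS = 32

def shard_for(key: str) -> int:
    """Return the shard number (0..NUM_SHARDS-1) that owns this key."""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

# Every read/write for a given key goes to the same shard:
shard = shard_for("customer:12345")
print(f"customer:12345 lives on shard {shard}")
```

The same function runs on every application server, so no central lookup table is needed; the cost is that cross-shard queries and resharding (changing NUM_SHARDS) become hard problems.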

Their total DB size isn't that large, but their I/O load (mostly reads) is nearly insane.

We did some rough calculations suggesting they could do this with DB2 on i on a Power7+ 750 with 512GB of memory, 32 Power7+ cores, and one of those cool SSD-only drawers. However, management is "comfortable" with their DB spread all over the data center like runny peanut butter. :-)


- Larry "DrFranken" Bolhuis

www.frankeni.com
www.iDevCloud.com
www.iInTheCloud.com

On 9/5/2013 9:32 AM, Jim Oberholtzer wrote:

The Teraplex center (I may have the name a bit off) at IBM regularly
dealt with multi-billion-row tables. Not only can DB2 handle it, the
Power Systems running DB2 (either IBM i or AIX) can as well. Now, once
you get into the finer details you might find that other tooling is
better than DB2, but for most workloads it's very viable.

I have multiple customers with tables of 100,000,000+ rows, and aside
from backup/recovery we don't even notice the tables are that big.
Creating an index requires some thought as well, but that's all in a
day's work.

Jim Oberholtzer
Chief Technical Architect
Agile Technology Architects


On 9/4/2013 11:32 PM, Nathan Andelin wrote:
Let me admit up front that this message is something of a teaser.

While experimenting to better understand how IBM i might handle "big data", I inserted 130,000,000 rows into a table using an RPG program; it completed in 260 seconds of elapsed time on a 2-core model 720 (P05 group).

When I shared the results in a LinkedIn discussion, a couple of antagonists expressed doubt about my numbers. They didn't believe one could insert 500,000 rows per second (130,000,000 rows / 260 seconds), even using a model 795, and they didn't view IBM i on Power as a viable platform for hosting and handling "big data".

Say you have a need to record millions or billions of records quickly and read them quickly. How would you do it?

-Nathan.
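[Editor's note: the usual answer to Nathan's question, on any platform, is blocked/batched inserts - writing rows in large blocks rather than one statement per row. A minimal, hypothetical sketch using SQLite in memory (not DB2 on i, and not the RPG program described above; row counts are scaled down for illustration):]

```python
# Hypothetical batched-insert benchmark sketch. The point is the
# pattern (executemany over blocks inside one transaction), not the
# absolute numbers, which depend entirely on hardware and engine.
import sqlite3
import time

ROWS = 100_000   # scaled down from the 130,000,000 in the test above
BATCH = 10_000   # insert in blocks rather than row by row

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER, val TEXT)")

start = time.perf_counter()
for base in range(0, ROWS, BATCH):
    conn.executemany(
        "INSERT INTO t VALUES (?, ?)",
        ((i, "x") for i in range(base, base + BATCH)),
    )
conn.commit()  # one commit at the end; per-row commits would dominate the cost
elapsed = time.perf_counter() - start

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(f"{count} rows in {elapsed:.2f}s ({count / elapsed:,.0f} rows/sec)")
```

The same two levers (blocking and deferred commits) are what RPG's blocked output and DB2's journaling options control on IBM i.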



This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
