I have some experience with a BI implementation on AS/400 (I was the architect and coached the team during implementation), with single tables holding more than 999 999 999 records.
Inserting records at such a rate (500 000 per second) might be possible, but it's not the typical speed. It's done with blocked inserts, and indexes would slow it down considerably. With SQL bulk scripts pushing data from one table to another, we moved about 20 000 000 records in 5 minutes. All indexes on the target table (except the primary key) were switched to delayed maintenance beforehand; the parallel index rebuild afterwards took more than 2 hours, but this still outperformed inserting with synchronous index maintenance.
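The pattern above, defer index maintenance, bulk-load, rebuild, can be sketched like this. SQLite stands in for DB2 here, and the table/index names are made up; on the AS/400 the switch was done via delayed access-path maintenance rather than DROP/CREATE INDEX, but the cost trade-off is the same.

```python
# Sketch of "defer index maintenance, bulk-load, rebuild afterwards".
# SQLite is a stand-in for DB2 on AS/400; all names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE staging (id INTEGER PRIMARY KEY, val TEXT)")
cur.execute("CREATE TABLE target  (id INTEGER PRIMARY KEY, val TEXT)")
cur.executemany("INSERT INTO staging VALUES (?, ?)",
                [(i, f"row-{i}") for i in range(100_000)])

# Secondary index on the target; with synchronous maintenance every
# inserted row would also have to update this index.
cur.execute("CREATE INDEX ix_target_val ON target (val)")

# 1. Take the secondary index out of the way (stand-in for delayed
#    maintenance on the AS/400).
cur.execute("DROP INDEX ix_target_val")

# 2. Bulk-copy without per-row index updates.
cur.execute("INSERT INTO target SELECT * FROM staging")

# 3. Rebuild the index in one pass afterwards (on the AS/400 these
#    rebuilds ran in parallel); one bulk sort/build is far cheaper
#    than 100 000 individual index updates.
cur.execute("CREATE INDEX ix_target_val ON target (val)")
con.commit()

print(cur.execute("SELECT COUNT(*) FROM target").fetchone()[0])  # → 100000
```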
Nevertheless, DB2 on AS/400 could handle very large amounts of data and scaled nearly linearly up to 500 000 000 records. The slowest operation was deleting many rows from a table with several indexes. One problem with massive update operations was that a single stream of control (one exclusive job) could not use enough system resources.
When running multiple concurrent jobs that update the same table, lock conflicts become a problem: SQL bulk operations (normally the fastest way) could die in the middle of their work. We journaled all tables and used commitment control for all transactions. In this environment, you get maximal throughput through parallelisation. Running 20 jobs in parallel, we achieved 2 000 complex transactions per second, each transaction comprising about 30 updates and maybe 20 read operations.