In our shop it is common to work with files containing anywhere from tens of millions to hundreds of millions of records. Our SQL statements routinely return more than 32,766 records. I don't think it is uncommon to exceed this limit. It really isn't that large.
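Just to illustrate the kind of thing we do (this is a sketch only, not code from this thread -- the SALESHIST table, its columns, and the 10,000-row block size are made-up examples), one way to work through a result set far bigger than any single-fetch limit is to loop on a blocked FETCH in embedded SQL RPG and process the rows a chunk at a time:

    **free
    dcl-ds row_t qualified template;
      custNo packed(9:0);
      amount packed(11:2);
    end-ds;

    dcl-ds rows likeds(row_t) dim(10000);   // one block of rows
    dcl-s  fetched int(10);

    exec sql declare bigCsr cursor for
      select custNo, amount
        from saleshist;

    exec sql open bigCsr;

    dow 1 = 1;
      // blocked fetch: pull up to 10,000 rows per call
      exec sql fetch next from bigCsr for 10000 rows into :rows;

      if sqlcode < 0;          // hard error: bail out
        leave;
      endif;

      fetched = sqlerrd(3);    // rows actually returned in this block
      if fetched = 0;
        leave;                 // nothing left to read
      endif;

      // process rows(1) .. rows(fetched) here

      if sqlcode = 100;        // short block: end of data reached
        leave;
      endif;
    enddo;

    exec sql close bigCsr;
    *inlr = *on;

SQLERRD(3) reports how many rows actually came back on each fetch, so the final, short block still gets processed before the loop ends.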
I guess my struggle is understanding why a program would end up processing that many records. I know that databases routinely contain many more records than that, but why, with SQL, do you end up processing so many? With native file I/O it could make sense, because you have to read records in to see their values (assuming I am not using OPNQRYF), but with SQL I select only what I need. I'm not disputing what you are saying, just wondering why you would need to process that many records.