Maybe try a statement that joins from fileFIX to filePROD on only those rows that match fileFIX.
If you are lucky, that join will use a single filePROD column which is also a unique key for filePROD.
That statement could then become a CTE, and a join from filePROD to the CTE would return
the desired values for the 100 matching rows.
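A sketch of that approach, assuming docid is the unique key in both files (names are placeholders for your real columns):

```sql
-- Hypothetical sketch: drive from the small fileFIX to find the
-- matching keys, then join filePROD back to that result.
with matched as (
    select f.docid
    from fileFIX f
    join filePROD p
      on p.docid = f.docid   -- should probe filePROD's unique-key index
)
select p.*
from filePROD p
join matched m
  on m.docid = p.docid;
```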
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Stone, Joel
Sent: Thursday, August 13, 2015 10:47 AM
To: 'Midrange Systems Technical Discussion'
Subject: SQL: how to speed up join tiny file to giant file
I have a filePROD with 10 million rows, and a fileFIX with 100 rows (both files have identical columns).
I want to join the two files, and update filePROD with fields from fileFIX where the primary key matches.
Unfortunately SQL on v5r4 reads thru the entire 10 million records to locate the 100 to join on.
Is it possible to convince SQL to read sequentially thru the tiny fileFIX with 100 records (instead of the giant file)?
The following SQL ran for an hour - if I could force it to read the fileFIX first, it should only take seconds.
Thanks in advance!
select * from filePROD
where exists
(select docid from fileFIX
where filePROD.docid = fileFIX.docid)
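For the update itself, one common pattern is a correlated subselect restricted by EXISTS; column names here (colA, colB) are hypothetical stand-ins for the real ones:

```sql
-- Hypothetical columns colA and colB; docid assumed to be the
-- matching unique key in both files.
update filePROD p
set (colA, colB) = (select f.colA, f.colB
                    from fileFIX f
                    where f.docid = p.docid)
where exists (select 1
              from fileFIX f
              where f.docid = p.docid);
```

Whether the optimizer drives this from fileFIX still depends on an index over filePROD.docid being available and chosen.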
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list.
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options, or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at http://archive.midrange.com/midrange-l