You could try
select * from filePROD
where docid in (select docid from fileFIX)
Not sure if it will make any difference, though. The optimizer should generally choose the most efficient way to query the files.
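One thing worth checking before rewriting the query: if filePROD has no index over docid, the optimizer has little choice but a full table scan no matter how the predicate is phrased. A minimal sketch (the index name is hypothetical) that would give it a keyed path to probe for the 100 matching rows:

```sql
-- Hypothetical index name; with an index over docid the optimizer
-- can probe roughly 100 keys from fileFIX instead of scanning
-- all 10 million rows of filePROD.
CREATE INDEX filePROD_docid_ix ON filePROD (docid);
```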
Mark Murphy
STAR BASE Consulting, Inc.
mmurphy@xxxxxxxxxxxxxxx
-----"Stone, Joel" <Joel.Stone@xxxxxxxxxx> wrote: -----
To: "'Midrange Systems Technical Discussion'" <midrange-l@xxxxxxxxxxxx>
From: "Stone, Joel" <Joel.Stone@xxxxxxxxxx>
Date: 08/13/2015 12:47PM
Subject: SQL: how to speed up join tiny file to giant file
I have filePROD with 10 million rows and fileFIX with 100 rows (both files have identical columns).
I want to join the two files and update filePROD with fields from fileFIX where the primary key matches.
Unfortunately, SQL on V5R4 reads through the entire 10 million records to locate the 100 to join on.
Is it possible to convince SQL to read sequentially through the tiny fileFIX with 100 records (instead of the giant file)?
The following SQL ran for an hour; if I could force it to read fileFIX first, it should only take seconds.
Any ideas?
Thanks in advance!
select * from filePROD
where exists
(select docid from fileFIX
where filePROD.docid = fileFIX.docid)
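Since the stated goal is an update rather than a select, a hedged sketch of the correlated UPDATE form (col1 and col2 are placeholders for the actual fields to copy; syntax per the DB2 for i SQL reference):

```sql
-- col1/col2 are hypothetical stand-ins for the real columns to copy
-- from fileFIX into filePROD where the primary key matches.
UPDATE filePROD p
   SET (col1, col2) = (SELECT f.col1, f.col2
                         FROM fileFIX f
                        WHERE f.docid = p.docid)
 WHERE p.docid IN (SELECT docid FROM fileFIX);
```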