


Um, no, I meant to say that SQL performs better when a primary/unique key exists so that a JOIN will be "fast".
In some cases you may find that creating a new "key" or "index" may be justified . . .
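If an index is the route taken, a minimal sketch, assuming the join column is DOCID (the column used in the query later in this thread) and that DOCID is unique in filePROD (drop UNIQUE if it is not):

```sql
-- Hypothetical sketch: index the join column so the optimizer can
-- probe filePROD by key instead of scanning all 10 million rows.
CREATE UNIQUE INDEX filePROD_docid_ix
    ON filePROD (docid);
```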

-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Stone, Joel
Sent: Thursday, August 13, 2015 11:45 AM
To: 'Midrange Systems Technical Discussion'
Subject: RE: how to speed up join tiny file to giant file

Are you thinking that if I simply create fileFIX with only one column, the primary key of filePROD, then the join may speed up by 1000 times?


-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Gary Thompson
Sent: Thursday, August 13, 2015 11:58 AM
To: Midrange Systems Technical Discussion <midrange-l@xxxxxxxxxxxx>
Subject: RE: how to speed up join tiny file to giant file


Maybe try writing a statement that joins from fileFIX to filePROD, selecting only those rows that match fileFIX.

If you are lucky, that join can use a single filePROD column which is also a unique key for filePROD.

That SQL statement could then become a CTE, and the target of a join from filePROD to the CTE, returning the desired 100 column values.
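A rough sketch of that idea, assuming the shared key column is DOCID (the column used in the query later in the thread):

```sql
-- Hypothetical sketch: first narrow to the 100 keys in fileFIX,
-- then join filePROD to the CTE for the full column list.
WITH fixkeys AS (
    SELECT f.docid
      FROM fileFIX f
)
SELECT p.*
  FROM filePROD p
  JOIN fixkeys k
    ON p.docid = k.docid;
```

Whether the optimizer actually drives from the small side still depends on the indexes available on filePROD.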


-----Original Message-----
From: MIDRANGE-L [mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of Stone, Joel
Sent: Thursday, August 13, 2015 10:47 AM
To: 'Midrange Systems Technical Discussion'
Subject: SQL: how to speed up join tiny file to giant file

I have a filePROD with 10 million rows, and a fileFIX with 100 rows (both files have identical columns).

I want to join the two files, and update filePROD with fields from fileFIX where the primary key matches.
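The update being described could look something like this sketch, assuming the key is DOCID (as in the SELECT below) and with hypothetical columns COL1 and COL2 standing in for the real field list:

```sql
-- Hypothetical sketch: correlated UPDATE keyed on DOCID.
-- COL1/COL2 are placeholders for the actual columns to copy.
UPDATE filePROD p
   SET (col1, col2) =
       (SELECT f.col1, f.col2
          FROM fileFIX f
         WHERE f.docid = p.docid)
 WHERE EXISTS
       (SELECT 1
          FROM fileFIX f
         WHERE f.docid = p.docid);
```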


Unfortunately SQL on v5r4 reads thru the entire 10 million records to locate the 100 to join on.

Is it possible to convince SQL to read sequentially thru the tiny fileFIX with 100 records (instead of the giant file)?


The following SQL ran for an hour - if I could force it to read the fileFIX first, it should only take seconds.

Any ideas?

Thanks in advance!



select * from filePROD
where exists
    (select docid from fileFIX
      where filePROD.docid = fileFIX.docid)
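One rewrite worth trying, offered only as a sketch: express the same filter as an inner join, which can make it easier for the optimizer to drive from the 100-row table. This returns the same rows as the EXISTS form only if DOCID is unique in fileFIX:

```sql
-- Hypothetical sketch: same filter expressed as an inner join,
-- listing the tiny table first.
SELECT p.*
  FROM fileFIX f
  JOIN filePROD p
    ON p.docid = f.docid;
```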




--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list.
To post a message, email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives at http://archive.midrange.com/midrange-l.



This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
