Rob:

Are there any other fields in the file that make each record unique?

The following is a routine I use to remove duplicate address records from a
table, but I have unique numbers in the file that allow me to do it this way.
I'm not sure whether this is any faster, but maybe you can alter the DELETE
to a SELECT to fit your needs:

C/Exec SQL
C+   -- Delete each row for which a matching address row with a
C+   -- lower MLAC1/MLAC2 key exists, keeping the lowest-keyed row
C+   DELETE
C+   FROM   MLPML A
C+   WHERE  EXISTS
C+    (SELECT 1
C+     FROM   MLPML B
C+     WHERE  A.MLCUNM = B.MLCUNM and
C+            A.MLCAD1 = B.MLCAD1 and
C+            A.MLCAD2 = B.MLCAD2 and
C+            A.MLCAD3 = B.MLCAD3 and
C+            A.MLNAME = B.MLNAME and
C+            A.MLDPCD = B.MLDPCD and
C+           (A.MLAC1 CONCAT A.MLAC2) > (B.MLAC1 CONCAT B.MLAC2))
C/End-Exec
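
For reference, the same duplicate groups can be listed before deleting
anything, assuming a DB2 for i release with the OLAP functions (V5R4 or
later). This is only a sketch built from the column names in the routine
above; DUPSEQ is an invented label:

SELECT MLCUNM, MLNAME, MLAC1 CONCAT MLAC2 AS MLKEY,
       ROW_NUMBER() OVER (PARTITION BY MLCUNM, MLCAD1, MLCAD2,
                                       MLCAD3, MLNAME, MLDPCD
                          ORDER BY MLAC1 CONCAT MLAC2) AS DUPSEQ
FROM MLPML

Every row with DUPSEQ greater than 1 is one the DELETE above would remove;
the lowest-keyed row in each group survives.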

David J. Smith
IT Contractor
dsmith@xxxxxxxxx
517.546.5400 Ext: 1513


date:   Tue, 6 Jan 2004 11:05:24 -0500
from:   rob@xxxxxxxxx
subject:        Using SQL to check for duplicate records

Is there a more efficient way, using SQL, to check for duplicate records 
than the following?
SELECT A.IPROD, RRN(A) FROM IIM A
WHERE A.IPROD IN ( 
SELECT B.IPROD FROM IIM B 
GROUP BY B.IPROD 
HAVING COUNT(*) > 1) 

It works fine on a 3-row dataset with two of the rows being duplicates,
or on a 3-row dataset with none of the rows being duplicates.  However,
on a table with 14,036 rows and no duplicates, it takes forever.


Rob Berendt
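
Two changes usually help with this pattern; the statements below are only
a sketch, assuming IPROD is the lone column being compared and that an
index can be created (IIM_IPROD is an invented name). An index over IPROD
lets the GROUP BY run off the index instead of scanning the whole table,
and rewriting the IN predicate as a join to the grouped set gives the
optimizer more room:

CREATE INDEX IIM_IPROD ON IIM (IPROD)

SELECT A.IPROD, RRN(A)
FROM   IIM A
JOIN  (SELECT IPROD
       FROM   IIM
       GROUP BY IPROD
       HAVING COUNT(*) > 1) B
  ON   A.IPROD = B.IPROD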


