


Thanks Vern.

I can't recall who suggested it, but the method was to use RRN. The article
provides the details: a SELECT (or DELETE) with a subselect. The file was
pretty small, so I potentially could have fixed it visually. But
that bothered me, as it was too easy to mess up. Anyway, I learned
two methods that work nicely, and have an article which - had I
imagined it existed - I could likely have found with Google. Maybe not.
It takes the right search terms and a bit of luck sometimes.
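[For the record, the RRN approach described above can be sketched roughly as follows. This is a hedged sketch, not the article's exact statement: it assumes the account-number column is named ACCTNO (substitute the real column name) and that each account number should appear exactly once. RRN() is the Db2 for i scalar function returning a row's relative record number.]

```sql
-- Keep the copy of each account's record with the lowest relative
-- record number; delete every later duplicate.
-- ACCTNO is an assumed column name - substitute your real key column.
DELETE FROM yourdtalib/yourtable a
WHERE RRN(a) > (SELECT MIN(RRN(b))
                FROM yourdtalib/yourtable b
                WHERE b.acctno = a.acctno)
```

[Running the same predicate as a SELECT first, and taking a CPYF copy of the file before the DELETE, is a cheap safety net.]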

This group is great.

John McKee

On Sat, Jan 21, 2012 at 12:16 AM, Vern Hamberg <vhamberg@xxxxxxxxxxx> wrote:
John

Maybe the article Alan pointed you to uses this technique - I wasn't
able to open it, so please forgive any redundancy.

QM queries can be directed to an *OUTFILE - and the OUTFILE can be the
file that was queried. This is potentially very dangerous - imagine
replacing a million-record table with one summary record!!

But it is ideal for eliminating duplicate records.

CRTSRCPF YOURLIB/QQMQRYSRC RCDLEN(91)

Add a member named DLTDUP.

Put in the statement others have shown you.

     select distinct * from yourdtalib/yourtable

Save the member and run CRTQMQRY over that source member.

Execute the query with

     strqmqry yourlib/dltdup output(*outfile) outfile(yourdtalib/yourtable)

Of course, make a copy of yourtable first - and others have asked about
whether the records are unique from start to end - but this makes for a
pretty cool tool. You can even put in substitution variables and wrap a
command around it, to specify different tables over which to run the
query - come to my session at COMMON this year for more info!
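[The substitution-variable version Vern mentions might look like this. A sketch only: LIB and TABLE are variable names chosen here for illustration, not anything from the thread.]

```sql
-- DLTDUP source member, parameterized with QM substitution
-- variables (&LIB and &TABLE are names picked for this sketch)
select distinct * from &LIB/&TABLE
```

[A run over a particular table would then supply the values on the SETVAR parameter, while OUTFILE still names the target file directly:

     strqmqry yourlib/dltdup output(*outfile) outfile(yourdtalib/yourtable) setvar((lib yourdtalib) (table yourtable))

Wrapping that STRQMQRY in a small CL command with LIB/TABLE prompts gives the reusable tool Vern describes.]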

HTH
Vern

On 1/20/2012 8:44 AM, John McKee wrote:
During testing, I was real careful to clear this work file before each
run.  Managed to have one of "those" moments and failed to clear it
prior to a production run.  Result is that every record that should be
in the file once is in the file twice.

Is there a way to remove duplicate records with SQL?  They are
completely identical.  However, if it makes the process any simpler,
the account number is guaranteed to not be used twice.  Now, of
course, had that field been described with DDS keyword UNIQUE, there
wouldn't be an issue.

I can see possible ways to fix it, but would rather not have to wonder
if I actually made it worse.

Thanks,

John McKee
--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.




This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
