


John

In this case, you can get your single, non-duplicated records with this statement:


SELECT FLD1, FLD2, FLD3, MIN(LOGNUM)
FROM YOURLIB/YOURFILE
GROUP BY FLD1, FLD2, FLD3


I didn't know what your real column names are, so I made them up. That's a very simple statement, and you can use it with a couple of the techniques offered here. The only issue I can think of is that LOGNUM might appear in records in some other file that are related to this one. But I addressed that, if I recall - it was early this morning.
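As a rough sketch only, using the same made-up names (WORKFILE is just a placeholder for whatever work file you choose, and this assumes the file has only these four columns), the copy-out-and-copy-back approach might look something like this:

-- Build a work file with one row per FLD1/FLD2/FLD3 group,
-- keeping the lowest log number in each group
CREATE TABLE YOURLIB/WORKFILE AS
(SELECT FLD1, FLD2, FLD3, MIN(LOGNUM) AS LOGNUM
FROM YOURLIB/YOURFILE
GROUP BY FLD1, FLD2, FLD3)
WITH DATA

-- Clear the original, then copy the de-duplicated rows back
DELETE FROM YOURLIB/YOURFILE

INSERT INTO YOURLIB/YOURFILE
SELECT FLD1, FLD2, FLD3, LOGNUM
FROM YOURLIB/WORKFILE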

HTH
Vern

jmmckee wrote:
The log number makes the rows unique. Other than that, the rows are repeated three times.
John
-----Original message-----
From: Vern Hamberg vhamberg@xxxxxxxxxxx
Date: Thu, 24 Sep 2009 08:08:42 -0500
To: Midrange Systems Technical Discussion midrange-l@xxxxxxxxxxxx
Subject: Re: Remove duplicate records

John

Is the entire row duplicated, including the log number? Or is the log number the only thing that makes each row unique - in other words, is every field duplicated except the log number?

If that's the case, you can write a SELECT that has no duplicates as you define them - say you have fields FLD1, FLD2, and LOGNUM, where FLD1 and FLD2 are duplicated and LOGNUM makes each row distinct, and you want to eliminate the duplicate FLD1/FLD2 combinations.

SELECT FLD1, FLD2, MIN(LOGNUM)
FROM YOURLIB/YOURFILE
GROUP BY FLD1, FLD2

Basically, you GROUP BY the fields you want to be unique and apply an aggregate function to the remaining fields. In this case, you could use either MIN or MAX.

With this, you can either use the QM solution or one of the solutions that has you put the result into a different file, then either rename it or copy it back into the original.
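For example, a rough sketch of the create-a-different-file step, with the same made-up names (DEDUP is just a placeholder), might be:

CREATE TABLE YOURLIB/DEDUP AS
(SELECT FLD1, FLD2, MIN(LOGNUM) AS LOGNUM
FROM YOURLIB/YOURFILE
GROUP BY FLD1, FLD2)
WITH DATA

-- Then either rename DEDUP over the original or copy its rows back,
-- e.g. with CPYF ... MBROPT(*REPLACE) FMTOPT(*MAP *DROP)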

If this fits, it's done in one swell foop. No programming needed. Of course, your situation may be more complicated.

Is the log number related to any other table? Then, after eliminating the duplicates, you might need something like

DELETE FROM THATFILE
WHERE THATFILE.LOGNUM NOT IN (SELECT LOGNUM FROM YOURLIB/YOURFILE)

Enjoy!!

HTH
Vern

jmmckee wrote:
The file has a unique key. The last field is a log number. That permits duplicates, since the program that updates this file lets multiple users write records at the same time.
I just wrote an RPG IV program. For testing, I only wanted to print the records. I got a 17-page report that "cannot be displayed, copied, or sent". The reason code is 2.

Tomorrow is another day.....

John McKee


-----Original message-----
From: "Evan Harris" spanner@xxxxxxxxxx
Date: Wed, 23 Sep 2009 16:02:11 -0500
To: "'Midrange Systems Technical Discussion'" midrange-l@xxxxxxxxxxxx
Subject: RE: Remove duplicate records

Hi John

As opposed to the uncertainty of a one-shot SQL statement? It seems to me that
you should fix it using the toolset you are most confident and comfortable
with, if re-use is not a factor, and not worry about whether it's SQL or RPG.

For what it's worth, my first thought would be to restore the file and then
re-run the process, if that is practical; that way you have a known starting
point and a proven update process.
It would be better if the file were designed in such a way that you couldn't
create duplicates, so that might be an area worthy of further exploration.

Regards
Evan Harris


-----Original Message-----
From: midrange-l-bounces@xxxxxxxxxxxx
[mailto:midrange-l-bounces@xxxxxxxxxxxx] On Behalf Of jmmckee
Sent: Thursday, 24 September 2009 6:13 a.m.
To: Midrange Systems Technical Discussion
Subject: Re: Remove duplicate records

I think I see the logic to do this with a relatively simple RPG program. I
have managed to create a base file that has ACCT, CYC#, DATE, TIME, and
poster id. It seems like all I have to do is read a record from this file,
chain to the multiformat logical, check which record format was read, check
the poster id (not in an index) and, if it matches, delete the record. Seems
too simple. Then, when done, repost the original records.

I had wondered if SQL could do that, to remove the uncertainty of a one-shot
program.
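A rough, untested sketch of what such a one-shot statement might look like - using the made-up names FLD1, FLD2, FLD3 and LOGNUM from elsewhere in this thread for the duplicated fields and the log number, and worth proving out on a copy of the file first - is a correlated DELETE that keeps the lowest log number in each group:

-- Delete every row that is not the lowest log number in its group
DELETE FROM YOURLIB/YOURFILE A
WHERE A.LOGNUM > (SELECT MIN(B.LOGNUM)
FROM YOURLIB/YOURFILE B
WHERE B.FLD1 = A.FLD1
AND B.FLD2 = A.FLD2
AND B.FLD3 = A.FLD3)

That keeps exactly one row per FLD1/FLD2/FLD3 combination - the one with the smallest log number.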

John McKee



--
This is the Midrange Systems Technical Discussion (MIDRANGE-L) mailing list
To post a message email: MIDRANGE-L@xxxxxxxxxxxx
To subscribe, unsubscribe, or change list options,
visit: http://lists.midrange.com/mailman/listinfo/midrange-l
or email: MIDRANGE-L-request@xxxxxxxxxxxx
Before posting, please take a moment to review the archives
at http://archive.midrange.com/midrange-l.


