


On 09-Dec-2013 10:03 -0800, Koester, Michael wrote:
<<SNIP>>

No mention of which row to choose if there are multiple
equidistant values found. Or maybe there is a unique index on
ColumnA?

Forgive if I wasn't clear, but I thought in my original post,
... If there were two rows where the difference from the target
value is the same (like if row 7 was added where ColumnA was
1560), I'd prefer the larger ColumnA value.

... covered the equidistant values scenario (where 1580 was
equidistant between 1560 (row# 7) and 1600 (row# 5)). I did not
mention whether the values would be unique. There is no index to
ensure that they would be unique.

Apologies. Rather than /multiple/ I perhaps should have used the word /duplicate/, because I meant to imply something _beyond the simple case_ of having just two equidistant values in what was presented. In my re-reading before posting, I must have thought the intention was sufficiently clarified by the supporting context ["if not unique, then there could be many possible matches"], by the sample data [i.e. multiple rows with 1876], and by the expected results for the test cases referencing the duplicate values of columnA=1876 [i.e. examples showing where more than two Eligible Rows might be possible].

My intention was to allude to the consideration of, for example: if there are two rows with the value 1600, and just the one row with 1580, and the search criteria is 1580, then *which row* of those multiple [err... duplicate] rows with the value 1600 is preferred?

For the purpose of simplifying my question, I presented a table that
was stripped down to something much more basic than the actual table
I'll be applying this to. In that, I did not consider the efficiency
of the solution, because (unstated in my post) I would have the
potential rows in the set to be searched limited to a dozen or so by
other criteria.

The solution suggested by Eric, ...
SELECT my.*, abs( DBVAL - :TestVal ) as delta
FROM MYILE my
ORDER BY delta
fetch first row only
... handled the small set of rows quite elegantly.

With the "ORDER BY delta, DBVAL desc" modification, I was able to
ensure the desired larger value would be returned when two rows had
ColumnA values equidistant from the target value. And the "fetch
first row only" takes care of duplicate values of ColumnA.

Except that which row is selected by the FETCH FIRST 1 ROW ONLY may still be unpredictable when there are duplicate /larger/ values [or duplicate /smaller/ values with no larger value], i.e. when multiple rows share the same matching /delta/ from the input value.

Translated back to my first post, that solution is:

select a.*, abs(columnA - 1580) as delta
from mkoester/mkjunk a
order by delta, columnA desc
fetch first row only;

From the data...
ROW# COLUMNA
1 2,016
2 1,527
3 1,874
4 4,769
5 1,600
6 158
7 1,560
...the result is 1600 from Row 5, as desired.
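That result can be checked with a quick sketch in Python, using SQLite as a stand-in for DB2 for i [assumptions: FETCH FIRST 1 ROW ONLY becomes LIMIT 1 in SQLite, and the ROW# column is renamed rownum, since "#" is not portable in an identifier]:

```python
# Reproduce the sample table and the nearest-value query in SQLite.
# sqlite3 is a stand-in for DB2 for i; table/column names are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table mkjunk (rownum integer, columnA integer)")
conn.executemany(
    "insert into mkjunk values (?, ?)",
    [(1, 2016), (2, 1527), (3, 1874), (4, 4769),
     (5, 1600), (6, 158), (7, 1560)],
)

target = 1580
row = conn.execute(
    """select a.*, abs(columnA - ?) as delta
       from mkjunk a
       order by delta, columnA desc
       limit 1""",
    (target,),
).fetchone()
print(row)  # (5, 1600, 20): row 5, value 1600, which is 20 away from 1580
```

Both 1560 and 1600 are 20 away from 1580, and the "columnA desc" tie-breaker selects the larger value, 1600.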

Again, I was not concerned about what the impact would be from a
huge table, and did not anticipate the more complex proposals
offered. But thanks for those as well.


With the /more complex proposals/ aside...

Perhaps duplicate values can safely be assumed to be outside the scope of the requirements, even without a unique constraint or index to ensure that. But if not, then adding the ROW# column to the ORDER BY may be desirable to produce consistent/predictable results; e.g. "order by delta, columnA desc, row# desc" if the duplicate match with the highest row number [the ROW# column; i.e. not the RRN] is preferable to any other.
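The extra tie-breaker can be sketched the same way, again with SQLite in Python as a stand-in [assumptions: a hypothetical second row with columnA = 1600 is added as row 8, ROW# is renamed rownum, and LIMIT 1 replaces FETCH FIRST 1 ROW ONLY]:

```python
# Two rows share columnA = 1600 (rows 5 and 8), so they tie on both delta
# and columnA; adding the row-number column to the ORDER BY makes the
# winner deterministic. sqlite3 stands in for DB2 for i.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table mkjunk (rownum integer, columnA integer)")
conn.executemany(
    "insert into mkjunk values (?, ?)",
    [(1, 2016), (2, 1527), (5, 1600), (7, 1560), (8, 1600)],
)

winner = conn.execute(
    """select rownum, columnA, abs(columnA - 1580) as delta
       from mkjunk
       order by delta, columnA desc, rownum desc
       limit 1"""
).fetchone()
print(winner)  # (8, 1600, 20): the duplicate 1600 with the highest row number
```

Without "rownum desc", either of rows 5 and 8 could be returned, and which one may vary from run to run or across access plans.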

Appreciate the help, as always!

And with this reply, to be sure, I am just trying to be helpful... to note that *maybe* something was overlooked, something that might be relevant, that the current solution may not take into account, and that thus might produce undesirable results if neither prevented [e.g. by a unique key on the data] nor otherwise given full consideration [after which it might well be dismissed as a non-concern]. FWiW, knowing about and avoiding a possible undesirable result is better than discovering that a result was undesirable only sometime in the future [and possibly with negative consequences].




This mailing list archive is Copyright 1997-2024 by midrange.com and David Gibbs as a compilation work. Use of the archive is restricted to research of a business or technical nature. Any other uses are prohibited. Full details are available on our policy page. If you have questions about this, please contact [javascript protected email address].
