Kurt Anderson wrote:

Different languages have different evaluation orders for
processing "and" and "or" logic in a condition? I always
thought the left to right process of handling a condition
was standard across the board. I'm curious what these are
(and really, why is it different). One nice thing about
being a programmer is that logic is logic, and all we need
to move from one language to another is to pick up syntax
(not a new method of logic). This has me curious.

Pete Hall wrote:
In addition to being more readable, I figure if
something gets changed in the compiler, my code
will still work. It's good defensive programming
and eliminates the need to remember the evaluation
order for different languages.


It is the "logic is logic" that enables the tests to be performed in any order, or even simultaneously, and still achieve the same result irrespective of the order of evaluation. Why the order would differ between languages, and even between compilers within a language, is most likely optimization. For example, one value in a predicate to be evaluated might already be in a register, so the whole expression performs better if that predicate is tested first. A language that compiles for a multiprocessor environment could decide that concurrent evaluation is faster in some cases; for an /and/ grouping it could continue processing upon receipt of any /false/, i.e. without waiting for the remaining evaluations to return.
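To illustrate that point, here is a minimal sketch in Python [not a language discussed in this thread; chosen only because it is easy to run], using hypothetical tests that are side-effect-free and always evaluable. Because nothing can fail, every ordering of the /and/ tests yields the same answer, which is exactly what lets a compiler reorder or parallelize them:

# Hypothetical, always-evaluable predicates; names are illustrative only.
from itertools import permutations

x, y = 7, 21

predicates = [
    lambda: x == 7,        # test 1
    lambda: y > 10,        # test 2
    lambda: x + y == 28,   # test 3
]

# Evaluate the conjunction under every possible ordering of the tests.
results = {all(p() for p in order) for order in permutations(predicates)}
print(results)  # {True}: every ordering agrees on the result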

Where unordered evaluation becomes problematic is when any predicate might be undefined [i.e. its evaluation cannot be determined, e.g. a null basing pointer or a divide by zero], such that the error may need to be suppressed. For example, consider a left-to-right evaluation of two expressions, one being A_OK and the other B_BAD, where the former evaluates fine to either true or false but the latter cannot be evaluated:

A_OK & B_BAD  "If X=7 AND Y=21/(X-7) Then ...":
  A_OK=T, B_BAD=*ERR -> cannot evaluate the /and/, expose *ERR
  A_OK=F, B_BAD=*ERR -> evaluates False regardless, suppress *ERR

A_OK | B_BAD  "If X=7 OR Y=21/(X-7) Then ...":
  A_OK=T, B_BAD=*ERR -> evaluates True regardless, suppress *ERR
  A_OK=F, B_BAD=*ERR -> cannot evaluate the /or/, expose *ERR
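The /and/ case can be sketched in Python [again only for illustration; Python happens to guarantee left-to-right short-circuit evaluation, which is the ordering assumed in the table above], where the divide by zero either surfaces or is skipped depending on whether the left-hand test already decides the result:

# A_OK is the always-evaluable left test; B_BAD divides by (x - 7),
# so it can only be evaluated when x != 7.
def demo(x, y):
    try:
        # "If X=7 AND Y=21/(X-7)": when x == 7 the left side is True,
        # the right side must be evaluated, and the divide by zero surfaces.
        if x == 7 and y == 21 / (x - 7):
            return "both true"
        return "false"
    except ZeroDivisionError:
        return "*ERR exposed"

print(demo(7, 3))   # left side True  -> right side evaluated -> "*ERR exposed"
print(demo(8, 21))  # left side False -> right side skipped   -> "false"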

With debug enabled and active, the evaluation order may matter more, in order to correlate with the debugger actions. I really doubt, however, that the optimizer will restrict itself to the same ordered evaluation rules, and I suspect that is one of the reasons higher levels of optimization disallow debug; i.e. the actual logic being performed would no longer align directly or correctly with the source as it is being /stepped/ through.

The only way the comment from Pete makes any sense _to me_ is with respect to the *ERR cases. If the compiler were to change in a manner such that the outcome of the /logic/ itself produced different results, there would be a horrible problem, and what was originally coded would presumably be moot. If, however, the order of evaluation changed such that the *ERR was now exposed when previously it was not, then the so-called /defensive/ technique of enforcing an ordering at some level with parentheses could have value.
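As a hedged sketch of one defensive alternative [not necessarily what Pete had in mind, and in Python only for illustration], the ordering can be enforced by control flow rather than by hoping the compiler honors the operand order: nest the guard, then the risky expression. The guard here is inverted from the example above [X<>7 rather than X=7] so that it actually protects the division:

def safe_check(x, y):
    # The guard is decided first by control flow, not by operator ordering.
    if x != 7:
        # The risky expression is only reached when the divisor is non-zero.
        if y == 21 / (x - 7):
            return "both conditions met"
    return "not met"

print(safe_check(8, 21))  # 21/(8-7) == 21 -> "both conditions met"
print(safe_check(7, 3))   # guard fails, division never attempted -> "not met"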

Regards, Chuck
