On Thu, Jul 13, 2017 at 4:59 PM, Nathan Andelin <nandelin@xxxxxxxxx> wrote:
> My gut feel is that IBM is likely parsing the entire document under the
> covers, and using call-back event handlers in order to "find" the "xpath"
> that's specified by the developer.
I think the point of callbacks is to avoid having to parse the entire
document. Once you hit what you're looking for, processing can stop.
This is pretty much the fundamental premise of SAX-style parsing.
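For concreteness, here's a minimal Java sketch of that early-stop idea (the sample XML, class names, and target element are all invented for illustration). Since the SAX callback API has no built-in "stop" method, the conventional trick is to throw a SAXException subclass from the handler once the target is found, which aborts the parse before the rest of the document is read:

import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class SaxEarlyStop {
    // Thrown to abort the parse as soon as the target is found.
    static class FoundException extends SAXException {
        final String value;
        FoundException(String value) { super("found"); this.value = value; }
    }

    public static void main(String[] args) throws Exception {
        String xml = "<orders><order id=\"1\"/><order id=\"2\"/><order id=\"3\"/></orders>";
        DefaultHandler handler = new DefaultHandler() {
            @Override
            public void startElement(String uri, String local, String qName,
                                     Attributes attrs) throws SAXException {
                // Stop as soon as we hit the element we're looking for;
                // everything after it is never parsed.
                if ("order".equals(qName) && "2".equals(attrs.getValue("id"))) {
                    throw new FoundException(attrs.getValue("id"));
                }
            }
        };
        try {
            SAXParserFactory.newInstance().newSAXParser()
                .parse(new InputSource(new StringReader(xml)), handler);
        } catch (FoundException e) {
            System.out.println("Found order " + e.value + "; parsing stopped early.");
        }
    }
}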
If you are going to parse the entire document regardless, then you
don't need callbacks. You'll have built the whole tree, and can then
just traverse the tree using normal, classical tree-traversing
methods. This is the approach exemplified by DOM.
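Using the JDK's standard DOM API, that looks something like this (again, the sample XML is invented for illustration). Note that the tree is built in full up front, and only afterwards do we walk it:

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;
import org.xml.sax.InputSource;

public class DomTraversal {
    public static void main(String[] args) throws Exception {
        String xml = "<orders><order id=\"1\"/><order id=\"2\"/></orders>";
        // The whole document is parsed into an in-memory tree first...
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new InputSource(new StringReader(xml)));
        // ...and then we traverse it like any ordinary tree.
        NodeList orders = doc.getElementsByTagName("order");
        for (int i = 0; i < orders.getLength(); i++) {
            Element order = (Element) orders.item(i);
            System.out.println("order id=" + order.getAttribute("id"));
        }
    }
}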
I guess there isn't anything preventing you from building a full tree
AND using callbacks, but that combination seems odd to me.
I imagine XPath might afford the *potential* for even faster
processing than SAX, because a given XPath expression might allow the
parser to skip over a lot of "junk" and jump more directly to the
target. (I'm imagining a preliminary search phase where it's
efficiently working with meaningless text rather than semantic nodes;
it doesn't have to "stop and understand" what it's looking at until it
finds the raw text it needs.) Of course, my idea might be unworkable
after all, or no better than SAX; and regardless, any given actual
XPath implementation might simply build upon SAX or DOM.
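The JDK's own javax.xml.xpath engine is an example of the latter: it evaluates expressions against a DOM tree, so the whole document still gets parsed first. A quick sketch (sample XML and expression invented for illustration):

import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XPathLookup {
    public static void main(String[] args) throws Exception {
        String xml = "<orders><order id=\"1\"><total>10</total></order>"
                   + "<order id=\"2\"><total>25</total></order></orders>";
        // The JDK's XPath engine operates on a DOM tree, so the whole
        // document is parsed up front; the expression then navigates it.
        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder()
            .parse(new InputSource(new StringReader(xml)));
        String total = (String) XPathFactory.newInstance().newXPath()
            .evaluate("/orders/order[@id='2']/total", doc, XPathConstants.STRING);
        System.out.println("total = " + total);  // prints: total = 25
    }
}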
John Y.