On Wed, Nov 24, 2010 at 01:40:58AM -0800, Peter Flynn wrote:
> Quoting Peter Davis <[log in to unmask]>:
>> I'm trying to build a prototype as a proof of concept for a workflow
>> that requires parsing two XML files. In coding this (C++), it occurred
>> to me that having to conditionally test an element's name *after* the
>> parser has already scanned it is redundant.
> I'm not clear what the difficulty is here: this would be the normal way
> of doing it. A parser checks for well-formedness, and then hands the
> resulting tree to the application so that the application can do
> whatever it's supposed to do.
Sorry I wasn't more clear. I'll try to explain further. I realize the
normal order of business is to do a purely syntactic parse, and then
hand off either events or a DOM tree to the application. However, this
results in the application's having to make a second pass over the same
textual input in order to make sense of it. A compiler doesn't usually
make that clean a separation between syntactic and semantic analysis,
because the two require a lot of the same steps.
With existing parsers, I have to write code like:
if (!strcmp(elementName, "document"))
    ... do document-y things ...
else if (!strcmp(elementName, "page"))
    ... do page-y things ...
This is very cumbersome code, and not at all object-oriented. Even with
DOM parsing, I have to examine each element in the tree, and do some
kind of cascading 'if' statement like the above to handle each case.
So instead of setting up beginElement and endElement handlers, I want to
setup a beginDocumentElement handler, and a beginPageElement handler,
etc. Since the parser is already scanning the input to determine when
to call the callbacks (in the SAX case), it could almost as easily just
call different callbacks for each interesting element type.
This is why I thought of it as somewhere between parsing and data binding.
> Ah. I am probably being dense here. Are you trying to locate or isolate
> one specific element from within each file (ie as opposed to needing to
> handle all the elements)?
No, I'm trying to handle all (or most) of the elements. I just want the
code to be straightforward.
> Can you explain a little more about what you are actually trying to do? I
> tend to build workflows at the scripting level rather than in a single
> language because the facilities that can be plugged into the pipeline are
> much more extensive.
In this particular case, I'm trying to convert XML to TeX or LaTeX for
publishing. I think the potential solution is pretty general, though.
I've worked with XML on other solutions in the past, and run into this
same issue before.
> There are a number of tools which can be used to extract specific
> individual elements, but by design they are mostly limited to handing
> you the whole element, not the isolated start-tag or end-tag.
That would work, as long as I could continue to recurse into the element
to fetch its children, grandchildren, etc. in a similar fashion.
> XSLT is probably the most common, but I assume you have already looked
> at this. XQuery may also be useful if you are looking to identify an
> individual element.
XSLT is close to what I'm thinking of, but is not powerful enough. For
example, in the course of processing my document, I might run across
something like <img src="..."/>. So in processing that element for
conversion, I want to actually retrieve that image, and inspect the file
to determine what size it is, what resolution, etc. I don't think
there's a way to do that in XSLT. I know there are some extended XSLT
processors like Saxon and Xalan, but I don't know how widely they're
supported.
I guess I need to learn more about XQuery. In general, I want the
conciseness of expression of XSLT, but the power of a full-blown
programming language.
> There are also lxgrep and lxprintf (part of the ltxml2 package at
> http://www.cogsci.ed.ac.uk/~richard/) which are very fast at pulling
> stuff out of XML documents.
Thank you! I'll look into these.
> The old onsgmls parser (part of OpenSP at
> http://sourceforge.net/projects/openjade/files/opensp/) can still
> produce an ESIS stream, which is a line-by-line decomposition of the
> document, and is very useful for creating trigger conditions because it
> *does* expose the start-tags and end-tags as separate objects.
Thanks again! I'll investigate.
> There are many libraries for Perl, Python, Tcl, and other scripting
> languages which may do the same, but I have no in-depth experience of
> them. Ditto for C, C++, Java, etc.
Ultimately, I'm going to be integrating with a fairly large body of C++
code, so that's the language of choice. I've been Googling for tools,
but not finding much.
> I try to avoid using non-XML solutions, because they are prone to
> misinterpret the markup in certain conditions (inside a CDATA marked
> section, for example), but depending on the nature of your documents, it
> may be possible to use nasty tricks like turning all newlines into
> spaces and then turning all < characters into newlines and all >
> characters into spaces; this leaves the element type name at the start
> of every line, followed by a space, which can then be isolated, eg
> cat myfile.xml | tr '\012<>' '\040\012\040' | grep '^page\ '
> but without knowing what you want to do with the result it isn't
> possible to know if that approach is useful.
Interesting idea, but you're right ... there are pitfalls there.
The Tech Curmudgeon - http://www.techcurmudgeon.com
Ideas Great and Dumb - http://www.ideasgreatanddumb.com