LISTSERV mailing list manager LISTSERV 16.5

XML-L Archives


XML-L@LISTSERV.HEANET.IE



Subject: Re: Somewhere between parsing and data binding?
From: Peter Davis <[log in to unmask]>
Reply-To: General discussion of Extensible Markup Language <[log in to unmask]>
Date: Wed, 24 Nov 2010 18:01:33 -0500
Content-Type: text/plain
Parts/Attachments: text/plain (197 lines)

Hi, Peter,

I hope you're at least enjoying this exchange, as I am.  I'd hate to be
wasting your time.

On Wed, Nov 24, 2010 at 10:30:14PM +0000, Peter Flynn wrote:
> On 24/11/10 13:22, Peter Davis wrote:
> > On Wed, Nov 24, 2010 at 01:40:58AM -0800, Peter Flynn wrote:
> >> Quoting Peter Davis <[log in to unmask]>:
> >>> I'm trying to build a prototype as a proof of concept for a workflow
> >>> that requires parsing two XML files.  In coding this (C++), it occurred
> >>> to me that having to conditionally test an element's name *after* the
> >>> parser has already scanned it is redundant.
> >>
> >> I'm not clear what the difficulty is here: this would be the normal way 
> >> of doing it. A parser checks for well-formedness, and then hands the 
> >> resulting tree to the application so that the application can do  
> >> whatever it's supposed to do.
> > 
> > Sorry I wasn't more clear.  I'll try to explain further.  I realize the
> > normal order of business is to do a purely syntactic parse, and then
> > hand off either events or a DOM tree to the application.  However, this
> > results in the application's having to make a second pass over the same
> > textual input in order to make sense of it.
> 
> Umm. I'm not sure it does, but I think it depends on how the parser is
> integrated with the application. I don't think that a system where the
> parse tree is just pointers back to the original text of the XML
> document would be very efficient, but I may be wrong. In my ignorance of
> the internals, I would guess that the parser hands the entire tree to
> the application, consisting of every node it has identified, text and
> all; and that any time the application needs an element and its content,
> it gets it from the tree via whatever index the parser has provided, not
> by re-reading the XML source text all over again. But this is largely
> guesswork on my part: you may want to check this with someone who knows
> the architecture of these things.

Basically, the parser has to look at every character in the input XML
data to determine where "<" and ">" occur.  So in the process of doing
that, it could also be identifying pre-determined strings, like element
names.  Otherwise, whether it's SAX or DOM, all it tells the calling
application is that there are elements.  The application then has to scan
the strings again to figure out what the element names are.  That's the
primary inefficiency that troubles me.

 
> > A compiler doesn't usually
> > make that clean a separation between syntactic and semantic analysis,
> > because the two require a lot of the same steps.
> > 
> > With existing parsers, I have to write code like:
> > 
> > void HandleBeginElement(...)
> > {
> >     if (!strcmp(elementName, "document"))
> >     {
> >         ... do document-y things ...
> >     }
> >     else if (!strcmp(elementName, "page"))
> >     {
> >         ... do page-y things ...
> >     }
> >     ... etc.
> > }
> 
> Oh dear. I think there is a misunderstanding going on here, either me
> and C++ (not my language) or you and XML, or us both. This is a
> procedural approach, and I stopped doing procedural stuff (except at the
> trivial level) a long time ago, when XSLT replaced Omnimark.

I'm not sure what you mean by "procedural" here.  This is pretty much
the approach that Expat and other SAX parsers require you to take.  For
DOM parsers, the code is similar, but it's in the context of traversing
the tree.


> > This is very cumbersome code, and not at all object-oriented.  Even with
> > DOM parsing, I have to examine each element in the tree, and do some
> > kind of cascading 'if' statement like the above to handle each case.
> 
> I think you might seriously want to re-examine XSLT and get well away
> from C++ for this.

Yeah, I've spent today looking at XSLT as a solution, and it looks very
promising ... except for the problem of dealing with images.

But I still think it would be useful to have a parser that does
element-name callbacks.
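
In the absence of such a parser, the workaround I have in mind (a sketch
of my own, not tied to any particular parser's API) is a thin dispatch
layer: you install one generic begin-element callback with the parser,
and it turns that into per-element callbacks via a hash lookup, replacing
the cascading if/else:

```cpp
#include <functional>
#include <string>
#include <unordered_map>

// Thin layer over a generic SAX-style startElement callback: handlers are
// registered per element name, and dispatch is a single hash lookup
// instead of a chain of strcmp calls.
class ElementDispatcher {
public:
    using Handler = std::function<void(const std::string& name)>;

    void onElement(const std::string& name, Handler h) {
        handlers_[name] = std::move(h);
    }

    void onUnknown(Handler h) { fallback_ = std::move(h); }

    // Install this as the parser's begin-element callback.
    void handleBeginElement(const std::string& name) {
        auto it = handlers_.find(name);
        if (it != handlers_.end()) it->second(name);
        else if (fallback_) fallback_(name);
    }

private:
    std::unordered_map<std::string, Handler> handlers_;
    Handler fallback_;
};
```

With this in place the application code reads like the
beginDocumentElement / beginPageElement handlers I described, and unknown
elements fall through to one default handler.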


> > So instead of setting up beginElement and endElement handlers, I want to
> > setup a beginDocumentElement handler, and a beginPageElement handler,
> > etc.  Since the parser is already scanning the input to determine when
> > to call the callbacks (in the SAX case), it could almost as easily just
> > call different callbacks for each interesting element type.
> > 
> > This is why I thought of it as somewhere between parsing and data binding.
> > 
> >> Ah. I am probably being dense here. Are you trying to locate or isolate 
> >> one specific element from within each file (ie as opposed to needing to 
> >> handle all the elements)?
> > 
> > No, I'm trying to handle all (or most) of the elements.  I just want the
> > code to be straightforward.
> > 
> >> Can you explain a little more about what you are actually trying to do? I 
> >> tend to build workflows at the scripting level rather than in a single 
> >> language because the facilities that can be plugged into the pipeline are 
> >> much more extensive.
> > 
> > In this particular case, I'm trying to convert XML to TeX or LaTeX for
> > publishing.  I think the potential solution is pretty general, though.
> > I've worked with XML on other solutions in the past, and run into this
> > repeatedly. 
> 
> In that case you definitely need to use XSLT, IMHO. I use it all the
> time for exactly this kind of application.

Yes, XSLT is looking good.  I've also found some example XSLT files for
generating TeX/LaTeX, so that's a good head start.


> >> There are a number of tools which can be used to extract specific  
> >> individual elements, but by design they are mostly limited to handing  
> >> you the whole element, not the isolated start-tag or end-tag.
> > 
> > That would work, as long as I could continue to recurse into the element
> > to fetch its children, grandchildren, etc. in a similar fashion.
> 
> XSLT does that.
> 
> >> XSLT is probably the most common, but I assume you have already looked  
> >> at this. XQuery may also be useful if you are looking to identify an  
> >> individual element.
> > 
> > XSLT is close to what I'm thinking of, but is not powerful enough.  For
> > example, in the course of processing my document, I might run across
> > something like <img src="..."/>.  So in processing that element for
> > conversion, I want to actually retrieve that image, and inspect the file
> > to determine what size it is, what resolution, etc.  I don't think
> > there's a way to do that in XSLT.  I know there are some extended XSLT
> > models like Saxon and Xalan, but I don't know how widely they're
> > supported. 
> 
> Ah. I'd do this as a preparatory pass, and write the results to a file
> that XSLT can read while processing. Yes, it does mean a pass through
> the file, but dog is very fast, and so is ImageMagick's identify:
> 
> echo \<images\> >images.xml
> for f in `dog --images myfile.html`; do
>   wget -O - $f | identify -verbose - |\
>   awk -v file=$f -F: 'BEGIN {print "<image name=\"" file "\""}
>     /Resolution:/ {print "res=\"" $2 "\""}
>     ...etc...}' >>images.xml
> done
> echo \</images\> >>images.xml
> This will process very fast (you'll have a network delay no matter what
> you do, unless all the files are local), and it gives you all the data
> you need in a form that an XSLT script can use as a lookup table. If the
> input is not HTML, or as an alternative, you could use
>   lxgrep -w images '//graphic' filename.xml >images.xml
> to extract the names, and then something like the above to run identify
> over them.
> 
> Basically I'd separate the extraction of image data from the business of
> converting XML to LaTeX.

Unfortunately, I can't pre-process the images in this application.  I
don't know what or where they are until I process the XML.  I could
process the XML twice, but that's ugly.  It would mean processing the
XML to make a list of image URIs, then running through the list with
some other process to get all the image info, and finally processing the
XML again to generate the markup.
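
For what it's worth, the first of those passes (harvesting the image
URIs) could at least be a very cheap one.  A rough sketch, using a regex
rather than a real parse, which I'd consider tolerable for a throwaway
URI-collecting pass but not for anything more:

```cpp
#include <regex>
#include <string>
#include <vector>

// Rough first-pass harvest of <img src="..."> URIs from raw XML text.
// A regex is no substitute for parsing, but for a pass whose only job is
// to collect URIs for a later lookup table, it is adequate.
std::vector<std::string> collectImageUris(const std::string& xml) {
    static const std::regex imgSrc("<img\\b[^>]*\\bsrc=\"([^\"]*)\"");
    std::vector<std::string> uris;
    for (std::sregex_iterator it(xml.begin(), xml.end(), imgSrc), end;
         it != end; ++it) {
        uris.push_back((*it)[1].str());  // capture group 1 = the URI
    }
    return uris;
}
```

The list could then be fed to identify (or similar) to build the lookup
table before the main conversion pass, as you describe.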


> > Ultimately, I'm going to be integrating with a fairly large body of C++
> > code, so that's the language of choice.  I've been Googling for tools,
> > but not finding much.
> 
> It's going to take a lot of reinventing of wheels. I generally prefer to
> use existing wheels :-)

Yes, so do I.  As I said, XSLT seems to be *almost* there, but I need to
solve the image problem. 

Thanks!

-pd



-- 
--------
Peter Davis
 The Tech Curmudgeon - http://www.techcurmudgeon.com
Ideas Great and Dumb - http://www.ideasgreatanddumb.com
