XML-L Archives (XML-L@LISTSERV.HEANET.IE) - December 1998

Subject: Re: Record Ends, Mixed Content, and storing XML documents on relational database
From: Mark Birbeck <[log in to unmask]>
Reply-To: General discussion of Extensible Markup Language <[log in to unmask]>
Date: Sat, 19 Dec 1998 20:00:03 -0000
Content-Type: text/plain
Parts/Attachments: text/plain (264 lines)

Brave enough to create a new subject - well done Sam.

First, I want to stress that in some cases I was using XML-style tags to
represent alternative database storage techniques, not my own proposals
for a sub-XML that didn't have mixed content!! Peter and Sam should
perhaps have read more closely - or I should have explained more clearly -
before 'gagging' as follows:

[Peter]
> Gag, Mark. But it resolves to the same stream.

[Sam]
> <gag type="intense"/>Agreed!

Both were rightly offended by the thought of devising a DTD that helped
get round the difficulties of storing mixed content data. But if you
re-read all my correspondence on this you'll see that no-one suggested
such a thing.

So, to be clear on how all this started:

1. We use a relational database to store objects in a tree structure
(a rough sketch of one such layout follows this list).
2. Objects - with children - can be represented extremely easily in an
XML format.
3. We therefore export all the objects in our database as XML ...
4. ... and import all data into the database as XML.
5. The relationships between the elements, at the level of XML, can be
derived from the database definition ...
6. ... which means the DTD can be generated automatically, too.
7. ... and XSL ...
8. ... and X-as-yet-undefined-but-we-can-do-that-as-well.
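
Roughly speaking - and purely as an illustration, since the table and
column names here are invented for this email rather than lifted from our
real schema - the kind of layout I mean looks like this in SQLite via
Python:

import sqlite3

# Two tables are enough to sketch the idea: elements joined to themselves
# for the hierarchy, and attributes joined to their owning element.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE element (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES element(id),  -- join to itself: who my parent is
        name      TEXT NOT NULL,                   -- the tag name, e.g. 'play'
        position  INTEGER NOT NULL DEFAULT 0       -- order among siblings
    );
    CREATE TABLE attribute (
        id         INTEGER PRIMARY KEY,
        element_id INTEGER NOT NULL REFERENCES element(id),  -- which element owns me
        name       TEXT NOT NULL,
        value      TEXT NOT NULL
    );
""")

# A tiny document: <play id="hamlet"><title/></play>
conn.execute("INSERT INTO element VALUES (1, NULL, 'play', 0)")
conn.execute("INSERT INTO element VALUES (2, 1, 'title', 0)")
conn.execute("INSERT INTO attribute VALUES (1, 1, 'id', 'hamlet')")
conn.commit()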

And if you think about it, it's pretty obvious why. XML's 'revolution'
is to come up with a consistent way of representing data and meta data.
That's it. Nothing more than that. It doesn't 'mark up text', despite
what everyone seems to want to say. (Start extra thread if you want to
debate that one!)

It simply says, if you want to represent some information, then here is
a way to represent it that uses the same foundations as someone else
representing some other information. What that information means is down
to an application - and perhaps a human - to understand. So, it doesn't
take a great leap of the imagination to see that if you can come up with
a system of storing elements, attributes, and their hierarchy - which
sounds like objects to me - you can store anything that anyone decides
to put into XML. You'll all have seen announcements about Office 2000,
Quark XPress, Shockwave and others.

Now, all of our XML documents can contain mixed content - if they want
to - and that is no problem. Nothing I have written at any time suggests
having XML documents that do not have mixed data (see below for why I'm
stressing this point). What ARE problems, though, are:

1. How you actually represent that in a database - which is where we
came in on this debate.
2. How the user interface manipulates that.

On the first point, despite what Sam says, that is not a relational issue
for us, because we have already abstracted from the relational data to
our object-oriented layer. Therefore we either have an object called
PCDATA that can appear all over the place, multiple times in a child,
etc., or we have the gag-inducing PRE and POST attributes. Just to make
it clear, though: whilst people are saying how difficult it is to deal
with mixed content, we can do either of these two methods VERY easily.
And to emphasise, the output XML looks EXACTLY the same.

However, although I say it is easy to represent, at the moment our
user-interface is - dare I say it - pretty crude. Although the first
solution - a PCDATA object - is nicer, it is laborious for a user to keep
going 'add child', 'add child', etc. So, as a stop-gap, we have taken the
other approach. But, as I said, the produced XML looks EXACTLY the same
(not the rendered stream, but the XML!!).
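
To see why the produced XML is identical either way, here is a toy sketch
(the element names, and my shorthand for the PRE/POST idea, are made up
for this email - this is not our production code):

# (a) PCDATA stored as ordinary child "objects": a node is (name, children),
# and a text node is ("#PCDATA", "some text").
def to_xml_a(node):
    name, payload = node
    if name == "#PCDATA":
        return payload                       # payload is the text itself
    body = "".join(to_xml_a(c) for c in payload)
    return "<%s>%s</%s>" % (name, body, name)

doc_a = ("para", [
    ("#PCDATA", "Hello "),
    ("em", [("#PCDATA", "world")]),
    ("#PCDATA", "!"),
])

# (b) One reading of the PRE/POST approach: an element row stores its own
# text content, plus the character data that surrounds it inside its parent.
def to_xml_b(node):
    name, pre, text, post, children = node
    body = text + "".join(to_xml_b(c) for c in children)
    return "%s<%s>%s</%s>%s" % (pre, name, body, name, post)

doc_b = ("para", "", "", "", [
    ("em", "Hello ", "world", "!", []),
])

# Either way, the exported XML is the same.
assert to_xml_a(doc_a) == to_xml_b(doc_b) == "<para>Hello <em>world</em>!</para>"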

To stress again, everything I was discussing related to the
IMPLEMENTATION of that storage, not the XML itself.


So, to Sam's specific points:

> Three issues are raised for the database world: (1) Is a given relational
> database a compliant XML processor, from the standpoint of whitespace
> handling?

No. It's not a 'compliant XML processor' from ANY standpoint. It's a
multi-purpose storage medium, onto which you might build an XML I/O
layer if you want, but that's your business. We use an XML processor to
interpret incoming XML so that we can get it into our database, but that
doesn't make our application an XML processor - it makes it one
application amongst many that use one.


> (2) If we are storing XML elements as "objects" in a relational
> database, how do we know what the declared values of the elements'
> attributes are?

You could look them up in the database.

Actually, your question is not clear, but you might be referring to how
you know what attributes an element has. If you mean, what attributes
are present, then see my other emails for simple queries that yield
this. If you mean what attributes COULD it have, then you need a schema
table, or two. (Sorry, but I did say in my first email that there was a
bit more to it than I was describing.)
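
For what it's worth, against the sketch tables above - plus a made-up
attribute_schema table standing in for that "schema table, or two" - the
two queries are along these lines:

# What attributes does element :element_id actually HAVE?
ATTRS_PRESENT = """
    SELECT name, value
    FROM attribute
    WHERE element_id = :element_id
"""

# What attributes COULD it have? This assumes a hypothetical
# attribute_schema(element_name, attr_name, declared_type, default_value)
# table - the "schema table, or two" mentioned above.
ATTRS_POSSIBLE = """
    SELECT s.attr_name, s.declared_type, s.default_value
    FROM attribute_schema AS s
    JOIN element AS e ON e.name = s.element_name
    WHERE e.id = :element_id
"""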


> (3) Is it really necessary, as the examples given suggest,
> that mixed content must be eliminated from XML documents in order to treat
> them as "objects" in relational databases?

No - as I have consistently said. The question was how to IMPLEMENT
mixed content, not eliminate it - none of the examples given by me
suggest otherwise.


> Leaving aside browsers, isn't it the case that with a conformant XML
> processor, the whitespace will simply be passed through to the application?
> (By XML clause 2.10).  Thus, spaces at [a] and [b] will and must "intrude"
> and it's up to the composition program to deal with them.

You are right here. As I said in a separate email, Peter has 'created' a
white-space problem in XML, when in reality it is a problem of how
certain browsers process a stream of XML.


> Is the database a compliant XML processor? In particular, does it handle
> whitespace according to XML 1.0's clause 2.10?

As I said at the beginning ... no. Is NTFS a compliant XML processor?
No, of course not - it simply stores files, but can be used as a medium
for storing XML documents. That's what we do with relational databases,
but get more for our money than from old-fashioned file systems.
(Old-fashioned? What can that mean? I'd be extremely interested to hear
from anyone who can see where we're heading and wants to share ideas
...)


> (Assuming the declared value of the PRE attribute is CDATA, since
> otherwise, by clause 3.3.3, the trailing space is normalized away.
> Hmmm, guess we need the DTD after all, even in the relational world.)

You're still mixing your levels of abstraction. DTDs exist for XML
documents, but do not relate to the implementation of the storage
medium. Do you know of the DTD that defines how many cylinders there are
on my hard disk, or that specifies that file names must be 8.3 under
DOS? All we're doing is using relational databases as a very efficient
and easy-to-implement storage medium.


> Possibly join is the wrong word -- this is a conversation about XML markup
> and database engineering, and I come from the markup world, not the
> database world, so perhaps my usage was not on point.

Now this IS interesting. From a database point of view, XML is amazing.
It is far more significant than mere text mark-up. IMHO it is worth
broadening your notion of what a 'document' is; after all, there is
nothing in the XML spec. that says a document has to be a play by
Shakespeare or an insurance claim. In fact it could be a list of
historical GDP figures for a country.


> Mark Birkbeck originally wrote:
>
>    The attribute table has a join on the element table
>    to say what element the attribute belongs to, whilst the element has
>    joins to itself to say who the parent of an element is. This allows us
>    to store an object-like tree structure, and so generate XML documents
>    from any point in the tree.
>
> I was concerned to know how this "join" approach handled mixed content.
> The answer: it doesn't.

Hopefully, from what I have described above you can now see that it has
nothing at all to do with it! Does NTFS handle mixed content? Does paper
and pen handle mixed content? Does scratching on your school desk with a
rusty compass handle mixed content? XML handles mixed content, and we
handle XML.

To re-iterate, I suggested a way of storing objects in a relational
database that only uses about three or four tables, yet can store
elements, attributes, and - very importantly - their hierarchical
relationship. Once established, that 'object database' can be used to
store data that can be exported as XML. And it is very, very easy to
export XML that contains mixed content. (As well as DTDs, XSL and
whatever else we feel like.)
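
For example, 'generate XML documents from any point in the tree' comes
down to one recursive function over the sketch tables above. Here text is
stored as '#PCDATA' rows whose single 'text' attribute holds the character
data - again, the names are invented for illustration, and escaping of
markup characters is left out:

def export_xml(conn, element_id):
    """Serialize the subtree rooted at element_id as XML, using the
    element/attribute tables sketched earlier in this message."""
    (name,) = conn.execute(
        "SELECT name FROM element WHERE id = ?", (element_id,)).fetchone()
    attrs = conn.execute(
        "SELECT name, value FROM attribute WHERE element_id = ?",
        (element_id,)).fetchall()

    # Mixed content: a '#PCDATA' row is just its text, wherever it occurs.
    if name == "#PCDATA":
        return dict(attrs).get("text", "")

    children = conn.execute(
        "SELECT id FROM element WHERE parent_id = ? ORDER BY position",
        (element_id,)).fetchall()
    body = "".join(export_xml(conn, child_id) for (child_id,) in children)
    attr_text = "".join(' %s="%s"' % (n, v) for n, v in attrs)
    return "<%s%s>%s</%s>" % (name, attr_text, body, name)

Call it with the id of the root element and you get the whole document;
call it with the id of any other element and you get a well-formed
fragment from that point down.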


> Only element content is allowed, either (a) through the gag-inducing
> attribute approach used above, or (b) by having an element or pseudo-element
> (say, <pcdata>) that contains only #PCDATA. Both approaches sound to me
> like <ironic>optimizations</ironic> driven by the constraints of an
> installed base, rather than being driven, as the XML specification is, by
> the requirement that "XML documents should be human-legible and reasonably
> clear".

Firstly, it should now be clear why this is completely wrong. The XML is
always the same; only its low-level storage format is changing.

But secondly - although I hate being pedantic, you guys bring it out of
me - what is human-legible? No-one has defined it, but everyone loves to
throw it around. Is a bitmap human-legible? Yes, if a program renders it
as the picture it originally was, or no, if you look at it as bits and
bytes. Yet has the bitmap changed between the two situations? Further,
is an XML file stored on your hard disk but with no copy of Notepad on
the computer human-legible?

In fact, an XML document in most forms is stored in a format that is not
human-legible, whether you like it or not. However, the SPIRIT of the
specification is that we want humans to be able to view this data in a
meaningful way without needing anything more advanced than a simple text
editor. This is not really for humans' sake - are you really going to
curl up in bed with A Midsummer Night's Dream all tagged up? It is more to
establish a baseline which ensures that software to manipulate XML can
be written very, very easily.


What we are doing is storing XML documents in a way that makes it easy
for us to create other documents from them. If I accept for a moment
that the entire XML universe contains only the complete works of
Shakespeare - one document for each play - then the traditional method
still makes it difficult to find every play which features a Prince.
With our solution of storing the documents in an object structure, with
each node being an element, we can actually export our search results as
a new XML document. Searching for the word "Yorick" could create a
document on the fly that contained the name of the play, act, scene and
speaker where the word occurred.

More than this, with the 'separate document' solution, how do you create
a table of contents? You can create a separate document, but what if
someone discovers a new work by Shakespeare? You'd have to add the new
play and then edit your table of contents document. Our solution would
do it automatically, since the table of contents is a 'virtual' XML
document, created as a query on the XML objects. This automation is what
databases are good at.
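
In code, that 'virtual document' is nothing more than this sort of thing
(the element and attribute names - 'play', 'title' - are invented for the
example):

def table_of_contents(conn):
    """A 'virtual' table-of-contents document: nothing is stored for it,
    it is just a query over the element/attribute tables run on demand."""
    rows = conn.execute("""
        SELECT t.value
        FROM element AS p
        JOIN attribute AS t ON t.element_id = p.id AND t.name = 'title'
        WHERE p.name = 'play'
        ORDER BY t.value
    """).fetchall()
    entries = "".join("<entry>%s</entry>" % title for (title,) in rows)
    return "<toc>%s</toc>" % entries

Add the rows for a newly discovered play and the next call picks it up,
with no separate table-of-contents document to edit. The "Yorick" search
is the same idea: a query over the stored elements whose results are
wrapped up as a new document.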

Regards,

Mark



Mark Birbeck
Managing Director
Intra Extra Digital Ltd.
39 Whitfield Street
London
W1P 5RE
w: http://www.iedigital.net/
t: 0171 681 4135
e: [log in to unmask]
