Brave enough to create a new subject - well done Sam.

First, I want to stress that in some cases I was using XML-style tags to
represent alternative database storage techniques, not my own proposals
for a sub-XML that didn't have mixed content!! Peter and Sam should
perhaps have read more closely - or I should have explained more clearly -
before 'gagging' as follows:

> Gag, Mark. But it resolves to the same stream.

> <gag type="intense"/>Agreed!

Both were rightly offended by the thought of devising a DTD that helped
get round the difficulties of storing mixed content data. But if you
re-read all my correspondence on this you'll see that no-one suggested
such a thing.

So, to be clear on how all this started:

1. We use a relational database to store objects in a tree structure.
2. Objects - with children - can be represented extremely easily in an
XML format.
3. We therefore export all the objects in our database as XML ...
4. ... and import all data into the database as XML.
5. The relationships between the elements at the XML level can be
derived from the database definition ...
6. ... which means the DTD can be generated automatically, too.
7. ... and XSL ...
8. ... and X-as-yet-undefined-but-we-can-do-that-as-well
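As a sketch of how steps 1 to 6 might look in practice - the table and column names here are my own invention for illustration, not the actual schema being discussed:

```python
# A minimal version of the approach: an element table that joins to
# itself for the parent/child hierarchy, plus an attribute table that
# joins to the element table. (Names are assumptions, not the real schema.)
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE element (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES element(id),  -- self-join: the hierarchy
        name      TEXT NOT NULL,
        seq       INTEGER NOT NULL                 -- order among siblings
    );
    CREATE TABLE attribute (
        element_id INTEGER REFERENCES element(id), -- join to owning element
        name       TEXT NOT NULL,
        value      TEXT NOT NULL
    );
""")

# A tiny tree: <order><item sku="a1"/><item sku="b2"/></order>
db.execute("INSERT INTO element VALUES (1, NULL, 'order', 0)")
db.execute("INSERT INTO element VALUES (2, 1, 'item', 0)")
db.execute("INSERT INTO element VALUES (3, 1, 'item', 1)")
db.execute("INSERT INTO attribute VALUES (2, 'sku', 'a1')")
db.execute("INSERT INTO attribute VALUES (3, 'sku', 'b2')")

def export_xml(elem_id):
    """Serialise the subtree rooted at elem_id as XML."""
    (name,) = db.execute("SELECT name FROM element WHERE id = ?",
                         (elem_id,)).fetchone()
    attrs = "".join(' %s="%s"' % (n, v) for n, v in db.execute(
        "SELECT name, value FROM attribute WHERE element_id = ?", (elem_id,)))
    children = [row[0] for row in db.execute(
        "SELECT id FROM element WHERE parent_id = ? ORDER BY seq", (elem_id,))]
    if not children:
        return "<%s%s/>" % (name, attrs)
    inner = "".join(export_xml(c) for c in children)
    return "<%s%s>%s</%s>" % (name, attrs, inner, name)

print(export_xml(1))   # <order><item sku="a1"/><item sku="b2"/></order>
```

Because the export can start at any row, any subtree is a document in its own right - which is what makes points 3 and 4 so cheap.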

And if you think about it, it's pretty obvious why. XML's 'revolution'
is to come up with a consistent way of representing data and meta data.
That's it. Nothing more than that. It doesn't 'mark up text', despite
what everyone seems to want to say. (Start extra thread if you want to
debate that one!)

It simply says, if you want to represent some information, then here is
a way to represent it that uses the same foundations as someone else
representing some other information. What that information means is down
to an application - and perhaps a human - to understand. So, it doesn't
take a great leap in the imagination to see that if you can come up with
a system of storing elements, attributes, and their hierarchy - which
sounds like objects to me - you can store anything that anyone decides
to put into XML. You'll all have seen announcements about Office 2000,
Quark XPress, Shockwave and others.

Now, all of our XML documents can contain mixed content - if they want
to - that is no problem. Nothing I have written at any time suggests
having XML documents that do not have mixed content (see below for why
I'm stressing this point). What ARE problems, though, are:

1. How you actually represent that in a database - which is where we
came in on this debate.
2. How the user interface manipulates that.

On the first point, despite what Sam says, that is not a relational issue
for us, because we have already abstracted from the relational data to
our object-oriented layer. Therefore we either have an object called
PCDATA that can appear all over the place, multiple times within a
single parent, etc., or we have the gag-inducing PRE and POST
attributes. Just to make it clear though, whilst people are saying how
difficult it is to deal with mixed content, we can do either of these
two methods VERY easily.
And to emphasise, the output XML looks EXACTLY the same.
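To illustrate why the output is identical either way, here is a minimal sketch of the two internal representations - a PCDATA object versus PRE/POST attributes. All the structures and names are invented for the example; the real object layer is of course richer.

```python
# (a) A PCDATA object that can appear anywhere among an element's
# children, as many times as needed. Nodes are (name, children) tuples,
# with text nodes using the reserved name "#pcdata".
def to_xml_pcdata(node):
    if node[0] == "#pcdata":
        return node[1]
    name, children = node
    return "<%s>%s</%s>" % (name,
                            "".join(to_xml_pcdata(c) for c in children),
                            name)

doc_a = ("p", [("#pcdata", "Hello "),
               ("b", [("#pcdata", "world")]),
               ("#pcdata", "!")])

# (b) The gag-inducing alternative: PRE and POST text carried on each
# child element, plus a CONTENT field for leaf text.
def to_xml_prepost(e):
    if e["children"]:
        inner = "".join(c["pre"] + to_xml_prepost(c) + c["post"]
                        for c in e["children"])
    else:
        inner = e["content"]
    return "<%s>%s</%s>" % (e["name"], inner, e["name"])

doc_b = {"name": "p", "pre": "", "post": "", "content": "",
         "children": [{"name": "b", "pre": "Hello ", "post": "!",
                       "content": "world", "children": []}]}

# Different storage, identical XML stream:
assert to_xml_pcdata(doc_a) == to_xml_prepost(doc_b) \
                            == "<p>Hello <b>world</b>!</p>"
```

The storage shapes differ completely, but nothing downstream of the serialiser can tell which one was used.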

However, although I say it is easy to represent, at the moment our
user-interface is - dare I say it - pretty crude. Therefore, although
the first solution - a PCDATA object - is nicer, it is laborious for a
user to keep going 'add child', 'add child', etc. Therefore, as a
stop-gap we have taken the other approach. But, as I said, the produced
XML looks EXACTLY the same (not the rendered stream, but the XML!!)

To stress again, everything I was discussing related to the
IMPLEMENTATION of that storage, not the XML itself.

So, to Sam's specific points:

> Three issues are raised for the database world: (1) Is a given
> database a compliant XML processor, from the standpoint of whitespace
> handling?

No. It's not a 'compliant XML processor' from ANY standpoint. It's a
multi-purpose storage medium, onto which you might build an XML I/O
layer if you want, but that's your business. We use an XML processor to
interpret incoming XML so that we can get it into our database, but that
doesn't make our application an XML processor - it makes it one
application amongst many that uses one.

> (2) If we are storing XML elements as "objects" in a relational
> database, how do we know what the declared values of the elements'
> attributes are?

You could look them up in the database.

Actually, your question is not clear, but you might be referring to how
you know what attributes an element has. If you mean, what attributes
are present, then see my other emails for simple queries that yield
this. If you mean what attributes COULD it have, then you need a schema
table, or two. (Sorry, but I did say in my first email that there was a
bit more to it than I was describing.)
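A rough illustration of the two kinds of lookup, with invented table and column names: the attribute table answers what attributes ARE present, and a schema table answers what attributes COULD be present.

```python
# Present vs possible attributes: the first is a query on the attribute
# table, the second needs a schema table or two. (All names here are
# assumptions for illustration, not an actual implementation.)
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE attribute (element_id INTEGER, name TEXT, value TEXT);
    -- schema table: which attributes an element type may carry
    CREATE TABLE attr_schema (element_name TEXT, attr_name TEXT,
                              declared_type TEXT, default_value TEXT);
""")
db.execute("INSERT INTO attribute VALUES (7, 'sku', 'a1')")
db.execute("INSERT INTO attr_schema VALUES ('item', 'sku', 'CDATA', NULL)")
db.execute("INSERT INTO attr_schema VALUES ('item', 'colour', 'CDATA', 'red')")

# Attributes actually present on element 7:
present = [n for (n,) in db.execute(
    "SELECT name FROM attribute WHERE element_id = 7")]
# Attributes an 'item' could have, per the schema table:
possible = [n for (n,) in db.execute(
    "SELECT attr_name FROM attr_schema WHERE element_name = 'item'")]
print(present, possible)
```

The schema table is also where declared types and defaults would live - the "bit more to it" mentioned above.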

> (3) Is it really necessary, as the examples given suggest,
> that mixed content must be eliminated from XML documents in order to
> store them as "objects" in relational databases?

No - as I have consistently said. The question was how to IMPLEMENT
mixed content, not eliminate it - none of the examples given by me
suggest otherwise.

> Leaving aside browsers, isn't it the case that with a conformant XML
> processor, the whitespace will simply be passed through to the application?
> (By XML clause 2.10.)  Thus, spaces at [a] and [b] will and must be passed,
> and it's up to the composition program to deal with them.

You are right here. As I said in a separate email, Peter has 'created' a
white-space problem in XML, when in reality it is a problem of how
certain browsers process a stream of XML.

> Is the database a compliant XML processor? In particular, does it
> handle whitespace according to XML 1.0's clause 2.10?

As I said at the beginning ... no. Is NTFS a compliant XML processor?
No, of course not - it simply stores files, but can be used as a medium
for storing XML documents. That's what we do with relational databases,
but get more for our money than from old-fashioned file systems.
(Old-fashioned? What can that mean? I'd be extremely interested to hear
from anyone who can see where we're heading and wants to share ideas.)
> (Assuming the declared value of the PRE attribute is CDATA, since
> otherwise, by clause 3.3.3, the trailing space is normalized away.
> Hmmm, guess we need the DTD after all, even in the relational world.)

You're still mixing your levels of abstraction. DTDs exist for XML
documents, but do not relate to the implementation of the storage
medium. Do you know of the DTD that defines how many cylinders there are
on my hard disk, or that specifies that file names must be 8.3 under
DOS? All we're doing is using relational databases as a very efficient
and easy to implement storage medium.

> Possibly join is the wrong word -- this is a conversation about XML
> and database engineering, and I come from the markup world, not the
> database world, so perhaps my usage was not on point.

Now this IS interesting. From a database point of view, XML is amazing.
It is far more significant than mere text mark-up. IMHO it is worth
broadening your notion of what a 'document' is; after all, there is
nothing in the XML spec. that says a document is a play by Shakespeare
or an insurance claim. In fact it could be a list of historical GDP
figures for a country.

> Mark Birbeck originally wrote:
>    The attribute table has a join on the element table
>    to say what element the attribute belongs to, whilst the element table
>    joins to itself to say who the parent of an element is. This allows us
>    to store an object-like tree structure, and so generate XML
>    from any point in the tree.
> I was concerned to know how this "join" approach handled mixed content.
> The answer: it doesn't.

Hopefully, from what I have described above you can now see that it has
nothing at all to do with it! Does NTFS handle mixed content? Does paper
and pen handle mixed content? Does scratching on your school desk with a
rusty compass handle mixed content? XML handles mixed content, and we
handle XML.

To re-iterate, I suggested a way of storing objects in a relational
database that only uses about three or four tables, yet can store
elements, attributes, and - very importantly - their hierarchical
relationship. Once established, that 'object database' can be used to
store data that can be exported as XML. And it is very, very easy to
export XML that contains mixed content. (As well as DTDs, XSL and
whatever else we feel like.)
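As a sketch of how mixed content fits that same three-or-four-table scheme: text nodes simply become rows in the element table, with a reserved name and a text column. The names are my own assumptions, not the actual implementation, but the export is the same recursion either way.

```python
# Mixed content in the element table itself: text nodes are rows named
# "#pcdata" carrying their text, ordered among siblings like any other
# child. (Schema and names invented for illustration.)
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE element (
    id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT, seq INTEGER,
    text TEXT)""")
rows = [
    (1, None, "p",       0, None),
    (2, 1,    "#pcdata", 0, "Hello "),
    (3, 1,    "b",       1, None),
    (4, 3,    "#pcdata", 0, "world"),
    (5, 1,    "#pcdata", 2, "!"),
]
db.executemany("INSERT INTO element VALUES (?,?,?,?,?)", rows)

def export(eid):
    name, text = db.execute(
        "SELECT name, text FROM element WHERE id = ?", (eid,)).fetchone()
    if name == "#pcdata":
        return text
    inner = "".join(export(c) for (c,) in db.execute(
        "SELECT id FROM element WHERE parent_id = ? ORDER BY seq", (eid,)))
    return "<%s>%s</%s>" % (name, inner, name)

print(export(1))   # <p>Hello <b>world</b>!</p>
```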

> Only element content is allowed, either (a) through the gag-inducing
> attribute approach used above, or (b) by having an element type
> (say, <pcdata>) that contains only #PCDATA. Both approaches sound to me
> like <ironic>optimizations</ironic> driven by the constraints of an
> installed base, rather than being driven, as the XML specification is, by
> the requirement that "XML documents should be human-legible and
> reasonably clear".

Firstly, it should now be clear why this is completely wrong. The XML is
always the same, only its low-level storage format is changing.

But secondly - although I hate being pedantic, you guys bring it out of
me - what is 'human-legible'? No-one has defined it, but everyone loves
to throw it around. Is a bitmap human-legible? Yes, if a program renders
it as the picture it originally was; or no, if you look at it as bits
and bytes. Yet has the bitmap changed between the two situations?
Further, is an XML file stored on your hard disk - but with no copy of
Notepad on the computer - human-legible?

In fact, an XML document in most forms is stored in a non-human-legible
format, whether you like it or not. However, the SPIRIT of the
specification is that we want humans to be able to view this data in a
meaningful way without needing anything more advanced than a simple text
editor. This is not really for humans' sake - are you really going to
curl up in bed with A Midsummer Night's Dream all tagged up? It is more
to establish a baseline which ensures that software to manipulate XML
can be written very, very easily.

What we are doing is storing XML documents in a way that makes it easy
for us to create other documents from them. If I accept for a moment
that the entire XML universe contains only the complete works of
Shakespeare - one document for each play - then the traditional method
still makes it difficult to find every play which features a Prince.
With our solution of storing the documents in an object structure, with
each node being an element, we can actually export our search results as
a new XML document. Searching for the word "Yorick" could create a
document on the fly that contained the name of the play, act, scene and
speaker where the word occurred.

More than this, with the 'separate document' solution how do you create
a table of contents? You can create a separate document, but what if
someone discovers a new work by Shakespeare? You'd have to add the new
play and then edit your table of contents document. Our solution would
do it automatically, since the table of contents is a 'virtual' XML
document, created as a query on the XML objects. This automation is what
databases are good at.
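A toy sketch of that 'virtual' table of contents idea - schema and names invented for illustration, with whole plays collapsed to single rows to keep the example small:

```python
# The TOC is never stored: it is a query over the element tree,
# serialised as XML on demand. Add a play, and the next request for the
# TOC already includes it. (All names here are assumptions.)
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE element (id INTEGER PRIMARY KEY,"
           " parent_id INTEGER, name TEXT, title TEXT)")
db.executemany("INSERT INTO element VALUES (?,?,?,?)",
               [(1, None, "play", "Hamlet"),
                (2, None, "play", "The Tempest")])

def toc():
    entries = "".join("<entry>%s</entry>" % t for (t,) in db.execute(
        "SELECT title FROM element WHERE name = 'play' ORDER BY title"))
    return "<toc>%s</toc>" % entries

print(toc())

# A newly discovered play needs no edit to any TOC document:
db.execute("INSERT INTO element VALUES (3, NULL, 'play', 'Cardenio')")
print(toc())   # now includes <entry>Cardenio</entry>
```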



Mark Birbeck
Managing Director
Intra Extra Digital Ltd.
39 Whitfield Street
t: 0171 681 4135
e: [log in to unmask]