GET and POST aren’t verbs

Calling HTTP GET and POST ‘verbs’ is a gross misnomer; they really are URI metadata in disguise.

REST is centered around the idea that we should use the way the web works when we do things on the Web – fair enough – and that REST is the architectural style of the Web. RESTful applications – like HTTP – use POST, GET, PUT and DELETE to CREATE, READ, UPDATE and DELETE resources.

The problem is, this is not how the current Web works. Real verbs can be applied to many nouns, and a single noun can take many verbs: the cat walks, the cat talks, the cat whines, the cat shines. Some combinations make no sense, but a lot do. This is not the case for GET and POST and their friends. In principle, it is possible to apply both GET and POST to a single resource; GET an EMPLOYEE record, POST to the same record.

In practice GET and POST do not often apply to the same resource. POST on the Web is used in HTML forms. A form has a method (GET or POST) and a URI (which points to a resource). Usually, POST forms have unique URI’s; they don’t share them with GETs. Amazon uses artificial keys to make the POST URI’s unique. More surprisingly, they even do the same thing when I GET the URI instead of POST it (which the Firefox Web Developer toolbar supports with a single menu choice). Amazon doesn’t care whether I GET or POST a book to my shopping cart; it (understandably) lands in my shopping cart either way. The same applies to del.icio.us and numerous other – I suspect most – sites.

The Web does not work by applying a small set of verbs to a large number of resources: the resources do all the work. GET and POST aren’t real verbs; they just signal what the URI owner intends to do when the URI is dereferenced; as such, they are URI metadata. The ‘GET’ metadata in an HTML form’s method just informs your browser, and all intermediaries, that this URI is supposed to have no side effects on the server. Of course no browser can know what actually happens when a URI is dereferenced: maybe a document is returned, maybe the missiles are fired. GET and POST are just assertions made by the URI owner about the side effects of the URI.

The ‘old’ Web wouldn’t be one bit different in appearance if both GET and POST weren’t even allowed on the same URI. It’s possible to use them as real verbs, as REST advocates; but the merits of this approach do not derive from the way the current Web works.

Don’t get me wrong: GET and POST are brilliant – especially GET. Sending the GET and POST metadata to the server along with the URI is what makes the Web tick: it allows intermediaries to do smart caching stuff, based on that metadata. But however good they are – GET and POST are not verbs, they are metadata.
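To see the point about intermediaries in code, here is a minimal sketch in Python – with a made-up URI and a toy in-memory cache, not any real proxy – of an intermediary that treats the method purely as metadata: responses to GET may be served from cache because the URI owner asserts there are no side effects, responses to POST may not.

# Toy intermediary: the method is only an assertion by the URI owner;
# the cache cannot verify it, it just trusts it.
cache = {}

def handle(method, uri, fetch):
    """fetch() stands for whatever actually dereferences the URI at the origin."""
    if method == "GET":
        # 'GET' asserts: no side effects, so a cached copy is as good as a fresh one
        if uri not in cache:
            cache[uri] = fetch()
        return cache[uri]
    return fetch()          # POST carries no such assertion: always go to the origin

# Both methods may well do the same thing at the origin (the shopping-cart
# example above), but only GET is cached here.
print(handle("GET", "http://example.com/cart?add=123", lambda: "cart page"))
print(handle("POST", "http://example.com/cart?add=123", lambda: "cart page"))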

The #referent Convention

Update: I learned from the TAG list that Dan Connolly already proposed using #it or #this for the same purpose, and Tim Berners-Lee proposed using #i to refer to oneself in a similar way. My idea therefore was not very original, and since I regularly read the TAG list and similar sources, it’s even possible I read the idea somewhere and (much) later thought of it as one of my own – though if this happened it was certainly unintentional.

There is a very simple solution to the entire hash-versus-slash debate: whenever you would want to identify anything with a hashless URI, suffix it with #referent. The meaning of x#referent is: I identify whatever x is about. And x is simply an information resource (about x#referent).

The httpRange-14 debate is about what hashless URI’s (without a #) refer to: can they refer only to documents (information resources), or to anything, e.g. persons or cars or concepts? Is it meaningful to say https://www.marcdegraauw.com/marcdegraauw/ refers to ‘Marc de Graauw’? Or does it now identify both a web page and a person, and is this meaningful and/or desirable?

Hash URI’s aren’t thought to be much of a problem in this respect. They have some drawbacks, however. It may be desirable to retrieve an entire information resource which describes what the referent of the URI is. And putting all identifiers in one file makes that file large. Norman Walsh did this: http://norman.walsh.name/knows/who#norman-walsh identifies Norman Walsh, and the ‘who’ file got big. So Norm switched to hashless URI’s: http://norman.walsh.name/knows/who/norman-walsh identifies Norman Walsh. The httpRange-14 solution requires Norm to answer a GET on this URI with a 303 redirect, in this case to http://norman.walsh.name/knows/who/norman-walsh.html, which does not identify Norman, but simply is an information resource.
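For illustration, a minimal sketch with Python’s standard http.client of what that arrangement looks like on the wire – assuming the server still answers as described here:

import http.client

# GET the hashless person URI without following redirects
conn = http.client.HTTPConnection("norman.walsh.name")
conn.request("GET", "/knows/who/norman-walsh")
resp = conn.getresponse()
print(resp.status)                    # expected: 303 (See Other)
print(resp.getheader("Location"))     # e.g. the .../norman-walsh.html information resource
conn.close()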

If we use the #referent convention, I can say: https://www.marcdegraauw.com/marcdegraauw.html#referent identifies me. And https://www.marcdegraauw.com/marcdegraauw.html is simply an information resource, which is about me. Problem solved.
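In RDF terms, a minimal sketch of the convention could look like this – it uses rdflib, and the foaf:primaryTopic triple is just one plausible way to relate the page to its referent, not something the convention itself prescribes:

from rdflib import Graph, URIRef
from rdflib.namespace import FOAF, RDF

page = URIRef("https://www.marcdegraauw.com/marcdegraauw.html")
referent = URIRef("https://www.marcdegraauw.com/marcdegraauw.html#referent")

g = Graph()
g.add((referent, RDF.type, FOAF.Person))       # the person the page is about
g.add((page, RDF.type, FOAF.Document))         # the information resource itself
g.add((page, FOAF.primaryTopic, referent))     # the page is about its referent

print(g.serialize(format="turtle"))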

If I put https://www.marcdegraauw.com/marcdegraauw.html#referent in a browser, I will simply get the entire https://www.marcdegraauw.com/marcdegraauw.html resource, which is a human-readable resource about https://www.marcdegraauw.com/marcdegraauw.html#referent. Semantic Web software which understands the #referent convention will know https://www.marcdegraauw.com/marcdegraauw.html#referent refers to a non-information resource (except when web pages are about other web pages) and https://www.marcdegraauw.com/marcdegraauw.html is simply an information resource. Chances of collision of the #referent fragment identifier are very small (Semantic Web jokers who do this intentionally aside) and even in the case of collision with existing #referent fragment identifiers the collision seems pretty harmless. The only thing the #referent convention does not solve is all the existing hashless URI’s out there which (are purported to) identify non-information resources.
In Semantic Web architecture, there is no need ever for hashless URI’s. The #referent convention is easier, more explicit about what is meant, and retrieves a nice descriptive human-readable information resource in a browser, along with all necessary RDF metadata for Semantic Web applications.

Validation Considered Essential

I just ran into a disaster scenario which Mark Baker recently described as the way things should be: a new message exchange without schema validation. He writes: “If the message can be understood, then it should be processed” and in a comment “I say we just junk the practice of only processing ‘valid’ documents … and let the determination of obvious constraints … be done by the software responsible for processing that value.” I’ll show this is unworkable, undesirable and impossible (in that order).

I’ve got an application out there which reads XML sent to my customer. The XML format is terrible, and old – it predates XML Schema. So there is no schema, just an Excel file with “AN…10” style descriptions and value lists. Those rules are built into my software, and that works pretty well – my code does the validation, and the incoming files are always processed fine.

Now a second party is going to send XML in the same format. Since there is no schema, we started testing in the obvious way – entering data on their website, exporting to XML, sending it over, importing it into my app, seeing what goes wrong, fixing, starting over. We have had an awful lot of those cycles so far, and no error-proof XML yet. Given a common schema, we could have had a decent start in the first cycle. Check unworkable.

So I wrote a RelaxNG schema for the XML. It turned out there were hidden errors which my software did not notice. For instance, there is a code field, and if it has some specific value, such as ‘D7’, my customer must be alerted immediately. My code checks for ‘D7’ and alerts the users if it comes in. The new sender sent ‘d7’ instead. My software did not see the ‘D7’ code and gave no signal. I wouldn’t have caught this so easily without the schema – I would have caught it in the final test rounds, but it is so much easier to catch those errors early, which schema’s can do. Check undesirable.
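For illustration, a minimal sketch of that kind of check with lxml’s RelaxNG support – the file names and the error text are made up, this is not the actual schema or code from the project:

from lxml import etree

relaxng = etree.RelaxNG(etree.parse("message.rng"))   # the hand-written schema
doc = etree.parse("incoming.xml")                     # the file the sender sent

if relaxng.validate(doc):
    print("valid - hand the file over to the application")
else:
    # e.g. the error log points at the 'd7' value the enumeration
    # in the schema ('D1', 'D7', ...) does not allow
    for error in relaxng.error_log:
        print(error.message)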

Next look at an element with value ‘01022007’. According to Mark, if it can be understood, it should be processed. And indeed I can enter ‘Feb 1, 2007’ in the database. Or did the programmer serialize the American MM/DD/YYYY format as MMDDYYYY, and is it ‘Jan 2, 2007’? Look at the value ‘100,001’ – perfectly understandable, one hundred thousand and one – or is this one hundred and one thousandth, with a decimal comma? Questions like that may not be common in an American context, but in Europe they arise all the time – on the continent we use DD-MM-YYYY dates and decimal comma’s, but given the amount of American software, MM/DD/YYYY dates and decimal points occur everywhere. The point is the values can apparently be understood, but aren’t in fact. One cannot catch those errors with processing logic, because the values are perfectly acceptable to the software. Check impossible.
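A small illustration of why such values parse fine either way – both readings below succeed, and nothing in the data itself says which one was intended:

from datetime import datetime

raw = "01022007"
as_european = datetime.strptime(raw, "%d%m%Y")    # 1 February 2007
as_american = datetime.strptime(raw, "%m%d%Y")    # 2 January 2007
print(as_european.date(), as_american.date())

amount = "100,001"
with_thousands_separator = float(amount.replace(",", ""))    # 100001.0
with_decimal_comma = float(amount.replace(",", "."))         # 100.001
print(with_thousands_separator, with_decimal_comma)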

In exchanges, making agreements and checking whether those agreements are met is essential. Schema’s wouldn’t always catch the third kind of error either, but they provide a way to avoid the misinterpretations. The schema is the common agreement – unless one prefers to fall back to prose – and once we have it, not using it for validation seems pointless. Mark Baker makes some good points on over-restrictive validation rules, but throws out the baby with the bathwater.

The trouble with PSI’s

Published Subject Identifiers face a couple of serious problems, as do all URI-based identifier schemes. A recent post by Lars Marius Garshol reminded me of the – pardon the pun – subject. I was pretty occupied with PSI’s some time ago; maybe now is the moment to write down some of my reservations about them. PSI’s are URI’s which uniquely identify something, which is – ideally – described on the web page you see when you browse to the URI – read Lars’ post for a real introduction.

First, PSI’s solve the wrong problem. The idea of using some scheme to uniquely identify things is hardly novel. Any larger collection of names or id’s for whatever sooner or later faces the problem of telling whether two names point to the same whatever or not. So we’ve got ISBN numbers for books, codes for asteroids and social security numbers for people. They are supposed to designate a single individual or single concept. The problem with all those identifier schemes is not the idea, but the fact that they get polluted over time. Through software glitches and human error and – mostly – intentional fraud, a single social security number may point to two or more persons, and two or more social security numbers may point to a single person. I can’t speak for non-Dutch realms, but the problem here is very real, and I assume, given human nature, it is not much different elsewhere. So the real problem is not inventing an identification scheme, the problem is avoiding pollution. This may seem like unfair criticism – no, PSI’s don’t solve famine and war either – but it does set PSI’s in the right light – they are not a panacea for identity problems.

Second, PSI’s are supposed to help identification both for computers – they will compare URI’s, and conclude two things are the same if their URI’s are equivalent – and for humans, through an associated web page. The trouble is what to put on the web page. Let’s make a PSI for me, using my social security number: http://www.sofinummer.nl/0123.456.789. Now what can we put on the web page? We could say “This page identifies the person with the Dutch social security number 0123.456.789” – but that is hardly additional information. If we elaborate – “This page identifies Marc de Graauw, the son of Joop de Graauw and Mieke Hoendervangers, who was born on the 6th of March 1961 in Tilburg, the Netherlands, the person with the Dutch social security number 0123.456.789” – we get into trouble. I could find out, for instance, that I was not actually born in Tilburg, but that my parents for some reason falsely reported this as my birthplace to the authorities. Now even if this were the case, 0123.456.789 would still be my social security number, and it would identify me, not someone else. But if we look at the page, we have to conclude http://www.sofinummer.nl/0123.456.789 identifies nobody, since nobody fits all the criteria listed. The same goes for any other fact we could list – I could find out I was born on another day, to other parents et cetera. The only truly reliable information, the one piece we cannot change, is “This page identifies the person who has been given the Dutch social security number 0123.456.789 by the Dutch Tax Authority”, which is hardly any information at all beyond the social security number itself. All we’ve achieved is prepending my social security number with http://www.sofinummer.nl/, and this simple addition won’t solve any real-world problem. The problem is highlighted by Lars’ example of a PSI, the date page for my birthday, http://psi.semagia.com/iso8601/1961-03-06. This page has no information whatsoever which could not be conveyed with a simple standardized date format, such as ‘1961-03-06’ in ISO 8601.

Third, in a real-world scenario, establishing identity statements across concepts from diverse ontologies is the problem to solve. Getting everybody to use the same single identifier for a concept is not feasible. Take an example such as Chimezie Ogbuji’s work on a Problem-Oriented Medical Record Ontology, where it says about ‘Person’:

cpr:person = foaf:Person and galen:Person and rim:EntityPerson and dol:rational-agent

In FOAF a person is defined with: ”Something is a foaf:Person if it is a person. We don’t nitpic about whether they’re alive, dead, real or imaginary.”

HL7’s RIM defines Person as:

“A subtype of LivingSubject representing a human being.”

and defines LivingSubject as:
“A subtype of Entity representing an organism or complex animal, alive or not.”

In FOAF persons can be imaginary and ‘not real’; in the RIM they cannot. Now Chimezie wisely uses ‘and’, which is an intersection in OWL, so his cpr:Person does not include imaginary persons. And for patient records, which are his concern, it doesn’t matter: we don’t treat Donald Duck for bird flu, so for medical care the entire problem is theoretical. But what about PSI’s: could we ever reconcile the concepts behind foaf:Person and rim:EntityPerson? Probably not: there are a lot of contexts where imaginary persons make sense. So if we make two PSI’s, foaf:Person and rim:EntityPerson, our subjects won’t merge, even when – in a certain context such as medical care – they should. Or we could forbid the use of foaf:Person in the medical realm, but this seems too harsh: the FOAF approach to personal information is certainly useful in medical care.

Identity of concepts is context-dependent. The definitions behind the concepts don’t matter much. Trying to find a universal definition for any complex concept such as ‘person’ will only lead to endless semantic war. Usually natural language words will do for a definition (but you do need disambiguation for homonyms). Way more important than trying to establish a single new id system with new definitions are ways to make sensible context-dependent equivalences between existing id systems.
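A minimal sketch of what such context-dependent equivalences might look like in code – the contexts and the RIM identifier are made up for illustration, only the FOAF URI is real:

# Per-context groups of identifiers that may be treated as the same subject.
EQUIVALENCES = {
    "medical-record": [
        # for patient records imaginary persons are out of scope anyway,
        # so these two can safely be merged in this context
        {"http://xmlns.com/foaf/0.1/Person", "urn:example:hl7-rim:EntityPerson"},
    ],
    "fiction-catalogue": [
        # here foaf:Person stays on its own: imaginary persons make sense
        {"http://xmlns.com/foaf/0.1/Person"},
    ],
}

def same_subject(uri_a, uri_b, context):
    """True if the two identifiers may be merged in the given context."""
    return any(uri_a in group and uri_b in group
               for group in EQUIVALENCES.get(context, []))

print(same_subject("http://xmlns.com/foaf/0.1/Person",
                   "urn:example:hl7-rim:EntityPerson", "medical-record"))     # True
print(same_subject("http://xmlns.com/foaf/0.1/Person",
                   "urn:example:hl7-rim:EntityPerson", "fiction-catalogue"))  # False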


Validate for Machines, not Humans

Mark Baker misses an important distinction in “Validation Considered Harmful” when he writes:

“Today’s sacred cow is document validation, such as is performed by technologies such as DTDs, and more recently XML Schema and RelaxNG.

Surprisingly though, we’re not picking on any one particular validation technology. XML Schema has been getting its fair share of bad press, and rightly so, but for different reasons than we’re going to talk about here. We believe that virtually all forms of validation, as commonly practiced, are harmful; an anathema to use at Web scale.”

Dare Obasanjo replied in “Versioning does not make validation irrelevant”:

“Let’s say we have a purchase order format which in v1 has an element which can have a value of “U.S. dollars” or “Canadian dollars”, then in v2 we now support any valid currency. What happens if a v2 document is sent to a v1 client? Is it a good idea for such a client to muddle along even though it can’t handle the specified currency format?”

to which Mark replied:

“No, of course not. As I say later in the post; ‘rule of thumb for software is to defer checking extension fields or values until you can’t any longer'”

With software the most important point is whether the data sent ends up with a human, or ends up in software – either to be stored in a database for possible later retrieval, or to be used to generate a reply message without human intervention. Humans can make sense of unexpected data: when they see “Euros” where “EUR” was expected, they’ll understand. Validating as little as possible makes sense there. When software does all the processing, stricter validation is necessary – trying to make software ‘intelligent’ by enabling it to process (not just store, but process) as-yet-unknown format deviations is a road to sure disaster. So in the latter case stricter validation makes a lot of sense – we accept “EUR” and “USD”, not “Euros”. And if we do that, the best thing for two parties who exchange anything is to make those agreements explicit in a schema. If we “defer checking extension fields or values until you can’t any longer”, we end up with some application’s error message. You don’t want to return that to the partner who sent you a message – you’ll want to return “Your message does not validate against our agreed-upon schema”, so they know what to fix (though sometimes you’ll want your own people to look at it first, depending on the business case).
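A minimal sketch of that difference – the field name and the code list are made up, the point is that the error refers back to the agreement rather than to some downstream application failure:

AGREED_CURRENCIES = {"EUR", "USD"}   # the explicit agreement, as captured in the schema

def check_currency(value):
    """Return None if the value honours the agreement, else a message the sender can act on."""
    if value not in AGREED_CURRENCIES:
        return ("Your message does not validate against our agreed-upon schema: "
                f"currency must be one of {sorted(AGREED_CURRENCIES)}, got '{value}'")
    return None

print(check_currency("EUR"))     # None: fine, process it
print(check_currency("Euros"))   # a human would understand this; software should not guess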

Of course one should not include unnecessary constraints in schema’s – but whether humans or machines will process the message is central in deciding what to validate and what not.
Another point is what to validate – values in content, or structure – and Uche Ogbuji realistically adds:

“Most forms of XML validation do us disservice by making us nit-pick every detail of what we can live with, rather than letting us make brief declarations of what we cannot live without.”

Yes, XML Schema and others make structural requirements which impose unnecessary constraints. Unexpected elements often can be ignored, and this enhances flexibility.

The Semantics of Addresses

There has been a lot of discussion over the past 10-something years on URI’s: are they names or addresses? However, there does not appear to have been much investigation into the semantics of addresses. This is important, since while there are several important theories on the semantics of names (Frege, Russell, Kripke/Donnellan/Putnam et al.), there are few classical accounts of the semantics of addresses. A shot.

What are addresses? Of course, first come the standard postal addresses we’re all accustomed to:

Tate Modern
Bankside
London SE1 9TG
England

Other addresses, in a broad sense, could be:

52°22’08.07” N 4°52’53.05” E (The dining table on my roof terrace, in case you ever want to drop by. I suggest however, for the outdoors dining table, to come in late spring or summer.)

e2, g4, a8 etc. on a chess board

The White House (if further unspecified, almost anyone would assume the residence of the President of the United States)

(3, 6) (in some x, y coordinate system)

Room 106 (if we are together in some building)

//Myserver/Theatre/Antigone.html

128.30.52.47

Addresses are a lot like names – they are words, or phrases, which point to things in the real world. They enable us to identify things, and to refer to things – like names. ‘I just went to the van Gogh Museum‘ – ‘I was in the Paulus Potterstraat 7 in Amsterdam‘ – pretty similar, isn’t it?
So what makes addresses different from names, semantically? The first thing which springs to mind is that ordinary names are opaque, and addresses are not. Addresses contain a system of directions, often, but not always, hierarchical. In other words: there is information in the parts of addresses, whereas the parts of names do not contain useful information. From my postal address you can derive the city where I live, the country, the street. From chess notations and (geo-)coordinates one can derive the position on two (or more) axes. So addresses contain useful information within them, and names for the most part do not.

This is not completely true – names do contain some informative parts – from ‘Marc de Graauw’ you can derive that I belong to the ‘de Graauw’ family, and am member ‘Marc’ of it, but this does not function the way addresses do – it is not: go to the collection ‘de Graauw’ and pick member ‘Marc’. On a side note, though ‘de Graauw’ is an uncommon last name even in the Netherlands, I know at least one other ‘Marc de Graauw’ exists, so my name is not unique (the situation could have been worse though). I don’t even know whether my namesake is part of my extended family or not, so ‘looking up’ the ‘de Graauw’ family is not even an option for me.

Unique names or identifiers are usually even more opaque than natural names – my social security number does identify me uniquely in the Dutch social security system, but nothing can be derived from its parts other than a very vague indication of when it was established. So even when names contain some information within their parts, it is not really useful in the sense that it doesn’t establish much – not part of the location, or identity, or reference. The parts of addresses do function as partial locators or identifiers, the parts of names provide anecdotal information at best.

Names and addresses are fundamentally different when it comes to opacity. What else? Ordinary names – certainly unique names – denote unmediated: they point directly to an individual. Addresses denote mediated: they use a system of directions to move step-by-step to their endpoint. Addressing systems are set up in such a way that they provide a drilling-down system to further and further refine a region in a space until a unique location is denoted. Addresses are usually unique in their context; names sometimes are, and sometimes not. So, e4 denotes a unique square on a chess board, and my postal address a unique dwelling on Earth. The name ‘Amsterdam’ does denote a unique city if the context is the Netherlands, but my name does not denote a unique individual. So addresses pertain to a certain space, where a certain system of directives applies.

Addresses do not denote things, they denote locations. My postal address does not denote my specific house: if we tear it down and build another, the address does not change. e4 does not denote the pawn which stands there, it denotes a square on a chess board, whatever piece is there. So addresses do not denote things, but slots for things. Addresses uniquely denote locations, in a non-opaque, mediated way. If we use ‘name’ in a broad sense, where names can be non-opaque, we could say: addresses are unique names for locations in a certain space.

             Names        Addresses
Can          identify     identify
Can          refer        refer
Denote       directly     mediated
Point into   the world    a space
Denote       things       slots
Are          opaque       not opaque

Where does this leave us with URI’s? It’s quite clear URL’s (locator URI’s) are addresses. Looking at a URL like http://www.w3.org/2001/tag/doc/URNsAndRegistries-50.html#loc_independent, this tells us a lot:

1) this is the http part of URI space we’re looking at,

2) this is on host www.w3.org

3) the path (on this host) to the resource is 2001/tag/doc/URNsAndRegistries-50.html

4) and within this, I’m pointing to fragment #loc_independent

So URL’s fulfill all conditions of addresses. They are not opaque. Their parts contain useful information. Their parts – scheme, authority, path etc. – provide steps to the URL’s destination – the resource it points to. They identify, they refer, like names. No, URL’s are not addresses of files on file systems on computers, not addresses in this naive sense. But URL’s are addresses in URI space. HTTP URI’s are names of locations in HTTP space. Semantically, URL’s are addresses – at least. Whether URL’s can be names too is another question.
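The same decomposition in code, with Python’s standard urlparse – the parts are directly recoverable, which is exactly what makes the URL address-like rather than opaque:

from urllib.parse import urlparse

url = "http://www.w3.org/2001/tag/doc/URNsAndRegistries-50.html#loc_independent"
parts = urlparse(url)
print(parts.scheme)     # 'http'                                    - which part of URI space
print(parts.netloc)     # 'www.w3.org'                               - the host
print(parts.path)       # '/2001/tag/doc/URNsAndRegistries-50.html'  - the path on that host
print(parts.fragment)   # 'loc_independent'                          - the fragment within it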

Do we have to know we know to know?

John Cowan wrote ‘Knowing knowledge‘ a while ago, about what it means to know something. His definition (derived from Nozick) is:

‘The following four rules explain what it is to know something. X knows the proposition p if and only if:

  1. X believes p;
  2. p is true;
  3. if p weren’t true, X wouldn’t believe it;
  4. if p were true, X would believe it.’

This raises an interesting question. A common position of religious people (or at least religious philosophers) is: ‘I believe in the existence of God, but I cannot know whether God exists’. God’s existence is a matter of faith, not proof. I don’t hold such a position myself, but would be very reluctant to denounce it on purely epistemological grounds.

Now suppose, for the sake of the argument, that God does in fact exist, and that the religious philosopher, X, would not have believed in the existence of God if God had not existed (quite coherent, since typically in such views nothing would have existed without God, so no one would have believed anything). Our philosopher’s belief would then satisfy the above four criteria. Yet, could we say ‘X knows p’, when X himself assures us he does not know whether p is true? In other words: doesn’t knowing something presuppose the knower would be willing to assert knowing his or her knowledge?

More Compatibility Flavours

See also my previous posts on this issue.

So we’ve got backward and forward compatibility, and syntactical and semantical compatibility. (Quick recap: backward compatibility is the ability to accept data from older applications, forward compatibility the ability to accept data from newer applications. Syntactical compatibility is the ability to successfully (i.e. without raising errors) accept data from other version applications, semantical compatibility the ability to understand data from other version applications.)

So what else is there?

Noah Mendelsohn made clear to me one has to distinguish language and application compatibility.

Let’s see what this means when looking at syntactical and semantical compatibility. A language L2 is syntactically backward compatible with an earlier language L1 if and only if every L1 document is also an L2 document. Or to rephrase it: if and only if an application built to accept L2 documents also accepts L1 documents. Or (the way I like it): if and only if the set of L1 documents is a subset of the set of L2 documents:

L1 is a subset of L2

And of course L2 is forwards compatible with respect to L1 if, and only if, every L2 document is also an L1 document:

L2 is a subset of L1

This makes it quite clear that if L2 is both backward and forward compatible with respect to L1, both of the above diagrams apply, so L2 = L1:

L2 is L1

But this flies in the face of accepted wisdom! Of course both backward and forward compatibility is possible! HTML is designed to be forward compatible, is it not, through the mechanism of ‘ignore unknown tags’. And two HTML versions can of course be backward compatible, if HTMLn+1 supports everything HTMLn does. Yet the above diagrams speak for themselves as well. The distinction between language and application compatibility offers the solution. The diagrams are only about syntactical language compatibility. The HTML forward compatibility mechanism is about applications as well: HTML instructs browsers to ignore unknown markup. So the HTML compatibility mechanism is about browser, ergo application, behavior.

HTML tells HTMLn browsers to accept all L2 (HTMLn+1) markup (and ignore it), and to accept all L1 (HTMLn) markup, and – not ignore, but – process it. (“If a user agent encounters an element it does not recognize, it should try to render the element’s content.” – HTML 4.01) Now this sounds familiar – that’s syntactical versus semantical compatibility, isn’t it? So HTML makes forward compatibility possible by instructing the application – the browser – to syntactically accept future versions, but semantically ignore them. The n-version browser renders n+1 element content, but has no idea what the tags around it mean (render bold and blue? render indigo and italic? render in reverse?).

Summing up: there is no such thing as two (different) languages L2 and L1 which are both back- and forward compatible. There is such a thing as two applications A1 (built for language L1) and A2 (built for L2) which are both back- and forward compatible: A1 must ignore unknown L2 syntax, and A2 must accept and process all L1 syntax and semantics:

A2 back- and forward compatible wrt A1
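A toy sketch of this summing-up – the tag names are made up for illustration – where A1 syntactically accepts L2 markup but ignores the unknown parts, and A2 processes everything in L1:

L1_TAGS = {"p", "b", "i"}
L2_TAGS = {"p", "b", "i", "blink"}       # L2 adds a tag, so L2 != L1

def a1_render(tags):                     # built for L1: must ignore unknown markup
    return [t for t in tags if t in L1_TAGS]

def a2_render(tags):                     # built for L2: knows everything in L1 as well
    return [t for t in tags if t in L2_TAGS]

doc_v2 = ["p", "blink", "b"]
print(a1_render(doc_v2))       # ['p', 'b']: accepted syntactically, 'blink' semantics ignored
print(a2_render(["p", "b"]))   # ['p', 'b']: an L1 document is processed in full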

(Yes, and to be complete, this is a rewrite of an email list submission of mine, but vastly improved through discussions with Noah Mendelsohn and David Orchard, who may or may not agree with what I’ve said here…)

The URI Identity Paradox

Norman Walsh wrote: “(some) fall into the trap of thinking that an “http” URI is somehow an address and not a name”. It’s an opinion expressed more often, for instance in the TAG Finding “URNs, Namespaces and Registries”, where it says: “http: URIs are not locations”. However, for everyone who believes an http URI is not an address but instead a name, there is a paradox to solve.

Suppose I’ve copied the ‘Extensible Markup Language (XML) 1.0’ document from www.w3.org to my own website:
https://www.marcdegraauw.com/REC-xml-20060816

Now is the following identity statement true?

http://www.w3.org/TR/2006/REC-xml-20060816 is https://www.marcdegraauw.com/REC-xml-20060816

Certainly not, of course. If you and I were looking at https://www.marcdegraauw.com/REC-xml-20060816, and some paragraph struck you as strange, you might very well say: ‘I don’t trust https://www.marcdegraauw.com/REC-xml-20060816, I want to see http://www.w3.org/TR/2006/REC-xml-20060816 instead’. That’s a perfectly sensible remark. So of course the identity statement is not true: the document at www.w3.org is from an authoritative source. It’s the ultimate point of reference for any questions on the Extensible Markup Language (XML) 1.0. The copy at www.marcdegraauw.com is – at best – a copy. Even if the documents which are retrieved from the URI’s are really the same character-for-character, they carry a very different weight. Next time, you would consult http://www.w3.org/TR/2006/REC-xml-20060816, not https://www.marcdegraauw.com/REC-xml-20060816.

So for http URI’s which retrieve something over the web, two URI’s in different domains represent different information resources. Let’s take a look at names next. I might set up a vocabulary of names of specifications of languages (in a broad sense): Dutch, English, Esperanto, Sindarin, C, Python, XML. In line with current fashion I would use URI’s for the names of those languages, and https://www.marcdegraauw.com/REC-xml-20060816 would be the name for XML. If the W3C had made a similar vocabulary of languages, it would probably use “http://www.w3.org/TR/2006/REC-xml-20060816” as the name for XML. And for names the identity statement

http://www.w3.org/TR/2006/REC-xml-20060816 is https://www.marcdegraauw.com/REC-xml-20060816

is simply true: both expressions are names for XML. So the statement is as true as classical examples such as “The morning star is the evening star” or “Samuel Clemens is Mark Twain”. This shows we’ve introduced a synonym: http://www.w3.org/TR/2006/REC-xml-20060816 behaves differently when used to represent a (retrievable) information resource and when used as a name. In themselves, synonyms are not necessarily a problem (though I’d maintain they are a nuisance at all times). The problem can be solved when one knows which class a URI belongs to. If I choose to denote myself with https://www.marcdegraauw.com/, I can simply say in which sense I’m using https://www.marcdegraauw.com/:

https://www.marcdegraauw.com/ is of type foaf:Person

or

https://www.marcdegraauw.com/ is of type web:document

In the first sense, https://www.marcdegraauw.com/ may have curly hair and be 45 years of age and have three sons; in the second sense it may be retrieved as a sequence of bits over the web. Now, when we try to apply the same solution to the XML example, a paradox emerges. Of course we can talk about http://www.w3.org/TR/2006/REC-xml-20060816 as a name and qualify it: “http://www.w3.org/TR/2006/REC-xml-20060816 is of type spec:lang” or whatever. This is the sense in which the identity statement is true: http://www.w3.org/TR/2006/REC-xml-20060816 is a name for a language specification, and https://www.marcdegraauw.com/REC-xml-20060816 is another name for the same specification. Now try to come up with a proper classification for http://www.w3.org/TR/2006/REC-xml-20060816 for the sense where the identity statement is not true, and find a classification which does not secretly introduce the notion of address. I say it cannot be done. One can classify as “type address” or “type location” et cetera, but every classification which allows the identity statement to be false carries the notion of “address” within it. Whoever maintains that URI’s are not addresses or locations will have to admit the identity statement is both true and false at the same time.

URI’s are addresses or locations (though not in the simplistic sense of being files in directories under a web root on some computer on the Internet). And when URI’s are used as names as well, every information resource which is named (not addressed) with its own URI is a synonym of the URI used as an address of itself. The URI-as-name and the URI-as-address will behave differently and have different inferential consequences: for the URI-as-address, cross-domain identity statements will never be true; for the URI-as-name they may be true or may be false. If you want to avoid such synonyms, you’ll have to use URI’s such as http://www.w3.org/nameOf/TR/2006/REC-xml-20060816. IMO, it can’t get much uglier. If you accept the synonyms, you’ll have to accept a dual web where every http URI is an address and may be a name of the thing retrieved from this address as well – and those two are not the same.

Update: Norman Walsh has convinced me in private email conversation this post contains several errors. I will post a newer version later.

Syntactical and Semantical Compatibility

In a previous post I summarized some concepts from David Orchard’s W3C TAG Finding ‘Extending and Versioning Languages‘. Now I’ll make things complicated and talk about syntactical and semantical compatibility.

When I send you a message we can distinguish syntactical compatibility and semantical compatibility. We have syntactical compatibility when I send you a message and you can parse it – there is nothing in the content which makes you think: I cannot read this text. Semantical compatibility is about more than just reading: you need to understand the meaning of what I’m sending you. Without syntactical compatibility, semantical compatibility is impossible. With syntactical compatibility, semantical compatibility is not guaranteed; it comes as an extra on top of syntactical compatibility.

Semantical compatibility is kind of the Holy Grail in data exchanges. Whenever two parties exchange data, there is bound to be a moment when they find they haven’t truly understood each other. To give just one example from real life: two parties exchanged data on disabled employees who (temporarily) could not work. (In the Netherlands, by law this involves labour physicians as well as insurance companies.) After exchanging data for quite a while, they found out that when they exchanged a date containing the end of the period of disability, one party sent the last day of disability, while the other expected the first working day. Just one day off, but the consequences can significantly add up when insurance is involved…

There is something funny about the relation between syntactical/semantical compatibility and backward/forward compatibility. Remember, backward compatibility is about your V2 word processor being able to handle V1 documents, or your HTML 8.0 aware browser being able to read HTML 7.0 documents. Now if this new application reads and presents the old document, we expect everything to be exactly as it was in the older application. So an HTML 7.0 <b> tag should render text bold; if the HTML 8.0 browser does not display it this way, we do not consider HTML 8.0 (or the browser) truly backward compatible. In other words, of backward compatible applications we expect both syntactical and semantical backward compatibility: we expect the newer application not just to read the old documents, but to understand the meaning of old documents as well.
Forward compatibility is different. Forward compatibility is the ability of the n-th application to read n+1 documents. So an HTML 7.0 browser, when rendering an HTML 8.0 document, should not crash or show an error, but show the HTML 8.0 document as far as possible. Of course, no one can expect HTML 8.0 tags to be processed by the HTML 7.0 browser, but all HTML 7.0 tags should be displayed as before, and HTML 8.0 tags should be ignored. In other words, of forward compatible applications we expect syntactical, but not semantical, forward compatibility.

This brings to light the key characteristic of forward compatibility: it is the ability to accept unknown syntax, and ignore its semantics. It is reflected in the paradigm: Must-Ignore-Unknown. There is a well-known corollary to this: Must-Understand. Must-Understand flags are constructs which force an application to return an error when it does not understand the thus ‘flagged’ content. Where Must-Ignore-Unknown is a directive which forces the semantics of unknown constructs to be ignored, Must-Understand flags do the reverse: they force the receiver to either understand the semantics (get your meaning) or reject the message.
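A minimal sketch of the two directives – the message format here is made up, the point is only the difference in behaviour:

KNOWN_FIELDS = {"amount", "currency"}

def process(message):
    """message maps field name to (value, must_understand flag)."""
    understood = {}
    for field, (value, must_understand) in message.items():
        if field in KNOWN_FIELDS:
            understood[field] = value
        elif must_understand:
            # Must-Understand: reject the whole message
            raise ValueError(f"must-understand field '{field}' not understood")
        # else: Must-Ignore-Unknown - accept the syntax, ignore the semantics
    return understood

print(process({"amount": (100, False), "newfield": ("x", False)}))   # unknown field ignored
try:
    process({"amount": (100, False), "newfield": ("x", True)})
except ValueError as error:
    print("rejected:", error)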

When we make applications which accept new syntax (to a degree) and ignore their semantics, we make forward compatible applications. Of backward compatibility, we expect it all.