The last W3C working group I participated in, Multimodal Interaction (MMI), is at the periphery of the Web and is unlikely to make much of an impact on it in the foreseeable future. However, it has produced a few interesting specs (and a few uninteresting frameworks), one of which I will return to in much greater detail later.
The most obscure one may be InkML. The name might suggest a language for tagging with paint, but it actually describes the set of movements registered by a touch-sensitive tablet or screen, so that the scribbles you make can be processed and enhanced by someone more clever than the tablet driver. Unfortunately this specification is made by a tablet-maker subgroup that, like Schrödinger's cat, is alive or dead depending on your perception, and the spec is progressing at a less than vital speed. …
Coming from a browser background, I found InkML exciting. Phone screens were getting increasingly touchy-feely (haptic, to use the lingo), and it was only a matter of time and cost reductions before the same happened with computer screens. This would allow users to pass on drawings, handwriting, signatures, mouse or finger gestures and so on. With a network event model this could even be done interactively and collaboratively, or, using something like <input type=scribble>, a collection could be uploaded as is. Since then nothing much seems to have happened. The group's last meeting was cancelled due to fear of flu, and the ink seems to be running dry.
I did ask the obvious question, "Why not use SVG?", and there were a number of technical requirements that SVG didn't fulfill. The real reason is probably more like "include SVG processing in tablet drivers? You must be kidding me", but the differences make the InkML spec worth reading (it is mercifully short, but then again, after HTML5 most specs are). It can describe a path with more information than either SVG or Canvas can.
What is the path?
An SVG path could look something like this:
<path d="M 100 100 L 300 100 L 200 300"/>
The same path in InkML would look like:
<trace>100 100,300 100,200 300</trace>
For comparison the Canvas instructions would be:
var p = canvas.getContext("2d"); // assuming canvas is a <canvas> element
p.beginPath();
p.moveTo(100, 100);
p.lineTo(300, 100);
p.lineTo(200, 300);
p.stroke();
Simple paths like this have much the same syntax, with fairly trivial differences. Both markup languages have additional constructs, like Bézier curves in SVG. The InkML additions are path descriptions such as velocity, acceleration, pen angle, position over the plane, rotation and so on.
This kind of additional path data leads to the idea of the enhanced path: data accompanying a path that is not directly related to the more stylistic properties of SVG.
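As a rough sketch of how InkML attaches such data, the spec lets you declare extra channels up front and then interleave their values in each trace sample. The channel names below (F for pen force) follow the spec's conventions, but take the exact values as illustrative:

```xml
<traceFormat>
  <channel name="X" type="decimal"/>
  <channel name="Y" type="decimal"/>
  <channel name="F" type="decimal"/> <!-- pen force/pressure -->
</traceFormat>
<!-- each sample is now X Y F, so this is the triangle path
     from before with a pressure reading per point -->
<trace>100 100 30, 300 100 80, 200 300 55</trace>
```

Nothing in SVG's path data or Canvas's path API has a slot for that third value; you would have to carry it out of band.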
Now web standards are finally getting around to supporting geolocation, but what about geopaths? That is, a connected collection of locations (strictly speaking a location doesn't have to be a point, but in an HTML5 context it would be) with attached information like speed, angle, or heartbeats per second. How could such applied paths be encoded without obscuring the fact that they are paths describing a certain shape?
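There is no standard for this, but to make the idea concrete, here is one hypothetical shape such a geopath could take as plain data, with one derived quantity (per-segment speed) computed from it. The structure and field names are my own invention, not any spec's:

```javascript
// Hypothetical geopath: timestamped locations plus extra per-point
// data (here: heart rate). Coordinates are made up for illustration.
var geopath = [
  { lat: 40.7128, lng: -74.0060, time: 0,  bpm: 72 },
  { lat: 40.7138, lng: -74.0050, time: 30, bpm: 95 },
  { lat: 40.7150, lng: -74.0040, time: 60, bpm: 110 }
];

// Great-circle distance between two points (haversine), in metres.
function distance(a, b) {
  var R = 6371000, rad = Math.PI / 180;
  var dLat = (b.lat - a.lat) * rad, dLng = (b.lng - a.lng) * rad;
  var h = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(a.lat * rad) * Math.cos(b.lat * rad) *
          Math.sin(dLng / 2) * Math.sin(dLng / 2);
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Speed (m/s) over each segment, derived from positions and times —
// the kind of enhanced-path data InkML carries for pen strokes.
function segmentSpeeds(path) {
  var speeds = [];
  for (var i = 1; i < path.length; i++) {
    speeds.push(distance(path[i - 1], path[i]) /
                (path[i].time - path[i - 1].time));
  }
  return speeds;
}
```

The interesting question is exactly the one above: should the shape (the coordinate list) and the applied data (time, heart rate) live in one structure like this, or be declared separately the way InkML's channels are?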
SVG doesn't support an absolute, Earth-based coordinate system, like a path from New York to London. Should it? And what about paths through time?