One common situation when registering a new account with a service (say my.opera.com) is that it requires email confirmation from you to activate that account. This is part of a handshake, where both parties present their credentials and confirm who the other one is. It is also a neat way for the service to make sure that the user has a valid email address (which can be blocked if the user turns out to be a troll or spammer), and an email address is also a universally unique identifier. Two independent parties will never have the same email address, while they could have the same username on different services (I am "jax" here, but on other sites someone else could have taken that username).
Unfortunately, as often as not, the very message meant to confirm that the user is not a spammer will itself end up in the user's spam folder, since the mail program or service has no way of knowing that the email isn't from a spammer. …
This two-way handshake isn't working, and it may seem that the spammers of the world have won. However, the client (that is you, or to be exact your browser) knows something the spammer can't easily know: it knows that you have requested an account. If it told the service (in this case my.opera.com) a secret and asked the service to echo it back, the client would know that whoever presents that secret is acting on behalf of the service.
Say the browser sent the secret "My shiny code ring is set to owl"; the service could then send a message with the header
X-HTML5-Token: My shiny code ring is set to owl
The email app could then ask the browser "I got this strange message telling me 'My shiny code ring is set to owl'. Is it spam?", to which the browser can reply, "no, no, it is meant for me". Furthermore the clever browser can, if it wishes, make the sender a contact, linking it with the web service where it sent the secret in the first place.
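This exchange can be sketched in a few lines, assuming a purely hypothetical API in which the mail client can ask the browser about a header value (the class and method names here are invented for illustration; nothing like this exists today):

```python
import hmac

class Browser:
    """Hypothetical browser-side store of secrets handed out to services."""

    def __init__(self):
        self._pending = {}  # service URL -> secret given to that service

    def register_secret(self, service_url, secret):
        self._pending[service_url] = secret

    def is_expected(self, header_value):
        # The mail client passes in the X-HTML5-Token header value;
        # compare in constant time against every outstanding secret.
        return any(hmac.compare_digest(secret, header_value)
                   for secret in self._pending.values())

browser = Browser()
browser.register_secret("https://my.opera.com",
                        "My shiny code ring is set to owl")

# Mail client: "I got this strange message ... is it spam?"
print(browser.is_expected("My shiny code ring is set to owl"))  # True
print(browser.is_expected("Buy cheap watches"))                 # False
```

The constant-time comparison is deliberate: a spam-check API shouldn't itself leak information about the stored secrets.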
This example involved an email client. It is simple in the case where the browser is also an email client. It could also work with web mail if an appropriate microformat were defined, so that the browser could declassify suspected spam without bothering the user. If the email client were fully isolated from the browser, or if the browser didn't speak HTML5, this wouldn't work and the user would have to rummage through the spam folder on his own.
Other web applications wouldn't use multiple protocols and thus wouldn't require multiple clients or roles. What would be required is that the browser can generate a random string of arbitrary length, send such a string when requested to by the server, and store this string until the handshake is completed or until a timeout (in which case the handshake has failed).
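Generating such a string is straightforward today; here is a sketch using Python's standard secrets module (the function name is illustrative):

```python
import secrets

def generate_token(n_bytes=32):
    # Draw n_bytes from the OS cryptographic random source and encode
    # them URL-safely; the result is unpredictable to anyone but this client.
    return secrets.token_urlsafe(n_bytes)

token = generate_token()
```

Each call yields a fresh, independent value, which is exactly the property the handshake needs.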
The one missing piece of infrastructure is a way for the server to tell the browser that it is expecting (and is able to handle) such a secret. This could easily be done by adding a new input type "token" (which would be a less confusing name than "secret").
A token form value would be the converse of a password field. Just as a username and password identify a client to a server, a URL and token identify a service to a client. The same security restrictions for browser handling and DOM access that apply to passwords should apply to tokens.
Strictly speaking the URL would be redundant assuming that the token was long enough and unique.
The client would need to store a record with the last token for a given URL and a time stamp for when that token was submitted.
The service will need to propagate the newest token. In the above example the service would need to email a new confirmation request for each token submitted.
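The client-side bookkeeping could be as small as one record per URL. A sketch, assuming an invented TokenStore class and an arbitrary one-hour timeout:

```python
import time

class TokenStore:
    """Hypothetical client-side record: newest token per service URL."""

    def __init__(self, timeout=3600):   # one hour, an arbitrary choice
        self.timeout = timeout
        self._records = {}              # url -> (token, submitted_at)

    def submit(self, url, token):
        # A new submission replaces the old token; the service is expected
        # to honour only the newest one (e.g. by emailing a fresh request).
        self._records[url] = (token, time.time())

    def lookup(self, url):
        record = self._records.get(url)
        if record is None:
            return None
        token, submitted_at = record
        if time.time() - submitted_at > self.timeout:
            del self._records[url]      # handshake timed out and failed
            return None
        return token
```

Keeping only the last token per URL matches the propagation rule above: older tokens are simply superseded.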
The client will pick the length of the token to send; 32 bytes unencoded sounds like a nice number. It shall not be smaller than 8 bytes and not larger than 4096 bytes unless explicitly allowed by the server. The token shall be as random as possible and shall not contain recognizable substrings (e.g. there shall be no "opera" or "jax" in the string).
The server can further constrain the length by setting the 'maxlength' attribute to any value of at least 8 bytes, including values larger than 4096.
If maxlength is set lower than the smallest acceptable length for the browser, the special "ERR_TOKENLENGTH" token is sent instead and no record is stored.
The values 8, 32, and 4096 are somewhat arbitrarily selected, but they are meant to set predictable lower and upper bounds.
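Putting the three bounds together, the browser's length negotiation might look like this (a sketch only; pick_token and the constant names are illustrative, not part of any spec):

```python
import secrets

MIN_LEN, DEFAULT_LEN, MAX_LEN = 8, 32, 4096   # bytes, the bounds above

def pick_token(maxlength=None):
    # The server may raise the ceiling above 4096 via maxlength;
    # absent maxlength the browser never exceeds MAX_LEN.
    upper = MAX_LEN if maxlength is None else maxlength
    if upper < MIN_LEN:
        # Server demanded fewer than 8 bytes: send the error token
        # instead, and store no record.
        return "ERR_TOKENLENGTH"
    return secrets.token_bytes(min(DEFAULT_LEN, upper))
```

So pick_token() yields the default 32 random bytes, pick_token(16) yields 16, and pick_token(4) yields the error token.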
It is possible to go one better. Passwords or tokens are fine as a way to identify yourself, but they can be intercepted by a third party. Indeed, the token is often intended to be used by a third party; in the example above, the web mail service or email client would know the token being passed on. For some services it would be desirable to have something safer.
Instead of just a static token, the protocol could allow the client to issue a challenge at will that only the true service would be able to answer correctly.
This is the basis of public key encryption where the party wanting to identify itself has two keys, one public and one private. By locking a secret with a public key (which anyone can access) you would need the private key (which only the owner should have) to unlock it. Or you could lock a secret with the private key and anyone with access to the public key could verify that it was indeed locked with the private key.
A challenge is just a token of arbitrary length, only this time the token is locked with the public key. For this to work, the browser would need the URL and type of the public key, and the capability to lock the token with that key. If the server can return the token unlocked, it must have access to the private key.
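The round trip can be illustrated with textbook RSA and deliberately tiny primes (a toy sketch only; a real deployment would use a vetted crypto library with proper key sizes and padding):

```python
import secrets

# Service's key pair: tiny textbook RSA, for illustration only.
p, q = 61, 53
n = p * q                    # public modulus (part of the public key)
e = 17                       # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)          # private exponent (Python 3.8+ modular inverse)

# Browser: pick a random challenge and lock it with the public key.
challenge = secrets.randbelow(n)
locked = pow(challenge, e, n)

# Service: only the holder of the private key can unlock the challenge.
answer = pow(locked, d, n)

print(answer == challenge)   # True: the service proved it holds the key
```

Anyone who intercepts the locked value learns nothing useful; only the holder of the private key can hand the original challenge back.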
The HTML5 standard wouldn't need to define this infrastructure; as long as there is a regular way for the client to upload a random token, the application could do the rest.