Why Does the IETF Hate Me?

I've been following the discussion on the HTTPbis WG, particularly with respect to the development of HTTP/2.0. I even contributed in a small way to some threads, and had a pull-request applied to the draft spec. I was really getting into it, and enjoying contributing to the betterment of the internet.

Granted, things got a bit boring when a couple of people started getting really pedantic about the number of bits saved when using a particular header compression algorithm (yes, bits), but at least it was a technical discussion about an aspect of the new protocol.

However, on November 13, Mark Nottingham (the working-group chair) announced that HTTP/2.0 will only work for https:// URIs. [Twitter, W3 Archive]

Everything kind of blew up then, especially after Slashdot ran it. The whole conversation got derailed; I won't bother repeating it here. The important bit (to me) was this:

Just to be clear, I'm a browser vendor speaking here, representing my own personal views, but those generally align with the Chromium project. And no, we don't have plans to support HTTP/2.0 in the clear. Firefox developers like Pat have said similar things.

This is bad. James Snell put it well; my (possibly hyperbolic) summary is: Google and Mozilla don't care about me. They want to do what they want to do. What I want doesn't matter.

This comment was a real kicker:

On 11/13/2013 03:09 PM, Karl Dubost wrote:
> (trimming the cc)
> On 13 Nov 2013, at 15:41, Mike Belshe wrote:
>>      c) otherwise actively leveraging plaintext HTTP today for
>>         business or pleasure
> I'm one of this (indeed rare) person who is having a Web site, do
> not have analytics, do not have comments, or anything, do not set
> any cookies of any sort, etc. Plain HTTP works for me.

And plain HTTP/1.1 will continue to work for you, and that's a good, 
fine thing. Your simple site is unlikely to benefit much from the 
latency/multiplexing/etc improvements that HTTP/2 gives. Sites that do 
are more likely to [be] the ones that carry user identity or other info that 
is better to keep secure.  Hence the carrot approach: use TLS if you 
want the fancy bells and whistles from HTTP/2.

The proposal Mark has laid out sounds like a reasonable compromise, and 
I suspect the other networking module peers at Mozilla feel similarly.

In other words: you aren't important. You don't get to use HTTP/2.0. You can keep using HTTP/1.1 until you're important enough to be able to afford the overheads of running HTTPS with properly signed certificates. You don't get to have a faster, more responsive site; you don't get to cut down on bandwidth costs; you don't get to play with the New Big Thing™. We don't hate you; in fact, we don't think of you at all. You are nothing.

Willy Chan's response to James Snell's question just adds to it:

... my default inclination is to tell IPP folks to stick with HTTP/1.X if they only want to support cleartext. If they want HTTP/2, then they should solve the blockers to adopting a secure transport.

In other words: you don't get to play, even if you're a big boy like HP or Apple, because there's a technical difficulty with our proposal. Oh, and by the way, we don't wanna fix it, so we'll phrase it this way to make it your problem.

Aside: here's a pertinent response to the above.

Oh yeah, and then there's this:

On Wed, Nov 13, 2013 at 7:01 PM, Frédéric Kayser wrote:

> This also means HTTP/2 is not for everyone, it's only for big business,
> and you cannot get the speed benefit without some hardware investments.
> It also means that speed consciousness webdesigners will still have to
> continue using the awful CSS sprites trick when their target server is
> still HTTP/1.1 based.
> HTTP/2 sounded like a magical speed promise… that would be quickly
> adopted, but now it just looks like an alternative solely made for the big
> guys.

As far as I've seen, most small businesses get little enough traffic that
they wouldn't notice any difference w.r.t CPU usage.
.. and if it bothers them, they'd use HTTP/1.1 for web stuff, or are
already doing so.

Fortunately, Microsoft cares: "We are one browser vendor who is in support of HTTP 2.0 for HTTP:// URIs. The same is true for our web server." [WG Archive] I don't care why they care, or how much it may or may not be about me personally; but they say they're going to do a thing that will benefit me, and they don't have to do that thing.

One more quote from the discussion:

On Sun, 17 November 2013, at 23:12, Mike Belshe wrote:

> There are a million apps in the app store, and every one of them had to go
> get a cert and keep it up to date.  Why is it harder for the top-1million
> websites to do this?

Because you're not designing for the top-1million websites, you're
designing for everyone including people who think green text on pink
background is pretty and don't want their web site go down every year
because their cert expired.

Yeah! What he said! I'm one of those people!

So in summary thus far: two of the three big browser vendors really don't care about me (as a website owner) at all. They want me to pay more money so I can continue to serve my website, and until I can afford that, I don't matter.


So, I titled this article "Why Does the IETF Hate Me?" and so far I've only really complained about Google/Chrome and Mozilla, although it was Mark (representing the IETF) that started it. Here's something that, yes, came out of Google, but was ratified by the IETF and is now a Proposed Standard with an RFC number and everything:

RFC 7033 WebFinger

WebFinger is used to discover information about people or other entities on the Internet [...]. For a person, the kinds of information that might be discoverable via WebFinger include a personal profile address, identity service, telephone number, or preferred avatar.

In other words, it's the old UNIX finger command, but running over the web. Remember when your email signature included references to your .plan and .project? Remember "finger me for my public key"?

Well, WebFinger is that, again, using the web.
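And the lookup really is that simple: an ordinary GET against a fixed well-known path. As a sketch (the host, account, and helper name are all my own made-up examples; the URI shape follows RFC 7033, section 4.1), building the query URI looks like:

```python
from urllib.parse import urlencode

def webfinger_uri(host, account, rels=()):
    """Build a WebFinger query URI (shape per RFC 7033, section 4.1).

    `host` and `account` here are hypothetical example values;
    `rels` optionally filters which link relations are returned.
    """
    params = [("resource", f"acct:{account}")]
    params += [("rel", r) for r in rels]
    return f"https://{host}/.well-known/webfinger?{urlencode(params)}"

print(webfinger_uri("example.com", "carol@example.com"))
# → https://example.com/.well-known/webfinger?resource=acct%3Acarol%40example.com
```

The server answers with a small JSON "resource descriptor" (JRD) listing the subject and its links: profile page, avatar, identity service, and so on.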

Except that it isn't.

RFC 7033, Section 4, paragraph two, reads:

A WebFinger request is an HTTPS request to a WebFinger resource. A WebFinger resource is a well-known URI using the HTTPS scheme constructed along with the required query target and optional link relation types. WebFinger resources MUST NOT be served with any other URI scheme (such as HTTP).

Wha wha hey!? But the.. I mean.. why the hells not? Yes, I'm very late to the party complaining about this since it's already ratified, but dude, seriously. Yes, a webfinger profile might include authoritative information that a consumer might use to authenticate my identity (?)... I guess (??)... if you absolutely depend on fingering me to discover my "identity service."

But hey, here's an idea: why not just tell those consumers not to implicitly trust anything served over an insecure connection? The same way we do for the entire rest of the web.

Because "they" (and I suspect Google here, but have nothing to substantiate that) have an agenda (to make everything on the web secure) I'm now unable to play with interesting and fun protocols without paying extra money to a) a CA, to sign a certificate for me*, and b) my host, to install the cert for me (and/or upgrade my hosting package to include a https:/:443 option).

Well, you know what? Screw you guys. I don't care about your stupid MUSTs. They're dumb! I'll implement a non-compliant WebFinger service that looks exactly like a compliant one, but doesn't use HTTPS.

Oh wait, I already did. Let's see how well overly-restrictive specs stand up against people just doing what they want. And let's see how that affects the sanctity of standard-defining RFCs, and the authority of the IETF itself.
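For the record, the non-compliant version is trivially small. Here's a sketch (not my actual implementation; stdlib only, with a hypothetical hard-coded account) that serves the same JRD shape a compliant server would, just over plain HTTP:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

# Hypothetical account data; a real service would look this up somewhere.
ACCOUNTS = {
    "acct:me@example.com": {
        "subject": "acct:me@example.com",
        "links": [
            {"rel": "http://webfinger.net/rel/profile-page",
             "href": "http://example.com/me"},
        ],
    },
}

class WebFingerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        url = urlparse(self.path)
        if url.path != "/.well-known/webfinger":
            self.send_error(404)
            return
        resource = parse_qs(url.query).get("resource", [None])[0]
        jrd = ACCOUNTS.get(resource)
        if jrd is None:
            self.send_error(404)  # unknown resource
            return
        body = json.dumps(jrd).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/jrd+json")
        self.send_header("Access-Control-Allow-Origin", "*")
        self.end_headers()
        self.wfile.write(body)

# To serve (blocks forever):
# HTTPServer(("", 8080), WebFingerHandler).serve_forever()
```

Apart from the scheme, that's indistinguishable from what the RFC mandates; the MUST NOT is the only thing it violates.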

* Free Class 1 certificates notwithstanding.

Matthew Kerwin

CC BY-SA 4.0
