HTTP/2 changes the way HTTP traffic flows over the web.
It changes how TCP connections are established and maintained, how requests and responses are correlated, and how metadata and payload bytes are encapsulated.
One of the driving ideals was that the semantics of HTTP (which have remained essentially unmodified for more than 25 years) would not change. Requests and responses would work the same way and carry the same information; headers and status codes would keep the same meanings; etc.
Ideally this would mean the HTTP/2 specification would update and/or replace the HTTP/1.1 message syntax and routing specification (RFC 7230) but not have any impact on semantics and content (RFC 7231, 7232, 7233, 7234, 7235, etc.). From a web application’s point of view it should make no difference whether a particular request-response transaction is transported over HTTP/1.1 or HTTP/2.
However, as with all things, there are exceptions.
By changing network behaviour, HTTP/2 changes the way HTTP messages should be formulated and delivered.
HTTP/2 makes better use of the TCP/IP protocol stack. It uses a single, long-lived connection, so connection setup costs, TCP slow start, Nagling and the like are all but solved; and it supports interleaved messages on the one TCP connection, so head-of-line blocking at the HTTP layer is gone. As a result, several application-level strategies and hacks are no longer required. In fact, they may even be detrimental to efficient use of HTTP/2.
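To make the head-of-line point concrete, here is a toy model (not real HTTP/2 framing – the sizes, the unit “cost” per byte, and the simple round-robin scheduler are invented for illustration) comparing back-to-back delivery on one connection with frame-interleaved delivery:

```python
def serialized_finish_times(sizes):
    # HTTP/1.1 on one connection: responses go out back-to-back,
    # so a large response delays everything queued behind it.
    t, finish = 0, []
    for size in sizes:
        t += size
        finish.append(t)
    return finish

def interleaved_finish_times(sizes, chunk=1):
    # HTTP/2 on one connection: responses are split into frames and
    # interleaved (round-robin here), so a small response is not
    # stuck behind a large one.
    remaining = list(sizes)
    finish = [None] * len(sizes)
    t = 0
    while any(r > 0 for r in remaining):
        for i, r in enumerate(remaining):
            if r > 0:
                sent = min(chunk, r)
                remaining[i] -= sent
                t += sent
                if remaining[i] == 0:
                    finish[i] = t
    return finish

# A 1-unit response queued behind a 10-unit response:
print(serialized_finish_times([10, 1]))   # the small one waits
print(interleaved_finish_times([10, 1]))  # the small one finishes early
```

In the serialized case the small response finishes at time 11; interleaved, it finishes at time 2, at the cost of the large response finishing slightly later.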
With no per-request connection costs or H-o-L blocking, concatenation tricks like spriting and inlining are made redundant. In fact, because cached subresources may become stale at different times, it may even be more efficient to keep the subresources separate, letting each carry its own freshness metadata.
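The cache-freshness argument can be sketched as a small model (the resource names and sizes are hypothetical, and real caches validate rather than blindly refetch, but the proportions hold):

```python
def bytes_refetched(resources, changed, bundled):
    # resources: {name: size in bytes}
    # changed: set of names whose cached copies are now stale
    if bundled:
        # A concatenated bundle shares one freshness lifetime: if any
        # part changed, the whole bundle must be fetched again.
        return sum(resources.values()) if changed else 0
    # Separate subresources carry their own freshness metadata, so
    # only the changed ones need refetching.
    return sum(size for name, size in resources.items() if name in changed)

resources = {"a.css": 10_000, "b.css": 90_000}
print(bytes_refetched(resources, {"a.css"}, bundled=True))   # whole bundle
print(bytes_refetched(resources, {"a.css"}, bundled=False))  # just a.css
```

With HTTP/1.1’s per-request overhead, the bundle often won anyway; with HTTP/2, the extra requests are cheap and the finer-grained caching can dominate.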
And with H-o-L blocking eliminated, resource ordering at the application level calls for a new strategy.
Interleaved messages mean that application-level bandwidth hacks, like parallel TCP connections and domain sharding, are no longer desirable.
While not a requirement for running an HTTP/2 server, these changes do mean that applications should be reconfigured or rewritten to take advantage of (or at least not be worse off for) running over HTTP/2 transport. It also suggests that the same application should probably not be run on both HTTP/1.1 and HTTP/2 transport stacks – not if you want it to run well on both.
HTTP/2 also introduces a number of knobs and dials meant to be twiddled by the application – not just server-wide settings, but dials that tune the way pages, messages, even individual headers are transmitted. Chief among these are stream priorities. Because priorities are just hints, there is no requirement on either application to support them; but to be useful they have to be understood by both.
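As a rough illustration of how such hints behave, here is a simplified model of RFC 7540-style weights (1–256) shared among sibling streams – ignoring the dependency tree, and remembering that a server is free to ignore the hints entirely:

```python
def share_bandwidth(streams, total):
    # streams: {stream_id: weight}, weight in 1..256 as in RFC 7540.
    # Among sibling streams, available bandwidth is notionally shared
    # in proportion to weight. These are hints, not guarantees.
    total_weight = sum(streams.values())
    return {sid: total * w / total_weight for sid, w in streams.items()}

# Stream 3 (weight 64) gets twice the share of stream 1 (weight 32):
print(share_bandwidth({1: 32, 3: 64}, 90))
```

A browser might use such weights to ask that a render-blocking stylesheet be favoured over an image; a server that understands the hint can schedule its frames accordingly.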
A much bigger deal is server push: HTTP/2 introduces the ability for a server to send a response to a client without first receiving a request for it. This means, for example, that a server can notify a cache (on the network or inside a browser) that a resource has been updated – and send the updated version – before the browser tries to (re)load it.
These are clearly new semantics, replacing existing application hacks like long-polling. Server push requires the server application to decide when a resource should be pushed through the HTTP/2 transport machinery, and it requires the client application to know how to deal with the pushed resource when it arrives.
Server push can be disabled by the HTTP/2 transport machinery (though it defaults to “enabled”), so the transport can act as the application’s advocate in declaring that push is not supported. But if server push is to be used, the transport machinery also needs to offer the application an “on–off” toggle, as well as the interfaces necessary for pushing and receiving resources.
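The division of labour might look like this minimal model (a sketch, not a real HTTP/2 stack – the class and method names are invented; the real mechanism is the SETTINGS_ENABLE_PUSH setting and PUSH_PROMISE frames of RFC 7540):

```python
class PushTransport:
    """Toy model of the transport's role in server push."""

    def __init__(self):
        # Mirrors SETTINGS_ENABLE_PUSH, which defaults to enabled.
        self.enable_push = True
        self.client_cache = {}

    def set_push(self, enabled):
        # The application-facing on/off toggle.
        self.enable_push = enabled

    def push(self, path, body):
        # The server application decides *when* to push; the transport
        # refuses on the client's behalf if push is disabled.
        if not self.enable_push:
            return False
        # Otherwise the pushed response lands in the client's cache,
        # ready before the client ever asks for it.
        self.client_cache[path] = body
        return True

t = PushTransport()
t.push("/app.css", "body{}")     # accepted; now cached client-side
t.set_push(False)
t.push("/app.js", "…")           # refused by the transport
```

The point of the sketch is the split: the toggle and the refusal live in the transport, but the decision to push and the handling of what arrives belong to the applications at each end.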
One transparent improvement HTTP/2 adds is the ability to remember and “replay” headers from request to request. This means oft-repeated and bulky headers (for example ‘user-agent’ and ‘cookie’) only need to be sent in full once per session (until they change). It is a powerful and efficient compression strategy.
The header compression specification (RFC 7541) discusses potential security vulnerabilities this strategy might introduce; to help reduce the risks, it allows specific headers to be transmitted uncompressed. However, it is the application that knows which headers might be at risk and which are safe to compress, so the HTTP/2 transport machinery needs to provide dials that let the application say which headers to compress and which to leave alone.
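The indexing idea can be sketched as follows (this is not the real HPACK wire format – RFC 7541 also uses static tables, Huffman coding, and a size-bounded dynamic table – just a model of remember-and-replay, with a “sensitive” set standing in for HPACK’s never-indexed literals):

```python
def encode_headers(headers, table, sensitive):
    # headers:   list of (name, value) pairs for one request
    # table:     dynamic table shared across requests on a connection
    # sensitive: header names that must never enter the table
    out = []
    for name, value in headers:
        if name in sensitive:
            # Always sent literal, never remembered: the application's
            # dial for headers at risk (cf. HPACK never-indexed).
            out.append(("never-indexed", name, value))
        elif (name, value) in table:
            # Seen before on this connection: replay by index.
            out.append(("indexed", table.index((name, value))))
        else:
            # First occurrence: send literal and remember it.
            table.append((name, value))
            out.append(("literal", name, value))
    return out

table = []
req = [("user-agent", "ExampleUA/1.0"), ("cookie", "secret")]
first = encode_headers(req, table, sensitive={"cookie"})
again = encode_headers(req, table, sensitive={"cookie"})
```

On the second request the bulky user-agent collapses to a tiny index reference, while the cookie is sent literal both times because the application flagged it as risky to compress.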
There are other proposals in the works as well, like cache digests and compression dictionaries, which transport application-level information about cache state and content-types using transport-level machinery.
So while HTTP/2 was meant to be a drop-in replacement for HTTP/1.1’s transport, realistically we have to rethink how our applications are structured and redesign them to take advantage of what HTTP/2 has to offer.