mark nottingham

HTTP

Yet More New HTTP Specs

The HTTP “core” documents were published on Monday, including revisions of HTTP semantics, caching, HTTP/1.1, and HTTP/2, along with the brand-new HTTP/3. However, that’s not all that the HTTP community has been up to.

A New Definition of HTTP

Seven and a half years ago, I wrote that RFC2616 is dead, replaced by RFCs 7230-5.

Server-Sent Events, WebSockets, and HTTP

The orange site is currently discussing an article about Server-Sent Events, especially as compared with WebSockets (and the emerging WebTransport). Both the article and discussion are well-informed, but I think they miss out on one aspect that has fairly deep implications.

On RFC8674, the safe preference for HTTP

It’s become common for Web sites – particularly those that host third-party or user-generated content – to make a “safe” mode available, where content that might be objectionable is hidden. For example, a parent who wants to steer their child away from the rougher corners of the Internet might go to their search engine and put it in “safe” mode.
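
The preference itself is tiny on the wire. Here’s a minimal sketch (using Python’s third-party requests library and a placeholder URL) of a client sending the RFC 8674 “safe” preference; whether and how the server honours it is entirely up to the site.

```python
import requests

# Ask the origin to apply its "safe" mode for this request (RFC 8674).
response = requests.get(
    "https://example.com/search?q=kittens",   # placeholder URL
    headers={"Prefer": "safe"},
)

# A server that applied the preference can echo it back (RFC 7240).
print(response.status_code)
print(response.headers.get("Preference-Applied"))
```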

How Multiplexing Changes Your HTTP APIs

When I first learned about SPDY, I was excited about it for a number of reasons, but near the top of the list was its potential impact on APIs that use HTTP.

Designing Headers for HTTP Compression

One of the concerns that often comes up when someone creates a new HTTP header is how much “bloat” it will add on the network. This is especially relevant in requests, when a little bit of extra data can introduce a lot of latency when repeated on every request.
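
As a rough illustration of the trade-off (not taken from the post), the sketch below uses the Python hpack library and a hypothetical example-token header to show how HPACK-style compression treats a value that stays the same across requests versus one that changes on every request.

```python
from hpack import Encoder

stable = Encoder()
varying = Encoder()

for i in range(3):
    # Same value every request: after the first, it can be served from the
    # dynamic table for a byte or two.
    s = stable.encode([("example-token", "abc123def456")])
    # Value changes every request: the value has to be re-encoded each time.
    v = varying.encode([("example-token", "abc123def456-%d" % i)])
    print("request %d: stable %d bytes, varying %d bytes" % (i + 1, len(s), len(v)))
```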

How (Not) to Control Your CDN

In February, Omer Gil described the Web Cache Deception Attack.

How to Think About HTTP Status Codes

There’s more than a little confusion and angst out there about HTTP status codes. I’ve received more than a few e-mails (and IMs, and DMs) over the years from stressed-out developers (once at 2am, their time!) asking something like this:

Ideal HTTP Performance

The implicit goal for Web performance is to reduce end-user perceived latency; to get the page in front of the user and interactive as soon as possible.

Alternative Services

The IESG has approved “HTTP Alternative Services” for publication as a Proposed Standard.

Why 451?

Today, the IESG approved publication of “An HTTP Status Code to Report Legal Obstacles”. It’ll be an RFC after some work by the RFC Editor and a few more process bits, but effectively you can start using it now.

Will there be a Distributed HTTP?

One of the things that came up at the HTTP Workshop was “distributed HTTP” — i.e., moving the Web from a client/server model to a more distributed one. This week, Brewster Kahle (of Archive.org fame) talked about similar thoughts on his blog and at CCC. If you haven’t seen that yet, I’d highly suggest watching the latter.

HTTP/2 Implementation Status

RFC7540 has been out for about a month, so it seems like a good time for a snapshot of where HTTP/2 implementation is at.

HTTP/2 is Done

The IESG has formally approved the HTTP/2 and HPACK specifications, and they’re on their way to the RFC Editor, where they’ll soon be assigned RFC numbers, go through some editorial processes, and be published.

Why Intermediation is Important

A few months ago I went to the Internet Governance Forum, looking to understand more about the IGF and its attendees. One of the things I learned there was a different definition of “intermediary” — one that I think the standards community should pay close attention to.

RFC2616 is Dead

Don’t use RFC2616. Delete it from your hard drives, bookmarks, and burn (or responsibly recycle) any copies that are printed out.

If You Can Read This, You're SNIing

When TLS was defined, it didn’t allow more than one hostname to be available on a single IP address / port pair, leading to “virtual hosting” issues; each Web site (for example) now requires a dedicated IP address.
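
SNI addresses this by letting the client name the host it wants during the TLS handshake, so one IP address and port can serve many sites. A minimal sketch of the client side, using Python’s standard library and example.com as a stand-in:

```python
import socket
import ssl

context = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    # server_hostname is sent as the SNI extension; it also drives
    # certificate hostname checking.
    with context.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())
        print(tls.getpeercert()["subject"])
```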

Trying out TLS for HTTP:// URLs

The IETF now considers “pervasive monitoring” to be an attack. As Snowden points out, one of the more effective ways to combat it is to use encryption everywhere you can, and “opportunistic encryption” keeps on coming up as one way to help that.

Nine Things to Expect from HTTP/2

HTTP/2 is getting close to being real, with lots of discussions and more implementations popping up every week. What does a new version of the Web’s protocol mean for you? Here are some early answers:

Strengthening HTTP: A Personal View

Recently, one of the hottest topics in the Internet protocol community has been whether the newest version of the Web’s protocol, HTTP/2, will require, encourage or indeed say anything about the use of encryption in response to the pervasive monitoring attacks revealed to the world by Edward Snowden.

Exploring Header Compression in HTTP/2.0

One of the major mechanisms proposed by SPDY for use in HTTP/2.0 is header compression. This is motivated by a number of things, but heavy in the mix is the combination of having more and more requests in a page, and the increasing use of mobile, where every packet is, well, precious. Compressing headers (separately from message bodies) both reduces the overhead of additional requests and of introducing new headers. To illustrate this, Patrick put together a synthetic test that showed that a set of 83 requests for assets on a page (very common these days) could be compressed down to just one round trip – a huge win (especially for mobile). You can also see the potential wins in the illustration that I used in my Velocity Europe talk.
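
To get a feel for the kind of savings involved, here’s a rough sketch (not Patrick’s test) using the Python hpack library: encode the same request headers a few times and compare the result against a rough uncompressed HTTP/1.1-style size, watching it shrink as the dynamic table fills.

```python
from hpack import Encoder

headers = [
    (":method", "GET"),
    (":scheme", "https"),
    (":authority", "www.example.com"),       # placeholder host
    (":path", "/assets/logo.png"),
    ("user-agent", "ExampleBrowser/1.0"),
    ("accept-encoding", "gzip, deflate"),
    ("cookie", "session=0123456789abcdef"),
]

# Rough "name: value\r\n" size, as HTTP/1.1 would send it uncompressed.
plain = sum(len(name) + len(value) + 4 for name, value in headers)

encoder = Encoder()
for i in range(3):
    block = encoder.encode(headers)
    print("request %d: ~%d bytes plain, %d bytes HPACK" % (i + 1, plain, len(block)))
```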

"Why Don't You Just…"

A proposal by John Graham-Cumming is currently doing the rounds:

HTTP Status: 101 Switching Protocols

The HTTPbis Working Group met in Atlanta last month; here’s how things are going.

OPTIONS is Not the Method You're Looking For

Once in a while, people ask me whether they should use the OPTIONS HTTP method, and whether we should try to define formats for discovering resource capabilities with it.

HTTP in Vancouver

The HTTPBIS Working Group is in a transitional phase; we’re rapidly finishing our revision of the HTTP/1.1 specification and just getting steam up on our next target, HTTP/2.0.

What's Next for HTTP

We had two great meetings of the HTTPbis Working Group in Paris this week — one to start wrapping up our work on HTTP/1.1, and another to launch some exciting new work on HTTP/2.0.

RFC6266 and Content-Disposition

HTTPbis published RFC6266 a little while ago, but the work isn’t finished.

Distributed Hungarian Notation doesn't Work

It used to be that when you registered a media type, a URI scheme, an HTTP header or another protocol element on the Internet, it was an opaque string that was a unique identifier, nothing more.

HTTP Pipelining Today

Last week, Blaze.io highlighted how mobile browsers use HTTP pipelining.

What Proxies Must Do

The explosion of HTTP implementations isn’t just in clients and servers. An oft-overlooked but important part of the Web ecosystem is the intermediary, often called just a “proxy”*.

On HTTP Load Testing

A lot of people seem to be talking about and performing load tests on HTTP servers, perhaps because there’s a lot more choice of servers these days.

HTTP POST: IETF Prague Edition

htracr in Two Minutes

I made a quick and dirty screencast to show off some of the newer features in htracr.

Last Call: Content-Disposition

The IESG has received a request from the Hypertext Transfer Protocol Bis WG (httpbis) to consider the following document:

Digging Deeper with htracr

There’s a lot of current activity on the binding between HTTP and TCP; from pipelining to SPDY, the frontier of Web performance lives between these layers.

HTTP Roundup: What’s Up with the Web’s Protocol

I’m going to try to start blogging more updates (kick me if I don’t!) about what’s happening in the world of HTTP.

Thou Shalt Use TLS?

Since SPDY has surfaced, one of the oft-repeated topics has been its use of TLS; namely that the SPDY guys have said that they’ll require all traffic to go over it. Mike Belshe dives into all of the details in a new blog entry, but his summary is simple: “users want it.”

Thoughts on Archiving HTTP

Steve Souders and others have been working for a while on HAR, an HTTP Archive format.

Are Resource Packages a Good Idea?

Resource Packages is an interesting proposal from Mozilla folks for binding together bunches of related data (e.g., CSS files, JavaScript and images) and sending it in one HTTP response, rather than many, as browsers typically do.

Will HTTP/2.0 Happen After All?

A couple of nights ago, I had a casual chat with Google’s Mike Belshe, who gave me a preview of how their “Let’s make the Web faster” effort looks at HTTP itself.

RED gets a blog

Just FYI, for those interested: RED now has a blog detailing news and other developments. I’ll still post about it here occasionally, but most RED-related things are going over there…

The Resource Expert Droid

A (very) long time ago, I wrote the Cacheability Engine to help people figure out how a Web cache would treat their sites. It has a few bugs, but is generally useful for that purpose.

Opera Turbo

HTTP performance is a hot topic these days, so it’s interesting that Opera has announced a “turbo” feature in Opera 10 Beta;

Stop it with the X- Already!

UPDATE: RFC6648 is now the official word on this topic.

The Pitfalls of Debugging HTTP

Some folks at work were having problems debugging HTTP with LWP’s command-line GET utility; it turned out that it was inserting Link headers — HTTP headers, mind you — for each HTML <link> element present.

DAV WTF?

Not many people that I know outside of IETF circles realise that a new *DAV effort has started up; CardDAV.

POST and PATCH

It’s 7am, I’m sitting in the Auckland Koru Club on my way home and reading the minor kerfuffle regarding PATCH with interest.

Another Kind of HTTP Negotiation

Here’s one that I’ve been wondering about for a while, for the LazyWeb (HTTP Geek Edition);

Why Revise HTTP?

I haven’t talked about it here much, but I’ve spent a fair amount of time over the last year and a half working with people in the IETF to get RFC2616 — the HTTP specification — revised.

ETags, ETags, ETags

I’ve been hoping to avoid this, but ETags seem to be popping up more and more often recently. For whatever reason, people latch onto them as a litmus test for RESTfulness, as the defining factor of HTTP’s caching model, and much more.

httperf rev

Martin Arlitt makes an exciting announcement;

Friday Fun: I Hate Cookies

There are plenty of reasons to hate HTTP Cookies, but there’s one thing that especially annoys me; their syntax.

Thoughts on Declarative Ajax

Dave Johnson writes up a nice summary of the issues of adding new elements to HTML for declarative Ajax, something that I ran into when doing HInclude.

Putting the Web back in Web 2.0

Timbl has this great term “Webizing” that he uses to talk about giving existing systems the benefits of the Web architecture. Despite the first part of “Web 2.0”, I think AJAX is in severe need of some serious Webizing.

DOM vs. Web

Back at the W3C Technical Plenary, I argued that Working Groups need to concentrate on making more Web-friendly specifications. Here’s an example of one such lapse causing security problems on today’s Web.

Bug Syncronicity

I’ve had a lyric running through my head for the last day or so, thanks to a couple of bugs.

Web Authentication

There’s some excitement out there about “Cookie-less HTTP Authentication.”

WS-Transfer, WAKA and the Web

Microsoft and friends (of the keep your enemy closer variety, I suspect) have submitted WS-Transfer to the W3C. I found the Team comment interesting; e.g.,

How Web-Ready is XMLHttpRequest?

I’ve been playing around with some ideas that use XMLHttpRequest recently, but I keep on bumping up against implementation inconsistencies on IE vs. Safari vs. Opera vs. Mozilla. Although the interface exposed is pretty much the same, what it does in the background is very different, especially with regards to HTTP.

Making headway on OPTIONS

On the heels of mod_cgi, PHP now does the right thing (at least in 5.1) when setting the Allow header. mod_dav is still broken, though.

RFC 4229: HTTP Header Field Registrations

The useful end of RFC 3864 (at least regarding HTTP) is finally* here. When you need to know where a particular header is defined there’s now one place to do it; IANA’s Message header registry and repository have been filled with HTTP-related headers by RFC 4229.

OPTIONS Getting Better

Roy Fielding has just closed a bug that’s been around since 1996, and which I’ve previously lamented here;

A Call to OPTIONS

Web metadata discovery is not a new topic, and one on which the final word has not been spoken. However, one of the most basic means of discovering something about a resource, the HTTP OPTIONS method, is not widely enabled by current implementations.
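
For the simplest case, the sketch below (Python requests library, placeholder URL) just asks a resource what it supports and reads the Allow header; on many servers today it will come back with a 405 or an empty answer, which is exactly what the post laments.

```python
import requests

response = requests.options("https://example.com/some/resource")  # placeholder
print(response.status_code)
print(response.headers.get("Allow"))   # e.g. "GET, HEAD, OPTIONS", if the server says
```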

HTTP Header Registries

Ever wonder where the heck a particular HTTP header is defined?

HTTP Authentication and Forms

It’s no secret that HTTP authentication isn’t used as often as it should be. When I talk to Web developers, there are usually a few reasons for their use of cookies for authentication;

WebDAV Access Control Protocol

RFC 3744 has been published:

Go PATCH Go

It looks like the HTTP PATCH method proposal might be based on Delta Encoding, which is IMO one of the cooler and lesser-known HTTP technologies.

HTTP Performance

I’ve heard several people in the industry assert that HTTP fundamentally limits the performance of applications that use it; in other words, there’s a considerable disadvantage to using it, and that therefore other protocols (usually proprietary or platform-specific systems that those same people happen to sell) are needed to “unleash the power of Web services.”

Subversion

Ted Leung points out that caching PUT (and other WebDAV methods) would suit Subversion - probably the most interesting WebDAV application under open development - quite well. The only thing he says that I disagree with (and it might just be a misunderstanding) is in regard to a need for a Subversion-specific client cache; the whole point of doing this with Web protocols is to avoid application-specific infrastructure. A well-designed WebDAV cache should work equally well for any application, not just Subversion.

httpRange-14

Mark Baker is the latest in a series to weigh in on the TAG issue regarding what an HTTP URI can identify.

Profiling HTTP

Mark Pilgrim is starting to think about issues surrounding the transport, transfer and general moving around of the Format Formerly Known as Echo (nee Pie).

Tarawa

I’ve finally gotten sick enough of a project that I’ve been working on for waaaay too long to release it to the unsuspecting^H^H^H general public.

It's alive

For those who have been helping, it’s alive, has been for almost a week, but I still want to do a bit more documentation, hunt down a few bugs, and get some more unit tests down.

ETags

It’s not necessary to lament the lack of ETags on generated Web pages; cgi_buffer automagically generates and validates them for Perl, Python and PHP scripts.
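
The underlying trick is simple enough that it’s worth sketching (this is an illustration, not cgi_buffer’s actual code): hash the generated body to make an ETag, compare it with the request’s If-None-Match, and send a 304 instead of the body when they match.

```python
import hashlib

def respond(body, if_none_match=None):
    # Derive a strong ETag from the response body.
    etag = '"%s"' % hashlib.sha1(body).hexdigest()
    if if_none_match == etag:
        # The client already has this representation; skip the body.
        return 304, {"ETag": etag}, b""
    return 200, {"ETag": etag}, body

status, headers, payload = respond(b"<html>hello</html>")
print(status, headers["ETag"])
```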

HTTP header sniffing

LiveHTTPHeaders for Mozilla is the best HTTP header sniffer I’ve seen yet; up till now, I’ve been using WebTee, but for most purposes, this is much better. Enjoy.