This article compares the performance of current "real-time" techniques in a web browser. For the purposes of this article, "real-time" refers to a server pushing notifications to a client, removing the need for a page refresh. HTTP long-polling, HTTP short-polling, WebSockets, and server-sent events can all accomplish this behavior and will be compared in different scenarios to measure data-transfer costs. With the advent of HTTP/2 support in Node.js core, these comparisons are made over both HTTP/1.1 and HTTP/2.

The inspiration for this comparison is to investigate the viability of using server-sent events in place of WebSockets without sacrificing performance. In real-world applications, performance can be a major factor: reducing bytes across the wire has implications in financial tech as well as social applications. However, a balancing act exists between developer cost, application maintenance, and performance. Leveraging existing HTTP infrastructure, as server-sent events do, may reduce overall project cost while still meeting "real-time" requirements.


Browser: Chrome, version 60.0.3112.113


  • 2013 iMac OS 10.12.6
  • 3.5 GHz Intel Core i7
  • 16 GB 1600 MHz DDR3 RAM

Server: Node v8.4.0


  • 5 updates sent at a 1-second interval.
  • 5 updates sent at a 1-second interval with 2 subscribers in parallel.

Parallel subscriber tests are not used for the WebSocket implementations in this experiment; they are included only to illustrate HTTP/2's built-in multiplexing. For WebSocket multiplexing, a number of libraries exist to emulate this functionality.


  • Manual page refresh (control)
    After each 1-second interval, the page is refreshed. No real-time behavior is observed with this method.
  • HTTP short-polling (250 ms interval)
    Short-polling continuously opens and closes HTTP requests, seeking new updates on an interval. Ideally, the interval matches how often updates are expected.
  • HTTP long-polling
    Long-polling opens an HTTP request and remains open until an update is received. Upon receiving an update, a new request is immediately opened awaiting the next update.
  • Server-sent events
    Server-sent events (SSE) rely on a long-lived HTTP connection over which updates are continuously streamed to the client.
  • WebSockets (via WS)
    The WebSocket protocol allows for constant, bi-directional communication between the server and client. For this test, Primus is used to abstract multiple implementations of the protocol. This method uses the WS implementation.
  • WebSockets (via Engine.IO)
    Same as the WS method, powered by the Engine.IO implementation.
  • WebSockets (via Faye)
    Same as the WS method, powered by the Faye implementation.
  • WebSockets (via SockJS)
    Same as the WS method, powered by the SockJS implementation.

The interval chosen for short-polling is arbitrary. To achieve greater performance, the interval should match, as closely as possible, how often updates are expected.
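The short-polling loop itself can be sketched generically; here `check` stands in for whatever function issues the poll request (an XHR or fetch in the browser), and all names are illustrative:

```javascript
// Sketch of a short-poller: call `check` every `intervalMs` milliseconds.
// Polling faster than updates actually arrive wastes requests (and, under
// HTTP/1.1, repeats full headers on every one).
function startShortPolling(check, intervalMs) {
  const timer = setInterval(check, intervalMs);
  return () => clearInterval(timer); // call the returned function to stop
}
```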

All tests are done via HTTPS and WSS for an accurate comparison between HTTP/1.1 and HTTP/2.

For the purpose of this test, the browser cache has been disabled.

Measurements will include XHRs, document, and WebSocket requests. This excludes the 87 KB for the Primus library as well as favicon.ico requests.

Code used for these metrics can be found here.


Bandwidth cost of real-time methods with a single subscriber

In the single-subscriber test, the cost of headers becomes apparent. For each HTTP-based method, the difference in performance between HTTP/1.1 and HTTP/2 comes down to header compression. For the WebSocket methods, headers are not sent with every message after the initial handshake, further reducing bandwidth cost. However, the underlying library facilitating a WebSocket connection comes with varying levels of overhead. Engine.IO is the most expensive in this test, as it makes a number of ancillary requests in parallel.

Bandwidth cost of real-time methods with parallel subscribers

For parallel connections, bandwidth is expected to go up; however, it is important to note that HTTP/1.1 will create extra TCP connections. In the corresponding *.HAR files, note the ConnectionID property, which identifies when a new TCP connection is made. For this test the overhead of a new TCP handshake is minimal, but an application that makes many parallel requests will spawn multiple TCP connections. HTTP/2 multiplexing allows multiple HTTP requests to take place over a single TCP connection, reducing bandwidth cost.

HTTP/1.1 snapshot from the long-polling test illustrating multiple TCP connections.

HTTP/2 snapshot from the long-polling test illustrating a shared TCP connection.


In the current state of the web, short- and long-polling have a much higher bandwidth cost than the other options, but may be considered when supporting legacy browsers. Short-polling requires estimating an interval that suits an application's requirements. Regardless of the estimate's accuracy, there will be a higher cost due to continuously opening new requests. Under HTTP/1.1, this means passing headers unnecessarily and potentially opening multiple TCP connections if parallel polls are open. Long-polling reduces these costs significantly by receiving one update per request.

Server-sent events take performance a step further. Rather than one open request per update, as with long-polling, server-sent events use a single, long-lived request over which updates are streamed. Headers are passed only once, when the request is made, limiting the data across the wire to necessary information. An EventSource will automatically attempt to reconnect if the current connection is interrupted. Server-sent events are native to most modern browsers and work within the existing HTTP spec: if a legacy application already has a RESTful layer, security, and authentication, the modification needed to use server-sent events is minimal. Server-sent events are uni-directional, allowing only updates traveling from the server to the client; client messages to the server, such as a POST request, require a separate HTTP request.
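A sketch of what the server side of SSE involves, assuming a standard Node `http`/`https` response object: the wire format (`data:` lines terminated by a blank line) comes from the SSE spec, while the function and emitter names here are illustrative.

```javascript
// Format one SSE message: each line of the payload is prefixed with "data:",
// and a blank line terminates the event (per the SSE/EventSource format).
function formatSSE(data) {
  return data.split('\n').map((line) => `data: ${line}`).join('\n') + '\n\n';
}

// Sketch of an SSE endpoint over HTTP/1.1: headers are written once, then
// the response stays open and updates are streamed as they occur.
function handleSSE(res, updates /* an EventEmitter, illustrative */) {
  res.writeHead(200, {
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  updates.on('update', (msg) => res.write(formatSSE(msg)));
}
```

On the client, `new EventSource('/some-endpoint')` subscribes to such a stream and handles reconnection automatically.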

WebSockets allow for full-duplex communication over a single connection. By way of comparison, uni-directional communication (SSE) is akin to a radio: a single source broadcasts information to a listener. Half-duplex communication is similar to using a walkie-talkie: communication may travel in both directions, but only one party may broadcast at a time. Full-duplex is similar to using a phone: information may travel freely in both directions, simultaneously. WebSocket is a different protocol, and as such its security must be considered as part of the implementation. The authentication and security concerns are similar to those of HTTP communication, but may need to be duplicated for a secure WebSocket channel. A common scenario: a user authenticates and receives a token that is validated on every HTTP request, yet the application opens a WebSocket connection without validating that token. This allows direct access to the WebSocket API, bypassing any HTTP security measures.

In the past, server-sent events have had a higher bandwidth cost due to HTTP/1.1's handling of headers. With HTTP/2, uni-directional push updates are now almost as cheap as a WebSocket transfer in terms of bandwidth. However, there is a development overhead to consider when using WebSockets: handling reconnects or "heartbeat" mechanisms, authorization, and/or including a WebSocket library all carry development costs. The WebSocket API can be used natively, as opposed to the various libraries behind Primus in this test; however, a real-world implementation benefits from existing libraries (WS, Engine.IO, etc.) to handle reconnects and fallback methods.

For the development of most form-based web applications, server-sent events provide a low development overhead while benefiting from existing HTTP practices. If an application requires a high rate of upstream communication, a WebSocket implementation would provide a much lower bandwidth cost and may be worth the development overhead.

The current state of browser support at the time of this writing is as follows:

Server-sent events

Current browser support of server-sent events.


WebSockets

Current browser support for WebSockets.


  • There was a prior issue regarding streaming responses in Node v8.4.0. This did not affect our metrics, and the issue has been resolved in v8.5.0.