How Telemetry Gets From Field to Screen

This past week, if you opened the app or website during a game, you would have seen a real-time visualization of the players' movements on the field. Unless you were in the stadium, you wouldn't be able to tell, but player positions typically arrive at your device within a few hundred milliseconds. Here's how that happens, along with a new open source addition to our GitHub org.

Field to API

Our players' gear and our balls carry tiny tracking devices. These devices send player and ball locations to an on-site box. Once the data is there, and once we've associated tracking devices with the players and balls, it is forwarded to our GraphQL API.
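To make the association step concrete, here's a minimal sketch of what a telemetry sample might look like once a device reading has been resolved to a player. The field names and the `make_sample` helper are illustrative, not our actual schema:

```python
# A hypothetical telemetry sample. The device-to-player association
# happens on the on-site box before the sample is forwarded to the API.
def make_sample(device_id, player_id, x, y, timestamp_ms):
    return {
        "deviceId": device_id,
        "playerId": player_id,          # resolved from the device association
        "position": {"x": x, "y": y},   # field coordinates
        "timestampMs": timestamp_ms,
    }

sample = make_sample("dev-42", "player-7", 31.5, 18.2, 1_650_000_000_000)
```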

API to Screen

When you open the visualization in your browser or on your mobile device, your device establishes a WebSocket connection to the API and informs the backend that it wants all telemetry samples for the game you're watching.
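Over graphql-ws, that handshake is a sequence of JSON messages. Here's a rough sketch of the client side; the subscription field, game ID, and message type names follow the newer graphql-transport-ws revision of the protocol (older revisions used `start`/`data` instead of `subscribe`/`next`), and the query shape is illustrative:

```python
import json

# After the socket opens, the client initializes the protocol...
connection_init = {"type": "connection_init"}

# ...waits for {"type": "connection_ack"}, then subscribes. The id is
# chosen by the client; the subscription field name is hypothetical.
subscribe = {
    "id": "1",
    "type": "subscribe",
    "payload": {
        "query": 'subscription { telemetry(gameId: "123") { playerId x y } }'
    },
}

# Each message is serialized as a JSON text frame on the wire.
frame = json.dumps(subscribe)
```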

When the API receives samples from the field, it then simultaneously commits them to the database and forwards them out to all of the subscribed clients.
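The commit and the fan-out above don't need to wait on each other. A minimal sketch of that pattern with `asyncio`, using in-memory stand-ins for the database write and the client sends:

```python
import asyncio

# Persist the sample and forward it to subscribers concurrently.
# db_write and the subscriber callables are stand-ins for real I/O.
async def handle_sample(sample, db_write, subscribers):
    await asyncio.gather(
        db_write(sample),                         # commit to the database
        *(send(sample) for send in subscribers),  # forward to subscribed clients
    )

async def main():
    stored, delivered = [], []
    async def db_write(s): stored.append(s)
    async def client(s): delivered.append(s)
    await handle_sample({"playerId": "7"}, db_write, [client, client])
    return stored, delivered

stored, delivered = asyncio.run(main())
```

Because the writes are independent, a slow database commit never delays delivery to clients, and vice versa.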

The WebSocket Service

Today, we're open sourcing the code responsible for distributing real-time subscription data via the graphql-ws WebSocket protocol. You can find it at

Using that service, here's what the data flow from API to screen actually looks like when we have two API servers, two WebSocket servers, and a single subscribed user:


  1. Telemetry is sent to a random API server.
  2. a. That API server looks up the list of WebSocket API servers that have one or more connections subscribed to telemetry.
    b. The telemetry is then sent to one of those subscribed WebSocket servers, chosen at random, along with the full list.
  3. That randomly selected WebSocket server forwards the telemetry to all of the other subscribed WebSocket servers.
  4. Each subscribed WebSocket server sends the telemetry and the GraphQL queries that its connected clients requested to an API server.
  5. The API server evaluates all GraphQL queries and sends the GraphQL responses back to the WebSocket servers.
  6. The WebSocket servers forward those GraphQL responses to their clients.
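The routing in the steps above can be sketched as a toy simulation: two WebSocket servers, one subscribed client, and random selection at each hop. Server names and the data shape are illustrative:

```python
import random

# ws server -> subscribed clients (ws-2 has no subscribers)
ws_servers = {"ws-1": ["client-a"], "ws-2": []}

def route(sample):
    # Steps 1-2: an API server looks up the WebSocket servers that have
    # subscribers and sends the sample to one of them at random,
    # along with that list.
    subscribed = [ws for ws, clients in ws_servers.items() if clients]
    first_hop = random.choice(subscribed)
    # Steps 3-6: the chosen server forwards to the other subscribed
    # servers; each one has the GraphQL response evaluated by an API
    # server and pushes it to its connected clients.
    deliveries = []
    for ws in subscribed:
        for client in ws_servers[ws]:
            deliveries.append((ws, client, sample))
    return first_hop, deliveries

hop, deliveries = route({"playerId": "7"})
```

Note that servers with no subscribers never receive the sample at all; the fan-out only touches servers that actually have interested clients.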

For such a simple case, this might look like a lot of hops for the data to travel through. But there are two important things to note:

  1. The servers all have very high throughput and low latency links between them, so the latency added by the extra hops is negligible compared to the latency added in the "last mile".
  2. This scales extremely well horizontally. For launch week, we ran 500 c4.xlarge API servers and 500 c4.xlarge WebSocket servers, and served a very large number of users with minimal latency. We had a few opening day hiccups, but the WebSocket service itself had none.