Storing F1 telemetry data live during a race


I would like to capture F1 live timing data during a race (the data you see on screen, which is also sent as JSON objects through the telemetry API). I have a Python script which subscribes to approximately 16 streams/channels and receives all these JSON messages with data.

I currently save them to disk, but I would like to create my own live tracker that I can use during the race to look at the data captured in these messages.

I am pretty new to Redis and am looking for the right setup to send these messages into Redis.

I was thinking about 2 alternatives:

  1. RedisJSON - but I have no clue how to choose the key names (or can they just be randomly generated UUIDs from my Python code?)
  2. Redis Streams - which I think don’t accept JSON (unless it’s serialized to a string)

or is there another way to ingest JSON objects (with no clear identifiers themselves)?

We are talking about roughly 30-40k JSON messages/hour (for 14 of the 16 channels). I have not captured the last 2 channels yet.



Hi there from Redis Developer Relations - sounds like a fun project!

You’re correct to say that streams don’t take JSON payloads unless you serialize them into strings. The streams payload is a series of name/value pairs where everything is a string. This is like a Redis hash, with one strange exception… you can have multiple instances of the same name in a stream payload.
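To make the serialization point concrete, here is a minimal sketch of turning a JSON message into the string name/value pairs a stream entry expects. The field names (`channel`, `payload`) and the stream name `f1:raw` are my own illustrative choices, not anything from your feed:

```python
import json

def to_stream_fields(channel: str, msg: dict) -> dict:
    """Flatten a JSON message into string name/value pairs suitable
    for a Redis stream entry. The JSON body is serialized to a string."""
    return {"channel": channel, "payload": json.dumps(msg)}

# Usage with redis-py (needs a running Redis server, so not executed here):
#   import redis
#   r = redis.Redis()
#   r.xadd("f1:raw", to_stream_fields("CarData", {"speed": 301}))
```

On the consuming side you would `json.loads` the `payload` field back into a dict.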

Stream entries are also immutable - once you’ve added something to a stream you can’t change the payload.

I think the two things you identify would work well together rather than being alternatives. I’d suggest using a JSON document at its own key for each JSON message. If the JSON you’re getting has a timestamp in it, consider using that as part of the key name. Using JSON documents gives you the ability to retrieve parts of the document that you want to read using JSON paths and the JSON.GET command.
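As a sketch of the key-naming idea: prefer a timestamp from the message if one is present, and fall back to a random UUID otherwise. The `Utc` field name and the `f1:<channel>:` prefix are assumptions for illustration, not something your feed is guaranteed to have:

```python
import uuid

def json_key(channel: str, msg: dict) -> str:
    """Build a key name for storing one JSON message as a RedisJSON
    document. Uses the message's "Utc" timestamp if present (an assumed
    field name), otherwise a random UUID."""
    suffix = msg.get("Utc") or uuid.uuid4().hex
    return f"f1:{channel.lower()}:{suffix}"

# Storing and partial retrieval with redis-py against Redis Stack
# (requires the RedisJSON module, so not executed here):
#   r.json().set(json_key("CarData", msg), "$", msg)
#   r.json().get(key, "$.Entries")   # fetch just part of the document
```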

You might want to use different key name prefixes depending on what the data coming in represents. I could give more concrete suggestions if I knew more about what’s in each stream you’re consuming.

You’ll then probably also want to retain the relative order of these documents. Use a stream for this purpose. If your source data has timestamps in it, use those as the stream entry IDs and store name/value pairs in the payload for the fields you’re most interested in. Then you can use XRANGE to get data from the stream for a given time period, or XREAD to consume the stream message by message in order.

Make sure there’s something in the stream payload (or the timestamp ID) that helps you identify which key you stored the incoming JSON at, so that you can refer to it when you want to get further detail. If you wanted to search these documents by other criteria, consider taking a look at the Search capability of Redis Stack, which can index documents stored as JSON for you.
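A sketch of deriving an explicit stream entry ID from a message timestamp, assuming an ISO-8601 UTC string (the `Utc` field name and `f1:timeline` stream name are illustrative):

```python
from datetime import datetime

def stream_id(utc: str, seq: int = 0) -> str:
    """Convert an ISO-8601 UTC timestamp into an explicit Redis stream
    entry ID: <milliseconds-since-epoch>-<sequence number>."""
    dt = datetime.fromisoformat(utc.replace("Z", "+00:00"))
    return f"{int(dt.timestamp() * 1000)}-{seq}"

# With redis-py (needs a running Redis server, so not executed here):
#   r.xadd("f1:timeline", {"doc": key}, id=stream_id(msg["Utc"]))
#   entries = r.xrange("f1:timeline", min=start_id, max=end_id)
```

One caveat: XADD with an explicit ID rejects IDs less than or equal to the last one in the stream, so if two messages share a millisecond you’d need to bump the sequence number (the `seq` parameter above).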

If you’d like to chat with the community more about this sort of thing, we’re on the Redis Discord.