"retentionTime" limits series in memory consumption?

Hello! Thank you for your work.
I would like to use the redistimeseries module to store data from my sensors for a fixed period of time, but I cannot figure out the concept of the "retentionTime" parameter.

When I create a series and set the “retention time” to 6000, I expect the series to contain 6 seconds of data and take up some amount of memory.

But when I request information about the series, I see that it is much longer than 6 seconds.

In fact, the series is inflated until it reaches the memory limit in redis.conf.

I want to understand if there is a way to limit the memory consumption for a series.

I am using python 3.6 + redistimeseries-0.4 to add data to a series.

I am calling

rts.add(redis_key_test, timestamp, float_point, retention_msecs=6000, labels={'time': 'series'})


every second, expecting the first call to create a new series and subsequent calls to append new points to it.

I think I’m doing something wrong :)

ts_info.log (117 Bytes)

ts_range.log (12.3 KB)

I did some additional testing using python 3.6 + redistimeseries-0.4.

The trimming mechanism does work and the series does get trimmed. I probably misunderstood the documentation, or it is not accurate.

The command description (https://oss.redislabs.com/redistimeseries/commands/#tsadd) says:

retentionTime - Maximum age for samples compared to last event time (in milliseconds)

I figured I should pass milliseconds in the retentionTime parameter.

Therefore, when I tried to save 6 seconds of data for the experiment, I multiplied 6 * 1000 and passed the result to the “retentionTime” parameter.

And I observed the series keeping much more than 6 seconds of data. Then I reduced the value passed to the “retentionTime” parameter to 6, and the series really did begin to store only 6 seconds of data, deleting older values.

Here is the code I use to test data insertion (6 seconds for a single key):

```python
from time import time as now_ts
from time import sleep
from random import choice

from redistimeseries.client import Client

# redis connection
redis_host = 'localhost'
redis_port = 6379
rts = Client(host=redis_host, port=redis_port)


def add_with_retention(count, retention):  # timestamp 10 digit
    num = 0
    while num < count:
        rand_float_value = choice(range(0, 1000)) / 100
        timestamp = int(now_ts())
        rts.add('test_key_1', timestamp, rand_float_value,
                retention_msecs=retention, labels={'type': 'analog'})
        sleep(1)  # one sample per second
        num += 1


if __name__ == "__main__":
    add_with_retention(count=20, retention=6)  # 6 seconds of data
```


I would still like clarification on what should be passed in the retentionTime parameter (seconds or milliseconds).

I realized what was my problem.

The timestamp should contain milliseconds - 13 digits, while I was passing a timestamp of 10 digits.
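A quick check of the digit counts illustrates the difference (a sketch; the timestamp values in the comments are illustrative):

```python
import time

# int(time.time()) yields a 10-digit, second-resolution UNIX timestamp;
# multiplying by 1000 first yields the 13-digit millisecond timestamp
# that the module's documentation assumes.
ts_seconds = int(time.time())        # e.g. 1555000000   (10 digits)
ts_millis = int(time.time() * 1000)  # e.g. 1555000000000 (13 digits)

print(len(str(ts_seconds)), len(str(ts_millis)))  # 10 13
```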

Hello Alexander,

The only place where seconds or milliseconds are coming into play is when you insert data using ‘*’ for a server timestamp. The module then stores the UNIX time of the system in milliseconds.

From that point on, or if you provided your own timestamp, the timestamp is like any other integer. So a ‘RETENTION’ of 6000 is actually a range of 6000 timestamp units, without any knowledge of what these numbers actually represent.
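The unit-agnostic behavior described above can be sketched in plain Python (a simplified model with a hypothetical helper, not the module's actual implementation):

```python
# RETENTION is a plain integer delta against the latest timestamp,
# with no notion of seconds vs. milliseconds.
def kept_samples(samples, retention):
    """Keep samples whose timestamp is within `retention` of the newest one."""
    latest = max(ts for ts, _ in samples)
    return [(ts, v) for ts, v in samples if ts >= latest - retention]

# 20 samples with 10-digit, second-resolution timestamps, one per second:
samples = [(1_555_000_000 + i, float(i)) for i in range(20)]

# retention=6 keeps the last 6 "timestamp units", i.e. 6 seconds here:
print(len(kept_samples(samples, 6)))     # 7 samples (t, t-1, ..., t-6)

# retention=6000 keeps 6000 units, i.e. 6000 seconds: everything survives.
print(len(kept_samples(samples, 6000)))  # 20
```

This matches the experiment in the thread: with second-based timestamps, a retention of 6000 behaves like 6000 seconds, not 6.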

Hope this helps,



Thank you for confirming my assumptions.
Now I figured out how to limit the size of the series in memory and everything works fine.

Hi Ariel,

Can you please elaborate more on this point? What I understood is that if I pass my own timestamp to the TS.ADD command with a RETENTION parameter, then the retention will be a number of samples and not a time span?


Hello Arturo,

Retention isn’t the number of samples kept but the delta to the timestamp of the latest sample. Trimming happens whenever a new sample is inserted: all chunks whose oldest sample isn’t within (latest timestamp - retention) are removed.
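The trimming rule above can be sketched as follows (a simplified pure-Python model, not the module's actual code):

```python
# On each insert, whole chunks are dropped when their OLDEST sample falls
# outside the window (latest timestamp - retention).
def trim_chunks(chunks, retention):
    """chunks: list of chunks, each a list of (timestamp, value), oldest first."""
    latest = chunks[-1][-1][0]  # timestamp of the newest sample
    cutoff = latest - retention
    # A chunk survives only if even its oldest sample is inside the window.
    return [chunk for chunk in chunks if chunk[0][0] >= cutoff]

chunks = [
    [(0, 1.0), (1, 1.1)],    # oldest chunk
    [(5, 1.5), (6, 1.6)],
    [(10, 2.0), (11, 2.1)],  # newest chunk, latest timestamp = 11
]
# retention=6 -> cutoff = 11 - 6 = 5: the first chunk is dropped whole,
# the second survives because its oldest sample (5) is still in the window.
print(trim_chunks(chunks, 6))
```

Because trimming works at chunk granularity, a few samples older than the retention window can linger until their whole chunk falls outside it.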


I just want to summarize for those who face the same issues as me.

When I first started using the redistimeseries library, the first thing I wanted to do was limit the size of the series.

The only parameter I found in the documentation was “retentionTime”.

Command description says:

“retentionTime - Maximum age for samples compared to last event time (in milliseconds)”

I took the indication “in milliseconds” literally.

In fact, you just need to be consistent about the units of your timestamps and of the value passed to the “retentionTime” parameter.

So, if your timestamps contain milliseconds, then “retentionTime” must be specified in milliseconds, and if your timestamps contain only seconds, then “retentionTime” should be in seconds.

Am I right?

Hi Alexander,
You are generally correct, but let me explain why we always explicitly say milliseconds rather than just “timestamp”.
When we look at a timestamp, there is no definitive way to tell whether it is second-based or millisecond-based. For this reason we decided to converge on milliseconds and make sure everything is aligned on them.

I highly recommend using millis (simply by multiplying your second-based timestamps by 1000) if you use other features like aggregations and downsampling rules, since they are all based on millis.