Closed
Description
We have around 30 consumers processing GCP Pub/Sub data and 4-5 web apps talking to Redis. Everything works fine, but when load spikes on the consumers and a key happens to expire, then even after the missed key (which every consumer uses) has been set again, dogpile keeps setting `_lock_key` entries heavily. Each lock key stays in Redis until its TTL runs out, and as soon as one expires another is set for the same key, even though the required key is present. We have set `expiration_time=-1` for the region.
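For context on why a lock key appears per cached key at all: dogpile's core pattern is a per-key mutex so that, on a miss, only one worker regenerates the value while the others wait. Below is a minimal in-process sketch of that pattern using plain `threading` (not dogpile's actual implementation; with `distributed_lock=True` the mutex becomes a Redis key instead):

```python
import threading
import time

cache = {}                      # the value store (Redis, in our setup)
key_locks = {}                  # per-key mutexes (the _lock keys in Redis)
registry_lock = threading.Lock()
regenerations = 0

def get_or_create(key, creator):
    """Dogpile pattern: on a miss, only the mutex holder regenerates."""
    global regenerations
    if key in cache:            # hot path: no lock is touched at all
        return cache[key]
    with registry_lock:
        mutex = key_locks.setdefault(key, threading.Lock())
    with mutex:                 # analogous to acquiring the _lock key
        if key not in cache:    # another worker may have filled it meanwhile
            regenerations += 1
            cache[key] = creator()
        return cache[key]

def slow_creator():
    time.sleep(0.05)            # simulate an expensive regeneration
    return 42

workers = [threading.Thread(target=get_or_create, args=('data', slow_creator))
           for _ in range(30)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(cache['data'], regenerations)   # -> 42 1
```

If dogpile behaves like this sketch, the lock should only exist while a value is actually missing or being regenerated, which is why locks piling up next to a present key looks wrong to me.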
```python
import os

import gevent.monkey
import redis
from dogpile.cache import make_region
from dogpile.cache.util import kwarg_function_key_generator
from gevent.queue import LifoQueue  # queue_class used under gevent

REDIS_URL = (os.environ['REDIS_HOST'] +
             '?health_check_interval=30&retry_on_timeout=Yes'
             '&socket_timeout=20&socket_connect_timeout=10')

# from_url is a classmethod; pool options must be passed to it directly
pool_kwargs = dict(max_connections=50, timeout=20)
if gevent.monkey.is_anything_patched():
    pool_kwargs['queue_class'] = LifoQueue
pool = redis.BlockingConnectionPool.from_url(REDIS_URL, **pool_kwargs)

region = make_region(function_key_generator=kwarg_function_key_generator).configure(
    'dogpile.cache.redis',
    arguments={
        'redis_expiration_time': 60 * 60 * 8,  # 8 hours
        'distributed_lock': True,
        'thread_local_lock': False,
        'lock_timeout': 60,
        'connection_pool': pool,
    },
    expiration_time=-1,
)

@region.cache_on_arguments()
def test(data=None):
    return data
```
Setting lock keys in bulk also spikes Redis memory. Isn't this some kind of thundering herd? Is it a bug, or am I doing something wrong? Please help me understand this behaviour.
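To quantify the memory impact, I scan for leftover lock keys and their TTLs. A hedged sketch: `lock_key_report` is a hypothetical helper, and the `'_lock*'` pattern just mirrors the `_lock_key` naming above (adjust it to whatever prefix actually appears); `client` is any redis-py-style client, e.g. `redis.Redis.from_url(os.environ['REDIS_HOST'])`.

```python
def lock_key_report(client, match='_lock*'):
    """Return {lock_key: ttl_seconds} for leftover dogpile lock keys.

    `client` is any redis-py-style object exposing scan_iter() and ttl(),
    e.g. redis.Redis.from_url(os.environ['REDIS_HOST']).
    """
    report = {}
    for key in client.scan_iter(match=match):
        name = key.decode() if isinstance(key, bytes) else key
        report[name] = client.ttl(key)  # seconds until this lock expires
    return report
```

Running this right after a load spike shows how many locks linger and for how long, which is the memory growth described above.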