Update to v4 results in "RuntimeError: Event loop is closed" #332
I’m seeing the exact same thing. Seems like the same issue as reported in #312.
There was lots of discussion on the other issue. If you could investigate, that would be handy. A minimal reproduction would help too; it's hard to say anything without one. 🤔
The application is inherited, so I'm not really sure how the whole channels thing works yet or what connects where. At best, so far I can narrow it down to the only task which uses …
If I use ahaltindis's workaround detailed here, my code works, so it definitely seems to be a conflict/issue with …
Perhaps the channels or channels_redis documentation could be improved to provide an example of how users should trigger a …
I'm pretty sure the issue we have is very similar to what's reported above. We do this …
Since we are using channels-redis 4, we have connection issues to Redis. We must downgrade it to a version <4 and block its upgrade, as well as the django-channels upgrade, while this issue exists. An issue is open on the channels-redis repo describing what we are experiencing: django/channels_redis#332
I also get this issue after upgrading to 4.0.0.
Same here, and sometimes:

Task exception was never retrieved
future: <Task finished name='Task-378' coro=<Connection.disconnect() done, defined at /home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py:819> exception=RuntimeError("Task <Task pending name='Task-378' coro=<Connection.disconnect() running at /home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py:831>> got Future <Future pending> attached to a different loop")>
Traceback (most recent call last):
File "/home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py", line 831, in disconnect
await self._writer.wait_closed() # type: ignore[union-attr]
File "/usr/lib/python3.10/asyncio/streams.py", line 344, in wait_closed
await self._protocol._get_close_waiter(self)
RuntimeError: Task <Task pending name='Task-378' coro=<Connection.disconnect() running at /home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py:831>> got Future <Future pending> attached to a different loop
Task exception was never retrieved
future: <Task finished name='Task-381' coro=<Connection.disconnect() done, defined at /home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py:819> exception=RuntimeError("Task <Task pending name='Task-381' coro=<Connection.disconnect() running at /home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py:831>> got Future <Future pending> attached to a different loop")>
Traceback (most recent call last):
File "/home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py", line 831, in disconnect
await self._writer.wait_closed() # type: ignore[union-attr]
File "/usr/lib/python3.10/asyncio/streams.py", line 344, in wait_closed
await self._protocol._get_close_waiter(self)
RuntimeError: Task <Task pending name='Task-381' coro=<Connection.disconnect() running at /home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py:831>> got Future <Future pending> attached to a different loop
Task exception was never retrieved
future: <Task finished name='Task-382' coro=<Connection.disconnect() done, defined at /home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py:819> exception=RuntimeError("Task <Task pending name='Task-382' coro=<Connection.disconnect() running at /home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py:831>> got Future <Future pending> attached to a different loop")>
Traceback (most recent call last):
File "/home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py", line 831, in disconnect
await self._writer.wait_closed() # type: ignore[union-attr]
File "/usr/lib/python3.10/asyncio/streams.py", line 344, in wait_closed
await self._protocol._get_close_waiter(self)
RuntimeError: Task <Task pending name='Task-382' coro=<Connection.disconnect() running at /home/elabbasy/Desktop/BOT_TOOL/webserver/server/venv/lib/python3.10/site-packages/redis/asyncio/connection.py:831>> got Future <Future pending> attached to a different loop
Any chance of a minimal reproduction? It's hard to say anything with just a traceback.
asgiref==3.5.2
channels==4.0.0
channels-redis==4.0.0
daphne==4.0.0

views.py:

# passing request.data to serializer
if serializer.is_valid():
    serializer.save(interpreter=request.user)
    channel_layer = get_channel_layer()
    async_to_sync(channel_layer.group_send)(
        "admins",
        {"type": "chat_message", "message": {"command": "admin_new_ticket", "ticket": serializer.data}},
    )
    return Response(serializer.data, status=status.HTTP_200_OK)

and just a normal consumer:

import json
from channels.generic.websocket import AsyncWebsocketConsumer


class AdminsConsumer(AsyncWebsocketConsumer):
    async def connect(self):
        self.room_group_name = "admins"
        # Join room group
        await self.channel_layer.group_add(
            self.room_group_name,
            self.channel_name,
        )
        await self.accept()

    async def disconnect(self, close_code):
        # Leave room group
        await self.channel_layer.group_discard(self.room_group_name, self.channel_name)

    # Receive message from WebSocket
    async def receive(self, text_data):
        text_data_json = json.loads(text_data)
        message = text_data_json["message"]
        # Send message to room group
        await self.channel_layer.group_send(
            self.room_group_name, {"type": "chat_message", "message": message}
        )

    # Receive message from room group
    async def chat_message(self, event):
        message = event["message"]
        # Send message to WebSocket
        await self.send(text_data=json.dumps({"message": message}))
Same issue here.

Dependencies: …

Code: …

This comment did not work either; I'm getting: …
Any update on this issue? I'm still facing it.
As a workaround, downgrading works for me.
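If you need to pin that downgrade, a minimal sketch of a requirements file might look like the following (hypothetical file; the versions echo what is reported elsewhere in this thread, adjust to your project):

# requirements.txt (sketch): hold back the channel layer until this issue is fixed
channels-redis<4      # e.g. 3.4.1 is reported working later in this thread
channels<4            # only if you also need to block the django-channels upgrade, as noted above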
Me too.
@carltongibson You can also reproduce the error using this repo: https://github.com/realsuayip/zaida/tree/490e0c5a49a750bc56a63f9cba5c9514ed91eee4

Steps to reproduce:
1 - Clone the repo
…

Hope it helps.
Hoping we can resolve this soon.
Can we stop with the "me too" and "any update" comments please? If you have significant new information to add then please do. (Great!) Otherwise it's just noise. I'm planning new releases over the new year, and looking into this is part of that.
@carltongibson This is a major breaking issue, so I'm just trying to bring it more attention. Tomorrow I'll go through the source and try to make a contribution.
I am experiencing this issue within a daphne instance which only runs websocket consumers, using a customized (derived) version of … The code which is executed does not (should not?) have any calls to … Here are my requirements: …
The produced error message is very similar to the one reported in aio-libs-abandoned/aioredis-py#1103 (from what I have seen, the fix for that problem was also ported to redis-py). Maybe the problem is caused by the way this layer tries to close/clear the connection pool.
For anyone else with this issue, it doesn't exhibit when using the newer RedisPubSubChannelLayer:

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.pubsub.RedisPubSubChannelLayer",
        "CONFIG": {
            "hosts": ["rediss://***:***@***.com:6379"],
        },
    },
}
Refs django/channels_redis#332 Refs ansible#13313 Signed-off-by: Rick Elrod <[email protected]>
I have the same issue when sending messages over channels. I tried the solution of @carltongibson, but it implies that I need to re-create the connection each time before sending the message. Is this correct? Thanks for the help!
The channel layer encapsulates the connection handling, but connections are per event loop, and … (It shouldn't really be an issue... If it's performance critical, it's likely you're already looking at avoiding sync_to_async altogether.)
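To make that concrete, here is a minimal sketch (mine, not from the thread) of the pattern that tends to trigger the error: sync code repeatedly wrapping group_send with async_to_sync, so each call runs on a short-lived event loop while the cached layer instance, and any Redis connections it holds, outlive it. Whether it actually fails depends on your versions and configuration:

# Hypothetical reproduction sketch; assumes a Django project configured with
# channels-redis 4.0.0 as the default channel layer backend.
from asgiref.sync import async_to_sync
from channels.layers import get_channel_layer

def notify_admins(message):
    # get_channel_layer() returns a cached, process-wide layer instance.
    channel_layer = get_channel_layer()
    # async_to_sync creates (and then closes) an event loop for this one call,
    # while the layer may still hold Redis connections created on an earlier loop.
    async_to_sync(channel_layer.group_send)(
        "admins", {"type": "chat_message", "message": message}
    )

notify_admins("first")   # typically works
notify_admins("second")  # may then fail with "RuntimeError: Event loop is closed"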
Thanks for the quick reply. So if I understood correctly, to send a message through the channel layer outside a consumer I should do something like:

async def closing_send(channel, message):
    channel_layer = channels.layers.get_channel_layer()
    await channel_layer.send(channel, message)
    await channel_layer.close_pools()

async_to_sync(closing_send)("test-channel", {"type": "hello"})

but the connections returned from …

I tested this and it seems to work fine:

async def closing_send(channel, message):
    channel_layer = channels.layers.channel_layers.make_backend(DEFAULT_CHANNEL_LAYER)
    await channel_layer.send(channel, message)
    await channel_layer.close_pools()

async_to_sync(closing_send)("test-channel", {"type": "hello"})

Did I get it right?
@btel That looks like the example, yes. As per #332 (comment), I want to look into encapsulating that for the next set of releases.
This should be mentioned in the docs until an official fix is released. Edit: until this is resolved I wouldn't consider 4.0 a stable release.
I've been running into this too. I see this issue is marked as "documentation" - but I don't think this is a request to update the docs; it's a bug, right? A pretty important one too.
It's a lack of correctly dealing with the connection shutdown when using the asgiref.sync helpers to integrate with sync code. I marked it documentation because if you call it with the understanding that the connection needs to be closed before the event loop exits, then everything works as expected. Nonetheless, @sevdog is looking at whether we can have this handled automatically for you as well. If that pans out it'll be in the next set of releases, which I'm working towards already.
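For reference, a sketch of that "close the connection before the event loop exits" pattern, adapted from the closing_send example earlier in the thread to the group_send case used in the views.py reproduction. The make_backend() call and close_pools() are taken from that discussion rather than verified against the released API, so treat them as assumptions:

from asgiref.sync import async_to_sync
from channels import DEFAULT_CHANNEL_LAYER
from channels.layers import channel_layers

async def closing_group_send(group, message):
    # Build a fresh layer so its Redis connections belong to the event loop
    # that async_to_sync creates for this call (per the discussion above).
    channel_layer = channel_layers.make_backend(DEFAULT_CHANNEL_LAYER)
    await channel_layer.group_send(group, message)
    # Close those connections before the loop exits (assumed API, see above).
    await channel_layer.close_pools()

def notify_admins_from_sync_code(ticket_data):
    async_to_sync(closing_group_send)(
        "admins",
        {"type": "chat_message", "message": {"command": "admin_new_ticket", "ticket": ticket_data}},
    )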
I was looking at this just yesterday. Refactoring the two layers to share the same connection-handling codebase is a bit complex: the two implementations have a lot of differences and share very few elements, so it may take more time (or at least more concentration) to handle that. In the meantime I have prepared #347 to address this issue in …
The indexer was broken due to django/channels_redis#332. This resulted in the indexer constantly erroring out, with no way to recover because it was constantly stuck searching for the object in the database. While there are still a few nuances with this, the API is accessible, is not blocked by the indexer, and "can" be independently scaled if that need arises, although that is not implemented today. There may still be a few bugs here, but we **cannot** move to version 4 of `channels-redis`, otherwise everything will implode and leave you with no idea what is going wrong (the package handles things on the backend that you don't think about, and you end up debugging absolutely your entire project only to realize the library is unstable and usage should be delayed until a stable v5 is released).
The bug that initially caused the upgrade block has been resolved: django/channels_redis#332
* unpin channels-redis: the bug that initially caused the upgrade block has been resolved (django/channels_redis#332)
* replace aioredis Exception with a redis Exception: version 4.0.0 of channels-redis migrated the underlying Redis library from aioredis to redis-py; the exception has been changed to an equivalent
* remove unused license
* remove UPGRADE BLOCKER in README
* remove hiredis: it was an indirect dependency from aioredis, which was removed
* remove unused license
* add back hiredis: it's potentially providing a performance boost; install it explicitly as a part of redis and upgrade to a more recent version
* remove UPGRADE BLOCKER for hiredis: it was also addressed as a part of this PR
After upgrading to channels-redis==4.0.0, our celery tasks are all reporting the following traceback:

…

Downgrading the image to channels-redis==3.4.1 resolves the issue, so I'm starting out here. This seems probably related to #312.

Image OS is Debian Bullseye, amd64. The Django application is running with gunicorn.

Probably related packages: …

Full Pipfile.lock: https://github.com/paperless-ngx/paperless-ngx/blob/dev/Pipfile.lock