Trio strategy to handle incoming connections in serve_tcp

Hi folks,
I was curious about the Trio networking API, so I dug into the code and came across this:

async def _serve_one_listener(listener, handler_nursery, handler):
    async with listener:
        while True:
            try:
                stream = await listener.accept()
            except OSError as exc:
                if exc.errno in ACCEPT_CAPACITY_ERRNOS:
                    LOGGER.error(
                        "accept returned %s (%s); retrying in %s seconds",
                        errno.errorcode[exc.errno],
                        os.strerror(exc.errno),
                        SLEEP_TIME,
                        exc_info=True
                    )
                    await trio.sleep(SLEEP_TIME)
                else:
                    raise
            else:
                handler_nursery.start_soon(_run_handler, stream, handler)

It seems that Trio doesn't attempt to limit the number of concurrent connections, unlike some other libraries (such as gevent) that offer ways to cap concurrency.
I'd like to know what makes you confident that this strategy won't break the system or the Trio event loop.

Best regards

That’s correct, Trio doesn’t have any built-in mechanism to limit the number of concurrent connections.
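(If you did want to cap it at the application level, a minimal sketch is to wrap your handler in a trio.CapacityLimiter before handing it to trio.serve_tcp. This isn't anything Trio ships for you; MAX_CONNECTIONS, handle_client, and the port number below are just placeholders for illustration.)

import trio

MAX_CONNECTIONS = 100  # illustrative cap, not a Trio default

async def handle_client(stream):
    # Application-specific per-connection work goes here.
    await stream.send_all(b"hello\n")

async def main():
    limiter = trio.CapacityLimiter(MAX_CONNECTIONS)

    async def limited_handler(stream):
        # serve_tcp still spawns a task for every accepted connection,
        # but each handler waits here for a free slot, so at most
        # MAX_CONNECTIONS handlers do real work concurrently.
        async with limiter:
            await handle_client(stream)

    await trio.serve_tcp(limited_handler, 12345)

trio.run(main)

Note that with this approach connections beyond the cap are still accepted; their handlers just queue waiting for a slot.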

That's interesting that gevent has a feature like that – it's not something I've seen in other libraries. (I know that at least asyncio, Twisted, Tornado, and libuv are similar to Trio, and don't have anything like that built-in.)

So of course this does mean that if you blast too many incoming connections at a program written with one of those frameworks, it'll eventually fall over. But… I don't think adding a maximum connection limit is very effective at solving that. There's some more discussion of this here: https://github.com/python-trio/trio/issues/636#issuecomment-589437971


Thank you for the reference, I'll check out the discussion 🙂