When are 'callbacks' acceptable?

From reading Nathaniel’s blog posts, I understand that avoiding callbacks is a central concept of structured concurrency. The Trio design docs back this up:

Task spawning is always explicit. No callbacks, no implicit concurrency, no futures/deferreds/promises/other APIs that involve callbacks.

However, there are APIs in Trio that use what I would call callbacks: functions passed in that will be called when something happens. Specifically, serve_tcp() takes a handler function to be called for each new incoming connection.
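For concreteness, the usage I have in mind looks roughly like the echo server from the Trio tutorial:

```python
import trio

async def echo_handler(stream):
    # serve_tcp() calls this once per incoming connection, in its own task.
    async for chunk in stream:
        await stream.send_all(chunk)

async def main():
    await trio.serve_tcp(echo_handler, 12345)

trio.run(main)
```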

I understand that this doesn’t break structured concurrency: the new tasks are either children of the task running serve_tcp, or belong to a nursery explicitly passed in. And the docs have a warning that uncaught errors in handlers will crash the server. But is there a good explanation for why callbacks are a sensible choice in this specific case? Or do other people not consider the handler argument a callback?

For context: I’m playing with some code using Trio, and a contributor likes a design where each incoming message triggers a callback in a newly started task. I’ve got a feeling that this is best avoided, even if it’s technically possible with Trio. But I can’t articulate why exactly it’s different from serve_tcp’s callbacks on new connections.


I’m not familiar with serve_tcp() specifically, but it’s likely similar to the trio-websocket API where you supply a handler for new connections.

Certainly we’re not saying that functions should never be passed into APIs: witness nursery.start_soon(), etc.

The kind of callbacks synonymous with “callback hell”, and which async/await addresses, are specifically those where you request an action and supply a function to be called when the action is done/ready. That’s quite different from handlers meant to manage events whose origin is truly asynchronous, and which may require a context that lives indefinitely (e.g. a connection handler).
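To make the distinction concrete (fetch, parse, and save below are purely hypothetical names, not any real API):

```python
# Completion-callback style (hypothetical fetch/parse/save APIs): you request
# an action and hand over a function to be called when it finishes, so each
# continuation nests inside the previous call.
fetch(url, on_done=lambda data:
    parse(data, on_done=lambda doc:
        save(doc)))

# The async/await version of the same sequence reads top to bottom.
async def pipeline(url):
    data = await fetch(url)
    doc = await parse(data)
    await save(doc)
```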

each incoming message triggers a callback

If the messages need to be consumed sequentially, a more typical pattern is for the API to provide an async generator.
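Something along these lines, where decode() and handle() are placeholders for whatever framing and per-message processing the application actually does:

```python
async def messages(stream):
    # Hypothetical API shape: expose incoming messages as an async generator,
    # so the caller consumes them strictly in order.
    async for frame in stream:
        yield decode(frame)      # decode() stands in for real message framing

async def consume(stream):
    async for msg in messages(stream):
        await handle(msg)        # handle() is the caller's per-message code
```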

You could use an async generator for new connections or unrelated messages too, but then you have to jump through hoops to ensure that simultaneous connections / messages are processed concurrently.
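Roughly like this, where connections is a hypothetical async generator of accepted connections and handler is your per-connection code:

```python
import trio

async def serve_all(connections, handler):
    # `connections` is a hypothetical async generator yielding new connections;
    # to process them concurrently you have to open a nursery and start a task
    # per item yourself, instead of the library doing it for you.
    async with trio.open_nursery() as nursery:
        async for conn in connections:
            nursery.start_soon(handler, conn)
```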