Concise definition of structured concurrency

… would be helpful. The main articles on the topic are long and don’t enumerate the principles clearly. Typically, when structured concurrency is pitched on some language mailing list, the response is “oh, that’s just fork & join” and the discussion devolves or gets sidetracked.

It would be nice to have a concise definition that encompasses cancellation and error propagation (even if multiple sets of semantics are valid).


If I had to distill it down to one sentence, then maybe: “Have you ever leaked a [thread/task/goroutine/…]? Then it’s not structured concurrency.” (And if they start explaining how their environment has great tools to help you cope with these leaks, then you say “that sounds great, but wouldn’t it be even better if they just never happened in the first place?”)

I guess the limitation is that this phrasing doesn’t really work for callback-based systems, since folks aren’t used to thinking of callback chains as “tasks”. Maybe: “Have you ever had a leftover callback fire when you weren’t expecting it?”


> Have you ever leaked a [thread/task/goroutine]?

It’s not quite sufficient, because a system can avoid leaking tasks yet still neglect to propagate errors to the parent.

Maybe it should be formulated in terms of ownership/RAII, then: each process has a parent process, and its lifetime is bound to a scope in the parent process.

As a bonus, that also makes it easy for exceptions to propagate to parent processes.

Another idea: formulate it as a mathematical property.
What is the structure, and what does it guarantee?
Then verify it via a model checker / theorem prover.
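One informal candidate for such a property (my own sketch, not taken from any of the linked articles): model each task $t$ as having a lifetime interval $\mathrm{live}(t)$, and require that it be contained in its parent’s scope, with errors delivered before that scope closes:

```latex
\forall t \neq \mathit{root}:\quad \mathrm{live}(t) \subseteq \mathrm{scope}(\mathrm{parent}(t))
```

```latex
t \text{ fails with } e \text{ at time } \tau
\;\Longrightarrow\;
\mathrm{parent}(t) \text{ observes } e \text{ at some } \tau' \le \sup \mathrm{scope}(\mathrm{parent}(t))
```

Stated this way, the “no leaks” and “errors propagate” requirements from the earlier posts become two clauses of one containment property, which would at least give a model checker something concrete to check.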

Maybe not for most people at first, but it would surely help structure the discourse around the topic :wink: