Trio future library - feedback welcome!

Hi folks! I made a little utility with trio over the weekend that I thought I’d share with you.

When I reach for asynchronous code, it’s usually to run multiple I/O-bound operations in parallel, and when I do I typically also want to gather and recombine the results of those operations. From what I could tell, the idiomatic way to do this in trio is to start tasks for the various pieces of work I want to parallelize, and have some sort of “receiver” function gather their results. I tried this out a couple of times and was able to get things working, but it left a lot of channel bookkeeping intermingled with my “application code”.

So, I decided to make a little utility to manage some of those details. I modeled it after Scala’s Future class, which I’ve worked with a bunch in the past. In brief, it lets you take a trio async function, schedule it to ‘run soon’, and synchronously get back a container object. That container will eventually be filled with the result of the operation, and other code can await it being filled. Here’s a very basic example:

import trio
from outcome import Value
from trio_future import Future  # assuming the package name matches the repo

async def my_fn():
    await trio.sleep(3)
    return 7

async def test_the_future():
    async with trio.open_nursery() as nursery:
        # Schedule my_fn to run in the nursery; returns a Future immediately
        fut = Future.run(my_fn, nursery)
        # Wait for the result, wrapped as an outcome
        func_outcome = await fut.outcome()
        assert func_outcome == Value(7)

That Value is from the outcome library; exceptions will be captured as an outcome.Error. I’ve also implemented functionality to join the results of multiple futures together, similar to asyncio.gather. You can see more usage examples at https://github.com/danielhfrank/trio-future/blob/master/tests/test_future.py and https://github.com/danielhfrank/trio-future/blob/master/scripts/concurrent_sleep.py
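If you haven’t used outcome before, the idea is simple: a Value wraps a return value and an Error wraps a captured exception, and either can be “unwrapped” later. Here’s a stripped-down, stdlib-only sketch of that idea (just to illustrate the semantics, not the real library’s implementation):

```python
# Sketch of the Value/Error idea from the `outcome` library.
# The real library is at https://github.com/python-trio/outcome;
# this is only meant to show the semantics.
from dataclasses import dataclass


@dataclass
class Value:
    value: object

    def unwrap(self):
        # A Value unwraps to the wrapped return value
        return self.value


@dataclass
class Error:
    error: BaseException

    def unwrap(self):
        # An Error re-raises the captured exception
        raise self.error


def capture(fn, *args):
    """Run fn(*args), wrapping its return value or exception as an outcome."""
    try:
        return Value(fn(*args))
    except BaseException as exc:
        return Error(exc)
```

So `capture(lambda: 7) == Value(7)`, while `capture(lambda: 1 / 0)` gives you an Error holding the ZeroDivisionError, which is re-raised when you unwrap it.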

Full disclosure, I only learned about trio relatively recently. I can’t say that I’ve read all the philosophy documents, so I’m not sure whether this is in the spirit of the library. I’d appreciate any feedback you might have on that, and would love to do what I can to make it useful for others. I’ll probably publish it to PyPI after I work out a couple of rough edges. Thanks for looking, and for all your hard work on the project!

I also ended up with a Trio future as part of another codebase, but interestingly, it has almost nothing in common apart from the name. Your implementation is primarily about running a task encapsulated in a future, whereas the one in my project is just a place to hold a single result, set from the outside. You can see the code here:

What you’ve created seems to be more similar to asyncio.gather() - wait for all of several tasks to complete, and return a list of their results, in order.
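For comparison, here’s what that pattern looks like with asyncio.gather, which is in the standard library - it awaits all the coroutines and returns their results ordered by argument position, not by completion time:

```python
import asyncio


async def work(n):
    # Simulate an I/O-bound task; larger n finishes later
    await asyncio.sleep(0.01 * n)
    return n * n

async def main():
    # gather waits for all coroutines and returns results in argument order
    return await asyncio.gather(work(3), work(1), work(2))

results = asyncio.run(main())
print(results)  # [9, 1, 4] - argument order, even though work(3) finished last
```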

Trio-util has a wait_all() function which runs several tasks to completion, but their return values are discarded, so you’re just running them for side effects.

I just spotted that Trimeter has a run_all() function which does something quite similar - run several async functions in parallel and return a list of results.

Hey @takluyver, thanks for the links! I did think about implementing my Future class under the hood in the same way you did, but wound up with my own thing basically just due to stylistic preferences. You’re right that the main motivation was to enable an asyncio.gather-like structure - that felt like something that was really missing from the standard trio offerings, and is something that I frequently want when working on concurrent code.

That’s interesting about Trimeter as well - I didn’t know that project existed. Seems like trio royalty was involved too; I wonder if it’ll pick up steam again at some point. I think it’s fascinating that the project directly integrates concurrency controls with the run_all function. I had kind of hoped that those things could be expressed separately and then composed, but perhaps it’s smoother to integrate them. I have a project in mind for my futures library that does have some resource constraints like those - I’ll see what I’m able to come up with!