Let’s nail down some terminology. In the coroutine approach, there are two separate types: there are async functions, which are what you get by writing async fn .... They’re a thing you can call. And there are coroutine objects, which represent a call-in-progress.
So at the implementation level, yeah, the two approaches are isomorphic: async functions are like Future-returning-functions, and coroutine objects are like Futures.
But at the user level, the coroutine approach is simpler. With Futures, there are three steps: call the Future-returning-function -> get a Future -> await the Future to get a regular value. With coroutines, the last two steps are collapsed together inside the compiler, so it’s just: call an async function -> get back the value it returned.
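For concreteness, here’s the three-step Future flow as it looks in today’s Rust, where calling an async fn hands you back a Future that you then explicitly await. The names fetch and user, and the tiny busy-poll executor, are just scaffolding to make the sketch runnable:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A stand-in "Future-returning function" (any async fn in today's Rust).
async fn fetch() -> u32 {
    42
}

async fn user() -> u32 {
    let fut = fetch(); // steps 1-2: call the function, get a Future back
    fut.await          // step 3: await the Future to get a regular value
}

// --- minimal executor, only here so the sketch runs ---
fn dummy_raw_waker() -> RawWaker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        dummy_raw_waker()
    }
    // RawWakerVTable::new is promotable, so this reference is 'static
    let vtable = &RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), vtable)
}

fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(dummy_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` stays on this stack frame and is never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(user()), 42);
}
```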
It has to be an async function, not a coroutine object. I’m not sure if this is what you meant or not :-). But it’s very straightforward, like any other higher-order function in Rust, just with some asyncs added:
// Return value simplified, see below for more discussion
async fn apply<T>(fun: AsyncFnOnce() -> T) -> T
OTOH the natural Future-based version would be:
fn apply<T>(fut: Future<T>) -> Future<T>
This is really different – remember that a Future is equivalent to a coroutine object, not an async function.
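For comparison, here’s roughly what apply can look like in today’s Rust: there’s no bare AsyncFnOnce type yet, so the async-callable has to be spelled as a generic FnOnce returning some Future. The busy-poll executor is just scaffolding to make the sketch runnable:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// Higher-order async function: takes an async-callable and awaits it.
async fn apply<F, Fut>(fun: F) -> Fut::Output
where
    F: FnOnce() -> Fut,
    Fut: Future,
{
    fun().await
}

async fn double(x: u32) -> u32 {
    x * 2
}

// --- minimal executor, only here so the sketch runs ---
fn dummy_raw_waker() -> RawWaker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        dummy_raw_waker()
    }
    let vtable = &RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), vtable)
}

fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(dummy_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` stays on this stack frame and is never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    assert_eq!(block_on(apply(|| double(21))), 42);
}
```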
Here’s a straw man proposal for how this could all fit together:
We define async fn as a new concrete type that’s built into the compiler, just like fn. We also define a concrete type CoroutineCall to represent an in-progress coroutine.
There are two things you can do with an async fn:
- You can call it, just like a regular function, except there’s a restriction: this can only be done from inside another async function.
- There’s an operation – with special compiler support – that lets you extract a CoroutineCall object from an async fn. Maybe something like: unsafe fn coroutineify<T>(fun: AsyncFnOnce() -> T) -> CoroutineCall<T>.
Then there are trait versions of async fn, just like for fn. And, just like for fn, these are defined in terms of the concrete type async fn. E.g.:
#[lang = "async_fn_once"]
#[must_use]
pub trait AsyncFnOnce<Args> {
    type Output;
    extern "rust-call" async fn call_once(self, args: Args) -> Self::Output;
}
(Compare to the definition of FnOnce.)
I’ll be vague about the exact API for CoroutineCalls, but they would provide some sort of mechanism to step them, get back values when they suspend themselves, etc. I guess they could have a corresponding trait, beyond just the concrete object – it wouldn’t cause any particular problem – but I don’t see how it would be useful right now either, so I’ll leave it out.
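To make the “step it, get back values when it suspends” idea concrete, here’s today’s closest analogue: manually polling a Future (standing in for a CoroutineCall) and watching it suspend once before finishing. YieldOnce, dummy_raw_waker, and demo are names invented for this sketch:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A coroutine-object stand-in that suspends once before completing.
struct YieldOnce {
    yielded: bool,
}

impl Future for YieldOnce {
    type Output = &'static str;
    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        if self.yielded {
            Poll::Ready("done")
        } else {
            self.yielded = true;
            cx.waker().wake_by_ref(); // ask to be scheduled again
            Poll::Pending
        }
    }
}

fn dummy_raw_waker() -> RawWaker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        dummy_raw_waker()
    }
    let vtable = &RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), vtable)
}

// Step the "coroutine" by hand: first poll suspends, second completes.
fn demo() -> (bool, &'static str) {
    let mut fut = YieldOnce { yielded: false };
    let waker = unsafe { Waker::from_raw(dummy_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let first = Pin::new(&mut fut).poll(&mut cx);
    let second = Pin::new(&mut fut).poll(&mut cx);
    let value = match second {
        Poll::Ready(v) => v,
        Poll::Pending => unreachable!(),
    };
    (first.is_pending(), value)
}

fn main() {
    assert_eq!(demo(), (true, "done"));
}
```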
Not so! This system is flexible enough to handle all that stuff. If you don’t believe me, check out Trio, which actually does it :-). The primitives you need are:
- A way to suspend the current coroutine stack
  - while specifying how to wake it up, and what to do if it’s cancelled
- A way to request that the coroutine stack be scheduled again
All I/O operations and “leaf futures” can be implemented in terms of these. Throw in nurseries and cancel scopes (which can be implemented inside the coroutine runner library, no need to be in std), and now you can do anything.
There are different ways to split up this functionality between the language runtime vs libraries. In Python, we just have “suspend with value” and “resume with value”, and then Trio implements these primitives on top of those. In Rust, the static typing, lifetimes, and desire for minimal overhead mean that you’d probably want to iterate a bit to find exactly the right way to encode these core primitives. But as a straw man, maybe a wait-for-control-C primitive might look roughly like this:
async fn wait_for_ctrlc() -> Result<(), CancelRequested> {
    suspend!(
        // first argument: setup function
        // arranges to call request_wakeup(value) at some future point
        |request_wakeup| {
            ctrlc::set_handler(move || request_wakeup());
        },
        // second argument: abort function
        // arranges that request_wakeup(value) *won't* be called
        || {
            ctrlc::unset_handler();
            AbortSucceeded
        },
    )
}
Notes:
- All the language runtime has to support here is suspending the coroutine stack and passing these two callbacks out to the coroutine executor. Everything else can be done by the coroutine executor. (Subtlety: only the coroutine runner knows how to add a coroutine back to its scheduling queue, so we let it choose how request_wakeup is implemented.)
- The protocol for handling aborts has to be somewhat complicated, to handle all the cases: (1) abort is easy and can be done immediately, like here; (2) abort is impossible; (3) abort is possible, but happens asynchronously, like with IOCP. This example shows the simple case, as indicated by the AbortSucceeded return value.
- Thinking about lifetimes… if we want to avoid heap allocations, then I think we would need to say that if the abort function returns AbortSucceeded, then that invalidates the request_wakeup callback, which would mean we needed some unsafe in here. Maybe this can be avoided by being clever somehow. If not, then I don’t think it’s a big deal? Very few people need to implement their own I/O primitives from scratch using the lowest-level API. And the common cases for this are like, implementing your own async-friendly Mutex, which is an extremely tricky thing to do no matter how you slice it.
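For a rough sense of how today’s Rust encodes the same two halves, here’s a hand-written leaf future: poll stashing a Waker plays the setup/request_wakeup role, and Drop plays the abort role by unregistering the wakeup. All the names here (WaitForSignal, Shared, demo) are invented for this sketch, and a background thread stands in for the control-C handler:

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};
use std::thread;
use std::time::Duration;

// Shared between the leaf future and whatever delivers the event.
struct Shared {
    fired: bool,
    waker: Option<Waker>, // the "request_wakeup" callback, in today's terms
}

struct WaitForSignal {
    shared: Arc<Mutex<Shared>>,
}

impl Future for WaitForSignal {
    type Output = &'static str;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let mut s = self.shared.lock().unwrap();
        if s.fired {
            Poll::Ready("signal arrived")
        } else {
            // setup half: stash the wakeup callback for the handler to call
            s.waker = Some(cx.waker().clone());
            Poll::Pending
        }
    }
}

// abort half: dropping the future unregisters the wakeup
impl Drop for WaitForSignal {
    fn drop(&mut self) {
        self.shared.lock().unwrap().waker = None;
    }
}

fn dummy_raw_waker() -> RawWaker {
    fn no_op(_: *const ()) {}
    fn clone(_: *const ()) -> RawWaker {
        dummy_raw_waker()
    }
    let vtable = &RawWakerVTable::new(clone, no_op, no_op, no_op);
    RawWaker::new(std::ptr::null(), vtable)
}

fn block_on<F: Future>(mut fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(dummy_raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    // SAFETY: `fut` stays on this stack frame and is never moved again.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        match fut.as_mut().poll(&mut cx) {
            Poll::Ready(v) => return v,
            Poll::Pending => thread::yield_now(), // spin; fine for a sketch
        }
    }
}

fn demo() -> &'static str {
    let shared = Arc::new(Mutex::new(Shared { fired: false, waker: None }));
    let handler_side = Arc::clone(&shared);
    // stand-in for the control-C handler firing later
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(10));
        let mut s = handler_side.lock().unwrap();
        s.fired = true;
        if let Some(w) = s.waker.take() {
            w.wake();
        }
    });
    block_on(WaitForSignal { shared })
}

fn main() {
    assert_eq!(demo(), "signal arrived");
}
```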
This part I do agree with :-). If you want to pass between the old Future style and the new async fn style, you could do that by writing async fn await<T>(fut: Future<T>) -> Result<T, CancelRequested>. It would look something like our wait_for_ctrlc function: use combinators to create a new Future that selects between the user’s future and a second future that gets resolved by the abort function, arrange for request_wakeup to be called when the select resolves, and then tokio::executor::spawn this new Future and go to sleep.
But, Futures do become a second-class citizen: they need to be explicitly awaited, they use a different cancellation system, etc. I think people would migrate away from them over time.
What a great question! I don’t know :-).
I guess for apply, since it’s the only thing that can return CancelSucceeded, the double-nested-Result might indeed make sense. But I think you’re right: for the majority of operations, all of which can return CancelRequested, using a double-nested Result would probably suck.
In general, the functions that can return CancelRequested are functions that do I/O. And in general, functions that do I/O already have a ton of different error types. So I was imagining that we’d embed CancelRequested into the various I/O error enums. Maybe impl TryFrom<YourFavoriteErrorType> for CancelRequested? This is a place where my lack of Rust expertise means I really can’t tell for sure what would make sense, so maybe this is where the whole idea founders! I would be interested to hear folks’ thoughts.
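Here’s a minimal sketch of that embedding, with a made-up IoError enum standing in for YourFavoriteErrorType:

```rust
use std::convert::TryFrom;

// The cancellation marker from the proposal.
#[derive(Debug, PartialEq)]
struct CancelRequested;

// A hypothetical I/O error enum with cancellation embedded as one variant.
#[derive(Debug, PartialEq)]
enum IoError {
    NotFound,
    PermissionDenied,
    Cancelled,
}

// Recover the cancellation case back out of the general error type.
impl TryFrom<IoError> for CancelRequested {
    type Error = IoError;
    fn try_from(e: IoError) -> Result<CancelRequested, IoError> {
        match e {
            IoError::Cancelled => Ok(CancelRequested),
            other => Err(other), // not a cancellation; hand the error back
        }
    }
}

fn main() {
    assert_eq!(
        CancelRequested::try_from(IoError::Cancelled),
        Ok(CancelRequested)
    );
    assert!(CancelRequested::try_from(IoError::NotFound).is_err());
}
```

Callers that only care about cancellation can then pattern-match or try_from once, while everyone else keeps using the I/O error enum they already have.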