Commit 1bd5186a authored by Xavier Thompson

README.md: Describe propagation of fork exceptions

parent cc008d19
@@ -62,6 +62,62 @@
inside a `Join` coroutine, which guarantees all forked tasks complete before
the parent `Join` coroutine returns, even when there is no explicit `sync`.

### Exception Propagation with `fork`/`sync`

When an exception escapes out of a `fork`, execution of the body of the `Join`
coroutine may have already resumed concurrently and continued past the point
where the `fork` occurred.

Therefore, propagating an exception from a `fork` is not as easy as in the case
of a simple function call: first the body of the `Join` coroutine must reach a
point where it can stop executing, and then all concurrent forks must complete.

The `Join` coroutine may stop executing at the site of the original `fork` (if
the continuation has not been resumed concurrently), or at any subsequent call
to `fork` after the exception occurs, or at the latest at the next explicit or
implicit `sync`.

Once all concurrent forks have completed, execution jumps directly to the call
site of the `Join` coroutine, where the exception is immediately rethrown as if
the `Join` coroutine had thrown the exception itself.

There is no way to catch that exception directly inside the body of the `Join`
coroutine.

```C++
#include <exception>

// the typon headers defining Task, Join, fork and Sync are assumed included
using namespace typon;

Task<void> throw_exception() {
    // ...
    throw std::exception();
    co_return; // so that this is a coroutine
}

Task<void> some_task(); // any other task to fork, declared for illustration

Join<void> parallel() {
    co_await fork(throw_exception());
    // 1. execution may resume concurrently here, or jump straight to 6.
    for (int i = 0; i < 5; i++) {
        // 2. execution may stop abruptly here and jump straight to 6.
        co_await fork(some_task());
    }
    // 3. execution might reach this point
    co_await Sync(); // 4. after this, execution will jump to 6.
    // 5. execution will never reach here
}

Task<void> caller() {
    // ...
    co_await parallel(); // 6. exception is rethrown here
    // ...
}
```
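
Since the exception only surfaces at the call site of the `Join` coroutine, a
`try`/`catch` placed around the `fork` inside its body never fires; the caller
handles it instead, as with an ordinary function call. A minimal sketch of
this, reusing `throw_exception` from above (the names `parallel2` and
`caller2` are illustrative, not part of the library):

```C++
Join<void> parallel2() {
    try {
        co_await fork(throw_exception());
    } catch (...) {
        // never entered: the exception travels directly to the call
        // site of the Join coroutine, not to handlers inside its body
    }
    co_await Sync();
}

Task<void> caller2() {
    try {
        co_await parallel2(); // the exception resurfaces here ...
    } catch (const std::exception & e) {
        // ... and can be caught here, like any other exception
    }
}
```
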
### `future`, A Primitive for Unbounded Concurrency