Let's say I want to execute arbitrary components using wasmtime, and they may do things like long executions in an async task.
One naive way to handle this could be to only execute components with spawn_blocking. Or, more explicitly, spawn a new thread for every component execution and give it its own tokio runtime like:
std::thread::spawn(|| {
    let rt = tokio::runtime::Builder::new_current_thread()
        .enable_all()
        .build()
        .unwrap();
    rt.block_on(async {
        possibly_evil_component::execute().await;
    });
})
Is this necessary, or does the component model/wasmtime guarantee that components cannot block each other's tasks?
If/how will this change with p3?
Any advice on best practices is appreciated. Thanks!
By default wasmtime does not prevent a guest that is actively using the CPU from blocking that thread. You may be interested in the epoch interruption feature: https://docs.rs/wasmtime/36.0.2/wasmtime/struct.Config.html#method.epoch_interruption
Either epochs or fuel are what we recommend for limiting CPU consumption, and memory limits can be enforced with a ResourceLimiter. WASIp3 won't move the needle here at all; it's still the same solutions as before. With fuel or epochs you won't need to spawn a thread, you can use the same host threads.
For more details, you can look at the relevant chapters of the docs as well:
https://docs.wasmtime.dev/examples-deterministic-wasm-execution.html
https://docs.wasmtime.dev/examples-interrupting-wasm.html
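Putting those recommendations together, a minimal engine/store setup might look roughly like this. This is a sketch, not code from the thread: the `HostState` struct, the 64 MiB memory cap, and the yield intervals are all illustrative assumptions.

```rust
use wasmtime::{Config, Engine, Store, StoreLimits, StoreLimitsBuilder};

// Hypothetical store data carrying the resource limits.
struct HostState {
    limits: StoreLimits,
}

fn build_engine_and_store() -> wasmtime::Result<(Engine, Store<HostState>)> {
    let mut config = Config::new();
    config.async_support(true);      // required for call_async
    config.epoch_interruption(true); // enable epoch-based interruption
    config.consume_fuel(true);       // enable fuel metering

    let engine = Engine::new(&config)?;

    let state = HostState {
        // Cap guest linear memory at 64 MiB (placeholder value).
        limits: StoreLimitsBuilder::new().memory_size(64 << 20).build(),
    };
    let mut store = Store::new(&engine, state);
    store.limiter(|s| &mut s.limits);

    // Yield back to the host scheduler every epoch tick.
    store.epoch_deadline_async_yield_and_update(1);

    // Give the guest a fuel budget and yield periodically as it burns down.
    store.set_fuel(u64::MAX)?;
    store.fuel_async_yield_interval(Some(10_000))?;

    Ok((engine, store))
}
```

Something still has to call `engine.increment_epoch()` for the epoch deadline to ever fire (a ticker thread is shown later in this thread).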
Thanks! Got it that I want to limit the resource consumption :thumbs_up:
Followup question... @Alex Crichton can you please elaborate more on "With fuel or epochs you won't need to spawn a thread" ?
In other words, if I start executing one component, I don't want to wait for it to complete before starting another component... are you suggesting that wasmtime comes with some ability to juggle this out of the box?
When you use call_async plus epochs/fuel with Store-level configuration, the returned future will periodically yield at a configured interval. That means an infinite loop in wasm isn't actually an infinite loop on the host, but rather one with defined "await points" where you can work on something else, spawn more futures, drop the computation, etc
oh, that is super cool! So I can just throw them all into a FuturesUnordered and it's fine, they'll all make progress without blocking each other?
that's the idea yeah
is it okay to have both epochs and fuel yielding? e.g. to handle components that may be running for a long time but somehow aren't consuming fuel?
This depends on what you mean by "running":
tokio::time::timeout wrapping your wasm entrypoint should be able to interrupt asynchronous operations.
fair point - but does std::thread::sleep() consume fuel while it sleeps? (I may be doing something wrong but it seems this doesn't block other components either...)
The wasm future won't be running during the sleep, it'd be suspended, so you could drop the future at any time (or do other work)
more of just a curiosity/learning question - with that in mind, is there effectively a difference between std::thread::sleep() and wstd::runtime::block_on(wstd::task::sleep()), or should I think of these as roughly the same idea as "wasmtime sees it as a suspended future and other tasks aren't blocked" ?
those should behave pretty similarly yeah, although std::thread::sleep blocks everything for the guest where wstd::block_on will make progress on other sibling rust tasks while it's waiting for the sleep
any tips on tuning epoch_deadline_async_yield_and_update()?
fwiw, I've setup a test with one component doing a hot loop and seeing that it doesn't block another
when I increment_epoch() every millisecond and epoch_deadline_async_yield_and_update() with a value of 1 or 10 the test passes, but with 100 it fails
in other words, anecdotally, I assume it's not actually going to yield once every 100 increment_epoch() calls - and in my case, I can't assume it's even _near_ 100. Maybe there's some kind of scaling where the change from 1 to 10 to 100 isn't a simple multiple, and drift is felt more strongly at the higher number?
oh... I think maybe I get what's going on... when it's epoch and not fuel based, it has to insert checks in certain segments, not every time an instruction is run - so my loop is running and not letting it progress to check the epoch?
though, if that's the case, I don't get why lowering the delta helps...
Epoch instrumentation checks for a new epoch on loop backedges as well; as mentioned above there should be no circumstance in which Wasm can run indefinitely when an epoch change should interrupt it. I haven't read all the thread details above but I would suspect some other deadlock in the way your system is combining futures and blocking...
Interesting... though, merely changing the epoch_deadline_async_yield_and_update() duration from 10 to 100 could cause a deadlock?
the test is pretty straightforward - I have been tinkering with this in different forms today, so I don't have something I can cleanly cut and paste right now, but it's really just spawning the component executions onto a single-threaded tokio runtime and sending into oneshot channels when they finish.
if there isn't a known explanation for why the tuning would change things so drastically, I could try to reproduce in a standalone repo tomm... wrapping up for the day
is increment_epoch happening in a separate thread?
Alex Crichton said:
is increment_epoch happening in a separate thread?
yes:
let engine_ticker = engine.weak();
std::thread::spawn(move || loop {
    if let Some(engine_ticker) = engine_ticker.upgrade() {
        engine_ticker.increment_epoch();
    } else {
        break;
    }
    std::thread::sleep(Duration::from_millis(1));
});
I made a standalone repo, bumping to latest version of dependencies too, problem is reproducible: https://github.com/dakom/debug-wasmtime-concurrency
(though, of course, it may be a bug on my side - and I'd appreciate the insight if so :pray: )
Looks like you're running into this issue which you can fix with:
store.epoch_deadline_callback(move |_| {
    Ok(wasmtime::UpdateDeadline::YieldCustom(
        yield_period_ms,
        Box::pin(tokio::task::yield_now()),
    ))
});
instead of epoch_deadline_async_yield_and_update. This has to do with how Wasmtime's implementation of yielding the future doesn't play well with Tokio's scheduling. We're not really sure much can be done about that other than using UpdateDeadline::YieldCustom to specifically integrate with tokio
Awesome, that did the trick! Updated the repo, thanks again!!
Last updated: Dec 06 2025 at 05:03 UTC