alexcrichton opened issue #10931:
Currently we have a number of release binaries produced as part of Wasmtime's CI that are intended to be the default way to run/use Wasmtime in a wide variety of scenarios. As a result we try to make these binaries compatible with as wide a range of OSes as we can, for example:
- Linux - these targets use glibc by default and are built in old Docker containers to require as low a glibc version as we can find
- Windows - we statically link the CRT with `-Ctarget-feature=+crt-static` to have fewer DLL dependencies
- macOS - we set `MACOSX_DEPLOYMENT_TARGET` to an old version so the binaries ideally run on more macOS releases

Otherwise, though, we currently have no policy for updating these. In my experience an update inevitably breaks someone, so this is mostly a desire to articulate some form of policy ahead of time for when an update is needed. This is also related to https://github.com/bytecodealliance/wasmtime/issues/10930, where I was thinking we may want to use a higher glibc requirement to get a better-supported image, but that's a separate issue.
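For concreteness, here is a minimal sketch of how those per-OS knobs might be wired into a release build script; the specific values (deployment target, etc.) are illustrative assumptions, not Wasmtime's actual CI configuration:

```sh
# Hypothetical release-build environment; values are examples, not Wasmtime's real config.
# Windows: statically link the CRT so the binary has fewer DLL dependencies.
export RUSTFLAGS="-Ctarget-feature=+crt-static"
# macOS: target an old deployment version so the binary runs on more macOS releases.
export MACOSX_DEPLOYMENT_TARGET="10.12"
cargo build --release
```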
So this issue boils down to: what, if any, should Wasmtime's policy be? The de facto assumption is probably "the oldest thing that's the most compatible that we can get working", which I feel is reasonable enough for macOS and Windows. For Linux it's trickier, since we need to actually pick a glibc version and a distro to build in. In some sense this issue boils down to what our glibc versioning policy is.
One answer to what glibc version to use is "none", perhaps by switching to musl. That's also part of https://github.com/bytecodealliance/wasmtime/issues/10930, but while this would solve the `wasmtime` binary problem it wouldn't solve the `libwasmtime.so` problem, where we'd still need to provide both glibc and musl versions.

Another possible answer is "the oldest LTS we can find", which, for example, is currently Debian 11 at glibc 2.31 (whereas right now we're using Ubuntu 16.04 at glibc 2.23).
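As a quick way to compare candidate base images, one could check the glibc version each one ships; a sketch (the image tags here are just examples):

```sh
# Print the glibc version shipped by each candidate build image.
docker run --rm ubuntu:16.04 ldd --version | head -n1   # glibc 2.23
docker run --rm debian:11    ldd --version | head -n1   # glibc 2.31
```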
Do others have thoughts on this? Is anyone else aware of glibc versioning strategies that other projects use? Should we write down more carefully what we're doing on macOS and Windows? Is this all overkill, and should we just pick a reasonably old glibc?
alexcrichton added the ci label to Issue #10931.
cfallin commented on issue #10931:
Regarding the glibc version: would it be worth looking at what `rustc` itself does for its toolchain builds? It seems to require only positively ancient glibc versions in the symbols it references:

```
% readelf -r .rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc | grep GLIBC | awk '{print $5}' | cut -d@ -f2 | sort | uniq -c
      1 GLIBC_2.12
      1 GLIBC_2.14
      1 GLIBC_2.17
     42 GLIBC_2.2.5
      5 GLIBC_2.3.2
      2 GLIBC_2.3.4
      2 GLIBC_2.6
```
cfallin commented on issue #10931:
It seems that the Rust compiler release binaries are built inside a CentOS 7 container (!): link to Dockerfile
I seem to recall we used to use this as well -- it's now EOL and I'm surprised the package installs still find a valid repository in their case...
cfallin edited a comment on issue #10931:
It seems that the Rust compiler release binaries are built inside a CentOS 7 container (!): link to Dockerfile
I seem to recall we used to use this as well -- it's now EOL and I'm surprised the package installs still find a valid repository in their case... (but it seems they still work with an alternative `vault.centos.org` hostname!)
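For reference, the usual workaround for EOL CentOS repositories is to repoint yum at the vault; a hedged sketch of that, using the commonly cited sed incantation rather than whatever Rust's Dockerfile actually does:

```sh
# Inside a CentOS 7 container: disable the mirrorlist and point baseurl at vault.centos.org.
sed -i -e 's|^mirrorlist=|#mirrorlist=|' \
       -e 's|^#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|' \
       /etc/yum.repos.d/CentOS-*.repo
yum install -y gcc   # example: package installs now resolve against the vault
```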
alexcrichton commented on issue #10931:
Heh, yeah, I built much of the original infrastructure there, and it's the same idea as Wasmtime of picking something super old. The main difference from Wasmtime is that some of Rust's containers take 2+ hours to build (they need to build gcc/clang/etc), so implementing caching was absolutely required, whereas we have no container caching because it wasn't really necessary. The way caching effectively works in Rust's CI is that it downloads the container from the previous commit (more-or-less) and then runs `docker build` anyway. If nothing actually changed, as is often the case, Docker's own build caching kicks in and the image builds instantly. At the end of the build the container is then uploaded. That's mostly in relation to https://github.com/bytecodealliance/wasmtime/issues/10930, though, I suppose.

In any case, AFAIK the general practice for building portable glibc binaries is "find a super old container", and I'm also unaware of a canonical solution for making such a container reliable networking-wise.
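A sketch of that caching scheme, assuming a generic registry and tag (this illustrates the approach rather than quoting Rust's actual CI scripts):

```sh
# Pull the image produced by the previous commit, if any, to seed Docker's layer cache.
docker pull "$REGISTRY/ci-image:latest" || true
# Rebuild; if nothing changed, every step is served from cached layers and finishes instantly.
docker build --cache-from "$REGISTRY/ci-image:latest" -t "$REGISTRY/ci-image:latest" .
# Upload the (possibly identical) image for the next build to reuse.
docker push "$REGISTRY/ci-image:latest"
```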
SingleAccretion commented on issue #10931:
In any case AFAIK general practice for building portable glibc binaries is "find a super old container" ...
FWIW, the dotnet/runtime version of this solution is to use a new container (thus new C compiler and other tools), and then do a cross-build with an older sysroot (also part of this container, so it only needs to be acquired at container build time). This was discussed in https://github.com/dotnet/runtime/issues/83428.
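A minimal sketch of that approach, assuming a prebuilt older sysroot at a hypothetical path (the clang flags are standard, but the path and target triple are illustrative):

```sh
# Compile with a modern clang from the container, but resolve headers and libraries --
# and therefore the glibc symbol versions linked against -- from an older sysroot
# that was fetched when the container image was built.
clang --target=x86_64-unknown-linux-gnu \
      --sysroot=/opt/sysroots/ubuntu-16.04-amd64 \
      -o hello hello.c
```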
alexcrichton commented on issue #10931:
That's a good point! I've always assumed that building such a sysroot takes multiple hours and/or is extremely difficult to change (e.g. to build for different architectures, etc), but I've not actually investigated such a route myself...
SingleAccretion edited a comment on issue #10931:
In any case AFAIK general practice for building portable glibc binaries is "find a super old container" ...
FWIW, the dotnet/runtime version of this solution is to use a new container (thus new C compiler and other tools), and then do a cross-build with an older sysroot (also part of this container, so it only needs to be acquired at container build time). This was discussed in https://github.com/dotnet/runtime/issues/83428.
Edit: the containers are built here: https://github.com/dotnet/dotnet-buildtools-prereqs-docker, example: https://github.com/dotnet/dotnet-buildtools-prereqs-docker/blob/main/src/azurelinux/3.0/net10.0/cross/amd64/Dockerfile.