Hi all, we haven't posted here for a while, but a LOT has been done. Here's a quick update.
The repository for the downstream shims is currently https://github.com/deislabs/containerd-wasm-shims.
At this time, we are looking forward to WASI Preview 2 with component support. As that lands, the runwasi shims will update to use it, and versioning will clearly mark that for people wanting to try the support.
two final things for those who care:
ooh! there was a 3.
that's about it for now.
Ralph said:
- The most recent versions of shims now support "pod agnosticity" -- the ability of containers and modules/components to run in the same pod without any modification of k8s workloads.
Hey Ralph, would you mind clarifying this? RuntimeClasses apply to pods, so I was wondering how you could get a "regular" container and a Wasm container to run in the same Pod definition.
Ralph said:
- Runwasi shims are working on moving from a scratch-image delivery package to an OCI Artifact-based delivery package, meaning that only one "image" version is required for any execution platform, whether Windows or Linux, amd64 or arm64, k8s or standalone (that is, execution of standalone app hosts will work from the same reference).
I'd love to learn more about this; is there a PR/example/docs I can peruse? It seems like what this means is that rather than FROM scratch (not sure where this was used), runwasi will use the specified base image of the shim (does the shim supply that somehow)?
Hi @Victor Adossi. The answers are:
no more multiarch builds, no custom code in runwasi to handle all possible differences.
BTW, currently we ONLY use FROM scratch to build minimal images with the application artifact graph inside to push to a repo. At deployment, we can then point the image field at a familiar "image" without adding funky yaml to Kubernetes.
but runwasi does not run that container image; instead, it pulls the image, extracts the contents, and executes those contents with the runtime (like wasmtime). The container is NOT executed; the wasm sandbox executes the contents.
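As a toy illustration of that last point, here is roughly what "execute the contents with the runtime" looks like using the wasmtime crate directly (this is not the runwasi code path; the inlined module and export name are made up for the sketch):

```rust
// Toy sketch only: load and run a Wasm module with the wasmtime crate.
// In runwasi the module bytes would come from the pulled image's contents;
// here a trivial module is inlined in WAT form so the example is self-contained.
use wasmtime::{Engine, Instance, Module, Store};

fn main() {
    let engine = Engine::default();

    let module = Module::new(
        &engine,
        r#"(module
              (func (export "add") (param i32 i32) (result i32)
                local.get 0
                local.get 1
                i32.add))"#,
    )
    .expect("failed to compile module");

    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[]).expect("failed to instantiate");

    // Look up the exported function and call it inside the Wasm sandbox.
    let add = instance
        .get_typed_func::<(i32, i32), i32>(&mut store, "add")
        .expect("export not found");
    let sum = add.call(&mut store, (2, 3)).expect("call failed");
    println!("2 + 3 = {sum}");
}
```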
We then brought in the Youki work for OCI support, so that gave us two additional features: it enables us to execute the wasm sandbox using OCI features, and it enables us to run a container image and a module in the same pod, per the example above.
Another example using Dapr is at https://dev.to/thangchung/series/24617.
But FROM scratch still had a problem: Windows won't pull that image! It's an empty image, but the k8s framework on Windows doesn't think it will run.
so we needed to move to an OCI Artifact approach, as that will enable one component "artifact" to be pulled by anyone who knows how to run it.
that last part now works in code, but the spec work in OCI is still ongoing.
This will also enable RTOSes and standalone application hosts (wasmcloud, spin, lunatic, anyone) to use OCI registries to pull and execute components.
does that help?
Thanks for the detailed explanation, this is fantastic.
I didn't realize that the artifact spec was what was being used to facilitate running the two alongside each other, and I hadn't seen the wide-ranging changes like enabling Windows and RTOSes.
Thanks for the links, will dig in and try to better understand!
Ralph said:
but runwasi does not run that container image; instead, it pulls the image, extracts the contents, and executes those contents with the runtime (like wasmtime). The container is NOT executed; the wasm sandbox executes the contents.
Ah this makes sense -- the packaging that was necessary to get the WASM distributed -- I'd forgotten about that bit. I was wondering how you could get both a binary (or the equivalent "pre-WASM" workload) and a WASM binary out of the one artifact and have the runtime underneath know to run them both.
If I'm understanding right, you'd have one image pushed to a registry with a CMD set to the binary that needs to actually run, with the WASM file also on disk somewhere, and metadata to specify the artifact (in this case the WASM file) so the runtime knows to start it, as well, right?
I'd like to also recommend this blog: https://dev.to/thangchung/how-to-run-webassemblywasi-application-spin-with-dapr-on-kubernetes-2b8n
Whoops, @Victor Adossi I had forgotten to answer this, though @Mossaka (Joe) might be better here.
"If I'm understanding right, you'd have one image pushed to a registry with a CMD set to the binary that needs to actually run, with the WASM file also on disk somewhere, and metadata to specify the artifact (in this case the WASM file) so the runtime knows to start it, as well, right?"
The image has only the runtime that executes for shims like spin, slight, etc. The app host shims do not use the underlying runtime because they're built with the crate/lib for that wasm runtime. They can execute standalone, in other words, so those contain a) the app host, b) the wasm module(s) to execute, and c) any config files or other artifacts. They're all dropped and executed as-is.
The runwasi shims that contain only wasm runtimes place those runtimes onto the node; using artifacts, you can just drop the artifact and execute it. So, slightly different paths. With artifacts, you don't push "spin" to the repo -- it's already on the node. You only push the module.
Ah thank you for the detailed explanation! It makes a lot more sense to me now
I was thinking that the image coming in would be basically empty except for the module
Actually, the examples we've shown are two images - one is the usual Linux image and one is a wasm image. The difference between these two is that one has the runtime built into the image (e.g. the Python runtime) and the wasm one only has wasm files.
When the shim sees the rootfs, it tries to determine whether the executable is a Linux executable or a Wasm executable. (e.g. here is the source code in runwasi to determine if the executable is a Linux executable: https://github.com/containerd/runwasi/blob/main/crates/containerd-shim-wasm/src/sys/unix/container/executor.rs#L86-L107)
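For a rough idea of what that kind of check boils down to, here is a minimal sketch that only inspects magic bytes (a simplification, not the actual runwasi logic; the path in main is just an example):

```rust
// Minimal sketch: classify a file as a native ELF binary or a Wasm module
// by its magic bytes. This is a simplification, not the runwasi implementation.
use std::fs::File;
use std::io::Read;
use std::path::Path;

#[derive(Debug, PartialEq)]
enum ExecutableKind {
    NativeElf,
    WasmModule,
    Unknown,
}

fn classify(path: &Path) -> std::io::Result<ExecutableKind> {
    let mut magic = [0u8; 4];
    File::open(path)?.read_exact(&mut magic)?;
    Ok(match magic {
        // ELF binaries start with 0x7F 'E' 'L' 'F'
        [0x7f, b'E', b'L', b'F'] => ExecutableKind::NativeElf,
        // Wasm modules/components start with "\0asm"
        [0x00, b'a', b's', b'm'] => ExecutableKind::WasmModule,
        _ => ExecutableKind::Unknown,
    })
}

fn main() -> std::io::Result<()> {
    // Example path only; point this at whatever entrypoint the rootfs declares.
    let kind = classify(Path::new("/bin/ls"))?;
    println!("/bin/ls looks like: {:?}", kind);
    Ok(())
}
```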
fancy
@Ralph : I don't know if this is the right place to raise this concern. Please reroute me if not.
Context: At an upcoming event, I am demonstrating ways we can leverage a combination of Wasm + cloud native tech to build sustainable/greener workloads. To that end, one of the things I am doing is deploying the containerd shims and associated workloads per this example. In the demo, I'm also using the Kepler & kube-green projects to demonstrate how much memory/CPU is being consumed, and I noticed something odd. The pod running the wws workload, even when it was idle, i.e. no curl requests were being sent its way, was consuming ~75% of the assigned CPU. The other pods consumed way, way less. I will attach a screencap shortly.
My question: Is this expected behavior?
Okay, there are a couple of things I was mistaken about here.
Additionally, it's not the pod, but the wws workload, that's consuming around ~49.68 MiB of the assigned 128 MiB when it is idle. That's ~38.81% and not 75%, sorry! However, in comparison to the other workloads when they are idle, this is significantly higher (spin 8.25%, slight 9.84%, and lunatic 6.17%). Screenshots attached below.
For wws workload (when idle)
Screenshot-2023-11-29-at-10.41.06AM.png
For spin workload (when idle)
Screenshot-2023-11-29-at-10.43.11AM.png
Morning from Italy, Divya! The wws shim just uses the wws project, and nothing more.
So my assumption is that it's just a large project... it's basically Apache, after all.
Let's ping Angel at VMware and see what he says...
@Angel M any thoughts?
Thanks, I thought as much! Just wanted to clarify before I jump to conclusions :joy: I'll wait for Angel to answer :)
The shims themselves merely consume the upstream runwasi crate, so anything "different" is what the shim app host brings...
Hey @Divya Mohan,
This is amazing! It's really cool to see the comparison between containers / Wasm / Wasm runtimes running side by side :smiley:. In the case of wws, the example is running a JS-based module, which contains an entire QuickJS interpreter + JS polyfill code inside the Wasm module. Without checking it yet, I believe that's the main reason the memory footprint is higher in this example. I believe a Rust example would use less memory, but we wanted to use JS to showcase this specific language.
In addition to that, we spawn the entire wws server, which contains different libraries, as @Ralph mentioned. Would you mind opening a discussion on the GitHub project? The information you provided here is more than enough; I just wanted to keep track of it and start digging into the issue: https://github.com/vmware-labs/wasm-workers-server/discussions
And thank you @Ralph for the ping here!
In short, it's designed more as a scale-out web server than a serverless function host, and the footprint reflects that, though I'm sure it can be configured differently.
Thanks gentlemen for your response!
@Angel M : I've opened up a discussion. If you do need anything else from my side for the investigation, I'd be happy to help!
Thank you @Divya Mohan !!