github-actions[bot] commented on issue #4414:
Subscribe to Label Action
cc @peterhuene
This issue or pull request has been labeled: "wasmtime:api". Thus the following users have been cc'd because of the following labels:

- peterhuene: wasmtime:api

To subscribe or unsubscribe from this label, edit the `.github/subscribe-to-label.json` configuration file.
jameysharp commented on issue #4414:
First, some quick things before the complicated notes:
Let's drop the `#[derive]` syntax from the macro. Instead, always derive both `Lift` and `Lower`. I checked with @alexcrichton that there's no reason to derive only one of them for flags. He mentioned someone might want to derive `Serialize` or something, but we can add support for that later if it comes up.

I suspect the constants for individual flags are going to be a pain to use in practice, and inefficient for large numbers of flags. In particular, the only way I see to check whether a flag is set is `flags & Foo::C != Foo::default()`. I'm thinking I'd prefer a Rust `enum Foo`, plus a `FooSet` type that has methods like `get(&self, bit: Foo) -> bool` and `set(&mut self, bit: Foo)`. But I'm open to other suggestions.

So a macro invocation might look like:

```rust
flags! {
    FooSet => enum Foo {
        #[component(name = "foo-bar-baz")]
        A,
        B,
    }
}
```
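A minimal sketch of what such a `Foo`/`FooSet` pair could look like (hypothetical names and methods only, not existing wasmtime API):

```rust
// Hypothetical sketch of the suggested Foo/FooSet pairing; none of these
// names or methods are existing wasmtime API.
#[derive(Clone, Copy)]
enum Foo {
    A,
    B,
}

#[derive(Default, Clone, Copy, PartialEq, Eq)]
struct FooSet(u32);

impl FooSet {
    // Check whether a single flag is set.
    fn get(&self, bit: Foo) -> bool {
        self.0 & (1 << bit as u32) != 0
    }
    // Turn a single flag on.
    fn set(&mut self, bit: Foo) {
        self.0 |= 1 << bit as u32;
    }
}
```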
Now: After a lot of thought, I suspect that implementing `flags` can be quite a bit simpler than this, given a slightly different perspective.

The first observation is that WebAssembly specifies that multi-byte values are stored in linear memory in little-endian order. So even though the canonical ABI specifies flags using 8, 16, or 32-bit quantities, all flags types can be treated uniformly as byte arrays in `load` and `store`. Flag indexing works out the same way regardless. That in turn means we can implement those methods as pure memory copies, as long as we define the Rust types as minimum-length byte arrays. (For anything more than 2 bytes, `store` also should write zeros in the last `-n & 3` bytes.)
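A minimal sketch of that `store` idea, assuming the value is already held as a little-endian byte array (illustrative only, not the PR's actual code):

```rust
// Illustrative only: copy the flags bytes into linear memory, then zero the
// padding that rounds a >2-byte value up to a multiple of 4 bytes. Assumes
// the caller has already bounds-checked the destination range.
fn store_flags(memory: &mut [u8], offset: usize, bytes: &[u8]) {
    let n = bytes.len();
    memory[offset..offset + n].copy_from_slice(bytes);
    if n > 2 {
        let padding = n.wrapping_neg() & 3; // the `-n & 3` padding bytes
        for byte in &mut memory[offset + n..offset + n + padding] {
            *byte = 0;
        }
    }
}
```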
The next observation is that wasmtime's `ValRaw` type, used when values are `lower`ed to or `lift`ed from the wasm stack, doesn't distinguish types smaller than 32 bits. So if the Rust-side definition of a flags type is a byte array, then no matter how many flags there are, it's correct to use `u32::from_le_bytes` or `u32::to_le_bytes` to map every four-byte chunk into a `ValRaw`, padding the last chunk with zeros if necessary.
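For illustration, the chunking could look something like this (producing plain `u32`s; wrapping them into `ValRaw` is left out of this sketch):

```rust
// Map a little-endian byte representation to 32-bit chunks, zero-padding the
// final, possibly short chunk. Illustrative sketch only.
fn to_u32_chunks(bytes: &[u8]) -> Vec<u32> {
    bytes
        .chunks(4)
        .map(|chunk| {
            let mut buf = [0u8; 4];
            buf[..chunk.len()].copy_from_slice(chunk);
            u32::from_le_bytes(buf)
        })
        .collect()
}
```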
I'm wondering if we could go as far as having a single `Flags` type provided in the `wasmtime` crate, with type parameters for the array of `ValRaw` used as its `ComponentType::Lower` type, and for the array of bytes used as its Rust representation. An additional type parameter could refer to a type with a type-level constant containing the list of flag names, for use during type-checking and in `Debug`. (It'd be nice to compute some type parameters from others, but as far as I can tell Rust can't currently do that. I'd list what I tried, but this note is already too long.)
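One possible shape for such a generic type, purely as a sketch (the parameter names and the `FlagNames` trait are hypothetical, not wasmtime API):

```rust
use core::marker::PhantomData;

// Hypothetical marker trait carrying the flag names as a type-level constant,
// for use in type-checking and Debug output.
trait FlagNames {
    const NAMES: &'static [&'static str];
}

// `LowerRepr` stands in for the [ValRaw; N] array used when lowering, and
// `Repr` for the minimum-length byte array used as the Rust representation.
struct Flags<LowerRepr, Repr, Names: FlagNames> {
    bytes: Repr,
    _marker: PhantomData<(LowerRepr, Names)>,
}
```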
Then all of `ComponentType`, `Lift`, `Lower`, `Eq`, `Debug`, `BitOr`, etc. can be implemented once for the generic `Flags` type. The `flags` macro just needs to emit a type alias to `Flags` with the appropriate type parameters filled in. I think at that point it might be simple enough that we can write it as a `macro_rules!` macro.

Consistently using byte arrays means there's nothing special about the 1-byte or 2-byte encodings when deciding what mask to use for the last few flags. If the number of flags `n` mod 8 is 0, then no mask is necessary; otherwise, the last byte should be masked with `!(0xFFu8 << (n % 8))`. Any function that needs to know the number of flags can get it from the length of the list of names, available from the third type parameter on `Flags`.
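As a small sketch of that rule, keeping the multiple-of-8 case explicit (the mask expression alone would wrongly clear the byte when `n % 8 == 0`):

```rust
// Mask for the final byte of a flags value with `n` flags; only the low
// `n % 8` bits are real flags. Sketch only: when `n` is a multiple of 8,
// every bit of the last byte is meaningful, so return an all-ones mask.
fn last_byte_mask(n: usize) -> u8 {
    match n % 8 {
        0 => 0xFF,
        rem => !(0xFFu8 << rem),
    }
}
```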
I believe you can also write impls for specific instantiations of `Flags` to declare things like `{ const A: FlagType = ...; }` without running afoul of Rust's orphan impl rules. That said, I think asking the user of the `flags!` macro to provide both an `enum Foo` type name and a `FooSet` type name will work out better.

This was a lot, sorry. I hope it made some sense, and I want to hear what you think of it; I'm not certain it's better, I just think it has good odds of working out well.
alexcrichton commented on issue #4414:
Personally I don't feel the same way about `Foo` and `FooSet`. I see the `flags` type in the component model as basically analogous to the `FooSet` idea, so I'm not sure it also justifies the existence of `Foo`. That sort of feels like too much boilerplate to me. That being said, I don't really feel too strongly.

I'd also caution against going too crazy with the representation just yet. I don't feel like we have a real handle on what the performance pitfalls are, how this will all be used, and how it all integrates into a final experience. In that sense I think it might be premature to sink too much design effort into a small niche of the component model when it's not clear whether it'll pay off or be worth it in the end.
dicej commented on issue #4414:
@jameysharp You've clearly put a lot more thought into this than I have, and I think you've got some great ideas. I don't use bit flags much in day-to-day application-level code, so it's not a topic I have any strong opinions about.
My overall goal is to get the component model implemented in `wasmtime` and other key projects ASAP, so I'm biased in favor of going with what we have for now (possibly with some tweaks where appropriate, such as always deriving both `Lift` and `Lower`) and refining it later once we have some real-world experience, like Alex suggests.

I'll go ahead now and remove the `derive` syntax from the macro, since that's easy.
jameysharp commented on issue #4414:
Quick detail first: I don't think the mask trick I described works unless you guard against the case where `len % bits == 0`. In that case you could avoid masking entirely, or there are several other ways to do it, but without bigger changes to code structure it's a little tricky. Perhaps there should be tests for exactly 8/16/32 flags, as I believe the current version will panic in debug builds for those cases. Feel free to go back to the loops if you don't want to think about it.

Okay, I can see I dove into technical details without making my motivations clear. I was trying to address two high-level concerns: one is API design, and the other is ease of maintenance of the implementation.
At the moment I'm mostly concerned about implementation complexity. Procedural macros are complicated. I don't see any way around using them for `#[derive]`. But this PR adds half as much code to `component-macro` to support flags as that crate has for all the other cases put together. Since we aren't using `#[derive]` for flags, I think we can put most of the complexity in library code instead, and either avoid proc-macros entirely or at least add very little code there.

Even if we stick entirely with proc-macros, much of the new code is due to duplication around the size 1/2/4+ split, as well as trying to reuse the derive implementation for records. My motivation for suggesting byte arrays is that it eliminates a bunch of special cases, as well as minimizing the amount of generated code, since you can just loop over the flags instead.
I also want to be clear that I hadn't come up with an alternative until I spent time understanding what you've done in this PR and why. I think the effort you put into this was important for moving component model implementation forward, regardless of whether we merge it as-is—which we may do.
Finally, I don't mind dropping the API design questions. Yes, we should bias toward getting something merged. The syntax in the `flags!` macro isn't particularly important because we expect it to be auto-generated by `wit-bindgen`, not maintained by hand. I'm a little more concerned about API design for how the host can interact with these flag sets, but sure, we're not promising any stable API here, so we can futz with it later.

So with all that said: @dicej, please fix the mask generation; and @alexcrichton, I'd appreciate your take on implementation complexity.
alexcrichton commented on issue #4414:
Personally I agree, in the abstract, that we don't want the implementation complexity too high right now. The component model is still very early days, and we should be focusing primarily on correctness rather than speed for now (until we get to the point where we have a concrete thing to benchmark). Having skimmed this over, it seems fine to me complexity-wise, but I'm also coming from the other 10k+ lines of component-model-related code elsewhere, so my calibration is not always the same as others'.
jameysharp commented on issue #4414:
Yeah, you're both right: I'll go ahead and merge this. I might go try implementing my proposal afterward, but yes, it's better to merge this than not.
Also, thanks for the comprehensive mask fix and additional tests, @dicej! This is all strong work, like your earlier PRs.