saulecabrera opened issue #12561:
In the following snippet:
```rust
let mut accept = sock.listen().unwrap();
futures::join!(
    async {
        let next = accept.next().await.unwrap();
        assert_eq!(next.get_address_family(), sock.get_address_family());
        assert_eq!(next.get_keep_alive_enabled(), sock.get_keep_alive_enabled());
        assert_eq!(next.get_keep_alive_idle_time(), sock.get_keep_alive_idle_time());
        assert_eq!(next.get_keep_alive_interval(), sock.get_keep_alive_interval());
        assert_eq!(next.get_keep_alive_count(), sock.get_keep_alive_count());
        assert_eq!(next.get_hop_limit(), sock.get_hop_limit());

        // The following asserts fail.
        assert_eq!(next.get_receive_buffer_size(), sock.get_receive_buffer_size());
        assert_eq!(next.get_send_buffer_size(), sock.get_send_buffer_size());
    },
    async {
        client.connect(local_addr).await.unwrap();
    }
);
```

the values for `get_receive_buffer_size()` and `get_send_buffer_size()` are different between the listener and the handler socket.

According to the spec:
> The following properties are inherited from the listener socket:
>
> - `address-family`
> - `keep-alive-enabled`
> - `keep-alive-idle-time`
> - `keep-alive-interval`
> - `keep-alive-count`
> - `hop-limit`
> - `receive-buffer-size`
> - `send-buffer-size`

Platform specific information:
saulecabrera added the bug label to Issue #12561.
saulecabrera added the wasi label to Issue #12561.
saulecabrera commented on issue #12561:
The inconsistency only seems to happen when the values are not explicitly set by calling the respective setters; i.e., if they are explicitly set, it works as expected, according to this test: https://github.com/bytecodealliance/wasmtime/blob/main/crates/test-programs/src/bin/p3_sockets_tcp_sockopts.rs#L113
alexcrichton commented on issue #12561:
cc @badeend
badeend commented on issue #12561:
Heh, nice find ;)
Let me find out why this is happening
badeend edited a comment on issue #12561:
I looked at the `SO_RCVBUF` & `SO_SNDBUF` implementations of Linux & MacOS.

By default, both platforms use a dynamic buffer capacity feature, which is disabled when the user explicitly sets `SO_RCVBUF` or `SO_SNDBUF`. On Linux the relevant search terms are `SOCK_SNDBUF_LOCK` & `SOCK_RCVBUF_LOCK`; on MacOS this is controlled by `SB_AUTOSIZE`.

That explains the behavior observed in this issue.
Also relevant:

- On MacOS, the socket options reflect the _current_ capacity of the buffer. If not explicitly set, `getsockopt` may return different values over the lifetime of the connection as the buffer fills & drains.
- On Linux, the socket options reflect the _maximum_ buffer capacity. If not explicitly set, Linux computes a maximum based on connection statistics. This currently occurs only once, when the connection is established. This causes the one-time jump in the value returned by `getsockopt`, which afterwards remains stable.
Importantly, this behavior is not specific to listener sockets or inheritance. The same effect occurs on client sockets. Example:
```rust
let recv_before = sock.get_receive_buffer_size().unwrap();
let send_before = sock.get_send_buffer_size().unwrap();

sock.connect(addr).await.unwrap();

let recv_after = sock.get_receive_buffer_size().unwrap();
let send_after = sock.get_send_buffer_size().unwrap();
println!("Recv {recv_before} -> {recv_after}");
println!("Send {send_before} -> {send_after}");
```

Prints:

```
Recv 65536 -> 65536
Send 8192 -> 43520
```
With this in mind, the underlying issue seems less about inheritance and more about the fact that `getsockopt` with `SO_RCVBUF` or `SO_SNDBUF` returns meaningless values until either:

- the option has been explicitly set, or
- a connection has been established.
It's unclear to me how WASI should handle this situation. Automatically calling `setsockopt` on every accepted socket just to satisfy a single test seems undesirable, as it would add runtime overhead and disable the kernel's dynamic buffer sizing.
- One option is to leave the behavior as-is and simply document these platform-specific quirks.
- Another possibility is to have `get_send_buffer_size`/`get_receive_buffer_size` return an error until the buffer size has been explicitly set or the socket has been connected. From my current understanding, there is rarely a meaningful use case for reading these values beforehand anyway. But I can't predict how many existing applications might break because of that.
- Edit: instead of returning an error, maybe returning 0 would not be so bad. That can be interpreted as: the socket starts out with an empty buffer.
In summary, I understand the problem now but I don't know the best way forward yet.
Last updated: Feb 24 2026 at 04:36 UTC