guybedford commented on issue #7569:
Note that the underlying reason for this is the use of IEEE-754 floating point to represent these times in Node.js (some APIs have upgraded to BigInt, but not all).
alexcrichton commented on issue #7569:
Sorry I'm still pretty confused as to what's going on.
> 2000 for u64 is 2000 * 100 nanoseconds is 200 microseconds
I don't understand what this means in the context of the structures here. All timestamps in preview1 are in units of nanoseconds, not in units of 100 nanoseconds. That means that 2000 nanoseconds is 2 microseconds, so I don't understand where your 200 number is coming from. Is this perhaps a bug in the Node bits where Node is reporting timestamps in units of 100 nanoseconds? If that's the case then numbers should be multiplied by 100 when making their way into WASI.
Could these assertions be left as-is, with a larger adjustment above that isn't rounded?
There are two axes here, I think. On one axis, it should probably be mandated that some granularity is required to pass the test suite. The previous granularity was too small for Node, so testing a higher granularity seems fine. Hence my suggestion of updating time in an increment of, for example, 200us, since in theory that granularity should be supported. This could perhaps go as high as 2 seconds.
The other axis is a fine-grained update of time. That part of the test could be disabled on Node. I do think, though, that if the test is updated it shouldn't test one thing in Node and another everywhere else; ideally, only the parts that Node can't support, if Node doesn't support finer-grained timestamps, would be disabled.
I also don't understand why Node is testing equality using division. Shouldn't the exact values be equal in Node? For example, if Node reports a timestamp as N and it's updated to N+200us, shouldn't the resulting timestamp afterwards be exactly N+200us? I'm not sure why the division would be needed: N should already be at the maximal precision (or so I'm assuming), and the 200us would be too; neither is more precise than what the system supports.
alexcrichton commented on issue #7569:
Also, sorry, I don't mean to be pedantic about minor test changes. It's fine to land something that works, since the cases here are relatively minor, but I'd nevertheless like to understand this change better.
guybedford deleted a comment on issue #7569:
Note that the underlying reason for this is the use of IEEE-754 floating point to represent these times in Node.js (some APIs have upgraded to BigInt, but not all).
guybedford commented on issue #7569:
@alexcrichton apologies for the confusion; I made an invalid assumption that the u64 mtime value was in units of 100 nanoseconds, due to a misreading of the docs. I've updated the PR to reference 1us accuracy instead.
I've changed the configuration option name to `FS_TIME_PRECISION`.

> I also don't understand why Node is testing equality using division. Shouldn't the exact values be equal in Node? For example if Node reports a timestamp is N and it's updated to N+200us, then shouldn't the resulting timestamp afterwards be exactly N+200us?
Because Node.js rounds to microseconds, if we set a timestamp N, the timestamp that actually gets set is a Node.js-rounded N.
> The other axis is a fine-grained update of time.
For now I've treated both of these axes the same, via the fs time precision: the unit of time we update by is twice the precision, and the check is for equality under that same precision. This makes sense, I think, because we are effectively observing an implementation-defined rounding function.
Ready for review, further feedback welcome.
Last updated: Nov 22 2024 at 17:03 UTC