Weird file CI issues with benchmark_effect_size test #178

Open
abrown opened this issue May 26, 2022 · 3 comments
@abrown (Collaborator) commented May 26, 2022

On Windows, the benchmark_effect_size test has started intermittently failing since #170. It is unclear to me how that change could have affected the code that copies the built engine to a location provided by tempfile::NamedTempFile. I added an assert in hopes that it would trigger the error sooner, but the OS always thinks the file exists:

assert!(alt_engine_path.exists());
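For context, the failing step amounts to copying the built engine library to a temporary path and then handing that path to sightglass-cli. A minimal standard-library sketch of that step (the real test uses the `tempfile` crate, and the file name here is hypothetical):

```rust
use std::fs;
use std::path::{Path, PathBuf};

/// Copy the built engine library to a temporary location and return the new
/// path. This approximates the test's use of `tempfile::NamedTempFile` with
/// only the standard library; `engine-copy.dll` is an illustrative name.
fn copy_engine_to_temp(engine_path: &Path) -> std::io::Result<PathBuf> {
    let alt_engine_path = std::env::temp_dir().join("engine-copy.dll");
    fs::copy(engine_path, &alt_engine_path)?;
    // The assertion added in the issue: the OS reports that the copy exists.
    assert!(alt_engine_path.exists());
    Ok(alt_engine_path)
}
```

The assertion passes in CI, which is what makes the later access violation puzzling: the file is present on disk, but loading the copied library still crashes the process.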

One example of this failure is here:

thread 'benchmark::benchmark_effect_size' panicked at 'Unexpected failure.
code=-1073741819
...
command=`"D:\\a\\sightglass\\sightglass\\target\\debug\\sightglass-cli.exe" "benchmark" "--engine" "\\\\?\\D:\\a\\sightglass\\sightglass\\engines\\wasmtime\\engine.dll" "--engine" "C:\\Users\\RUNNER~1\\AppData\\Local\\Temp\\.tmpaNVVum" "--processes" "1" "--iterations-per-process" "3" "../../benchmarks-next/noop/benchmark.wasm"`

If this random link is to be believed, the exit code referenced above is an access violation, which might indicate, e.g., a null-pointer dereference. But why? The original engine library seems to run fine according to the logs; only the copied engine library has the problem.
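The "access violation" reading can be checked directly: Windows reports process exit statuses as NTSTATUS values, and reinterpreting the signed exit code above as an unsigned 32-bit value recovers the well-known STATUS_ACCESS_VIOLATION code. A quick sketch:

```rust
fn main() {
    // The exit code from the failing CI run.
    let exit_code: i32 = -1073741819;
    // Reinterpret the signed exit code as the underlying NTSTATUS value.
    let ntstatus = exit_code as u32;
    println!("{:#010X}", ntstatus); // prints 0xC0000005
    assert_eq!(ntstatus, 0xC000_0005); // STATUS_ACCESS_VIOLATION
}
```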

@abrown (Collaborator, Author) commented Aug 11, 2022

@fitzgen, should we now consider this closed? If the CI issues with benchmark_effect_size are no longer a problem, maybe my "access violation" hypothesis was incorrect and it was the variance issue you saw?

@fitzgen (Member) commented Aug 11, 2022

Yeah sure, we can always reopen if we see it again.

@fitzgen fitzgen closed this as completed Aug 11, 2022
abrown added a commit to abrown/sightglass that referenced this issue Mar 14, 2023
As reported in bytecodealliance#178, the `benchmark_effect_size` test on Windows
occasionally fails with a strange error code and no output. This
conditional `#[ignore]` temporarily avoids the issue until we can figure
out what is going on.
@abrown (Collaborator, Author) commented Mar 14, 2023

I don't think this is resolved. @jlb6740 just triggered it again yesterday (logs). I'm going to disable the test on Windows until we figure out what is going on.

@abrown abrown reopened this Mar 14, 2023
jlb6740 pushed a commit that referenced this issue Mar 14, 2023
As reported in #178, the `benchmark_effect_size` test on Windows
occasionally fails with a strange error code and no output. This
conditional `#[ignore]` temporarily avoids the issue until we can figure
out what is going on.
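The conditional `#[ignore]` described in the commit message can be expressed with `cfg_attr`, so the test still compiles everywhere but is skipped on Windows. A minimal sketch (the test body here is hypothetical; the actual sightglass test differs):

```rust
// Skip this test on Windows until the intermittent access violation
// (see issue #178) is understood; it still runs on other platforms.
#[cfg_attr(windows, ignore)]
#[test]
fn benchmark_effect_size() {
    // ... run sightglass-cli against two engines and compare effect size ...
}

fn main() {
    // Outside the test harness: show whether this platform would skip the test.
    println!("would ignore on this platform: {}", cfg!(windows));
}
```

With this attribute, `cargo test` on Windows reports the test as ignored rather than failing the CI job.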