Conversation
```python
if not find_spec("plotly"):
    pytest.skip("Plotly not available - cannot test force_cdn override")
```
Shouldn't this test only run when plotly is available for import?
Yes, here it says that if it can't find plotly, it skips the test; in other words, it only runs when plotly is available.
Yeah, what I mean is that I don't like the usage of `if (condition): pytest.skip()`, because it could silently skip this test entirely if Plotly is never available when the tests run.
Can't this if check be removed entirely?
Well, no. I'm happy to change it to a fail, I think that makes sense, but without plotly present, this test doesn't accomplish its goal, which is making sure "force_cdn" actually overrides plotly.py. We have another test to make sure that the CDN is also the default.
Yeah I think changing it to a fail would be better (as long as that doesn't break the test)
Okay, but I did it in a later PR.
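For reference, a minimal sketch of the change being discussed, assuming the guard sits at the top of the test; the test name and body here are illustrative, and the actual change landed in a later PR:

```python
from importlib.util import find_spec

import pytest


def test_force_cdn_overrides_plotly():
    # Fail loudly instead of skipping, so a missing Plotly install
    # can never silently disable this test.
    if not find_spec("plotly"):
        pytest.fail("Plotly not available - cannot test force_cdn override")
    ...  # the actual force_cdn assertions would go here
```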
```python
# Test user overrides
@settings(suppress_health_check=[HealthCheck.function_scoped_fixture])
@given(st.data())
```
How do `@given` and `data.draw()` work? Is it testing all possible values returned from `st_valid_path`, or a random subset?
`data.draw()` is basically what hypothesis does normally; it just lets you do the draw during the test instead of during hypothesis setup. So, like all hypothesis testing, it takes a random sample (default n=100 examples).

The `st.data()` + `data.draw()` strategy allows you to do the sampling at runtime inside your test function instead of during test setup. I was having trouble with Windows not recognizing `Path(__file__)` as a valid file (according to ChatGPT, that can happen), and wanted to use the built-in `tmp_path` fixture (pretty robust), which is only available at runtime.
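A minimal sketch of that pattern, with illustrative names and strategies (not the actual test):

```python
from hypothesis import HealthCheck, given, settings
from hypothesis import strategies as st


@settings(suppress_health_check=[HealthCheck.function_scoped_fixture])
@given(st.data())
def test_runtime_draw(tmp_path, data):
    # tmp_path is a pytest fixture, only usable inside the test body,
    # so the path strategy is drawn here at runtime rather than at
    # collection time.
    name = data.draw(st.text(alphabet="abcdefgh", min_size=1, max_size=10))
    path = tmp_path / f"{name}.html"
    path.touch()
    assert path.exists()
```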
```python
# Test with regular path
with pytest.raises(FileNotFoundError):
    PageGenerator(plotly=str(nonexistent_file_path))
```
Should test the case where a Path is passed as well, right? For this test and the others.
Suggested change:

```diff
-# Test with regular path
-with pytest.raises(FileNotFoundError):
-    PageGenerator(plotly=str(nonexistent_file_path))
+# Test with Path object
+with pytest.raises(FileNotFoundError):
+    PageGenerator(plotly=nonexistent_file_path)
+# Test with path as string
+with pytest.raises(FileNotFoundError):
+    PageGenerator(plotly=str(nonexistent_file_path))
```
This found an error, btw.
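An alternative sketch that covers both types via parametrization rather than duplicated blocks. This is illustrative only: it assumes `PageGenerator` is importable from the kaleido package, and the test name and file name are made up:

```python
import pytest

from kaleido import PageGenerator  # assumed import path


# Run the same check once with a str and once with a Path.
@pytest.mark.parametrize("convert", [str, lambda p: p], ids=["str", "Path"])
def test_missing_plotly_file(convert, tmp_path):
    nonexistent_file_path = tmp_path / "no_such_file.js"  # illustrative
    with pytest.raises(FileNotFoundError):
        PageGenerator(plotly=convert(nonexistent_file_path))
```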
```python
with pytest.warns(RuntimeWarning, match="already"):
    kaleido.start_sync_server(silence_warnings=False)

kaleido.start_sync_server(silence_warnings=True)
```
We need to explicitly test that no warning is emitted here, right?
Or have you set the Pytest configuration such that a warning will trigger a test failure?
Yes, all warnings are upgraded to errors via CLI options.
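For context, a rough in-test equivalent of what that configuration guarantees, assuming standard `warnings` semantics (the project itself relies on pytest's CLI-level warning-to-error escalation instead):

```python
import warnings

import kaleido

with warnings.catch_warnings():
    # Escalate every warning to an exception, mirroring the test
    # suite's CLI configuration; if silence_warnings=True still
    # emitted a RuntimeWarning, this call would now raise.
    warnings.simplefilter("error")
    kaleido.start_sync_server(silence_warnings=True)
```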
```python
with pytest.warns(RuntimeWarning, match="closed"):
    kaleido.stop_sync_server(silence_warnings=False)

kaleido.stop_sync_server(silence_warnings=True)
```
```python
_h_url = st.tuples(
    st.sampled_from(["s", ""]),
    st.text(
```
I might be misunderstanding here, since I'm not familiar with hypothesis, but the number of possible values that could be sampled here (character strings of 1-20 characters) is huge. Could that space dwarf the other sampling being done here and reduce the variety of the test cases? I'm not sure if that makes sense; I'm also not sure how hypothesis's sampling logic works.
This is an extremely interesting question, and I don't have a better response than "hypothesis is said to handle these things pretty reasonably, especially ensuring it tries a good range of combinations and corner cases," which I take from AI.

I'm under the impression that in this particular case, every time text is sampled, all the other options are sampled both independently and intentionally stratified to hit the corner cases.
I've been going back and forth on this, and no, I don't think the range of values makes that component more favored in sampling. I think that for continuous ranges hypothesis actually uses at least a statistical representation of the possible values (min, max, etc.).
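To make that concrete, here is a rough, self-contained version of such a tuple strategy; the alphabet, sizes, and test are assumptions, not the real `_h_url`. Each component of the tuple is drawn independently per example, and Hypothesis deliberately biases text generation toward boundary cases (minimum and maximum lengths, repeated characters) rather than sampling the space uniformly, so the large text space does not crowd out the `sampled_from` choices:

```python
from hypothesis import given
from hypothesis import strategies as st

_h_url = st.tuples(
    st.sampled_from(["s", ""]),                         # https vs http
    st.text(alphabet="abc.", min_size=1, max_size=20),  # host-ish text
)


@given(_h_url)
def test_url_shape(parts):
    scheme_suffix, host = parts
    url = f"http{scheme_suffix}://{host}"
    assert url.startswith("http")
```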
This PR implements testing for the entire public API.
It also fixes all the bugs found by that testing.
Bugs Fixed:
- `Path` instead of `str` (casting and `isinstance` flexibility)
- `.open()` given that it requires `close()` (prevents a leak in instances where the user doesn't `.open()`, a real corner case)
- `append` to it
- `asyncio.return_task()` doesn't return at ask

These were all bugs found from doing tests using the public API as it's expected to be used.
Also, typing was added, a private attribute was made public to aid testing, and tests were added.