TST: Add more tests for np.pad #11961


Merged · 3 commits merged into numpy:master on Sep 18, 2018

Conversation

eric-wieser (Member)

Extracted from #11358 + the addition of an xfail marker.

def test_same_prepend_append(self, mode):
# Check if the prepended and appended values are the same.
# Regression test for issue gh-11216
a = np.array([-1, 2, -1]) + np.array([0, 1e-12, 0], dtype=np.float64)
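For context, here is a minimal, hedged sketch of the property this test targets (gh-11216): with mode='mean' and a symmetric input, the value prepended on the left should equal the value appended on the right, since both are the mean of the same array. The pad widths here are illustrative, not necessarily those of the actual test.

```python
import numpy as np

# Symmetric input nudged by a tiny float, so rounding asymmetries in the
# mean computation would become visible in the pad values.
a = np.array([-1.0, 2.0, -1.0]) + np.array([0.0, 1e-12, 0.0])

# Pad one element on each side with the array mean.
padded = np.pad(a, (1, 1), mode='mean')

# Both pad values are the mean of the same array, so they should match.
print(padded[0], padded[-1])
```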
Contributor

I think it is best if we keep this test just for 'mean'. The other functions will not suffer from this bug.

Member Author

Why not test them anyway?

Contributor

I get it, but this is a very convoluted, targeted test. I think it could definitely use an in-source comment summarizing the issue.

We are essentially testing the implementation of the algorithm as well as the peculiarities of floating-point numbers. It is easy to see why maximum and minimum are unlikely to suffer from this bug, and median will not suffer from it either.

If we want to test the general behaviour for the four other modes, I think it is better to put that in a separate, more general test and keep this particular test tightly targeted.
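As a sketch of what such a separate, more general test might look like (the test name and input are hypothetical, not taken from this PR), the prepend/append symmetry can be parametrized over the other statistic modes:

```python
import numpy as np
import pytest

# Hypothetical general test: for a symmetric input, each of the remaining
# statistic modes should prepend and append identical values.
@pytest.mark.parametrize("mode", ["maximum", "minimum", "median"])
def test_symmetric_prepend_append(mode):
    a = np.array([-1.0, 2.0, -1.0])
    padded = np.pad(a, (1, 1), mode=mode)
    assert padded[0] == padded[-1]
```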

@lagru (Contributor)

lagru commented Sep 16, 2018

Would this be a good opportunity to include a test covering the behavior described in #11358 (comment):

>>> from fractions import Fraction
>>> np.pad([Fraction(10)], (2, 2), mode='linear_ramp', end_values=Fraction(1,2))
array([0.5, 5.25, Fraction(10, 1), 5.25, 0.5], dtype=object)

as well?
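For reference, the exact values such a ramp should produce can be derived by hand with Fraction arithmetic alone; this is plain arithmetic illustrating the expectation, not NumPy's implementation:

```python
from fractions import Fraction

# Linear ramp of width 2 from end_values=Fraction(1, 2) up to the edge
# value Fraction(10): sample i sits i/width of the way from end to edge.
edge, end, width = Fraction(10), Fraction(1, 2), 2
ramp = [end + (edge - end) * i / width for i in range(width)]
print(ramp)  # [Fraction(1, 2), Fraction(21, 4)], i.e. exactly 0.5 and 5.25
```

Because Fraction arithmetic is exact, the in-between value is exactly 21/4 rather than the float 5.25 seen in the output above.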

@eric-wieser (Member Author)

@lagru: I don't think the current output is really correct for that either. I suppose I could add another xfailing test for that.

@lagru (Contributor)

lagru commented Sep 16, 2018

@eric-wieser That is the behavior of the old pad function.

Would you expect the output to be array([Fraction(1, 2), Fraction(21,4), Fraction(10, 1), Fraction(21,4), Fraction(1, 2)], dtype=object)?

@eric-wieser (Member Author)

@lagru: Indeed it is, and yes I would. I think the best option here is @pytest.mark.xfail(raises=AssertionError), so that we can allow the output to be suboptimal, but still make the test catch cases when a new error is raised.
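A minimal sketch of that marker (the test body below is a made-up placeholder, not the actual np.pad test): xfail(raises=AssertionError) tolerates the known-wrong result, while any other exception type is still reported as a real failure.

```python
import pytest

# xfail(raises=AssertionError): the test is *expected* to fail with an
# AssertionError (the known-suboptimal output). If it instead raises some
# other exception -- say a TypeError from a new regression -- pytest
# reports that as an outright failure rather than an expected one.
@pytest.mark.xfail(raises=AssertionError)
def test_known_suboptimal_output():
    current = "5.25"             # placeholder: what the code returns today
    desired = "Fraction(21, 4)"  # placeholder: the exact value we would like
    assert current == desired    # fails -> counted as xfail, not an error
```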

@eric-wieser (Member Author)

Added that test, and updated the comments in the other test.

lagru and others added 2 commits September 17, 2018 02:05
The test is marked xfail right now as it is not fixed in master
@charris charris merged commit a19d9de into numpy:master Sep 18, 2018
@charris (Member)

charris commented Sep 18, 2018

Thanks Eric.

@eric-wieser eric-wieser deleted the pad-tests branch October 16, 2018 03:17