
DOC: added exp weighting clarifications from #8861 #11361


Merged
merged 1 commit into from
Nov 1, 2015

Conversation

matthewgilbert
Contributor

I integrated the comments in #8861 from @seth-p into the exponential weighted moment functions docs.

@jorisvandenbossche jorisvandenbossche added this to the 0.17.1 milestone Oct 18, 2015
@jorisvandenbossche
Member

Thanks! Looks good from a technical point of view.
@seth-p Can you do a review?

@seth-p
Contributor

seth-p commented Oct 19, 2015

It looks good, though it would be even better to include some of the substance of @immerrr's #8861 (comment), specifically that for the adjust=False case to make sense, you need to assume that "the first element is not an ordinary value, but rather a prior knowledge of the whole infinite-ish series".

@matthewgilbert
Contributor Author

Good point, I'll put some of that content in.

@jreback
Contributor

jreback commented Oct 19, 2015

@matthewgilbert pls review the docstring of pd.ewma as well for consistency with this (you can also put in a link to the actual docs so as not to make it overly long)

@matthewgilbert
Contributor Author

@jreback Bit confused by your suggestion to link to the docstring; was there a specific place you had in mind for this?

@jreback
Contributor

jreback commented Oct 20, 2015

put a link in the doc-string to the more detailed explanations

@matthewgilbert
Contributor Author

Incorporated a link into the docstring and added some explanation describing the assumptions on the first element when adjust=False; squashed this into one commit.

@seth-p
Contributor

seth-p commented Oct 24, 2015

@matthewgilbert, I'm afraid I don't think you incorporated @immerrr's point in #8861 (comment) correctly. It is when adjust=False that one is implicitly assuming that x[0] is a moving average encapsulating the data from all "previous" observations; whereas for adjust=True one is implicitly assuming that x[0] is an observation just like every other x[i], i>0. (In all of these, 0 should really be replaced by the index of the first non-NaN value in x[], but I'm glossing over that subtlety.)
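[Editor's note] The distinction above can be sketched with a small example. This uses the modern `.ewm()` accessor rather than the `pd.ewma` function discussed in this PR; the expected values are worked out by hand from the two definitions.

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0])
alpha = 0.5

# adjust=False: x[0] seeds the recursion as if it summarized the
# infinite past; thereafter y_t = alpha * x_t + (1 - alpha) * y_{t-1}
unadjusted = s.ewm(alpha=alpha, adjust=False).mean()
# -> [1.0, 1.5, 2.25]

# adjust=True: x[0] is an ordinary observation; each y_t is a plain
# weighted average with weights (1 - alpha)**i, most recent value first
adjusted = s.ewm(alpha=alpha, adjust=True).mean()
# y_1 = (2 + 0.5*1) / (1 + 0.5)        = 5/3
# y_2 = (3 + 0.5*2 + 0.25*1) / 1.75    = 17/7
```

Note that the two variants agree on the first element and diverge afterwards, which is exactly the point about how x[0] is interpreted.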

@matthewgilbert
Contributor Author

@seth-p Thanks, yes this was a typo, sorry about that. Just to be clear, changing "When adjust=True we have..." to "When adjust=False we have..." should fix this, or is there something else I'm missing?

@seth-p
Contributor

seth-p commented Oct 25, 2015

@matthewgilbert, I think that's fine, though I also think `` :math:`\alpha x_t + (1 - \alpha) y_{t-1}` `` should be `` :math:`y_t = \alpha x_t + (1 - \alpha) y_{t-1}` ``.
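[Editor's note] For reference, the adjust=False recursion in that formula can be written out directly. This is a minimal plain-Python sketch of the definition under discussion, not pandas code:

```python
def ewm_unadjusted(x, alpha):
    """Exponentially weighted mean with adjust=False semantics:
    y_0 = x_0, then y_t = alpha * x_t + (1 - alpha) * y_{t-1}."""
    y = [x[0]]  # first value seeds the recursion
    for x_t in x[1:]:
        y.append(alpha * x_t + (1 - alpha) * y[-1])
    return y

ewm_unadjusted([1.0, 2.0, 3.0], 0.5)  # -> [1.0, 1.5, 2.25]
```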

@matthewgilbert
Contributor Author

All comments have been incorporated. I squashed this all into one commit; I'm unsure whether that sends notifications, so I thought I would comment here to be sure.

@jreback
Contributor

jreback commented Oct 31, 2015

@matthewgilbert yes, a push does not ping anyone.

This looks good.

@seth-p any final comments?

@seth-p
Contributor

seth-p commented Nov 1, 2015

Looks good.

jorisvandenbossche added a commit that referenced this pull request Nov 1, 2015
DOC: added exp weighting clarifications from #8861
@jorisvandenbossche jorisvandenbossche merged commit f82be6d into pandas-dev:master Nov 1, 2015
@jorisvandenbossche
Member

@matthewgilbert Thanks a lot!
In a while, this should be built into the dev docs: http://pandas-docs.github.io/pandas-docs-travis/, so you can always check there that everything looks fine.
