tfp.math.lbeta
Returns log(Beta(x, y)).
```python
tfp.math.lbeta(
    x, y, name=None
)
```
This is semantically equal to `lgamma(x) + lgamma(y) - lgamma(x + y)`, but the
method is more accurate for arguments above 8.
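For example (assuming TensorFlow and TensorFlow Probability are installed), the two forms can be compared directly; for large arguments the naive form can lose precision to cancellation between the large `lgamma` values, while `tfp.math.lbeta` does not:

```python
import tensorflow as tf
import tensorflow_probability as tfp

x = tf.constant([0.5, 2., 1000.], dtype=tf.float64)
y = tf.constant([0.5, 3., 1000.], dtype=tf.float64)

# Stable computation.
stable = tfp.math.lbeta(x, y)

# Naive computation: subtracting one large lgamma value from the sum of
# two others erodes accuracy once the arguments are large.
naive = tf.math.lgamma(x) + tf.math.lgamma(y) - tf.math.lgamma(x + y)
```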
The reason for accuracy loss in the naive computation is catastrophic
cancellation between the lgammas. This method avoids the numeric cancellation
by explicitly decomposing lgamma into the Stirling approximation and an
explicit `log_gamma_correction`, and cancelling the large terms from the
Stirling approximation analytically.
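For intuition, here is a minimal sketch of that decomposition (not the library's actual implementation; `lbeta_sketch` and the subtraction-based `log_gamma_correction` below are illustrative helpers, and both arguments are assumed to be at least 8 so the Stirling form applies):

```python
import numpy as np
import tensorflow as tf

LOG_2PI = np.log(2. * np.pi)

def log_gamma_correction(z):
  # Residual of lgamma after removing its Stirling terms. Recovered by
  # subtraction here purely for illustration; the library evaluates this
  # small correction directly, which is what makes the method accurate.
  stirling = 0.5 * LOG_2PI + (z - 0.5) * tf.math.log(z) - z
  return tf.math.lgamma(z) - stirling

def lbeta_sketch(x, y):
  # Assumes x, y >= 8 so the Stirling decomposition is valid.
  # Substituting lgamma(z) = 0.5*log(2*pi) + (z - 0.5)*log(z) - z + corr(z)
  # into lgamma(x) + lgamma(y) - lgamma(x + y), the linear terms
  # -x - y + (x + y) cancel exactly, leaving:
  return (0.5 * LOG_2PI
          + (x - 0.5) * tf.math.log(x)
          + (y - 0.5) * tf.math.log(y)
          - (x + y - 0.5) * tf.math.log(x + y)
          + log_gamma_correction(x)
          + log_gamma_correction(y)
          - log_gamma_correction(x + y))
```

The remaining log terms above can still cancel against one another; the approach in [1], Section IV, additionally regroups them into log-ratios to suppress that residual cancellation.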
The computed gradients are the same as for the naive forward computation,
because (i) digamma grows much slower than lgamma, so cancellations aren't as
bad, and (ii) it's simpler and faster than trying to be more accurate.
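As a quick check of that claim (a hypothetical example, assuming TF eager execution), the gradient with respect to `x` should match the derivative of the naive formula:

```python
import tensorflow as tf
import tensorflow_probability as tfp

x = tf.constant(50., dtype=tf.float64)
y = tf.constant(30., dtype=tf.float64)

with tf.GradientTape() as tape:
  tape.watch(x)
  z = tfp.math.lbeta(x, y)

# d/dx [lgamma(x) + lgamma(y) - lgamma(x + y)] = digamma(x) - digamma(x + y)
grad = tape.gradient(z, x)
expected = tf.math.digamma(x) - tf.math.digamma(x + y)
```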
References:
[1] DiDonato and Morris, "Significant Digit Computation of the Incomplete Beta
Function Ratios", 1988. Technical report NSWC TR 88-365, Naval Surface
Warfare Center (K33), Dahlgren, VA 22448-5000. Section IV, Auxiliary
Functions. https://apps.dtic.mil/dtic/tr/fulltext/u2/a210118.pdf
| Args | |
|--------|---------------------------------------------|
| `x` | Floating-point `Tensor`. |
| `y` | Floating-point `Tensor`. |
| `name` | Optional Python `str` naming the operation. |
| Returns | |
|---------|---------------------------------------|
| `lbeta` | `Tensor` of elementwise log beta(x, y). |