
Talk:Inverse distribution


Intended topic?


The original version had the first stated density as proportional to x⁻¹, and this was broadly valid as a potential density function. However, it was not correct in terms of how it was stated to be derived in the lead, or in relation to the subsequent sections, so I have replaced it. But was the x⁻¹ density the real intended target, with the rest being mistaken? 81.98.35.149 (talk) 10:24, 29 March 2013 (UTC)
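
For reference, the x⁻¹ density normalizes on a positive interval as follows (a standard sketch, assuming support [a, b] with 0 < a < b):

$$f(x) = \frac{1}{x\,\ln(b/a)}, \qquad a \le x \le b,$$

since $\int_a^b \frac{dx}{x} = \ln\frac{b}{a}$.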

Dubious synonym


Currently the lede says

reciprocal distributions can be defined for other distributions, in which case they are more commonly called "inverse" distributions.

I think this is dubious because typing in Inverse distribution function gives a redirect to Quantile function, which says

In probability and statistics, the quantile function, also called percent point function or inverse cumulative distribution function, of the probability distribution of a random variable specifies, for a given probability, the value which the random variable will be at, or below, with that probability. [bolding added]

That is, it defines essentially the same term as the inverse function of the distribution function of x, in contrast to the lede of the present article which defines it as the distribution function of the inverse of x.

So I think this wording needs a source or needs to be deleted. Duoduoduo (talk) 17:39, 8 April 2013 (UTC)
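
In symbols, the two notions being conflated (a brief sketch of the standard definitions): the inverse distribution function, i.e. the quantile function, inverts the cdf,

$$Q(p) = F^{-1}(p) = \inf\{x : F(x) \ge p\},$$

whereas the subject of this article is the distribution of the reciprocal of the random variable,

$$G(y) = \Pr(1/X \le y).$$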

There is one example in the "see also" section, and Wikipedia has others ... Inverse-gamma distribution, Inverse-Wishart distribution, Inverse matrix gamma distribution (and some composite versions to please Bayesians). Of course there is also the Inverse Gaussian distribution, which isn't formulated as a reciprocal distribution. An "Inverse distribution function" is different from an inverse distribution, as a distribution isn't a distribution function.

x is sometimes the dummy for the original variable and sometimes for the reciprocal variable


I like the way the section on the t-distribution gives the density of the original variable X as f(x) and the density of Y = 1/X as g(y). Likewise, this notational approach is used for the normal, Cauchy, and F distributions. But the chart for the inverse uniform uses x instead of y, and the sections on the uniform distribution and on other distributions again appear to use x instead of y (unless I'm just confused about undefined notation).

I think the notational treatment of the uniform and of "other" distributions should be made the same as that for the t-distribution. Duoduoduo (talk) 17:39, 8 April 2013 (UTC)

Info on reciprocal uniform cdf and pdf appears to be wrong


The table at the top on the reciprocal uniform, as well as the section on the reciprocal uniform, gives the cdf as

This appears wrong to me for three reasons:

(1) G(x) evaluated at the upper bound does not equal 1.

(2) The median is correctly given as 2/(a+b). A median has the property that the cdf evaluated at the median equals 1/2. But

(3) G(x), where x is the reciprocal of a uniformly distributed variable Z, should have the property that

$$G(x) = \Pr(1/Z \le x) = \Pr(Z \ge 1/x) = 1 - F_Z(1/x) = \frac{b - 1/x}{b - a}, \qquad \frac{1}{b} \le x \le \frac{1}{a},$$
which does not agree with the cdf given in the article in two places.

And since the pdf given in the article agrees with (is in fact the derivative of) the given cdf, the pdf must be wrong too.

Or am I just confused? Duoduoduo (talk) 18:55, 8 April 2013 (UTC)

Also, the section on other distributions says, correctly I believe,

$$g(y) = \frac{1}{y^{2}}\,f\!\left(\frac{1}{y}\right), \qquad G(y) = 1 - F\!\left(\frac{1}{y}\right),$$

where g and G refer to the inverse distribution and f and F refer to the original distribution. These imply the expression I gave above for the cdf G, and for the pdf they imply

$$g(y) = \frac{1}{(b-a)\,y^{2}}, \qquad \frac{1}{b} \le y \le \frac{1}{a},$$

again contrary to what appears in two places in the article.

Anyone object to my making these changes to the inverse uniform pdf and cdf? Without objection, I think it needs to be done right away before too many people read it the wrong way. Duoduoduo (talk) 22:12, 8 April 2013 (UTC)
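
As a check on the expressions above (a verification sketch, assuming 0 < a < b), the proposed cdf satisfies both properties (1) and (2):

$$G(1/a) = \frac{b - a}{b - a} = 1, \qquad G\!\left(\frac{2}{a+b}\right) = \frac{b - \frac{a+b}{2}}{b - a} = \frac{(b-a)/2}{b-a} = \frac{1}{2}.$$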

The above looks broadly correct, except that the general case is incorrect unless some restriction on the range of X exists. The uniform case has the stated assumption a > 0, which implies X > 0. There is no such assumption stated for the general case, at present. With no restriction, the steps above become more complicated, starting from something like

$$G(y) = \Pr(1/X \le y) = \begin{cases} \Pr(X < 0) + \Pr(X \ge 1/y), & y > 0, \\ \Pr(1/y \le X < 0), & y < 0, \end{cases}$$

leading to

$$G(y) = \begin{cases} 1 + F(0) - F(1/y), & y > 0, \\ F(0) - F(1/y), & y < 0. \end{cases}$$
The existing t-distribution example is one where X > 0 does not apply. Of course, if the uniform case is treated more generally so as to allow a < 0, b > 0, then a two-part domain will occur for the reciprocal, so a judgement would be needed about whether this complication is worthwhile. There is also the possibility of including the (0,1) uniform case as a separate consideration. Melcombe (talk) 00:58, 9 April 2013 (UTC)
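
For the case a < 0 < b mentioned above, the two-part domain works out as follows (a short sketch using the density relation $g(y) = f(1/y)/y^{2}$): the support of Y = 1/X is $(-\infty, 1/a] \cup [1/b, \infty)$, with

$$g(y) = \frac{1}{(b-a)\,y^{2}} \quad \text{on each branch.}$$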

Variance of reciprocal normal


The statement that the variance of the reciprocal normal is infinite needs to be justified, either with a citation or at least with a proof here on the talk page. I think I can be forgiven for being skeptical about anything in this article that's not obvious, given that some of what was in the article until very recently was wrong. In particular, until four days ago the article said

[the reciprocal normal distribution's] variance is infinite and the distribution is not defined at X = 0.

The last half of that was demonstrably wrong, so I'm skeptical about the first half.

This was a very good idea for an article, and I appreciate that DrMicro created it. I want to be sure it's totally reliable. Duoduoduo (talk) 19:54, 10 April 2013 (UTC)

For example, let X be Gaussian with mean 5. In the limit as var(X) goes to zero, all the probability mass of X bunches up close to the value 5, all the probability mass of Y = 1/X bunches up close to 1/5, and the variance of Y also goes to zero. So for tiny variance of X we have tiny, and thus not infinite, variance of Y.
Therefore at a minimum this infinite variance assertion needs to be heavily qualified. I'm going to re-delete the infinite variance passage since it is demonstrably false in the general way it is stated. Duoduoduo (talk) 20:48, 10 April 2013 (UTC)
This hand-waving argument has led to an incorrect conclusion. The mean of an inverse normal distribution "does not exist" in more standard terminology, and hence the variance is infinite. For the mean of the inverse of an original distribution with finite and positive continuous density at zero, the integral for the expected value of 1/X has to cross the pole like 1/x at zero, for which the formal integral behaves like log(x), which diverges as x approaches zero. Thus a general result should be available for a wide range of distributions, not just the normal distribution. Of course it does help if people provide sources for what they put in articles. In this case the lack of sources for the general reciprocal distribution concept may indicate that it is not really notable. As someone noted above, there was even confusion as to whether the term should apply to distributions with densities proportional to 1/x. Melcombe (talk) 08:19, 11 April 2013 (UTC)
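
In symbols, the divergence sketched above (assuming f is continuous with f(0) > 0, so that $f(x) \ge f(0)/2$ on $(0, \varepsilon)$ for small enough ε):

$$\int_{0}^{\varepsilon} \frac{f(x)}{x}\,dx \;\ge\; \frac{f(0)}{2}\int_{0}^{\varepsilon} \frac{dx}{x} = \infty,$$

so E(1/X) fails to converge absolutely and the mean does not exist.
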
I agree that my argument was hand-waving because the case of zero variance is qualitatively different from the case of epsilon variance in that the latter distribution goes all the way to plus and minus infinity. As for your consideration of what happens at a zero crossing, consider first a distribution with finite support (-1, 1). The support of the inverse distribution is the union of (-infinity, -1) and (1, infinity). So to get its mean or variance, you never have to integrate across zero. Now for an unbounded original distribution like the normal, it goes to plus and minus infinity so its inverse distribution crosses zero; but as long as the inverse distribution is not infinite at y=0 it's not obvious to me that any problem arises, especially since in the specific case of the normal we have by L'Hopital's Rule that g(y) = 0 at y=0. Duoduoduo (talk) 14:59, 11 April 2013 (UTC)
In my approach, I was integrating across zero in the original distribution (density f), not in the final distribution (density g). The problem (non-existence of the mean) doesn't arise from the density g at zero, but from its behaviour at +/- infinity:

$$y\,g(y) = \frac{1}{y}\,f\!\left(\frac{1}{y}\right) \sim \frac{f(0)}{y} \quad \text{as } y \to \pm\infty,$$

so $\int y\,g(y)\,dy$ diverges like $\int dy/y$ in each tail.
You may have been confused by an earlier flawed statement interpreted as meaning that the density g doesn't exist at zero. Melcombe (talk) 16:29, 11 April 2013 (UTC)
Okay. So for the standard normal we have

$$E(1/X) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \frac{1}{x}\,e^{-x^{2}/2}\,dx,$$

whose integrand is infinite at x=0. So integrate each side of zero separately. Since an area could be finite even though the curve approaches the vertical axis asymptotically, I don't see that any general statement could be made about E(1/X) not existing. Duoduoduo (talk) 17:06, 11 April 2013 (UTC)
I am not aware of any definition of the normal distribution that states that it is the sum of two distributions around 0 but excluding 0. That the variance of the inverse of the normal distribution is infinite has been proved above. The variance of two half normal distributions may be defined, but as x -> 0 the integral increases beyond any finite value. Because a variance is always > 0, the left and right limits do not cancel, and the sum increases beyond any finite value. The correct statement appears to be (1) that the variance of the reciprocal of the normal is infinite and hence not defined and (2) that the variance of the sum of the inverse of two half normal distributions is finite provided that a finite segment around zero is omitted from the domain of integration. I think that it can now be safely concluded that the original statement that the variance of the reciprocal of the normal distribution is infinite is correct. DrMicro (talk) 13:27, 12 April 2013 (UTC)
DrMicro, I think you may have misunderstood what I said about zero above. I never said anything about two half normal distributions. The variance of the reciprocal standard normal does not exist if its mean does not exist, and its mean is

$$E(1/X) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\infty} \frac{1}{x}\,e^{-x^{2}/2}\,dx.$$

This thread contained no proof that this is infinite. However, since Melcombe has put in a source for the infinite-variance assertion, it's fine now. Duoduoduo (talk) 17:43, 12 April 2013 (UTC)
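
As a numerical sidelight (a minimal simulation sketch; the seed and sample sizes are arbitrary choices), the sample variance of 1/X for standard normal X fails to stabilize as the sample grows, consistent with the infinite-variance conclusion:

```python
import numpy as np

# Sample variance of Y = 1/X, X ~ N(0, 1), at increasing sample sizes.
# If Var(Y) were finite, these estimates would stabilize; instead they
# are dominated by the occasional draw of X very close to zero.
rng = np.random.default_rng(0)
for n in (10**3, 10**5, 10**7):
    x = rng.standard_normal(n)
    y = 1.0 / x
    print(f"n = {n:>8}: sample var of 1/X = {y.var():.3e}")
```
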
Thank you for checking the correctness of the original assertion made on the page. And thank you Melcombe for clarifying these points. This is Wikipedia at its best. DrMicro (talk) 12:39, 13 April 2013 (UTC)

Practical Aspects of Inverse Normal

The section on an inverse normal distribution, while true, is completely unhelpful for someone (like me) solving a practical engineering problem. I have a variable that, from a practical standpoint, is normally distributed, except that it is physically limited to non-negative numbers. I need the distribution of its inverse. It would be helpful for this page to say something productive about that. User:LarryMickelson — Preceding undated comment added 22:05, 27 January 2015 (UTC)
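
One practical route (a sketch under assumed parameters; the mean 5 and sd 2 below are placeholders for the actual engineering values): treat the variable as a normal truncated at zero, sample it with scipy's truncnorm, and study the reciprocal empirically, since the reciprocal's density follows from $g(y) = f(1/y)/y^{2}$ with f the truncated-normal pdf. Note that if the density is positive at the truncation point zero, the mean of the reciprocal still diverges, so quantiles are the safer summary.

```python
import numpy as np
from scipy.stats import truncnorm

# Hypothetical parameters: N(5, 2^2) truncated to X >= 0.
mu, sigma = 5.0, 2.0
a, b = (0.0 - mu) / sigma, np.inf       # standardized truncation bounds
X = truncnorm(a, b, loc=mu, scale=sigma)

rng = np.random.default_rng(0)
x = X.rvs(size=1_000_000, random_state=rng)
y = 1.0 / x                             # the reciprocal of interest

# The truncated density at x = 0 is positive, so E(1/X) diverges;
# quantile-based summaries remain well defined.
print("median of 1/X      :", np.median(y))
print("90% interval of 1/X:", np.quantile(y, [0.05, 0.95]))
```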

Section on inverse t distribution seems incorrect


The section on the inverse t distribution says

Let X be a t distributed random variate with k degrees of freedom. Then its density function is

$$f(x) = \frac{\Gamma\!\left(\frac{k+1}{2}\right)}{\sqrt{k\pi}\,\Gamma\!\left(\frac{k}{2}\right)}\left(1 + \frac{x^{2}}{k}\right)^{-\frac{k+1}{2}}.$$

The density of Y = 1/X is

$$g(y) = \frac{\Gamma\!\left(\frac{k+1}{2}\right)}{\sqrt{k\pi}\,\Gamma\!\left(\frac{k}{2}\right)\,y^{2}}\left(1 + \frac{1}{k\,y^{2}}\right)^{-\frac{k+1}{2}}. \qquad (1)$$

It appears that this density expression was derived (original research?) from the expression earlier in the article that applies only to distributions with positive support:

$$g(y) = \frac{1}{y^{2}}\,f\!\left(\frac{1}{y}\right), \qquad (2)$$

which in turn implies

$$G(y) = 1 - F\!\left(\frac{1}{y}\right). \qquad (3)$$

But I think this can't be right: for example, at y = 1/5 the left side of eq. (3) is

$$G(1/5) = \Pr(1/X \le 1/5) = \Pr(X < 0) + \Pr(X \ge 5) = \tfrac{1}{2} + 1 - F(5), \qquad (4)$$

whereas the right side of eq. (3) is

$$1 - F\!\left(\frac{1}{1/5}\right) = 1 - F(5), \qquad (5)$$

where the last expression is < 1/2 because F(5)>1/2 due to symmetry of the t density around zero.

So can we remove this section? Duoduoduo (talk) 12:47, 13 April 2013 (UTC)

Your (3) is different from the expression I put above,

$$G(y) = \begin{cases} 1 + F(0) - F(1/y), & y > 0, \\ F(0) - F(1/y), & y < 0. \end{cases}$$

If you differentiate either part of this you still get (2). I think you would need to be more careful in trying to derive (3) from (2). Your (3) suffers from the problem that G(y) tends to 1 - F(0) as y tends to plus or minus infinity, whereas these limits should be 1 or 0 respectively. Melcombe (talk) 16:36, 13 April 2013 (UTC)
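
A quick simulation check of the two-branch expression above (a sketch; k = 3 and the test points are arbitrary choices):

```python
import numpy as np
from scipy.stats import t

k = 3
F = t(k).cdf

def G(y):
    # Two-branch cdf of Y = 1/X for X ~ t_k; F(0) = 1/2 by symmetry.
    return 1 + F(0) - F(1 / y) if y > 0 else F(0) - F(1 / y)

rng = np.random.default_rng(1)
y_samples = 1.0 / rng.standard_t(k, size=1_000_000)

for y in (-2.0, -0.2, 0.2, 2.0):
    print(f"y = {y:+.1f}: empirical {np.mean(y_samples <= y):.4f}, formula {G(y):.4f}")
```
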
Okay, got it now. Thanks. Should your two-part G(y) be put into the top section, "Relation to original distribution"? Ordinarily I'd say not without a source, but you've convinced me that it's right, and it would provide context for where the g(y) expression in the t-distribution section came from. What do you think? Duoduoduo (talk) 22:13, 13 April 2013 (UTC)
I have found http://stats.stackexchange.com/questions/20666/the-reciprocal-of-t-distributed-random-variable , but that doesn't seem to count as a reliable source. In this search I came across many webpages and some papers using "reciprocal distribution" (only) for the x⁻¹ density that has now been omitted, but which was in the initial version of this article. Any thought about renaming the present version to "inverse distribution" so that "reciprocal distribution" can be used for the specific distribution that seems to appear in at least some software packages under that name? Melcombe (talk) 22:59, 13 April 2013 (UTC)

I'm still not clear what that original distribution at the article's inception was supposed to be. Was it the distribution of the reciprocal of a random variable, and if so what was supposed to be the distribution of the original random variable?

What a messy history this article has! At first, in November 2012, the article didn't say what the distribution was supposed to be the reciprocal of. Then on 6 December somebody wrongly inserted that it was the reciprocal uniform. On 28 March someone came along and changed the pdf and cdf to something closer to what would be right for the reciprocal uniform (and changed the designation pmf to the correct pdf). Then I fixed the pdf and cdf when I just recently became aware of the article.

Would you be willing to put back in something correct about what your sources call the reciprocal distribution? I'd support changing the article name to inverse distribution. If we do that, does the one called the "reciprocal distribution" still go in this article or in its own article? Duoduoduo (talk) 23:59, 13 April 2013 (UTC)

The one that appears in the literature under the name "reciprocal distribution" is the one in the existing reference by Hamming. You will see that it is its own reciprocal/inverse distribution, via the relation between the density functions: the x⁻¹ density goes to a y⁻¹ density. I think the original reason for the name "reciprocal distribution" was that its density function is/was a reciprocal function, rather than starting from the reciprocal of a random variable. We could leave things with "reciprocal distribution" having the probability distribution template detail that was in the initial version here, and with "inverse distribution" containing some brief reference to it, but being more in parallel to the existing articles product distribution and ratio distribution. From a pedantic point of view it might be better to use names like "ratio of random variables" for these, but some people prefer shorter names. If there are no objections in a day or so, I will go ahead and rename to "inverse distribution", and leave a version of "reciprocal distribution". I do have a problem in that my own access to academic journals is restricted to what is available for free online.... so that in many cases, while a search shows that a "reciprocal distribution" is being used in a paper, I can't see precisely how. Melcombe (talk) 09:38, 14 April 2013 (UTC)
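
The self-reciprocal property mentioned above can be checked directly (a short verification, assuming support [a, b] with 0 < a < b): with $f(x) = \frac{1}{x \ln(b/a)}$,

$$g(y) = \frac{1}{y^{2}}\,f\!\left(\frac{1}{y}\right) = \frac{1}{y^{2}}\cdot\frac{y}{\ln(b/a)} = \frac{1}{y\,\ln(b/a)}, \qquad \frac{1}{b} \le y \le \frac{1}{a},$$

again a density proportional to y⁻¹, now on [1/b, 1/a], where $\ln\frac{1/a}{1/b} = \ln\frac{b}{a}$ preserves the normalization.
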
OK, done this now .... two separate articles, with some cross-referencing. Melcombe (talk) 13:18, 16 April 2013 (UTC)
Nice job -- thanks! Duoduoduo (talk) 14:45, 16 April 2013 (UTC)
Thank you Duoduoduo and Melcombe. The current arrangement is much better. DrMicro