Talk:T-statistic

From Wikipedia, the free encyclopedia

Motivation

I think it useful to have a separate page for the t-statistic, in addition to t-tests and t-distributions: while the t-statistic is most often used in t-tests, and is useful to understand in that light, as a formula it can be understood without the machinery and assumptions of statistical hypothesis testing. Further, this gives a direct way to look up and discuss the statistics themselves without wading through discussion of t-tests, which is hopefully a useful reference.

Hope this proves useful!

—Nils von Barth (nbarth) (talk) 17:20, 19 April 2009 (UTC)

Definition

The last part of the definition is tagged as needing clarification.

The definition states:

In the case of a single-sample t-statistic, where the statistic is a single draw from a normal distribution, and thus the standard error is the (population) standard deviation, and the estimate of the error is the sample standard deviation s, divided by √n, which yields:[clarification needed]

t = (x̄ − μ) / (s / √n),

which is sometimes referred to as the t-statistic.

The statement above cannot be corrected because if we have a single draw from a normal distribution, the sample standard deviation is zero. It is not clear to me what was intended here. Actually the whole Definition section needs a revision; working on it now. Mathstat (talk) 19:33, 19 February 2011 (UTC)
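For reference in this discussion, here is a minimal sketch (in Python; the function name and data are illustrative, not from the article) of the usual n-observation one-sample t-statistic. Note that the sample standard deviation requires n ≥ 2, which is exactly the "single draw" problem noted above.

```python
import math

def one_sample_t(xs, mu0):
    """One-sample t-statistic: (sample mean - mu0) / (s / sqrt(n))."""
    n = len(xs)
    mean = sum(xs) / n
    # Sample standard deviation with n - 1 in the denominator;
    # undefined for n < 2 (the "single draw" case discussed above).
    s = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return (mean - mu0) / (s / math.sqrt(n))

print(one_sample_t([4.9, 5.1, 5.0, 5.2, 4.8], 5.0))
```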

The current definition completely misses the point. A t-statistic may or may not have Student's t-distribution (more often, it doesn't). Say, Stata (or any other statistical program) happily computes t-statistics in all regressions, regardless of the distribution of the error terms in the regression, or even the nature of the regression.  // stpasha »  07:48, 22 February 2011 (UTC)

Revised current definition:

Let β̂ be an estimator of parameter β in some statistical model. Then a t-statistic for this parameter is any quantity of the form

t = (β̂ − β0) / s.e.(β̂)

where β0 is a non-random, known constant and s.e.(β̂) is the standard error of the estimator β̂. By default, statistical packages report the t-statistic with β0 = 0 (these t-statistics are used to test the significance of the corresponding regressor). However, when the t-statistic is needed to test a hypothesis of the form H0: β = β0, a non-zero β0 may be used.

is better but ...
  • If you adopt this definition then you have a redundant constant in the numerator. The estimator β̂ − β0 has the same s.e. as β̂. The general consensus seems to be that one could define a "t"-statistic as the ratio of an estimator over its standard error. The estimator in this case is simply the difference β̂ − β0.
  • The second sentence beginning "By default ..." is a non sequitur. It is an example, not part of the definition.
  • The average person who needs to consult a WP article on "t-statistic" probably needs to see the example of the single-sample t-statistic near the beginning. Statisticians are not likely to refer to this article for reference but undergraduate students may frequently view it.
  • Yes, it is true that statistical software reports p-values of t-statistics for parameter estimates in regression, because under the null hypothesis of interest (parameter equals 0) and normal theory (Gauss–Markov) conditions the statistic does have a t-distribution. This is why the statistic is reported with the "known constant" equal to zero (because it is zero under the null). We would not want to use a software package that refused to compute p-values because the hypotheses for the computation might be false. This is the responsibility of the user: to understand the procedure and whatever conditions are necessary for its validity. You could say exactly the same thing about the one- or two-sample t-test. Of course the statistic does not have a t-distribution when the necessary conditions fail to hold. Validity of the assumptions is up to the user. The important point is that when properly applied, some t-statistics have t-distributions and some "t" ratios do not.
  • In textbooks it seems that such ratios are often called "studentized" statistics when their distributions are not t-distributions. For example, "studentized residuals", "internally studentized residuals", or "externally studentized residuals" for regression diagnostics (A First Course in Linear Model Theory by Ravishanker and Dey). In Applied Linear Regression Models, Kutner, Nachtsheim and Neter refer to "semistudentized residuals". There is the "studentized range statistic", and so on.
  • Anyway, it should be made clear somehow which of these are t-statistics (have a t-distribution) and which are "t"-type statistics. Mathstat (talk) 16:48, 25 February 2011 (UTC)
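To make the general "estimate over standard error" definition concrete, here is a hypothetical Python sketch (names and data are illustrative, not from any cited source) of the t-ratio (β̂ − β0) / s.e.(β̂) for the slope of a simple linear regression:

```python
import math

def slope_t_ratio(xs, ys, beta0=0.0):
    """t-ratio (bhat - beta0) / s.e.(bhat) for a simple OLS slope."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    bhat = sxy / sxx                                   # OLS slope estimate
    resid = [y - (my + bhat * (x - mx)) for x, y in zip(xs, ys)]
    s2 = sum(e ** 2 for e in resid) / (n - 2)          # residual variance
    se = math.sqrt(s2 / sxx)                           # s.e. of the slope
    return (bhat - beta0) / se

print(slope_t_ratio([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8]))
```

With the default beta0 = 0 this is the significance t-statistic that packages report; a non-zero beta0 corresponds to testing H0: β = β0. Whether this ratio actually follows a t-distribution depends on the model assumptions, as the bullets above discuss.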

Clarity

Can you make the article less esoteric? It is hard to understand with so much terminology.

Agreed. This number is given to me along with the predicted result from the NIR at work. I still haven't the faintest idea what to do with it, though. Is high good or low? Are the units the same as my measured variable, or a percentage, or an absolute? I've not yet seen a negative or zero t-stat; is that impossible or merely unusual? 203.36.161.41 (talk) 03:33, 1 November 2013 (UTC)

Merger proposal

I propose that T-statistic be merged into Test statistic. Though the articles are written differently, these two technical terms have identical meanings (t-statistic is an abbreviation). For this reason, and to improve clarity by building upon the extant articles, they should be merged ASAP. Jacobwsl (talk) 22:25, 4 March 2014 (UTC)

While the "t" in "t-statistic" probably comes from the word "test", the t-statistic is only one example of a test statistic. They don't have identical meanings. Since the t-statistic is widely used, it needs its own separate article. Loraof (talk) 18:43, 12 July 2017 (UTC)

Merge

Just as Z-score and standardization refer to the same concept, so do the t-statistic and Studentization. Fgnievinski (talk) 20:24, 10 May 2015 (UTC)

Agreed. This could easily be incorporated into Studentization. Andrew. Z. Colvin • Talk 06:58, 26 January 2017 (UTC)
Oppose: it seems from the definition given on the page that Studentization is a much broader statistical concept than the t-statistic. The t-statistic is an independently notable example of studentization. However, it does seem that some consolidation is required. Perhaps merge t-statistic into Student's t-distribution? Klbrain (talk) 14:51, 4 November 2017 (UTC)
Closing, given no consensus. Klbrain (talk) 21:53, 1 December 2017 (UTC)

A t-test is not a test relating to "small sample sizes", though millions of students each year are taught this.

Is this claim I heard from a statistics grad student true? LaceyUF (talk) 14:04, 8 April 2021 (UTC)

P-value technicality

The discussion of the p-value in the opening sentence is not technically correct.

The article states that "It is also used along with p-value when running hypothesis tests where the p-value tells us what the odds are of the results to have happened."

However, the p-value in null hypothesis tests does not tell us the odds of achieving the results in the case that they 'have happened'.

It only makes sense to talk about likelihoods in terms of the results 'having happened' in a Bayesian interpretation of probability. Unlike frequentist interpretations of probability, which lead to the use of null-hypothesis tests, Bayesian interpretations assume the truth/accuracy of the observations and talk about the probability of hypotheses.

You would not use p-values in a Bayesian framework, because p-values are used when you are interested in estimating the probability of observing the data that you have, while assuming the hypothesis you want to test (i.e. a specific effect size between two variables) is true. In other words, you might have a reason for thinking there is a specific effect between two variables, and then you want to test how consistent your observations are with the idea that this effect is real.

Mathematically this is practically impossible to do, so p-values are used instead.

P-values are used in null-hypothesis testing to evaluate how consistent your observations are with the assumption that there is a precise effect of 0 between your variables of interest. The general idea is to compare this likelihood against an alpha that has been pre-specified, taking into account the chances of type 1 and type 2 errors (which most people in science don't care to assess). If observing the results on the assumption of a precise effect of zero is less likely than a type 1 error, we take this to be sufficient evidence for thinking that there actually is a non-zero effect (i.e. that the null hypothesis is false). This is then most frequently taken to vindicate the observed effect as real or meaningful.
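As a concrete illustration of the compare-p-to-alpha step described above, a Python sketch (illustrative only; for simplicity it uses the standard normal as a large-sample stand-in for the t-distribution, since the standard library has no t CDF):

```python
from statistics import NormalDist

def two_sided_p(t_stat):
    """Two-sided p-value for a test statistic, normal approximation."""
    return 2 * NormalDist().cdf(-abs(t_stat))

alpha = 0.05        # pre-specified type 1 error rate
t = 2.5             # an observed t-statistic (made-up value)
p = two_sided_p(t)
# p < alpha means "reject the null of a precise zero effect";
# as argued above, it does not by itself establish a meaningful effect.
print(p, p < alpha)
```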

What most people don't understand is that observing a p-value lower than alpha doesn't actually tell us whether there is a meaningful difference between the two variables. This has to be determined by philosophical and theoretical considerations. All the test itself tells us is that the observations are unlikely to be consistent with an effect of exactly zero, unless a type 1 error has occurred.

Observing a p-value lower than the alpha in a null-hypothesis test doesn't guarantee that your observations are consistent with your actual hypothesis. Take the example of the hypothesis 'the cancer treatment will lead to a meaningful reduction in cancer'. The null is that it won't do anything.

You can run an NHST and get a p-value that's lower than the alpha (i.e. you can reject the claim that there is an effect of zero from the cancer treatment). But it is still possible for the observations to be less consistent with your hypothesis (the cancer treatment reduces cancer) than with the null (the cancer treatment doesn't do anything).

My worry with the wording of p-values here is that this is going to contribute to people misunderstanding and misusing p-values. — Preceding unsigned comment added by 180.150.113.13 (talk) 02:15, 7 June 2021 (UTC)