Chi-squared test

Figure: The chi-squared distribution, showing χ2 on the horizontal axis and the p-value (right-tail probability) on the vertical axis.

A chi-squared test (also chi-square or χ2 test) is a statistical hypothesis test used in the analysis of contingency tables when the sample sizes are large. In simpler terms, this test is primarily used to examine whether two categorical variables (the two dimensions of a contingency table) are independent of each other, as reflected in the cell counts of the table.[1]

The test is valid when the test statistic is chi-squared distributed under the null hypothesis, specifically Pearson's chi-squared test and variants thereof. Pearson's chi-squared test is used to determine whether there is a statistically significant difference between the expected frequencies and the observed frequencies in one or more categories of a contingency table. For contingency tables with smaller sample sizes, a Fisher's exact test is used instead.

In the standard applications of this test, the observations are classified into mutually exclusive classes. If the null hypothesis that there are no differences between the classes in the population is true, the test statistic computed from the observations follows a χ2 frequency distribution. The purpose of the test is to evaluate how likely the observed frequencies would be assuming the null hypothesis is true.

Test statistics that follow a χ2 distribution occur when the observations are independent. There are also χ2 tests for testing the null hypothesis of independence of a pair of random variables based on observations of the pairs.

The term chi-squared test often refers to tests for which the distribution of the test statistic approaches the χ2 distribution asymptotically, meaning that the sampling distribution (if the null hypothesis is true) of the test statistic approximates a chi-squared distribution more and more closely as sample sizes increase.
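
As a quick illustration of this asymptotic behaviour, the following is a minimal sketch, assuming Python with NumPy and SciPy (neither of which the article references) and hypothetical class probabilities; it simulates the sampling distribution of the goodness-of-fit statistic under the null hypothesis and compares its upper tail quantile with that of the χ2 distribution.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
p = np.array([0.2, 0.3, 0.5])   # hypothetical class probabilities under the null
n = 500                         # observations per replication
reps = 10_000

stats = []
for _ in range(reps):
    counts = rng.multinomial(n, p)           # one simulated sample of classified counts
    expected = n * p
    stats.append(((counts - expected) ** 2 / expected).sum())

# The empirical upper 5% point of the statistic should be close to the
# chi-squared critical value with k - 1 = 2 degrees of freedom (about 5.99).
print(np.quantile(stats, 0.95))
print(chi2.ppf(0.95, df=len(p) - 1))
```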

History

In the 19th century, statistical analytical methods were mainly applied in biological data analysis, and it was customary for researchers such as Sir George Airy and Mansfield Merriman to assume that observations followed a normal distribution; their works were criticized by Karl Pearson in his 1900 paper.[2]

At the end of the 19th century, Pearson noticed the existence of significant skewness within some biological observations. In order to model the observations regardless of whether they were normal or skewed, Pearson, in a series of articles published from 1893 to 1916,[3][4][5][6] devised the Pearson distribution, a family of continuous probability distributions that includes the normal distribution and many skewed distributions. He proposed a method of statistical analysis consisting of using the Pearson distribution to model the observations and performing a test of goodness of fit to determine how well the model really fits the observations.

Pearson's chi-squared test

See also: Pearson's chi-squared test

In 1900, Pearson published a paper[2] on the χ2 test which is considered to be one of the foundations of modern statistics.[7] In this paper, Pearson investigated a test of goodness of fit.

Suppose that n observations in a random sample from a population are classified into k mutually exclusive classes with respective observed numbers of observations xi (for i = 1,2,…,k), and a null hypothesis gives the probability pi that an observation falls into the ith class. So we have the expected numbers mi = npi for all i, where

\[
\sum_{i=1}^{k} p_i = 1, \qquad \sum_{i=1}^{k} m_i = n \sum_{i=1}^{k} p_i = n
\]

Pearson proposed that, under the circumstance of the null hypothesis being correct, as n → ∞ the limiting distribution of the quantity given below is the χ2 distribution.

\[
X^2 = \sum_{i=1}^{k} \frac{(x_i - m_i)^2}{m_i} = \sum_{i=1}^{k} \frac{x_i^2}{m_i} - n
\]

Pearson dealt first with the case in which the expected numbers mi are large enough known numbers in all cells assuming every observation xi may be taken as normally distributed, and reached the result that, in the limit as n becomes large, X2 follows the χ2 distribution with k − 1 degrees of freedom.
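
The statistic and its tail probability can be computed directly from these definitions. The following is a minimal sketch, assuming Python with NumPy and SciPy (not part of the article) and hypothetical observed counts; scipy.stats.chisquare carries out the same goodness-of-fit calculation.

```python
import numpy as np
from scipy.stats import chi2, chisquare

x = np.array([43, 52, 54, 51])           # observed counts x_i (hypothetical data)
p = np.array([0.25, 0.25, 0.25, 0.25])   # class probabilities p_i under the null
n = x.sum()
m = n * p                                # expected counts m_i = n * p_i

# Pearson's statistic X^2 = sum (x_i - m_i)^2 / m_i, with k - 1 degrees of freedom
X2 = ((x - m) ** 2 / m).sum()
dof = len(x) - 1
p_value = chi2.sf(X2, dof)               # right-tail probability

# SciPy's built-in goodness-of-fit test reproduces the same result
X2_scipy, p_scipy = chisquare(x, f_exp=m)
print(X2, p_value, X2_scipy, p_scipy)
```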

However, Pearson next considered the case in which the expected numbers depended on parameters that had to be estimated from the sample, and suggested that, with mi denoting the true expected numbers and m′i denoting the estimated expected numbers, the difference

\[
X^2 - X'^2 = \sum_{i=1}^{k} \frac{x_i^2}{m_i} - \sum_{i=1}^{k} \frac{x_i^2}{m'_i}
\]

will usually be positive and small enough to be omitted. In conclusion, Pearson argued that if we regarded X′2 as also distributed as a χ2 distribution with k − 1 degrees of freedom, the error in this approximation would not affect practical decisions. This conclusion caused some controversy in practical applications and was not settled for 20 years, until Fisher's 1922 and 1924 papers.[8][9]

Other examples of chi-squared tests

One test statistic that follows a chi-squared distribution exactly arises in the test that the variance of a normally distributed population has a given value, based on a sample variance. Such tests are uncommon in practice because the true variance of the population is usually unknown. However, there are several statistical tests where the chi-squared distribution is approximately valid:

Fisher's exact test

For an exact test used in place of the 2 × 2 chi-squared test for independence when all the row and column totals are fixed by design, see Fisher's exact test. When the row or column margins (or both) are random variables (as in most common research designs), Fisher's exact test tends to be overly conservative and underpowered.[10]
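
As a hedged illustration, assuming Python with SciPy and a hypothetical 2 × 2 table, scipy.stats.fisher_exact computes the exact test, while scipy.stats.chi2_contingency gives the chi-squared approximation it replaces when counts are small.

```python
from scipy.stats import fisher_exact, chi2_contingency

# Hypothetical 2 x 2 table: rows are two groups, columns are outcome yes/no
table = [[8, 2],
         [1, 5]]

# Fisher's exact test, appropriate when counts are small
odds_ratio, p_exact = fisher_exact(table, alternative="two-sided")

# The chi-squared approximation is questionable here because some
# expected cell counts fall below 5
chi2_stat, p_approx, dof, expected = chi2_contingency(table)
print(p_exact, p_approx)
print(expected)
```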

Binomial test

For an exact test used in place of the 2 × 1 chi-squared test for goodness of fit, see binomial test.
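
A minimal sketch, assuming Python with SciPy and hypothetical coin-flip data: scipy.stats.binomtest returns the exact binomial p-value, and the 2 × 1 chi-squared goodness-of-fit test gives the corresponding approximation.

```python
from scipy.stats import binomtest, chisquare

# Hypothetical data: 62 successes in 100 trials, testing p = 0.5
result = binomtest(62, n=100, p=0.5, alternative="two-sided")
print(result.pvalue)                       # exact binomial p-value

# The chi-squared goodness-of-fit approximation for the same data
stat, p_approx = chisquare([62, 38], f_exp=[50, 50])
print(p_approx)
```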

Other chi-squared tests

  • Cochran–Mantel–Haenszel chi-squared test.
  • McNemar's test, used in certain 2 × 2 tables with pairing
  • Tukey's test of additivity
  • The portmanteau test in time-series analysis, testing for the presence of autocorrelation
  • Likelihood-ratio tests in general statistical modelling, for testing whether there is evidence of the need to move from a simple model to a more complicated one (where the simple model is nested within the complicated one).

Yates's correction for continuity

Main article: Yates's correction for continuity

Using the chi-squared distribution to interpret Pearson's chi-squared statistic requires one to assume that the discrete probability of observed binomial frequencies in the table can be approximated by the continuous chi-squared distribution. This assumption is not quite correct and introduces some error.

To reduce the error in approximation, Frank Yates suggested a correction for continuity that adjusts the formula for Pearson's chi-squared test by subtracting 0.5 from the absolute difference between each observed value and its expected value in a 2 × 2 contingency table.[11] This reduces the chi-squared value obtained and thus increases its p-value.
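
A minimal sketch of the effect of the correction, assuming Python with SciPy and a hypothetical 2 × 2 table; scipy.stats.chi2_contingency applies Yates's correction to 2 × 2 tables by default and can be asked to omit it for comparison.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table with modest counts
table = [[12, 8],
         [5, 15]]

# With Yates's continuity correction (SciPy's default for 2 x 2 tables)
chi2_corr, p_corr, dof, expected = chi2_contingency(table, correction=True)

# Without the correction the statistic is larger and the p-value smaller
chi2_raw, p_raw, _, _ = chi2_contingency(table, correction=False)
print(chi2_corr, p_corr)
print(chi2_raw, p_raw)
```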

Chi-squared test for variance in a normal population

If a sample of size n is taken from a population having a normal distribution, then there is a result (see distribution of the sample variance) which allows a test to be made of whether the variance of the population has a pre-determined value. For example, a manufacturing process might have been in stable condition for a long period, allowing a value for the variance to be determined essentially without error. Suppose that a variant of the process is being tested, giving rise to a small sample of n product items whose variation is to be tested. The test statistic T in this instance could be set to be the sum of squares about the sample mean, divided by the nominal value for the variance (i.e. the value to be tested as holding). Then T has a chi-squared distribution with n − 1 degrees of freedom. For example, if the sample size is 21, the acceptance region for T with a significance level of 5% is between 9.59 and 34.17.
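
A minimal sketch of this variance test, assuming Python with NumPy and SciPy, a hypothetical nominal variance, and simulated data; the acceptance region for n = 21 reproduces the values quoted above.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
sigma0_sq = 4.0                                      # nominal variance under the null (hypothetical)
sample = rng.normal(loc=10.0, scale=2.0, size=21)    # n = 21 hypothetical measurements

n = len(sample)
# T = sum of squares about the sample mean, divided by the nominal variance
T = np.sum((sample - sample.mean()) ** 2) / sigma0_sq

# Two-sided acceptance region at the 5% significance level, n - 1 = 20 degrees of freedom
lower = chi2.ppf(0.025, df=n - 1)   # about 9.59
upper = chi2.ppf(0.975, df=n - 1)   # about 34.17
print(T, lower, upper, lower <= T <= upper)
```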

Example chi-squared test for categorical data

Suppose there is a city of 1,000,000 residents with four neighborhoods: A, B, C, and D. A random sample of 650 residents of the city is taken and their occupation is recorded as "white collar", "blue collar", or "no collar". The null hypothesis is that each person's neighborhood of residence is independent of the person's occupational classification. The data are tabulated as:

                 A     B     C     D   Total
White collar    90    60   104    95     349
Blue collar     30    50    51    20     151
No collar       30    40    45    35     150
Total          150   150   200   150     650

Let us take the proportion of the sample living in neighborhood A, 150/650, to estimate what proportion of the whole 1,000,000 live in neighborhood A. Similarly we take 349/650 to estimate what proportion of the 1,000,000 are white-collar workers. By the assumption of independence under the null hypothesis, we should "expect" the number of white-collar workers in neighborhood A to be

\[
150 \times \frac{349}{650} \approx 80.54
\]

Then in that "cell" of the table, we have

\[
\frac{(\text{observed} - \text{expected})^2}{\text{expected}} = \frac{(90 - 80.54)^2}{80.54} \approx 1.11
\]

The sum of these quantities over all of the cells is the test statistic; in this case it is approximately 24.57. Under the null hypothesis, this sum has approximately a chi-squared distribution whose number of degrees of freedom is

\[
(\text{number of rows} - 1)(\text{number of columns} - 1) = (3 - 1)(4 - 1) = 6
\]

If the test statistic is improbably large according to that chi-squared distribution, then one rejects the null hypothesis of independence.
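
The whole calculation can be reproduced in a few lines. The following is a minimal sketch assuming Python with NumPy and SciPy (not mentioned in the article); scipy.stats.chi2_contingency derives the expected counts from the margins, computes the statistic and its p-value, and omits the continuity correction for tables larger than 2 × 2.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Observed counts from the neighborhood / occupation table
observed = np.array([[90, 60, 104, 95],    # white collar
                     [30, 50,  51, 20],    # blue collar
                     [30, 40,  45, 35]])   # no collar

chi2_stat, p_value, dof, expected = chi2_contingency(observed)
print(round(chi2_stat, 2))       # about 24.57
print(dof)                       # 6 degrees of freedom
print(round(p_value, 5))         # about 0.0004, so independence is rejected at common levels
print(round(expected[0, 0], 2))  # about 80.54, the expected white-collar count for A
```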

A related issue is a test of homogeneity. Suppose that instead of giving every resident of each of the four neighborhoods an equal chance of inclusion in the sample, we decide in advance how many residents of each neighborhood to include. Then each resident has the same chance of being chosen as do all residents of the same neighborhood, but residents of different neighborhoods would have different probabilities of being chosen if the four sample sizes are not proportional to the populations of the four neighborhoods. In such a case, we would be testing "homogeneity" rather than "independence". The question is whether the proportions of blue-collar, white-collar, and no-collar workers in the four neighborhoods are the same. However, the test is done in the same way.

Applications

In cryptanalysis, the chi-squared test is used to compare the distribution of plaintext and (possibly) decrypted ciphertext. The lowest value of the test statistic indicates that the decryption was successful with high probability.[12][13] This method can be generalized for solving modern cryptographic problems.[14]
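
As a hedged illustration of this use, the sketch below (plain Python, hypothetical helper names, approximate English letter frequencies) breaks a Caesar cipher by trying every shift and keeping the candidate plaintext whose letter distribution has the lowest chi-squared distance from typical English.

```python
# Approximate relative frequencies of the letters a-z in English text
ENGLISH_FREQ = [0.082, 0.015, 0.028, 0.043, 0.127, 0.022, 0.020, 0.061, 0.070,
                0.002, 0.008, 0.040, 0.024, 0.067, 0.075, 0.019, 0.001, 0.060,
                0.063, 0.091, 0.028, 0.010, 0.024, 0.002, 0.020, 0.001]

def chi_squared_score(text: str) -> float:
    """Chi-squared distance between the text's letter counts and expected English counts."""
    letters = [c for c in text.lower() if c.isalpha()]
    n = len(letters)
    counts = [letters.count(chr(ord("a") + i)) for i in range(26)]
    return sum((counts[i] - n * ENGLISH_FREQ[i]) ** 2 / (n * ENGLISH_FREQ[i]) for i in range(26))

def decrypt_caesar(ciphertext: str, shift: int) -> str:
    """Shift every letter back by the given amount, leaving other characters unchanged."""
    return "".join(chr((ord(c) - ord("a") - shift) % 26 + ord("a")) if c.isalpha() else c
                   for c in ciphertext.lower())

def crack_caesar(ciphertext: str) -> str:
    # The shift producing the lowest chi-squared score is the most likely decryption
    return min((decrypt_caesar(ciphertext, s) for s in range(26)), key=chi_squared_score)

# For typical English messages the correct shift yields the lowest score
print(crack_caesar("wkh fklvtxduhg whvw lv xvhg lq fubswdqdobvlv"))
```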

In bioinformatics, the chi-squared test is used to compare the distribution of certain properties of genes (e.g., genomic content, mutation rate, interaction network clustering, etc.) belonging to different categories (e.g., disease genes, essential genes, genes on a certain chromosome etc.).[15][16]

Limitations

The chi-squared test, while widely used for categorical data analysis, has several important limitations that researchers should consider. First, it assumes that observations are independent; violating this assumption can lead to misleading results. Second, the test is sensitive to sample size. In very large samples, even trivial differences between observed and expected frequencies may produce statistically significant results, while in very small samples, the test may lack power to detect meaningful associations. Third, the chi-squared test does not measure the strength or practical importance of an association—it only indicates whether a significant relationship exists. Effect size measures, such as Cramér’s V or the contingency coefficient, should be reported alongside the test statistic to provide context. Finally, the test may be inaccurate when expected frequencies in any cell are very small, often recommended to be at least 5; in such cases, exact tests or alternative methods are preferred. Researchers must also be cautious when dealing with unevenly distributed categories, as dominant groups can overshadow patterns in smaller groups.
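
Because of these limitations, an effect size is often reported alongside the statistic. The following is a minimal sketch of Cramér's V, assuming Python with NumPy and SciPy and reusing the neighborhood table from the worked example above; the helper function is a hypothetical illustration rather than a standard library routine.

```python
import numpy as np
from scipy.stats import chi2_contingency

def cramers_v(observed: np.ndarray) -> float:
    """Cramér's V = sqrt(chi2 / (n * (min(rows, cols) - 1)))."""
    chi2_stat, _, _, _ = chi2_contingency(observed, correction=False)
    n = observed.sum()
    r, c = observed.shape
    return float(np.sqrt(chi2_stat / (n * (min(r, c) - 1))))

observed = np.array([[90, 60, 104, 95],
                     [30, 50,  51, 20],
                     [30, 40,  45, 35]])
print(round(cramers_v(observed), 2))   # about 0.14: statistically significant but weak association
```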

See also

  • Mathematics portal
  • Chi-squared test nomogram
  • GEH statistic
  • G-test
  • Minimum chi-square estimation
  • Nonparametric statistics
  • Wald test
  • Wilson score interval

References

  1. ^ "Chi-Square - Sociology 3112 - Department of Sociology - The University of utah". soc.utah.edu. Retrieved 2022-11-12.
  2. ^ a b Pearson, Karl (1900). "On the criterion that a given system of deviations from the probable in the case of a correlated system of variables is such that it can be reasonably supposed to have arisen from random sampling". Philosophical Magazine. Series 5. 50 (302): 157–175. doi:10.1080/14786440009463897.
  3. ^ Pearson, Karl (1893). "Contributions to the mathematical theory of evolution [abstract]". Proceedings of the Royal Society. 54: 329–333. doi:10.1098/rspl.1893.0079. JSTOR 115538.
  4. ^ Pearson, Karl (1895). "Contributions to the mathematical theory of evolution, II: Skew variation in homogeneous material". Philosophical Transactions of the Royal Society. 186: 343–414. Bibcode:1895RSPTA.186..343P. doi:10.1098/rsta.1895.0010. JSTOR 90649.
  5. ^ Pearson, Karl (1901). "Mathematical contributions to the theory of evolution, X: Supplement to a memoir on skew variation". Philosophical Transactions of the Royal Society A. 197 (287–299): 443–459. Bibcode:1901RSPTA.197..443P. doi:10.1098/rsta.1901.0023. JSTOR 90841.
  6. ^ Pearson, Karl (1916). "Mathematical contributions to the theory of evolution, XIX: Second supplement to a memoir on skew variation". Philosophical Transactions of the Royal Society A. 216 (538–548): 429–457. Bibcode:1916RSPTA.216..429P. doi:10.1098/rsta.1916.0009. JSTOR 91092.
  7. ^ Cochran, William G. (1952). "The Chi-square Test of Goodness of Fit". The Annals of Mathematical Statistics. 23 (3): 315–345. doi:10.1214/aoms/1177729380. JSTOR 2236678.
  8. ^ Fisher, Ronald A. (1922). "On the Interpretation of χ2 from Contingency Tables, and the Calculation of P". Journal of the Royal Statistical Society. 85 (1): 87–94. doi:10.2307/2340521. JSTOR 2340521.
  9. ^ Fisher, Ronald A. (1924). "The Conditions Under Which χ2 Measures the Discrepancy Between Observation and Hypothesis". Journal of the Royal Statistical Society. 87 (3): 442–450. JSTOR 2341149.
  10. ^ Campbell, Ian (2007-08-30). "Chi-squared and Fisher–Irwin tests of two-by-two tables with small sample recommendations". Statistics in Medicine. 26 (19): 3661–3675. doi:10.1002/sim.2832. ISSN 0277-6715. PMID 17315184.
  11. ^ Yates, Frank (1934). "Contingency table involving small numbers and the χ2 test". Supplement to the Journal of the Royal Statistical Society. 1 (2): 217–235. doi:10.2307/2983604. JSTOR 2983604.
  12. ^ "Chi-squared Statistic". Practical Cryptography. Archived from the original on 18 February 2015. Retrieved 18 February 2015.
  13. ^ "Using Chi Squared to Crack Codes". IB Maths Resources. British International School Phuket. 15 June 2014.
  14. ^ Ryabko, B. Ya.; Stognienko, V. S.; Shokin, Yu. I. (2004). "A new test for randomness and its application to some cryptographic problems" (PDF). Journal of Statistical Planning and Inference. 123 (2): 365–376. doi:10.1016/s0378-3758(03)00149-6. Retrieved 18 February 2015.
  15. ^ Feldman, I.; Rzhetsky, A.; Vitkup, D. (2008). "Network properties of genes harboring inherited disease mutations". PNAS. 105 (11): 4323–4328. Bibcode:2008PNAS..105.4323F. doi:10.1073/pnas.0701722105. PMC 2393821. PMID 18326631.
  16. ^ "chi-square-tests" (PDF). Archived from the original (PDF) on 29 June 2018. Retrieved 29 June 2018.

Further reading

  • Weisstein, Eric W. "Chi-Squared Test". MathWorld.
  • Corder, G. W.; Foreman, D. I. (2014). Nonparametric Statistics: A Step-by-Step Approach. New York: Wiley. ISBN 978-1118840313.
  • Greenwood, Cindy; Nikulin, M. S. (1996). A guide to chi-squared testing. New York: Wiley. ISBN 0-471-55779-X.
  • Nikulin, M. S. (1973). Chi-squared test for normality. Proceedings of the International Vilnius Conference on Probability Theory and Mathematical Statistics. Vol. 2. pp. 119–122.
  • Bagdonavicius, Vilijandas B.; Nikulin, Mikhail S. (2011). "Chi-squared goodness-of-fit test for right censored data". International Journal of Applied Mathematics & Statistics. 24: 30–50. MR 2800388.