Alternative hypothesis

Alternative assumption to the null hypothesis

Main article: Statistical hypothesis testing

In statistical hypothesis testing, the alternative hypothesis is one of the two propositions compared in the test. In general, the goal of a hypothesis test is to demonstrate that, under the given conditions, there is sufficient evidence supporting the credibility of the alternative hypothesis rather than the mutually exclusive proposition in the test, the null hypothesis.[1] The alternative hypothesis is usually consistent with the research hypothesis, because it is constructed from the literature review, previous studies, etc. However, the research hypothesis is sometimes consistent with the null hypothesis.

In statistics, the alternative hypothesis is often denoted Ha or H1. The hypotheses are formulated so that they can be compared in a statistical hypothesis test.

In the domain of inferential statistics, two rival hypotheses can be compared by explanatory power and predictive power.

Basic definition


The alternative hypothesis and null hypothesis are types of conjectures used in statistical tests, which are formal methods of reaching conclusions or making judgments on the basis of data. In statistical hypothesis testing, the null hypothesis and alternative hypothesis are two mutually exclusive statements.

"The statement being tested in a test of statistical significance is called the null hypothesis. The test of significance is designed to assess the strength of the evidence against the null hypothesis. Usually, the null hypothesis is a statement of 'no effect' or 'no difference'."[2] Null hypothesis is often denoted as H0.

The statement that is being tested against the null hypothesis is the alternative hypothesis.[2] The alternative hypothesis is often denoted Ha or H1.

In statistical hypothesis testing, to support the alternative hypothesis, the data must be shown to contradict the null hypothesis; that is, there must be sufficient evidence against the null hypothesis to conclude in favour of the alternative hypothesis. For example, for a population mean μ with reference value μ0, the pair of hypotheses might be H0: μ = μ0 against Ha: μ ≠ μ0.

Example


One example is where water quality in a stream has been observed over many years, and a test is made of the null hypothesis that "there is no change in quality between the first and second halves of the data", against the alternative hypothesis that "the quality is poorer in the second half of the record".
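A minimal sketch of this example in Python, assuming synthetic dissolved-oxygen readings as the quality measure and a one-sided two-sample t-test (the article does not prescribe a particular statistic; all names and values here are illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical quality index (e.g. dissolved oxygen): higher is better.
first_half = rng.normal(loc=7.0, scale=0.5, size=60)
second_half = rng.normal(loc=6.7, scale=0.5, size=60)

# H0: no change in mean quality between the halves.
# Ha: quality is poorer (mean is lower) in the second half,
# i.e. mean(first_half) > mean(second_half).
t_stat, p_value = stats.ttest_ind(first_half, second_half,
                                  alternative="greater")
print(f"t = {t_stat:.3f}, one-sided p = {p_value:.4f}")
```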

If a statistical hypothesis test is thought of as a court trial, the null hypothesis corresponds to the position of the defendant (the defendant is innocent), while the alternative hypothesis corresponds to the position of the prosecutor (the defendant is guilty). The defendant is presumed innocent until proven guilty; likewise, in a hypothesis test, the null hypothesis is initially presumed to be true. To prove the prosecutor's case, the evidence must be convincing enough to convict the defendant; this is analogous to reaching sufficient statistical significance in a hypothesis test.

In court, only legal evidence can be considered as the foundation for the trial. Similarly, in hypothesis testing, a reasonable test statistic must be chosen to measure the statistical significance of the evidence against the null hypothesis. The evidence supports the alternative hypothesis if the null hypothesis is rejected at a given significance level, although this does not necessarily mean the alternative hypothesis is true, owing to the possibility of a type I error. To quantify the statistical significance, the test statistic is assumed to follow a certain probability distribution, such as the normal distribution or t-distribution, from which one determines the probability of obtaining results at least as extreme as those actually observed, assuming the null hypothesis is correct; this probability is the p-value.[3][4] If the p-value is smaller than the chosen significance level α, the observed data are sufficiently inconsistent with the null hypothesis, and the null hypothesis may be rejected. After testing, a valid claim would be: "at the significance level α, the null hypothesis is rejected, supporting the alternative hypothesis instead." In the metaphor of a trial, the announcement would be: "tolerating a probability α of an incorrect conviction, the defendant is guilty."
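The decision rule described above can be sketched in Python; the one-sample t-test, the sample values, and the level α = 0.05 are illustrative assumptions, not prescribed by the text:

```python
import numpy as np
from scipy import stats

alpha = 0.05  # chosen significance level
rng = np.random.default_rng(1)
sample = rng.normal(loc=0.4, scale=1.0, size=40)

# H0: the population mean is 0.  Ha: the population mean is not 0.
t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)

if p_value < alpha:
    print(f"p = {p_value:.4f} < alpha: reject H0 in favour of Ha")
else:
    print(f"p = {p_value:.4f} >= alpha: fail to reject H0")
```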

History


The concept of an alternative hypothesis in testing was devised by Jerzy Neyman and Egon Pearson, and it is used in the Neyman–Pearson lemma. It forms a major component of modern statistical hypothesis testing. However, it was not part of Ronald Fisher's formulation of statistical hypothesis testing, and he opposed its use.[5] In Fisher's approach to testing, the central idea is to assess whether the observed dataset could have resulted from chance if the null hypothesis were assumed to hold, notionally without preconceptions about what other models might hold.[citation needed] Modern statistical hypothesis testing accommodates this type of test, since the alternative hypothesis can be just the negation of the null hypothesis.

Types


In the case of a scalar parameter, there are four principal types of alternative hypothesis:

  • Point. Point alternative hypotheses occur when the hypothesis test is framed so that the population distribution under the alternative hypothesis is a fully defined distribution, with no unknown parameters; such hypotheses are usually of no practical interest but are fundamental to theoretical considerations of statistical inference and are the basis of the Neyman–Pearson lemma.
  • One-tailed directional. A one-tailed directional alternative hypothesis is concerned with the region of rejection for only one tail of the sampling distribution.
  • Two-tailed directional. A two-tailed directional alternative hypothesis is concerned with both regions of rejection of the sampling distribution (the one-tailed versus two-tailed contrast is sketched in code after this list).
  • Non-directional. A non-directional alternative hypothesis is not concerned with either region of rejection; rather, it is only concerned that the null hypothesis is not true.
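The directional cases can be contrasted with a short Python sketch; a standard normal sampling distribution and the statistic's value are assumed purely for illustration:

```python
from scipy.stats import norm

z = 1.8  # hypothetical observed value of a z-statistic

# One-tailed alternatives use a single rejection region.
p_right = norm.sf(z)   # Ha: parameter greater than the null value
p_left = norm.cdf(z)   # Ha: parameter less than the null value

# A two-tailed alternative splits the rejection region across both tails.
p_two = 2 * norm.sf(abs(z))

print(f"right-tailed p = {p_right:.4f}")
print(f"left-tailed  p = {p_left:.4f}")
print(f"two-tailed   p = {p_two:.4f}")
```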

See also

  • Mathematics portal
  • Antithesis
  • Null hypothesis
  • Type I and type II errors

References

  1. ^ Cortinhas, Carlos; Black, Ken (23 September 2014). Statistics for Business and Economics. Wiley. p. 314. ISBN 978-1-119-94335-8.
  2. ^ a b Moore, David S.; McCabe, George P. (2003). Introduction to the Practice of Statistics (4th ed.). New York. ISBN 0-7167-9657-0. OCLC 49751157.
  3. ^ Corneliussen, Steven T. (2015-11-24). "Which scientists can winningly explain a flame, time, sleep, color, or sound to 11-year-olds?". Physics Today (11): 11792. Bibcode:2015PhT..2015k1792C. doi:10.1063/pt.5.8150. ISSN 1945-0699.
  4. ^ Wasserstein, Ronald L.; Lazar, Nicole A. (2016-04-02). "The ASA Statement on p-Values: Context, Process, and Purpose". The American Statistician. 70 (2): 129–133. doi:10.1080/00031305.2016.1154108. ISSN 0003-1305. S2CID 124084622.
  5. ^ Cohen, J. (1990). "Things I have learned (so far)". American Psychologist. 45 (12): 1304–1312. doi:10.1037/0003-066X.45.12.1304. S2CID 7180431.