Phi coefficient
In statistics, the phi coefficient, or mean square contingency coefficient, denoted by φ or rφ, is a measure of association for two binary variables.
In machine learning, it is known as the Matthews correlation coefficient (MCC) and is used as a measure of the quality of binary (two-class) classifications; this usage was introduced by biochemist Brian W. Matthews in 1975.[1]
Introduced by Karl Pearson,[2] and also known as the Yule phi coefficient from its introduction by Udny Yule in 1912,[3] this measure is similar to the Pearson correlation coefficient in its interpretation.
In meteorology, the phi coefficient,[4] or its square (the latter aligning with M. H. Doolittle's original proposition from 1885[5]), is referred to as the Doolittle Skill Score or the Doolittle Measure of Association.
Definition
A Pearson correlation coefficient estimated for two binary variables will return the phi coefficient.[6]
Two binary variables are considered positively associated if most of the data falls along the diagonal cells. In contrast, two binary variables are considered negatively associated if most of the data falls off the diagonal.
If we have a 2×2 table for two random variables x and y

| | y = 1 | y = 0 | total |
|---|---|---|---|
| x = 1 | $n_{11}$ | $n_{10}$ | $n_{1\bullet}$ |
| x = 0 | $n_{01}$ | $n_{00}$ | $n_{0\bullet}$ |
| total | $n_{\bullet 1}$ | $n_{\bullet 0}$ | $n$ |
where $n_{11}$, $n_{10}$, $n_{01}$, $n_{00}$ are non-negative counts of numbers of observations that sum to $n$, the total number of observations, and $n_{1\bullet} = n_{11} + n_{10}$, $n_{0\bullet} = n_{01} + n_{00}$, $n_{\bullet 1} = n_{11} + n_{01}$, and $n_{\bullet 0} = n_{10} + n_{00}$ are the row and column totals. The phi coefficient that describes the association of x and y is

$$\phi = \frac{n_{11} n_{00} - n_{10} n_{01}}{\sqrt{n_{1\bullet}\, n_{0\bullet}\, n_{\bullet 1}\, n_{\bullet 0}}}$$
Phi is related to the point-biserial correlation coefficient and Cohen's d and estimates the extent of the relationship between two variables (2×2).[7]
The phi coefficient can also be expressed using only $n$, $n_{11}$, $n_{1\bullet}$, and $n_{\bullet 1}$, as

$$\phi = \frac{n\, n_{11} - n_{1\bullet}\, n_{\bullet 1}}{\sqrt{n_{1\bullet}\, n_{\bullet 1}\, (n - n_{1\bullet})(n - n_{\bullet 1})}}$$
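As a concrete illustration of the definition, here is a minimal Python sketch that computes φ from the four cell counts; the function name and argument order are illustrative, not from any particular library.

```python
import math

def phi_coefficient(n11, n10, n01, n00):
    """Phi for a 2x2 table with cells n11, n10, n01, n00 (as defined above)."""
    n1_, n0_ = n11 + n10, n01 + n00   # row totals
    n_1, n_0 = n11 + n01, n10 + n00   # column totals
    denom = math.sqrt(n1_ * n0_ * n_1 * n_0)
    if denom == 0:                    # an empty margin: phi is undefined
        return float("nan")
    return (n11 * n00 - n10 * n01) / denom

# Counts taken from the classifier example later in the article
print(phi_coefficient(6, 2, 1, 3))   # ~0.478
```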
Maximum values
Although computationally the Pearson correlation coefficient reduces to the phi coefficient in the 2×2 case, they are not in general the same. The Pearson correlation coefficient ranges from −1 to +1, where ±1 indicates perfect agreement or disagreement, and 0 indicates no relationship. The phi coefficient has a maximum value that is determined by the distribution of the two variables if one or both variables can take on more than two values. See Davenport and El-Sanhury (1991)[8] for a thorough discussion.
Machine learning
The MCC is defined identically to the phi coefficient, introduced by Karl Pearson,[2][9] and also known as the Yule phi coefficient from its introduction by Udny Yule in 1912.[3] Despite these antecedents, which predate Matthews's use by several decades, the term MCC is widely used in the fields of bioinformatics and machine learning.
The coefficient accounts for true and false positives and negatives and is generally regarded as a balanced measure which can be used even if the classes are of very different sizes.[10] The MCC is in essence a correlation coefficient between the observed and predicted binary classifications; it returns a value between −1 and +1. A coefficient of +1 represents a perfect prediction, 0 no better than random prediction, and −1 indicates total disagreement between prediction and observation. However, if MCC equals neither −1, 0, nor +1, it is not a reliable indicator of how similar a predictor is to random guessing, because MCC is dependent on the dataset.[11] MCC is closely related to the chi-square statistic for a 2×2 contingency table

$$|\text{MCC}| = \sqrt{\frac{\chi^2}{n}}$$
where n is the total number of observations.
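This identity is easy to verify numerically. A small sketch using SciPy's `chi2_contingency` (Yates's continuity correction is disabled, since the identity holds for the uncorrected statistic):

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[6, 2],
                  [1, 3]])    # the 2x2 example used later in the article
n = table.sum()

# correction=False disables Yates's continuity correction.
chi2 = chi2_contingency(table, correction=False)[0]
print(np.sqrt(chi2 / n))     # ~0.478, i.e. |phi| = |MCC|
```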
While there is no perfect way of describing the confusion matrix of true and false positives and negatives by a single number, the Matthews correlation coefficient is generally regarded as being one of the best such measures.[12] Other measures, such as the proportion of correct predictions (also termed accuracy), are not useful when the two classes are of very different sizes. For example, assigning every object to the larger set achieves a high proportion of correct predictions, but is not generally a useful classification.
The MCC can be calculated directly from the confusion matrix using the formula:

$$\text{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}$$
In this equation, TP is the number of true positives, TN the number of true negatives, FP the number of false positives and FN the number of false negatives. If exactly one of the four sums in the denominator is zero, the denominator can be arbitrarily set to one; this results in a Matthews correlation coefficient of zero, which can be shown to be the correct limiting value. In case two or more sums are zero (e.g. both labels and model predictions are all positive or negative), the limit does not exist.
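A minimal Python sketch of this formula, including the zero-denominator convention just described; the function name is illustrative.

```python
import math

def mcc(tp, tn, fp, fn):
    """MCC from confusion-matrix counts, with the zero-denominator
    convention described above."""
    sums = [tp + fp, tp + fn, tn + fp, tn + fn]
    zeros = sums.count(0)
    if zeros >= 2:   # e.g. all labels and predictions positive: limit undefined
        raise ValueError("MCC is undefined for this confusion matrix")
    denom = 1.0 if zeros == 1 else math.sqrt(math.prod(sums))
    return (tp * tn - fp * fn) / denom

print(round(mcc(tp=6, tn=3, fp=1, fn=2), 3))   # 0.478
```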
The MCC can also be calculated with the formula:

$$\text{MCC} = \sqrt{PPV \times TPR \times TNR \times NPV} - \sqrt{FDR \times FNR \times FPR \times FOR}$$
using the positive predictive value, the true positive rate, the true negative rate, the negative predictive value, the false discovery rate, the false negative rate, the false positive rate, and the false omission rate.
The original formula as given by Matthews was:[1]

$$\text{MCC} = \frac{TP/N - S \times P}{\sqrt{PS(1 - S)(1 - P)}}$$

where $N = TN + TP + FN + FP$ is the total number of observations, $S = (TP + FN)/N$, and $P = (TP + FP)/N$.
This is equal to the formula given above. As a correlation coefficient, the Matthews correlation coefficient is the geometric mean of the regression coefficients of the problem and its dual. The component regression coefficients of the Matthews correlation coefficient are markedness (Δp) and Youden's J statistic (informedness or Δp′).[12][13] Markedness and informedness correspond to different directions of information flow; they generalize Youden's J statistic and the Δp statistics, while their geometric mean generalizes the Matthews correlation coefficient to more than two classes.[12]
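The geometric-mean relationship can be checked numerically. A short sketch, assuming the cat-and-dog counts from the example below (TP = 6, FN = 2, FP = 1, TN = 3):

```python
import math

tp, fn, fp, tn = 6, 2, 1, 3

informedness = tp / (tp + fn) + tn / (tn + fp) - 1   # Youden's J = TPR + TNR - 1
markedness   = tp / (tp + fp) + tn / (tn + fn) - 1   # deltaP     = PPV + NPV - 1

# Both quantities share the sign of TP*TN - FP*FN, so the signed
# geometric mean below is well defined and equals the MCC.
mcc = math.copysign(math.sqrt(informedness * markedness), informedness)
print(round(mcc, 3))   # 0.478
```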
Some scientists claim the Matthews correlation coefficient to be the most informative single score to establish the quality of a binary classifier prediction in a confusion matrix context.[14][15]
Example
Given a sample of 12 pictures, 8 of cats and 4 of dogs, where cats belong to class 1 and dogs belong to class 0,

actual = [1,1,1,1,1,1,1,1,0,0,0,0]

assume that a classifier that distinguishes between cats and dogs is trained, and we take the 12 pictures and run them through the classifier. The classifier makes 9 accurate predictions and misses 3: 2 cats wrongly predicted as dogs (the first 2 predictions) and 1 dog wrongly predicted as a cat (the last prediction).

prediction = [0,0,1,1,1,1,1,1,0,0,0,1]

With these two labelled sets (actual and prediction) we can create a confusion matrix that summarizes the results of testing the classifier:
| Actual class \ Predicted class | Cat | Dog |
|---|---|---|
| Cat | **6** | 2 |
| Dog | 1 | **3** |
In this confusion matrix, of the 8 cat pictures, the system judged that 2 were dogs, and of the 4 dog pictures, it predicted that 1 was a cat. All correct predictions are located in the diagonal of the table (highlighted in bold), so it is easy to visually inspect the table for prediction errors, as they will be represented by values outside the diagonal.
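The same matrix and the MCC itself can be reproduced with scikit-learn (the library cited in reference [26]); a short sketch:

```python
from sklearn.metrics import confusion_matrix, matthews_corrcoef

actual     = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
prediction = [0, 0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1]

# labels=[1, 0] orders the matrix as in the article: cats (class 1) first.
# Rows are actual classes, columns are predicted classes.
print(confusion_matrix(actual, prediction, labels=[1, 0]))
# [[6 2]
#  [1 3]]

print(round(matthews_corrcoef(actual, prediction), 3))   # 0.478
```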
In abstract terms, the confusion matrix is as follows:
| Actual class \ Predicted class | P | N |
|---|---|---|
| P | TP | FN |
| N | FP | TN |

where P = positive; N = negative; TP = true positive; FP = false positive; TN = true negative; FN = false negative.
Plugging the numbers (TP = 6, TN = 3, FP = 1, FN = 2) into the formula:

$$\text{MCC} = \frac{6 \times 3 - 1 \times 2}{\sqrt{(6+1)(6+2)(3+1)(3+2)}} = \frac{16}{\sqrt{1120}} \approx 0.478$$
Confusion matrix
Main article: Confusion matrix

Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix, as follows:
Sources: [16][17][18][19][20][21][22][23]

| Total population = P + N | Predicted positive | Predicted negative | Informedness, bookmaker informedness (BM) = TPR + TNR − 1 | Prevalence threshold (PT) = (√(TPR × FPR) − FPR) / (TPR − FPR) |
|---|---|---|---|---|
| Actual positive (P)[a] | True positive (TP), hit[b] | False negative (FN), miss, underestimation | True positive rate (TPR), recall, sensitivity (SEN), probability of detection, hit rate, power = TP / P = 1 − FNR | False negative rate (FNR), miss rate, type II error[c] = FN / P = 1 − TPR |
| Actual negative (N)[d] | False positive (FP), false alarm, overestimation | True negative (TN), correct rejection[e] | False positive rate (FPR), probability of false alarm, fall-out, type I error[f] = FP / N = 1 − TNR | True negative rate (TNR), specificity (SPC), selectivity = TN / N = 1 − FPR |
| Prevalence = P / (P + N) | Positive predictive value (PPV), precision = TP / (TP + FP) = 1 − FDR | False omission rate (FOR) = FN / (TN + FN) = 1 − NPV | Positive likelihood ratio (LR+) = TPR / FPR | Negative likelihood ratio (LR−) = FNR / TNR |
| Accuracy (ACC) = (TP + TN) / (P + N) | False discovery rate (FDR) = FP / (TP + FP) = 1 − PPV | Negative predictive value (NPV) = TN / (TN + FN) = 1 − FOR | Markedness (MK), deltaP (Δp) = PPV + NPV − 1 | Diagnostic odds ratio (DOR) = LR+ / LR− |
| Balanced accuracy (BA) = (TPR + TNR) / 2 | F1 score = 2 PPV × TPR / (PPV + TPR) = 2 TP / (2 TP + FP + FN) | Fowlkes–Mallows index (FM) = √(PPV × TPR) | phi or Matthews correlation coefficient (MCC) = √(TPR × TNR × PPV × NPV) − √(FNR × FPR × FOR × FDR) | Threat score (TS), critical success index (CSI), Jaccard index = TP / (TP + FN + FP) |
- ^ the number of real positive cases in the data
- ^ A test result that correctly indicates the presence of a condition or characteristic
- ^ Type II error: A test result which wrongly indicates that a particular condition or attribute is absent
- ^ the number of real negative cases in the data
- ^ A test result that correctly indicates the absence of a condition or characteristic
- ^ Type I error: A test result which wrongly indicates that a particular condition or attribute is present
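To make the table concrete, a brief Python sketch re-derives a handful of its entries for the cat-and-dog counts used above (TP = 6, FN = 2, FP = 1, TN = 3); the variable names simply mirror the table's abbreviations.

```python
# Base counts for the cat-and-dog example.
tp, fn, fp, tn = 6, 2, 1, 3
p, n = tp + fn, fp + tn            # actual positives and negatives

tpr = tp / p                       # true positive rate (recall, sensitivity)
tnr = tn / n                       # true negative rate (specificity)
ppv = tp / (tp + fp)               # positive predictive value (precision)
npv = tn / (tn + fn)               # negative predictive value
fnr, fpr = 1 - tpr, 1 - tnr        # miss rate and fall-out
fdr, for_ = 1 - ppv, 1 - npv       # false discovery and false omission rates

acc = (tp + tn) / (p + n)          # accuracy
ba = (tpr + tnr) / 2               # balanced accuracy
f1 = 2 * tp / (2 * tp + fp + fn)   # F1 score
mcc = (tpr * tnr * ppv * npv) ** 0.5 - (fnr * fpr * for_ * fdr) ** 0.5

print(f"ACC={acc:.3f}  BA={ba:.3f}  F1={f1:.3f}  MCC={mcc:.3f}")
# ACC=0.750  BA=0.750  F1=0.800  MCC=0.478
```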
Multiclass case
The Matthews correlation coefficient has been generalized to the multiclass case. The generalization, called the $R_K$ statistic (for K different classes), was defined in terms of a $K \times K$ confusion matrix $C$.[24][25]
When there are more than two labels the MCC will no longer range between −1 and +1. Instead the minimum value will be between −1 and 0 depending on the true distribution. The maximum value is always +1.
$$\text{MCC} = \frac{\sum_{k}\sum_{l}\sum_{m} C_{kk} C_{lm} - C_{kl} C_{mk}}{\sqrt{\sum_{k}\left(\sum_{l} C_{kl}\right)\left(\sum_{k' \neq k}\sum_{l'} C_{k'l'}\right)}\ \sqrt{\sum_{k}\left(\sum_{l} C_{lk}\right)\left(\sum_{k' \neq k}\sum_{l'} C_{l'k'}\right)}}$$

This formula can be more easily understood by defining intermediate variables:[26]

- $i$ is the actual value index,
- $j$ is the predicted value index,
- $K$ is the total number of classes,
- $t_k = \sum_{j=1}^{K} C_{kj}$, the number of times class k truly occurred,
- $p_k = \sum_{i=1}^{K} C_{ik}$, the number of times class k was predicted,
- $c = \sum_{k=1}^{K} C_{kk}$, the total number of samples correctly predicted,
- $s = \sum_{i=1}^{K}\sum_{j=1}^{K} C_{ij}$, the total number of samples.

This allows the formula to be expressed as:

$$\text{MCC} = \frac{c \times s - \sum_{k}^{K} t_k \times p_k}{\sqrt{\left(s^2 - \sum_{k}^{K} p_k^2\right)\left(s^2 - \sum_{k}^{K} t_k^2\right)}}$$
| Actual class \ Predicted class | Cat | Dog | Sum |
|---|---|---|---|
| Cat | **6** | 2 | 8 |
| Dog | 1 | **3** | 4 |
| Sum | 7 | 5 | 12 |
Using the above formula to compute the MCC for the dog-and-cat example discussed above, where the confusion matrix is treated as a 2 × 2 multiclass example, we have $t = (8, 4)$, $p = (7, 5)$, $c = 9$, and $s = 12$:

$$\text{MCC} = \frac{9 \times 12 - (8 \times 7 + 4 \times 5)}{\sqrt{\left(12^2 - (7^2 + 5^2)\right)\left(12^2 - (8^2 + 4^2)\right)}} = \frac{32}{\sqrt{70 \times 64}} \approx 0.478$$
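A minimal NumPy sketch of this intermediate-variable form; the function name is illustrative, not a library API.

```python
import numpy as np

def multiclass_mcc(C):
    """R_K statistic from a KxK confusion matrix C
    (rows: actual classes, columns: predicted classes)."""
    C = np.asarray(C, dtype=float)
    t = C.sum(axis=1)        # t_k: how often class k truly occurred
    p = C.sum(axis=0)        # p_k: how often class k was predicted
    c = np.trace(C)          # correctly predicted samples
    s = C.sum()              # total number of samples
    denom = np.sqrt((s**2 - p @ p) * (s**2 - t @ t))
    # Degenerate margins leave the score undefined; return 0, the
    # convention also used by scikit-learn's matthews_corrcoef.
    return (c * s - t @ p) / denom if denom else 0.0

print(round(multiclass_mcc([[6, 2],
                            [1, 3]]), 3))   # 0.478, matching the binary MCC
```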
An alternative generalization of the Matthews correlation coefficient to more than two classes was given by Powers[12] via the definition of correlation as the geometric mean of informedness and markedness.
Several generalizations of the Matthews correlation coefficient to more than two classes, along with new multivariate correlation metrics for multinary classification, have been presented by P. Stoica and P. Babu.[27]
See also
- Cohen's kappa
- Contingency table
- Cramér's V, a similar measure of association between nominal variables.
- F1 score
- Fowlkes–Mallows index
- Polychoric correlation (subtype: Tetrachoric correlation), when variables are seen as dichotomized versions of (latent) continuous variables
References
- ^ a b Matthews, B. W. (1975). "Comparison of the predicted and observed secondary structure of T4 phage lysozyme". Biochimica et Biophysica Acta (BBA) - Protein Structure. 405 (2): 442–451. doi:10.1016/0005-2795(75)90109-9. PMID 1180967.
- ^ a b Cramer, H. (1946). Mathematical Methods of Statistics. Princeton: Princeton University Press, p. 282 (second paragraph). ISBN 0-691-08004-6 https://archive.org/details/in.ernet.dli.2015.223699
- ^ a b Yule, G. Udny (1912). "On the Methods of Measuring Association Between Two Attributes". Journal of the Royal Statistical Society. 75 (6): 579–652. doi:10.2307/2340126. JSTOR 2340126.
- ^ Hogan, Robert J.; Mason, Ian B. (December 16, 2011). "Deterministic Forecasts of Binary Events". In Jolliffe, Ian T.; Stephenson, David B. (eds.). Forecast Verification: A Practitioner's Guide in Atmospheric Science, Second Edition. John Wiley & Sons. pp. 31–59. doi:10.1002/9781119960003.ch3. ISBN 978-0-470-66071-3.
- ^ Armistead, Timothy W. (2016). "Misunderstood and Unattributed: Revisiting M. H. Doolittle's Measures of Association, With a Note on Bayes' Theorem". The American Statistician. 70 (1): 63–73. doi:10.1080/00031305.2015.1086686. JSTOR 45118274.
- ^ Guilford, J. (1936). Psychometric Methods. New York: McGraw–Hill Book Company, Inc.
- ^ Aaron, B., Kromrey, J. D., & Ferron, J. M. (1998, November). Equating r-based and d-based effect-size indices: Problems with a commonly recommended formula. Paper presented at the annual meeting of the Florida Educational Research Association, Orlando, FL. (ERIC Document Reproduction Service No. ED433353)
- ^ Davenport, E.; El-Sanhury, N. (1991). "Phi/Phimax: Review and Synthesis". Educational and Psychological Measurement. 51 (4): 821–8. doi:10.1177/001316449105100403.
- ^ Date unclear, but prior to his death in 1936.
- ^ Boughorbel, S. B. (2017). "Optimal classifier for imbalanced data using Matthews Correlation Coefficient metric". PLOS ONE. 12 (6): e0177678. Bibcode:2017PLoSO..1277678B. doi:10.1371/journal.pone.0177678. PMC 5456046. PMID 28574989.
- ^ Chicco, D.; Tötsch, N.; Jurman, G. (2021). "The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation". BioData Mining. 14 (1): 13. doi:10.1186/s13040-021-00244-z. PMC 7863449. PMID 33541410.
- ^ a b c d Powers, David M. W. (10 October 2020). "Evaluation: from precision, recall and F-measure to ROC, informedness, markedness and correlation". arXiv:2010.16061 [cs.LG].
- ^ Perruchet, P.; Peereman, R. (2004). "The exploitation of distributional information in syllable processing". J. Neurolinguistics. 17 (2–3): 97–119. doi:10.1016/s0911-6044(03)00059-9. S2CID 17104364.
- ^ Chicco D (December 2017). "Ten quick tips for machine learning in computational biology". BioData Mining. 10: 35. doi:10.1186/s13040-017-0155-3. PMC 5721660. PMID 29234465.
- ^ Chicco D, Jurman G (February 2023). "The Matthews correlation coefficient (MCC) should replace the ROC AUC as the standard metric for assessing binary classification". BioData Mining. 16 (1): 4. doi:10.1186/s13040-023-00322-4. PMC 9938573. PMID 36800973.
- ^ Fawcett, Tom (2006). "An Introduction to ROC Analysis" (PDF). Pattern Recognition Letters. 27 (8): 861–874. doi:10.1016/j.patrec.2005.10.010. S2CID 2027090.
- ^ Provost, Foster; Tom Fawcett (2013-08-01). "Data Science for Business: What You Need to Know about Data Mining and Data-Analytic Thinking". O'Reilly Media, Inc.
- ^ Powers, David M. W. (2011). "Evaluation: From Precision, Recall and F-Measure to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63.
- ^ Ting, Kai Ming (2011). Sammut, Claude; Webb, Geoffrey I. (eds.). Encyclopedia of machine learning. Springer. doi:10.1007/978-0-387-30164-8. ISBN 978-0-387-30164-8.
- ^ Brooks, Harold; Brown, Barb; Ebert, Beth; Ferro, Chris; Jolliffe, Ian; Koh, Tieh-Yong; Roebber, Paul; Stephenson, David (2015-01-26). "WWRP/WGNE Joint Working Group on Forecast Verification Research". Collaboration for Australian Weather and Climate Research. World Meteorological Organisation. Retrieved 2019-07-17.
- ^ Chicco D, Jurman G (January 2020). "The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation". BMC Genomics. 21 (1): 6-1–6-13. doi:10.1186/s12864-019-6413-7. PMC 6941312. PMID 31898477.
- ^ Chicco D, Toetsch N, Jurman G (February 2021). "The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation". BioData Mining. 14 (13): 13. doi:10.1186/s13040-021-00244-z. PMC 7863449. PMID 33541410.
- ^ Tharwat A. (August 2018). "Classification assessment methods". Applied Computing and Informatics. 17: 168–192. doi:10.1016/j.aci.2018.08.003.
- ^ Gorodkin, Jan (2004). "Comparing two K-category assignments by a K-category correlation coefficient". Computational Biology and Chemistry. 28 (5): 367–374. doi:10.1016/j.compbiolchem.2004.09.006. PMID 15556477.
- ^ Gorodkin, Jan. "The Rk Page". The Rk Page. Retrieved 28 December 2016.
- ^ "Matthew Correlation Coefficient". scikit-learn.org.
- ^ Stoica, P.; Babu, P. (2024). "Pearson–Matthews correlation coefficients for binary and multinary classification". Signal Processing. 222: 109511. doi:10.1016/j.sigpro.2024.109511.