Fisher's method is a technique for "meta-analysis" developed by [[Ronald Fisher]], used to combine the results of several [[Independence Test|independence tests]] bearing upon the same hypothesis. The essence of Fisher's method is combining the individual [[p-value]]s into a single [[Test Statistic|test statistic]].
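Concretely, if the $k$ tests produce p-values $p_1, \dots, p_k$, the combined statistic is

$$
X^2 = -2 \sum_{i=1}^{k} \ln(p_i),
$$

which, under the joint null hypothesis (and assuming the tests are independent), follows a chi-squared ($\chi^2$) distribution with $2k$ degrees of freedom.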
## Combining weak tests
Its main use is to combine several potentially weak tests into a single stronger one – as explained in [this example](https://scientistseessquirrel.wordpress.com/2016/06/07/the-most-useful-statistical-test-that-nobody-knows/):
> Imagine four insecticide experiments:
>
> - two _t_-tests, _P_ = 0.11 and _P_ = 0.12
> - a _G_-test, _P_ = 0.21
> - a regression, _P_ = 0.08
>
> Nothing significant, right? Wrong. Fisher’s method gives a test statistic of 16.8, with 8 degrees of freedom and a combined _P_ = 0.03. This shouldn’t shock you: while none of the individual tests have _P_ below the [(absolutist)](https://scientistseessquirrel.wordpress.com/2015/11/16/is-nearly-significant-ridiculous/) threshold of 0.05, it’s unlikely that four experiments would get four smallish values in the absence of any real effect.
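
A minimal sketch of the calculation behind the quoted example, in Python with SciPy (the p-values are those from the quote; the same result can also be obtained directly via `scipy.stats.combine_pvalues`):

```python
import math
from scipy.stats import chi2, combine_pvalues

# p-values from the four insecticide experiments in the quoted example
p_values = [0.11, 0.12, 0.21, 0.08]

# Fisher's method: X^2 = -2 * sum(ln(p_i)), chi-squared with 2k degrees of freedom
statistic = -2 * sum(math.log(p) for p in p_values)
df = 2 * len(p_values)
combined_p = chi2.sf(statistic, df)

print(f"X^2 = {statistic:.1f}, df = {df}, combined P = {combined_p:.3f}")
# X^2 = 16.8, df = 8, combined P = 0.032

# SciPy implements the same computation directly:
statistic2, combined_p2 = combine_pvalues(p_values, method="fisher")
```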
There is one important caveat though: a significant result from Fisher's method means that **you have found evidence that a pattern exists, but you have not measured its effect**.