By Julien Hernandez Lallement, 2020-02-20, in category Course
In research, we typically publish our results in the form of articles, where findings are reported together with statistical analyses, which provide a way to assess the strength and plausibility of the observed effects.
While statistical analysis is critical to any research paper, drawing conclusions from one single study might not be the best approach.
For instance, one might draw conclusions from too small a sample. Small samples limit your ability (in statistical terms, you lack power) to draw meaningful conclusions about the effects you are observing. Ideally (unfortunately, not typically), one runs an a priori power analysis using previously collected data. That is: "from what we know about the process we want to investigate, what effect size would we expect?". Using historical data helps you make an informed guess about the expected outcome, which in turn allows you to calculate the required sample size. The math behind this is not too complicated, but I won't go into detail here. Open-source software like G*Power allows you to run that kind of computation.
Alternatively, you might be looking at a very particular set of data, driven mostly by outliers (with large enough statistical power, these would normally be ruled out), so the truth you observe would not be the truth of the overall population from which you believe your sample to be drawn.
You might also simply be doing something wrong with your statistics. I myself remember running ANOVAs, non-parametric tests and correlations that required a set of assumptions to be fulfilled, without having checked these assumptions beforehand. This is, I think, a problem of how researchers are (or at least how I was) taught to use statistics... but that's for another post!
Now, when multiple papers investigating a similar process are published, one useful thing to do is to review them together and see whether trends emerge in the data collected independently by each paper. These so-called systematic reviews can be quite useful to outline unexplored areas of a process, which in turn guides researchers to collect the data needed to populate that missing part of the distribution.
Another approach, maybe one step further, is to perform a meta-analysis. This differs from a systematic review in that a meta-analysis does not simply review the current knowledge of a given process, but also quantifies the strength of the statistical effects and provides an overall estimate.
Having worked in the field of neuroscience for quite a while, a colleague and I collected data on animal empathy to quantify the strength of effects reported across the literature.
Here, I intend to report a small part of that paper to share some code that might be useful to people currently working on meta-analytic approaches.
Data typically takes the shape of distributions, which have parameters such as a mean and a standard deviation. You can do all sorts of things with these distributions, like comparing them to some value (one-sample testing: a one-sample t-test, or a Wilcoxon test if parametric assumptions are not met ;) ) or comparing them to another distribution (two-sample testing: a t-test, or Mann-Whitney for independent samples), among other things.
That's very useful, because you can tell whether these distributions are significantly different from one another (although this frequentist approach has some downsides as well, but that's for another post on Bayesian statistics).
The p-value is the probability of observing a difference at least as large as the one measured, assuming there is no true difference between the distributions. If the p-value is larger than an arbitrarily chosen threshold (the alpha level, typically 0.05), the observed difference is assumed to be explainable by random noise in the data.
The problem is that, with a sufficiently large sample, statistical tests will very often find a significant difference, sometimes even at lower alpha levels. These tiny differences that emerge as significant are often useless in practice, and no meaningful conclusion can be drawn from them. The fact that p-values depend not only on effect size but also on sample size makes it crucial to report other measures. This is why authors working with statistical comparisons should report not only p-values but also the magnitude of the difference between the distributions. In the words of Jacob Cohen: "Statistical significance is the least interesting thing about the results. You should describe the results in terms of measures of magnitude – not just, does a treatment affect people, but how much does it affect them."
A commonly cited example illustrating this problem is the Physicians' Health Study, which investigated whether aspirin could help prevent myocardial infarction (MI). The researchers tested around 22,000 subjects, and found that aspirin was associated with a reduction in MI at a significance level < .00001. In other words, very, very significant (the kind of value that makes PhD students drool...). The study was terminated earlier than scheduled due to the conclusive findings, resulting in a recommendation of aspirin to treat this problem. The problem was that the effect size was very small (r² = 0.001), suggesting that the actual between-group differences were negligible. The recommendation to use aspirin has since been changed, but for quite some time this study fueled a treatment that was not well suited to the problem. All due to a focus on the p-value...
As Gene V. Glass concluded: "The primary product of a research inquiry is one or more measures of effect size, not P values."
Effect sizes (ES) are a way to boil down the strength of an observed effect. Typically, your data will take the shape of distributions with parameters such as the mean and the standard deviation (I assume the reader knows these concepts). Since the data I use in this post was based on these parameters, I describe below the analysis pipeline for data preparation.
Assume your first study used two independent groups, each with a mean (M) and a standard deviation (SD). It is easy to convert that data into a so-called standardized effect size, given by the simple equation below:
where M, SD and n are the mean, standard deviation and sample size of distributions 1 and 2, respectively.
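The equation itself does not survive in this text, but the standard pooled-SD formulation (Cohen's d) can be sketched in Python as follows, assuming the post follows this common convention:

```python
import math

def cohens_d_between(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Cohen's d) for two independent
    groups, using the pooled standard deviation as the standardizer."""
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    return (m1 - m2) / sd_pooled
```

For example, two groups of 20 subjects with means 10 and 8 and a common SD of 2 yield d = 1.0, a large effect by Cohen's benchmarks.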
Note that very often, the standard error of the mean (SEM) is what is provided in the graphical representation of the data. You can then convert it to the SD by using the following formula:
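The conversion is a one-liner; the SD is the SEM scaled back up by the square root of the sample size:

```python
import math

def sem_to_sd(sem, n):
    # SD = SEM * sqrt(n): the SEM shrinks with sample size,
    # so multiplying by sqrt(n) recovers the sample SD.
    return sem * math.sqrt(n)
```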
Assume that the second study you screened used the same group at two different time points (e.g., before and after treatment). Here, you are dealing with a within-subject design instead of a between-subject one. In this case, you could compute the ES using the following equation:
where Mt is the mean of the initial measurement (usually baseline), M(t+i) is the mean at the second time point, SDt is the standard deviation of the distribution at the initial measurement, SD(t+i) is the standard deviation of the distribution at the second measurement point, and N represents the sample size of the group.
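As a sketch (the original equation is not reproduced here), one common convention for the within-subject standardized mean change divides the change in means by the root-mean-square of the two standard deviations:

```python
import math

def cohens_d_within(m_t, sd_t, m_ti, sd_ti):
    """Standardized mean change for a within-subject design.
    Uses the root-mean-square of the two SDs as the standardizer --
    one common convention; verify against the paper's exact equation."""
    sd_avg = math.sqrt((sd_t**2 + sd_ti**2) / 2)
    return (m_ti - m_t) / sd_avg
```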
In order to bring all measures to the same metric and to ease interpretation, the effect sizes can be transformed into a correlation coefficient, called r, following this equation:
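A widely used conversion from d to r (e.g., Borenstein et al.) includes a correction factor for unequal group sizes; this sketch assumes that formulation:

```python
import math

def d_to_r(d, n1, n2):
    """Convert a standardized mean difference d to a correlation r.
    a corrects for unequal group sizes; a = 4 when n1 == n2."""
    a = (n1 + n2) ** 2 / (n1 * n2)
    return d / math.sqrt(d**2 + a)
```

With equal groups, d = 1.0 maps to r ≈ 0.447.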
In some cases, the relevant test statistics are actually provided in the paper, in which case the following equations can be used:
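These conversions from reported test statistics to r are standard; the exact equations in the post were not preserved, so this is a sketch of the common forms for t and one-way F statistics:

```python
import math

def r_from_t(t, df):
    # r from a t statistic: r = sqrt(t^2 / (t^2 + df))
    return math.sqrt(t**2 / (t**2 + df))

def r_from_f(f, df_error):
    # r from an F statistic with 1 numerator degree of freedom:
    # r = sqrt(F / (F + df_error))
    return math.sqrt(f / (f + df_error))
```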
In some cases (typically in old publications), neither statistical information nor a graphical representation of the data is available, and only p-values are reported. In that case, you can derive z-scores from the reported p-values and use the following equation:
where z is the z-score and N is the sample size. You can extract z-scores from z-score tables like this one. Remember though, p-values depend on group size and should be evaluated cautiously.
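Instead of a lookup table, the standard normal quantile function from the Python standard library can do the conversion; this sketch assumes a two-tailed p-value and the common formula r = z / sqrt(N):

```python
from statistics import NormalDist

def r_from_p(p_two_tailed, n):
    """Convert a two-tailed p-value to r via the standard normal
    quantile: z = Phi^-1(1 - p/2), then r = z / sqrt(n)."""
    z = NormalDist().inv_cdf(1 - p_two_tailed / 2)
    return z / n ** 0.5
```

For instance, p = .05 with N = 100 corresponds to z ≈ 1.96 and r ≈ 0.196.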
At some point, we run into trouble because the distribution of r becomes increasingly skewed as it gets further from 0. To compensate for that effect, we can normalize the effect sizes using the Fisher transformation described here, following this equation:
where r is the effect size computed through the methods described above. By convention, $Z_r$ can be converted back to r for ease of interpretation.
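The Fisher transformation and its back-transform are standard:

```python
import math

def fisher_z(r):
    # Fisher transformation: Zr = 0.5 * ln((1 + r) / (1 - r)),
    # equivalently atanh(r); roughly normal with variance 1/(n - 3).
    return 0.5 * math.log((1 + r) / (1 - r))

def inverse_fisher_z(zr):
    # Back-transform for reporting: r = tanh(Zr)
    return math.tanh(zr)
```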
If you are dealing with low sample sizes, which can bias your analysis (< 20 overall, or < 10 in each group; see Nakagawa & Cuthill, 2007), you can compute the unbiased $Z_r$ ($Z_{ru}$) value using the equation proposed by Hedges & Olkin, 1985:
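The exact equation is not reproduced in this text; as a sketch, one common small-sample correction subtracts the first-order bias term of Fisher's $Z_r$, which is approximately r / (2(n - 1)). Verify this against the equation actually given in Hedges & Olkin (1985) before reusing it:

```python
import math

def unbiased_zr(r, n):
    """Small-sample bias correction for Fisher's Zr.
    Sketch assuming the first-order bias term r / (2 * (n - 1));
    check against the exact Hedges & Olkin (1985) equation."""
    zr = 0.5 * math.log((1 + r) / (1 - r))
    return zr - r / (2 * (n - 1))
```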
When conducting a meta-analysis, it is necessary to use either a fixed-effect or a random-effects statistical model. A fixed-effect model assumes that all effect sizes measure the same underlying effect, whereas a random-effects model takes into account potential between-study variance. Since the chosen model affects the interpretation of the summary estimates, one can decide which model to use by conducting a heterogeneity test, which generates the Q statistic described in eq. 14. The Q value is a measure of the dispersion of the effect sizes, and follows the chi-square distribution with k - 1 degrees of freedom, where k is the total number of effect sizes. The random-effects model assumes that the variance of each effect size ($v_i$, eq. 10) is composed of variance due to intrinsic sampling error ($v_0$, eq. 11 & 12) plus other sources of randomly distributed variability ($v_r$, eq. 12). To estimate these values, you can use formulas 10 through 15, thoroughly described by Lipsey & Wilson, 2001 and Nakagawa & Cuthill, 2007:
To complement the Q statistic, one can compute an I² statistic using eq. 16, which measures the percentage of between-study variance that is due to true heterogeneity rather than chance, where Q is calculated using eq. 14 and df is the number of effect sizes minus one; higher percentages indicate higher heterogeneity. Typically, I² is expressed as a percentage and can have a p-value associated with it which, if significant, indicates a substantial amount of heterogeneity and gives further support for a random-effects model.
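I² follows directly from Q and the degrees of freedom; by convention, negative values are truncated to zero:

```python
def i_squared(q, k):
    """I^2: percentage of total variability across effect sizes
    due to true heterogeneity rather than chance.
    q is Cochran's Q, k the number of effect sizes (df = k - 1)."""
    df = k - 1
    if q <= 0:
        return 0.0
    return max(0.0, (q - df) / q * 100)
```

For instance, Q = 20 from 11 effect sizes gives I² = 50%, often read as moderate heterogeneity.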
For each variable and its different levels, one can calculate the mean effect size, the 95% confidence interval (CI) and the z-score using eqs. 16, 17, 18 and 19. See Nakagawa & Cuthill, 2007 for more information on this topic, and note that there are other proposed methods to compare these values.
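As a sketch of the fixed-effect case (the random-effects version would add the between-study variance to each weight), the inverse-variance-weighted mean, its 95% CI and a z test can be computed like this for Fisher-Zr values:

```python
import math

def summary_effect(zr_values, ns):
    """Inverse-variance-weighted mean Fisher-Zr, its 95% CI, and a
    z test of the mean. Fixed-effect weights: w_i = n_i - 3."""
    weights = [n - 3 for n in ns]
    mean_zr = (sum(w * z for w, z in zip(weights, zr_values))
               / sum(weights))
    se = math.sqrt(1 / sum(weights))       # SE of the weighted mean
    ci = (mean_zr - 1.96 * se, mean_zr + 1.96 * se)
    z_score = mean_zr / se                 # test against zero effect
    return mean_zr, ci, z_score
```

The mean Zr and its CI bounds can then be back-transformed to r for reporting.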
There are other analyses one might want to run in a meta-analysis. In particular, file drawer analysis (which estimates how many additional effect sizes would be needed to abolish the overall meta-analytic effect) and funnel plots (which assess publication bias) could be of interest. I won't talk about these here, but you could look at JASP if you consider running them on your data.