Kang et al. developed a valuable tool for visualizing differences in effect sizes between studies included in a meta-analysis and for understanding where these differences stem from. I have only a few minor comments and suggestions: 1. Since the P-value and M-value play similar roles, and the M-value is the better indicator of whether an effect exists, the PM-plot appears to show redundant information. Would there not be more value in displaying the effect size (log odds ratio) on the y-axis of the PM-plot rather than the P-value?
2. To interpret potential causes of heterogeneity, it would help to be able to visualize the covariates (e.g. sex) on the PM-plot. A colour code could be used to show whether, and how, the studies cluster by covariate (at the moment the colour code is redundant with the grey shading indicating the M < 0.1 / 0.1 < M < 0.9 / M > 0.9 sections).
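To make the suggestion concrete, something along these lines would suffice; this is only a rough sketch, and the data frame `pm` and its columns `mvalue`, `pvalue`, and `sex` are hypothetical names, not the tool's actual data structures:

```r
# Sketch: colour PM-plot points by a covariate instead of by the M-value bands.
# `pm` is an assumed data frame with one row per study (hypothetical columns).
pm <- data.frame(
  study  = 1:6,
  mvalue = c(0.05, 0.2, 0.85, 0.95, 0.5, 0.98),
  pvalue = c(0.4, 0.03, 1e-4, 1e-6, 0.1, 1e-5),
  sex    = c("F", "F", "M", "M", "F", "M")
)

cols <- ifelse(pm$sex == "F", "orange", "purple")  # one colour per covariate level
plot(pm$mvalue, -log10(pm$pvalue), col = cols, pch = 19,
     xlab = "M-value", ylab = expression(-log[10](italic(P))))
text(pm$mvalue, -log10(pm$pvalue), labels = pm$study, pos = 3)
legend("topleft", legend = c("Female", "Male"),
       col = c("orange", "purple"), pch = 19)
```

The grey shading for the M-value sections could then be kept, since it would no longer be redundant with the point colours.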
3. In the Example section, please give the exact P-value rather than P > 0.03. How is the P-value for the differential effect obtained? Is it based on studies 3 and 4 only, or on all studies? If it is based on all studies, as the M-value is, why is study 3 blue and study 4 red? If it is based on studies 3 and 4 only, I understand the result, but the reasoning developed in the Example section could be explained even more clearly.
4. A few study numbers are hard to read, especially those printed on blue dots.
5. What does the 10^-6 threshold in the PM-plot correspond to?
6. It would help to know how the studies are ordered in the forest plot.
7. Running ./run.sh initially failed because the plotrix package was not installed on my computer (the error originates in forestpmplot.R). After installing the package, the script ran as expected. It would help to document plotrix as a required dependency, or to have the script install it automatically.
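A small guard at the top of forestpmplot.R would avoid this failure for other users; this is a minimal sketch, assuming the script currently loads plotrix with a plain library() call:

```r
# Install plotrix on first run if it is missing, then load it.
if (!requireNamespace("plotrix", quietly = TRUE)) {
  install.packages("plotrix", repos = "https://cloud.r-project.org")
}
library(plotrix)
```

Alternatively, the required packages could simply be listed in the README next to the run instructions.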