and Excel software do all calculations for you automatically if you enter the tabular data analogous to Table 1.

7.2. Illustration for a difference in means or proportions

For each study in the conceptual universe, we project the difference between its total outcome if all patients received treatment and its total outcome if all patients were controls, computed as the difference in means (or proportions) multiplied by the total sample size (treatment + control). The target population projection adds these up for all studies in the universe and divides this total by the total number of patients in the universe of all conceptually completed studies. The estimate is simply the corresponding value in our sample. If you refer to the difference-in-means data for the second study of the second example in the Users' Guide, you will note that the experimental group had a sample mean of −3.0 in 42 patients while the control group had a sample mean of −2.5 in 51 patients. This gives a projected mean difference of −0.5 (experimental minus control) in 93 patients, for a projected total of −0.5 × 93 = −46.5. The Users' Guide and Excel software do all calculations for you automatically if you enter the tabular data analogous to this example in the Users' Guide.
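As a concrete illustration of this projection, the short Python sketch below reproduces the −0.5 × 93 = −46.5 calculation and the resulting target population estimate; the first study's numbers are hypothetical, and only the second study matches the Users' Guide example quoted above.

# Sketch of the Section 7.2 projection for a difference in means.
# Only the second study is from the worked example above; study 1 is hypothetical.
studies = [
    # (treatment_mean, treatment_n, control_mean, control_n)
    (-1.8, 30, -1.2, 28),   # hypothetical study 1
    (-3.0, 42, -2.5, 51),   # second study of the Users' Guide example
]

projected_totals = []
sample_sizes = []
for t_mean, t_n, c_mean, c_n in studies:
    n = t_n + c_n                      # total sample size (treatment + control)
    diff = t_mean - c_mean             # experimental minus control
    projected_totals.append(diff * n)  # projected total difference for the study
    sample_sizes.append(n)

# Target population projection: sum of projected totals over total patients.
estimate = sum(projected_totals) / sum(sample_sizes)
print(projected_totals)  # second entry is -0.5 * 93 = -46.5
print(estimate)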
8. How Equal Weighting Works

We do not advocate equal weighting, but it can give us important insight into the credibility of analyses that use mainstream weights. We use the same methods as the mainstream to calculate the estimate and standard error, but use equal weights instead of mainstream weights.
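A minimal Python sketch of this contrast, using hypothetical per-study estimates and standard errors, follows; it applies the same combining formula twice, once with inverse-variance (mainstream-style fixed-effect) weights and once with equal weights.

# Combined estimate and standard error under mainstream (inverse-variance)
# weights versus equal weights. All per-study inputs are hypothetical.
estimates = [0.40, 0.10, 0.25]   # per-study effect size estimates
std_errors = [0.10, 0.20, 0.15]  # per-study standard errors

def combine(estimates, std_errors, weights):
    total = sum(weights)
    est = sum(w * e for w, e in zip(weights, estimates)) / total
    # Standard error of a weighted combination of independent estimates.
    se = sum((w / total) ** 2 * s ** 2 for w, s in zip(weights, std_errors)) ** 0.5
    return est, se

mainstream = combine(estimates, std_errors, [1.0 / s ** 2 for s in std_errors])
equal = combine(estimates, std_errors, [1.0] * len(estimates))
print(mainstream)
print(equal)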
9. How Statistical Inference is Done

For any form of meta-analysis, including the mainstream, point estimates, confidence intervals, and P-values are obtained using the following approximations. First, the standardized difference is formed: the difference between the overall estimate of effect size and the true global mean effect size, divided by the standard error of the estimate. Its distribution is then approximated as follows:
a. The mainstream uses a standard normal approximation, although the package CMA now has an option to use a T-approximation with degrees of freedom equal to the number of studies being combined less one;
b. The ratio estimation method uses a T-approximation with degrees of freedom equal to the number of studies being combined less two;
c. The equally weighted method uses a T-approximation with degrees of freedom equal to the number of studies being combined less one.
More on these approximations will appear in the discussion.
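As a rough illustration of how these approximations are used, the Python sketch below forms the standardized difference and refers it to a T-distribution; the number of studies, overall estimate, and standard error are hypothetical placeholders.

# Inference step: compare the standardized difference with a t reference
# distribution whose degrees of freedom depend on the method
# (k - 1 for mainstream or equally weighted, k - 2 for ratio estimation).
from scipy.stats import t

k = 6             # number of studies being combined (hypothetical)
estimate = 0.25   # overall estimate of effect size (hypothetical)
se = 0.10         # standard error of the estimate (hypothetical)
null_value = 0.0  # global mean effect size under the null hypothesis

df = k - 2        # ratio estimation method; use k - 1 for the other methods
standardized = (estimate - null_value) / se

p_value = 2 * t.sf(abs(standardized), df)    # two-sided P-value
margin = t.ppf(0.975, df) * se
ci = (estimate - margin, estimate + margin)  # 95% confidence interval
print(standardized, p_value, ci)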
10. Numerical Examples

We shall provide three illustrations: one for the primary published relative risk of an invasive intervention, one for the myocardial infarction data of Nissen and Wolski [3], and one from a submitted article that incorrectly reported one odds ratio. The correction did not affect within-study variance estimators, but dramatically impacted the weights, demonstrating that weights and effect size estimates, contrary to Assumption A5, cannot be presumed to be independent.
10.1. Example 1: Relative risk

Neto et al. [4], in a highly cited meta-analysis of randomized trials, found a benefit of their invasive intervention over the control for their primary outcome, total mortality. Table 1 provides the published numerators and denominators for each of the contributing studies, while Table 2 provides the results (i) as published, (ii) with all numerators and denominators doubled, (iii) equally weighted, and (iv) by the method of Shuster [1]. As of 11/2022, this Journal of the American Medical Association paper has been cited 877 times.
Table 2 yields surprising results. Intuitively, doubling all numerators and denominators, which keeps the study-by-study estimates (signals) the same but reduces the noise (standard errors) within each study by about 30%, should yield a more significant result. Why, then, does the confidence interval for the overall estimate of effect size grow by 15% while the significant finding is lost, with the P-value becoming 0.15 instead of 0.013? This is indeed a red flag that will be clarified in the discussion. Neither the equally weighted nor the ratio estimate produces definitive results on efficacy. In this case, the published result affected public health policy based on an off-label use of statistical methodology.
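To see where the roughly 30% figure comes from, the Python sketch below (with made-up counts for a single hypothetical study) doubles the numerators and denominators: the log relative risk is unchanged while its usual large-sample standard error shrinks by a factor of 1/√2 ≈ 0.71.

# Doubling numerators and denominators leaves the study estimate (signal)
# unchanged but shrinks its standard error (noise) by about 30%.
from math import log, sqrt

def log_rr_and_se(e_t, n_t, e_c, n_c):
    # Log relative risk and its usual large-sample standard error for one study.
    log_rr = log((e_t / n_t) / (e_c / n_c))
    se = sqrt(1 / e_t - 1 / n_t + 1 / e_c - 1 / n_c)
    return log_rr, se

original = log_rr_and_se(12, 100, 20, 100)   # hypothetical counts
doubled = log_rr_and_se(24, 200, 40, 200)    # same counts, doubled
print(original)                  # log RR about -0.51, SE about 0.34
print(doubled)                   # same log RR, SE about 0.24
print(doubled[1] / original[1])  # approximately 1/sqrt(2) = 0.71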
10.2. Example 2: Rosiglitazone and increased myocardial infarction risk

In their publication, Nissen and Wolski [3] used a fixed-effects method, even though the combined trials were highly diverse in terms of control groups, eligibility, duration and dose of treatment, and duration of follow-up. They used odds ratios instead of relative risk, the preferred metric; when event rates are low, the distinction is minor. Table 3 contrasts the results of mainstream methods and the published result of Nissen and Wolski [3] with those of Shuster [1], for relative risk. The confidence interval published by Nissen and Wolski excludes the neutral value of 1.00 but includes clinically insignificant values close to 1.00. Had ratio methods been available, a full ban on rosiglitazone might have occurred in 2007, because the ratio-method confidence interval includes only clinically significant increases in risk for rosiglitazone. Although sales dropped from their level of over $2 billion in 2007, a large volume of sales continued for years afterward. As late as 2010, annual sales totaled almost $700 million. Several other nations did not ban the drug until 2010 or 2011. To further confuse the situation, in 2007 Diamond and Kaul [5] published a non-significant mainstream analysis, which may have slowed the decline at the additional human cost of cardiac events. The Nissen and Wolski [3] New England Journal of Medicine publication is one of the most cited meta-analysis reports, with 5908 citations as of 11/2022.

10.3. Example 3: From a peer review of a submission to a major medical journal

The crux of this six-study observational example is that a peer reviewer discovered that the odds ratio estimate in one of the

