
Can J Surg. 2004 Feb;47(1):60–67.

Users' guide to the surgical literature: how to use a systematic literature review and meta-analysis

An evidence-based approach to surgery incorporates patients' circumstances or predicaments, identifies knowledge gaps and frames questions to fill those gaps, conducts efficient literature searches, critically appraises the research evidence and applies that evidence to patient care. The practice of evidence-based medicine, therefore, is a process of lifelong self-directed learning in which caring for patients creates a need for clinically important information about diagnoses, prognoses, treatments and other health care issues.1,2

Readers are able to apply several types of summarized information, from expert opinion and textbook reviews to systematic reviews. Traditional, or narrative, reviews, by definition, do not employ a systematic approach to identifying information on a particular topic. Moreover, narrative reviews often pose background-type questions and provide a general overview of a topic, such as those found in book chapters and instructional course lectures. A background question is, for example, "What is the epidemiology, clinical presentation, treatment options and prognosis following femoral shaft fractures in adults?" We use the term systematic review for any summary of the medical literature that attempts to address a focused clinical question with explicit strategies for the identification and appraisal of the available literature (Table 1 and Table 2); meta-analysis is a term used for systematic reviews that use quantitative methods (i.e., statistical techniques) to summarize the results. Systematic reviews typically pose a foreground-type question. Foreground questions are more specific and provide insight into a particular aspect of management. For instance, investigators may provide a systematic review of the effect of plating versus nailing of humeral shaft fractures on nonunion rates (foreground question) rather than a general review of how bone heals after all treatments of humeral shaft fractures (background question).

Table 1

[Table 1: image 16TT1.jpg not reproduced in this copy.]

Table 2

[Table 2: image 16TT2.jpg not reproduced in this copy.]

Whereas systematic reviews (and meta-analyses) have become popular in surgery, they are not without limitations. The quality of a systematic review is constrained by the quality of the primary studies being reviewed. However, in the absence of large, definitive clinical trials, meta-analyses can provide important data to guide patient care as well as future clinical research.

In applying the suggested guidelines (Table 1) you will gain a clear understanding of the process of conducting a systematic review (Table 2).

The conduct and interpretation of systematic reviews in surgery is often challenging given the paucity of clinical trials available on any given topic. However, if investigators adhere to proper methodology, they can provide conclusions drawn from a comprehensive study with limited bias.

Clinical scenario

You are an orthopedic surgeon who has recently joined a group of orthopedic surgeons practising in an academic centre. You have an interest in injuries around the foot and ankle and have noticed that the treatment of ruptures of the Achilles tendon differs from that of your recent experience acquired during your fellowship training. Your colleagues prefer nonoperative treatment of Achilles tendon ruptures because they believe that outcomes are good with this technique. Having trained with an orthopedic surgeon who preferred operative repair for Achilles tendon ruptures for the same reasons of improved outcome, you begin to wonder whether your new colleagues know something your fellowship supervisor did not.

You decide to challenge another colleague who uses nonoperative treatment to provide a study to support her choice. She replies, "There's one randomized trial from Europe, but I'm sure there is lots of information on this topic in the literature. Why don't you present a summary of the data on this topic at next week's grand rounds?"

Intrigued by this opportunity, you gladly accept your colleague's challenge and begin to look for relevant information.

The search

You quickly determine from talking with your colleagues and your fellowship supervisor that there have been a number of randomized trials comparing operative and nonoperative treatment of acute ruptures of the Achilles tendon. Realizing that your 1-week deadline will not be sufficient to summarize all of the articles, you decide to focus your literature search to identify any recent reviews of this topic. Being relatively proficient on the Internet, you select your favourite search site, the National Library of Medicine's PubMed at www.pubmed.gov. You select the "Clinical Queries" section and choose a search for systematic reviews. You type in the words "Achilles tendon." This identifies 12 documents. You review the titles of the 12 potentially relevant studies and are happy to find a systematic review and meta-analysis of operative versus nonoperative treatment for acute ruptures of the Achilles tendon.3 You retrieve this article for further review.
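As a side note, a search like this can also be reproduced programmatically through NCBI's public E-utilities interface. The sketch below is illustrative only: the "systematic[sb]" subset tag is a standard PubMed filter that approximates the Clinical Queries systematic-review filter, and the hit count returned today will differ from the 12 documents found in this scenario.

```python
# Sketch: reproducing the PubMed Clinical Queries search via NCBI E-utilities.
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": "Achilles tendon AND systematic[sb]",  # topic + systematic-review subset
    "retmode": "json",
})
url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params
with urllib.request.urlopen(url) as resp:
    # JSON response lists the count and matching PubMed IDs
    print(resp.read().decode()[:500])
```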

Are the results of this review valid?

Did the review explicitly address a sensible clinical question?

Consider a systematic overview that pooled results from all fracture therapies (both surgical and medical) for all types of fractures to generate a single estimate of the impact on fracture union rates. Clinicians would not find this type of review useful: they would conclude that it is "too broad." What makes a systematic review too broad? We believe the underlying question that clinicians ask themselves when considering whether a review is excessively broad is as follows: across the range of patients and interventions included, and ways the outcome was measured, can I expect more or less the same magnitude of effect?

The reason that clinicians reject "all therapies for all fracture types" is that they know that some fracture therapies are extremely effective and others are harmful. Pooling across such therapies would yield an intermediate estimate of effect inapplicable to either the highly beneficial or the harmful interventions. Clinicians also know that fracture types differ in their biology and response to treatment, again making estimates of average treatment effects inapplicable to all fractures.

The task of the reader, then, is to decide whether the range of patients, interventions or exposures and the outcomes chosen make sense. To help them with this determination, reviewers need to present a precise statement of what range of patients, exposures and outcomes they have decided to consider; in other words, they need to define explicit inclusion and exclusion criteria for their review. Explicit eligibility criteria not only facilitate the user's decision regarding whether the question is sensible but make it less likely that the authors will preferentially include studies supporting their own prior conclusion. Bias in choosing articles to cite is a problem for both systematic reviews and original reports of research.

There are good reasons to choose broad eligibility criteria. First, one of the primary goals of a systematic review, and of pooling information in particular, is to obtain a more precise estimate of the treatment effect. The broader the eligibility criteria, the greater are the number of studies and number of patients, and the narrower are the confidence intervals (CIs). Second, broad eligibility criteria lead to more generalizable results. If it is true that the results apply to a broad variety of patients with a broad range of injury severities, the surgeon is on stronger ground applying the findings to a particular patient.

Was the search for relevant studies detailed and exhaustive?

It is important that authors conduct a thorough search for studies that meet their inclusion criteria. Their search should include the use of bibliographic databases such as MEDLINE, EMBASE and the Cochrane Controlled Trials Register (containing more than 250 000 randomized clinical trials); checking the reference lists of the articles they retrieved; and personal contact with experts in the area (Table 3). It may also be important to examine books of recently published abstracts presented at scientific meetings, and less often used databases, including those that summarize doctoral theses. With all these sources, it becomes evident that a MEDLINE search alone will not be satisfactory. Previous meta-analyses in orthopedics have variably reported a comprehensive search strategy.4

Table 3

[Table 3: image 16TT3.jpg not reproduced in this copy.]

Unless the authors tell us what they did to locate relevant studies, it is difficult to know how likely it is that relevant studies were missed. There are two important reasons why authors of a review should use personal contacts. The first is to identify published studies that might have been missed (including studies that are in press or not yet indexed or referenced). The second is to identify unpublished studies. Although controversies remain about including unpublished studies,1,2,5,6 their omission increases the chances that studies with positive results will be overrepresented in the review (leading to a systematic overestimation of the treatment effect, referred to as publication bias).7 The tendency for authors to differentially submit (and journals to differentially accept) studies with positive results constitutes a serious threat to the validity of systematic reviews.

If investigators include unpublished studies in a review, they should obtain full written reports and assess the validity of both published and unpublished studies, and they may employ statistical techniques to explore the possibility of publication bias. Reviews based on a small number of small studies with weakly positive effects are the most susceptible to publication bias.2,8 Potential publication bias can be explored visually using a funnel plot.2 This method uses a scatter plot of studies that relates the magnitude of the treatment effect to the weight of the study. An inverted funnel-shaped, symmetrical appearance of dots suggests that no study has been left out, whereas an asymmetrical appearance of dots, typically in favour of positive outcomes, suggests the presence of publication bias.
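A funnel plot of the sort just described can be sketched in a few lines. The data here are simulated (not the Achilles tendon trials): each point plots a study's estimated log relative risk against its standard error, so that larger, more precise studies sit near the top and should cluster symmetrically around the underlying effect.

```python
# Minimal funnel-plot sketch with simulated study results.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
true_log_rr = np.log(0.5)               # assumed underlying effect
se = rng.uniform(0.1, 0.8, size=30)     # study precision varies with size
log_rr = rng.normal(true_log_rr, se)    # observed effects scatter around truth

plt.scatter(log_rr, se)
plt.axvline(true_log_rr, linestyle="--")
plt.gca().invert_yaxis()                # most precise (heaviest) studies at top
plt.xlabel("log relative risk")
plt.ylabel("standard error")
plt.title("Funnel plot (simulated)")
plt.show()
```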

The authors of the systematic review of alternative management strategies for Achilles tendon ruptures identified articles with MEDLINE, the Cochrane Central Database of Randomized Trials (CENTRAL) and SCISEARCH, manual hand searches of orthopedic journals, textbooks and proceedings of annual orthopedic meetings. The investigators also contacted content experts. Ultimately, 11 potentially eligible studies were identified. Five of the 11 potentially eligible studies published in non-English journals (1 French, 4 German) were translated into English before additional eligibility review. After review of all 11 studies, 6 randomized trials (448 patients) were eventually included. The rigour of the reviewers' search methods reassures the clinician that omission of important studies is unlikely. Identifying articles published in non-English journals and articles outside North America strengthens the generalizability of the results.

Were the primary studies of high methodologic quality?

Even if a review article includes only randomized trials, it is important to know whether they were of good quality. Unfortunately, peer review does not guarantee the validity of published research. For exactly the same reason that the guides for using original reports of research begin by asking if the results are valid, it is essential to consider the validity of primary articles in systematic reviews. Differences in study methods might explain important differences among the results.9 For example, studies with less rigorous methodology tend to overestimate the effectiveness of the intervention.9,10 Consistent results from weak studies are less compelling than those from strong ones. Consistent results from observational studies are particularly suspect. Physicians may systematically select patients with a good prognosis to receive therapy, and this pattern of practice may be consistent over time and geographic setting. There is no single correct way to assess validity. Some investigators use long checklists to evaluate methodologic quality; others focus on 3 or 4 key aspects of the study.11,12,13,14 Whether assessors of methodologic quality should be blinded remains a subject of debate.14,15 In an independent assessment of 76 randomized trials, Clark and colleagues15 were unable to find a significant effect of reviewer blinding on quality scores.

Two of the authors of the Achilles tendon rupture review independently assessed the methodologic quality of each study, focusing on 6 methodologic domains (randomization and blinding, population, intervention, outcomes, follow-up and statistical analysis) and a summary quality scale. Study quality ranged from 57 to 72 points out of a maximum possible 100 points. Use of 2 independent assessors provided greater assurance that the assessment of quality was unbiased and reproducible.

This approach, while rigorous, omits an important aspect of validity. Randomization may fail to achieve its purpose of producing groups with comparable prognostic features if those enrolling patients are aware of the arm to which they will be allocated. For example, using year of birth or hospital identification numbers allows investigators to uncover the treatment allocation of their patients before enrolling them in a study. In a randomized trial of open versus laparoscopic appendectomy, the residents responsible for enrolling patients selectively avoided recruiting patients into the laparoscopic appendectomy group at night.2 To the extent that patients coming in at night were sicker, this practice would have biased the results in favour of the laparoscopic appendectomy group. Allocation concealment (i.e., ensuring that study investigators do not know the treatment to which the next patient will be allocated) is a particularly important issue in surgical trials. As it turns out, not one of the trials considered in this systematic review instituted safeguards to ensure concealed randomization. Such safeguards require a separation of the roles of enrolment into the study and allocation into the study arms. In the laparoscopic appendectomy study, for example, the investigators could have had the residents call a randomization centre to enrol a patient, with the surgical procedure to which the patient was randomized communicated only after enrolment had been confirmed.

Were assessments of studies reproducible?

As we have seen, authors of review articles must decide which studies to include, how valid they are and what data to extract from them. Each of these decisions requires judgement by the reviewers, and each is subject to both mistakes (random errors) and bias (systematic errors). Having 2 or more people participate in each decision guards against errors, and if there is good chance-corrected agreement among the reviewers, the clinician can have more confidence in the results of the review.16,17

The authors of the systematic review that addressed the management of Achilles tendon rupture assessed the reproducibility of the identification and assessment of study validity using the κ statistic and intraclass correlations (ICCs). Both of these estimate chance-corrected agreement and range between 0 and 1, with values closer to 1 representing better agreement.

The estimated κ statistic for the identification of potentially eligible studies was high (κ = 0.81, 95% CI 0.75–0.88). The ICC for rating of study quality was also very high (ICC = 0.85, 95% CI 0.70–0.97).
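For readers who want to see what "chance-corrected" means concretely, the sketch below computes Cohen's κ from first principles for two reviewers' include/exclude calls on 11 candidate studies. The decisions are hypothetical, chosen only so that the result lands near the κ of 0.81 reported in the review; they are not the reviewers' actual data.

```python
# Cohen's kappa from first principles for two reviewers' binary decisions.
def cohen_kappa(a, b):
    """a, b: parallel lists of 0/1 decisions from two reviewers."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n      # raw agreement
    p_a1, p_b1 = sum(a) / n, sum(b) / n                   # each rater's "include" rate
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)      # agreement expected by chance
    return (observed - expected) / (1 - expected)

rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1]   # hypothetical eligibility calls
rater2 = [1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 1]
print(round(cohen_kappa(rater1, rater2), 2))  # prints 0.81
```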

Summary of the validity guide to the meta-analysis of operative versus nonoperative treatment

The authors of the review specified explicit eligibility criteria. Their search strategy was comprehensive and reproducible. The primary studies had serious methodologic limitations. However, because these randomized trials represent the best available evidence, the results merit further consideration.

What are the results?

Were the results similar from study to study?

One aim of a systematic review, and in particular of a meta-analysis, is to increase the sensitivity of the primary studies to detect an effect by combining them as if all patients were part of a larger study. The validity of this assumption is confirmed if the magnitude of effect is similar across the range of patients, interventions and ways of measuring outcomes.

We have argued that the fundamental assumption is that across the range of patients, interventions and ways of measuring outcome, we anticipate more or less the same magnitude of effect. We have also noted that the goals of increasing the precision of estimates of treatment effect, and the generalizability of results, provide reviewers with strong, legitimate reasons for selecting relatively broad eligibility criteria. Broad selection criteria, however, also increase the heterogeneity of the patient population, so systematic reviews often document important differences in patients, exposures, outcome measures and research methods from study to study. Fortunately, investigators can address this unsatisfactory situation by presenting their results in a way that allows clinicians to check the validity of the initial assumption. That is, did results prove similar from study to study? The remaining challenge is, then, to determine how similar is similar enough.

There are 3 criteria to consider when deciding whether the results are sufficiently similar to warrant a single estimate of treatment effect that applies across the populations, interventions and outcomes. First, how similar are the best estimates of the treatment effect (that is, the point estimates) from the individual studies? The more different they are, the more clinicians should question the decision to pool results across studies.

Second, to what extent do the CIs overlap? The greater the overlap between the CIs of different studies, the more powerful is the rationale for pooling the results of those studies. The reviewers can also look at the point estimates for each individual study and determine whether the CI around the pooled estimate includes each of the primary study point estimates.

Finally, reviewers can test the extent to which differences among the results of individual studies are greater than would be expected if all studies were measuring the same underlying effect and the observed differences were due to chance. The statistical analyses used to do this are called tests of homogeneity.18 When the p value associated with the test of homogeneity is small (e.g., < 0.05), chance becomes an unlikely explanation for the observed differences in the size of the effect. Unfortunately, a higher p value (0.1 or even 0.3) does not necessarily rule out important heterogeneity. The reason is that when the number of studies and their sample sizes are small, the test of heterogeneity is not very powerful. Hence, large differences between the apparent magnitude of the treatment effect among the primary studies (i.e., the point estimates) dictate caution in interpreting the overall findings, even in the face of a nonsignificant test of homogeneity.18 Conversely, if the differences in results across studies are not clinically important, then heterogeneity is of little concern, even if it is statistically significant.
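A test of homogeneity can be made concrete with a few lines of arithmetic. The sketch below computes Cochran's Q, one common test statistic: each study's deviation from the precision-weighted pooled effect is squared and weighted, and the total is referred to a chi-squared distribution with one fewer degrees of freedom than there are studies. All study effects and standard errors here are invented for illustration.

```python
# Cochran's Q test of homogeneity on invented study results.
from scipy.stats import chi2

log_rr = [-1.2, -0.9, -1.5, -0.7, -1.1, -1.0]   # hypothetical study effects
se     = [0.6,  0.5,  0.8,  0.4,  0.7,  0.5]    # hypothetical standard errors

w = [1 / s**2 for s in se]                       # inverse-variance weights
pooled = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
q = sum(wi * (y - pooled) ** 2 for wi, y in zip(w, log_rr))
p = chi2.sf(q, df=len(log_rr) - 1)
print(f"Q = {q:.2f}, p = {p:.2f}")               # large p: chance could explain the spread
```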

Reviewers should attempt to explain between-study differences by looking for apparent explanations (i.e., sensitivity analyses). Heterogeneity in the current review of Achilles tendon ruptures may be attributable to differences in the surgical technique (e.g., simple v. Kessler v. Bunnell stitches), postoperative rehabilitation protocols (e.g., cast v. boot), methodologic features (methodologic quality scores), whether studies were full papers or abstracts, or whether studies were published in English or non-English-language journals.

What are the overall results of the review?

In clinical research, investigators collect data from individual patients. In systematic reviews, investigators collect data from individual studies rather than patients. Reviewers must also summarize these data and, increasingly, they are relying on quantitative methods to do so.

Simply comparing the number of positive studies with the number of negative studies is not an adequate way to summarize the results. With this sort of vote counting, large and small studies are given equal weight, and (unlikely as it may seem) one investigator may interpret a study as positive, while another investigator interprets the same study as negative. For example, a clinically important effect that is not statistically significant could be interpreted as positive with respect to clinical importance and negative with respect to statistical significance.19 There is a tendency to overlook small but clinically important effects if studies with statistically nonsignificant (but potentially clinically important) results are counted as negative. Moreover, a reader cannot tell anything about the magnitude of an effect from a vote count, even when studies are appropriately classified using additional categories for studies with positive or negative trends.

Typically, meta-analysts weight studies according to their size, with larger studies receiving more weight.1 Thus, the overall results represent a weighted average of the results of the individual studies. Occasionally studies are also given more or less weight depending on their quality; poorer-quality studies might be given a weight of 0 (excluded), either in the primary analysis or in a secondary analysis testing the extent to which different assumptions lead to different results (a sensitivity analysis). A reader should look to the overall results of a meta-analysis the same way one looks to the results of primary studies. In a systematic review of a question of therapy, one should look for the relative risk and relative risk reduction, or the odds ratio. In reviews regarding diagnosis, one should look for summary estimates of the likelihood ratios.
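To make the weighting idea concrete, here is a minimal fixed-effect pooling sketch under the common inverse-variance scheme, in which each study's weight is the reciprocal of its variance (so large, precise studies dominate). The pooled log relative risk and its 95% CI are exponentiated back to the relative risk scale. The numbers reuse the invented data from the heterogeneity sketch above.

```python
# Fixed-effect (inverse-variance) pooling of invented log relative risks.
import math

log_rr = [-1.2, -0.9, -1.5, -0.7, -1.1, -1.0]
se     = [0.6,  0.5,  0.8,  0.4,  0.7,  0.5]

w = [1 / s**2 for s in se]                       # precision weights
pooled = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
pooled_se = math.sqrt(1 / sum(w))                # SE of the weighted average
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled RR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```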

Sometimes the outcome measures used in different studies are similar but not exactly the same. For example, different trials might measure functional status using different instruments. If the patients and the interventions are reasonably similar, it might still be worthwhile to estimate the average effect of the intervention on functional status. One way of doing this is to summarize the results of each study as an effect size. The effect size is the difference in outcomes between the intervention and control groups divided by the standard deviation. The effect size summarizes the results of each study in terms of the number of standard deviations of difference between the intervention and control groups (rather than using the conventional, and differing, units of measure). Investigators can then calculate a weighted average of effect sizes from studies that measured an outcome in different ways.
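The effect-size calculation just described reduces to a small formula. The sketch below computes a standardized mean difference (Cohen's d) for one hypothetical trial reporting a functional score in each arm; the means, standard deviations and group sizes are invented.

```python
# Standardized mean difference (Cohen's d) for one hypothetical trial.
import math

def effect_size(mean_tx, sd_tx, n_tx, mean_ctl, sd_ctl, n_ctl):
    # Pooled standard deviation across the two groups
    pooled_var = (((n_tx - 1) * sd_tx**2 + (n_ctl - 1) * sd_ctl**2)
                  / (n_tx + n_ctl - 2))
    return (mean_tx - mean_ctl) / math.sqrt(pooled_var)

# e.g., a 0-100 functional score after treatment vs. control
d = effect_size(mean_tx=78, sd_tx=12, n_tx=40, mean_ctl=72, sd_ctl=14, n_ctl=38)
print(round(d, 2))  # about 0.46 standard deviations of difference
```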

Readers are likely to find it difficult to interpret the clinical importance of an effect size (if the weighted average effect is half of a standard deviation, is this effect clinically trivial or is it large?). Once again, one should look for a presentation of the results that conveys their clinical relevance (e.g., by translating the summary effect size back into conventional units). For instance, if surgeons have become familiar with the significance of differences in functional outcome scores on a particular questionnaire, such as the Musculoskeletal Functional Assessment,20 investigators can convert the effect size back into differences in score on this particular questionnaire.

Although it is generally desirable to have a quantitative summary of the results of a review, this is not always appropriate. In that case, investigators should present tables or graphs that summarize the results of the primary studies, and their conclusions should be cautious.

How precise were the results?

In the same way that it is possible to estimate the average effect across studies, it is possible to estimate a CI around that estimate, that is, a range of values with a specified probability (typically 95%) of including the true effect. The CI combines the average effect of the intervention, its standard deviation and the sample size to give us a range within which there is a 95% probability that the true effect falls. If the sample size is small, the CI is wider; if the standard deviation is large, the CI is wider.
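The dependence of the CI on sample size is easy to demonstrate. The sketch below holds a hypothetical mean effect and standard deviation fixed and shows the 95% interval narrowing as the sample grows; all numbers are invented.

```python
# 95% CI = mean effect +/- 1.96 standard errors; SE shrinks with sample size.
import math

mean_effect = 8.0         # hypothetical mean improvement in score points
sd = 20.0                 # hypothetical standard deviation
for n in (25, 100, 400):  # quadrupling n halves the interval width
    se = sd / math.sqrt(n)
    print(f"n={n}: 95% CI {mean_effect - 1.96*se:.1f} to {mean_effect + 1.96*se:.1f}")
```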

Results of the meta-analysis of operative versus nonoperative treatment of acute Achilles tendon ruptures

The mean age of patients in trials included in the current meta-analysis ranged from 36.5 to 41 years. Between 69% and 92% of the Achilles tendon ruptures were the result of sports-related injuries.

The authors tested the appropriateness of pooling data from the 6 trials by examining trial-to-trial variability in results. When examining their primary outcome of repeat rupture rates, they found essentially similar point estimates, widely overlapping CIs and a nonsignificant test of heterogeneity (p > 0.1). However, they conducted a series of secondary analyses (sensitivity analyses) to explore their most questionable pooling decisions: pooling across publication status (published or unpublished), study quality score (< 50 v. ≥ 50) and language of publication.

In the pooled analysis across all studies, operative treatment reduced the relative risk of repeat rupture by 68% (95% CI 29%–86%). However, operative fixation did significantly increase the risk of infection (relative risk 4.6, 95% CI 1.2–17.8). Return to normal function and spontaneous complaints did not differ between the two groups.

Will the results help me in caring for my patients?

How can I best interpret the results to apply them to the care of my patients?

Although the pooled point estimate suggests a substantial reduction in the relative risk of repeat rupture (68%) with surgery, the 95% CI ranges from 29% to 86%. If one accepts the point estimate as accurate, in patients at average risk for repeat rupture (say 15%), for every 10 patients treated with surgery, surgery would prevent 1 repeat rupture (number needed to treat = 1 ÷ 0.10 = 10).

The most obvious drawback to surgical repair is the increased risk of infection. In the current group of trials, there was a 4.7% rate of infection after surgery and no infections after conservative treatment. Therefore, for every 21 patients who receive surgical treatment, surgery would cause 1 wound infection (number needed to harm = 1 ÷ 0.047 = 21.2, 95% CI 17–59).
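The number-needed-to-treat and number-needed-to-harm arithmetic in the two paragraphs above can be spelled out directly. Note that the 15% baseline repeat rupture risk is an assumed figure, chosen because it is the value implied by the stated NNT of 10 together with the 68% relative risk reduction.

```python
# NNT/NNH arithmetic from the paragraphs above.
baseline_risk = 0.15          # assumed repeat rupture risk without surgery
rrr = 0.68                    # relative risk reduction with surgery
arr = baseline_risk * rrr     # absolute risk reduction, about 0.10
print(f"NNT = {1 / arr:.0f}")             # about 10 treated per rupture prevented

infection_risk = 0.047        # infection rate after surgery; none without
print(f"NNH = {1 / infection_risk:.1f}")  # about 21 treated per infection caused
```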

Were all clinically important outcomes considered?

Although it is a good idea to look for focused review articles because they are more likely to provide valid results, this does not mean that one should ignore outcomes that are not included in a review. For example, the potential benefits and harms of operative repair of Achilles tendon rupture include reduced risk of reoperation and increased risk of infection. Focused reviews of the evidence for individual outcomes are more likely to provide valid results, but a clinical decision requires consideration of all of them.21 It is not unusual for systematic reviews to neglect the adverse effects of therapy. Other outcomes of interest include, for example, the magnitude and duration of pain, the timing and extent of return to full function, and costs.

Are the benefits worth the costs and potential risks?

Finally, either explicitly or implicitly, when making recommendations to their patients surgeons must weigh the expected benefits against the potential harms and costs. Although this is most obvious when deciding whether to use a therapeutic or preventive intervention, providing patients with information about causes of illness or prognosis can also have benefits and harms. For example, a patient may benefit from a decreased risk of infection with cast treatment of an Achilles tendon rupture at the cost (i.e., potential harm) of an increased risk of repeat rupture. A valid review article provides the best possible basis for quantifying the expected outcomes, but these outcomes still must be considered in the context of your patient's values and preferences about the expected outcomes of a specific decision.2 For example, one could recommend nonoperative management of Achilles tendon rupture to a patient who places a higher value on preventing infection and a lower value on preventing re-rupture.

Resolution of the scenario

The meta-analysis of operative versus nonoperative treatment of Achilles tendon ruptures meets most of the criteria for study validity, including explicit eligibility criteria, a comprehensive search strategy, and assessment and reproducibility of study validity.2 The authors found a very large benefit of operative repair on re-rupture rates at the price of greater infection risk. Furthermore, pooling of study results seems justified by the nonsignificant tests of heterogeneity, reasonable similarity of results (point estimates) and widely overlapping CIs around those point estimates. On the other hand, the quality of the studies was relatively poor, including a failure to conceal randomization in all studies. Our interpretation is that the magnitude of the effect is sufficiently large that, despite the limitations in study quality, the inference that operative repair provides substantially lower repeat rupture rates in patients with Achilles tendon ruptures is secure. Thus, surgeons who manage patients with Achilles tendon ruptures similar to those presented in this meta-analysis (younger, athletic, acute ruptures) can reassure them that current evidence favours operative treatment. When patients seem different from those included in a meta-analysis, clinicians should consider whether they are actually so different that the results cannot be applied to them.

In every situation, physicians should try to find out the general decisional preferences of their patients, such as the favoured decision-making model, the amount of information desired and their desired degree of involvement in deliberation and decision-making. Clinicians should also be aware that their patients' preferences might vary with the nature of the decision, the choices and the outcomes. While researchers find answers to these questions, practising clinicians should try their best to make sure that important decisions remain as consistent as possible with the values and preferences of informed patients, the women and men who will live (and die) with the outcomes.

The current increase in the number of small randomized trials in the field of orthopedic surgery provides a strong argument in favour of meta-analysis. Nevertheless, it remains essential that those who are planning future meta-analyses adhere to accepted methodologies and provide the best available evidence to address sharply defined clinical questions.4 Although the quality of the primary studies will always be the major limiting factor in drawing valid conclusions, the quality of the meta-analysis is also important in ensuring that the pooling of these results is as valid and free of bias as possible.

Notes

The Evidence-Based Surgery Working Group members include: Stuart Archibald, MD;*†‡ Mohit Bhandari, MD; Charles H. Goldsmith, PhD;‡§ Dennis Hong, MD; John D. Miller, MD;*†‡ Marko Simunovic, MD, MPH;†‡§¶ Ved Tandan, MD, MSc;*†‡§ Achilleas Thoma, MD;†‡ John Urschel, MD;†‡ Sylvie Cornacchi, MSc†‡

*Department of Surgery, St. Joseph's Hospital, †Department of Surgery, ‡Surgical Outcomes Research Centre and §Department of Clinical Epidemiology and Biostatistics, McMaster University, and ¶Hamilton Health Sciences, Hamilton, Ont.

This manuscript is based, in part, on: Guyatt GH, Rennie D, editors. Users' guides to the medical literature: a manual for evidence-based clinical practice. Chicago: AMA Press; 2002 and Bhandari M, Guyatt GH, Montori V, Devereaux PJ, Swiontkowski MF. User's guide to the orthopaedic literature: how to use a systematic literature review. J Bone Joint Surg Am 2002;84:1672-82.

Competing interests: None declared.

Correspondence to: Dr. Mohit Bhandari, Department of Clinical Epidemiology and Biostatistics, Rm. 2C12, McMaster University Health Sciences Centre, 1200 Main St. West, Hamilton ON L8N 3Z5; fax 905 524-3841; bhandari@sympatico.ca

Accepted for publication June 13, 2003.

References

1. Oxman A, Cook DJ, Guyatt GH. Users' guide to the medical literature: how to use an overview. JAMA 1994;272:1367-71. [PubMed]

2. Guyatt GH, Rennie D, editors. Users' guides to the medical literature: a manual for evidence-based clinical practice. Chicago: AMA Press; 2001.

3. Bhandari M, Guyatt GH, Siddiqui F, Morrow F, Busse J, Leighton RK, et al. Treatment of acute Achilles tendon ruptures: a systematic overview and metaanalysis. Clin Orthop 2002;400:190-200. [PubMed]

4. Bhandari M, Morrow F, Kulkarni A, Tornetta P 3rd. Meta-analyses in orthopaedic surgery: a systematic review of their methodologies. J Bone Joint Surg Am 2001;83:15-24. [PubMed]

5. Dickerson K. The existence of publication bias and risk factors for its occurrence. JAMA 1990;263:1385-9. [PubMed]

6. Dickerson K, Chan S, Chalmers TC, Sacks HS, Smith H. Publication bias and clinical trials. Control Clin Trials 1987;8:343-53. [PubMed]

7. Montori VM, Smieja M, Guyatt GH. Publication bias: a brief review for clinicians. Mayo Clin Proc 2000;75:1284-8. [PubMed]

8. Detsky AS, Naylor CD, O'Rourke K, McGeer AJ, L'Abbe KA. Incorporating variation in the quality of individual randomized trials into meta-analysis. J Clin Epidemiol 1992;45:255-65. [PubMed]

9. Moher D, Jones A, Cook DJ, Jadad AR, Moher M, Tugwell P, et al. Does the quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet 1998;352:609-13. [PubMed]

10. Khan KS, Daya S, Jadad AR. The importance of quality of primary studies in producing unbiased systematic reviews. Arch Intern Med 1996;156:661-6. [PubMed]

11. Cook DJ, Sackett DL, Spitzer WO. Methodological guidelines for systematic reviews of randomized controlled trials in health care from the Potsdam consultation on meta-analysis. J Clin Epidemiol 1995;48:167-71. [PubMed]

12. Cook DJ, Mulrow CD, Haynes RB. Synthesis of best evidence for clinical decisions. In: Mulrow C, Cook DJ, editors. Systematic reviews. Philadelphia: American College of Physicians; 1998. p. 5-15.

13. Turner JA, Ersek M, Herron L, Deyo R. Surgery for lumbar spinal stenosis: attempted meta-analysis of the literature. Spine 1992;17:1-8. [PubMed]

14. Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, et al. Assessing the quality of reports of randomized clinical trials: Is blinding necessary? Control Clin Trials 1996;17:1-12. [PubMed]

15. Clark H, Wells G, Huet C, McAlister F, Salmi LR, Fergusson D, et al. Assessing the quality of randomized trials: reliability of the Jadad scale. Control Clin Trials 1999;20:448-52. [PubMed]

16. Fleiss JL. Measuring agreement between two judges on the presence or absence of a trait. Biometrics 1975;31:651-9. [PubMed]

17. Villar J, Carroli G, Belizan JM. Predictive ability of meta-analyses of randomised controlled trials. Lancet 1995;345:772-6. [PubMed]

18. Cooper HM, Rosenthal R. Statistical versus traditional procedures for summarizing research findings. Psychol Bull 1980;87:442-9. [PubMed]

19. Breslow NE, Day NE. Combination of results from a series of 2×2 tables; control of confounding. In: Statistical methods in cancer research: the analysis of case–control studies. Vol. 1. Lyon (France): International Agency for Research on Cancer; 1980. p. 136-46.

20. Engelberg R, Martin D, Agel J. Musculoskeletal function assessment instrument: criterion and construct validity. J Orthop Res 1996;14:182-92. [PubMed]

21. Colton C. Statistical correctness. J Orthop Trauma 2000;8:527-8. [PubMed]

