Editorial  |   July 2010
Reporting Standards for Intervention Effectiveness Studies
Evidence-Based Practice / From the Desk of the Editor
American Journal of Occupational Therapy, July/August 2010, Vol. 64, 523-527. doi:10.5014/ajot.2010.09644
Sharon A. Gutman, PhD, OTR
The Institute of Medicine (Sox & Greenfield, 2009) has recently called for an increase in the number of effectiveness studies to provide practitioners with needed information to make best-practice decisions. The need for effectiveness studies and evidence supporting best treatment has become critical as health care costs continue to escalate and third-party payers increasingly deny reimbursement without a body of evidence to support intervention (International Committee of Medical Journal Editors, 2007). The increased demand for effectiveness studies has led to a need for uniformity in reporting standards so that practitioners can assess the reliability and relevance of research information. Without the transparent reporting of specific criteria, practitioners cannot evaluate the applicability of a given effectiveness study.
The reporting standards presented here have two objectives: (1) to help occupational therapy researchers understand the elements needed in the design and reporting of intervention studies published in the American Journal of Occupational Therapy (AJOT) and (2) to help occupational therapists understand how study findings can be applied to practice. The reporting standards described here have been developed from an integration of three sets of reporting standards widely accepted in the larger clinical community: (1) the CONSORT statement (Moher, Schulz, & Altman, 2001), (2) the TREND statement (Des Jarlais, Lyles, Crepaz, & TREND Group, 2004), and (3) the American Psychological Association (APA) statement (2010; APA Publications and Communications Board, 2008). Authors of intervention studies submitted to AJOT are encouraged to use these reporting standards. When conflicts arise between manuscript-length limitations and full reporting, authors are encouraged to report complete data in university repositories or other online archives.
Title
The title of a research study should include the population, type of intervention or treatment, outcome measure, and type of design (e.g., “The Effect of an Occupational Therapy Social Skills Program for Adolescents With Asperger Syndrome: A Two-Group Controlled Trial”).
Abstract
The abstract is often best organized through use of the following headings.
Objectives
Clearly state the purpose of the study.
Method
Explicitly state the research design, type of intervention, clinical population, sample size, participant allocation method, length of intervention and use of follow-up points, and outcome measures.
Results
State the primary results using statistical significance levels and effect sizes.
Conclusions
Describe the implication of the results for the profession, the larger society, or both.
Introduction
The introduction should provide the background of the larger problem to the profession and society, the need for the study, and a description of how the study will contribute to the profession’s knowledge base. Define all key concepts and constructs. Relevant research should be briefly described, and authors should identify gaps in the previous research that illustrate the need for the study. The purpose of the study should be stated in the final paragraph of the introduction, and specific research questions should be provided. Research questions are often most clear when they are based on the PICO format and include the clinical population (P), intervention (I), type of design—comparison or control (C), and outcome measure (O). An example is, “Is an occupational therapy–supported education program (I) more effective at helping people with psychiatric disabilities (P) to obtain a GED (O) than treatment as usual (C)?”
Method
Research Design
The research design should be clearly articulated. For example, a study could have one of the following designs:
  • Large, randomized controlled trial (RCT)

  • Small RCT

  • Two-group, nonrandomized controlled trial

  • Uncontrolled, one-group pretest–posttest

  • Prospective cohort study

  • Retrospective cohort study

  • Cross-over design

  • Factorial design.

Studies using random assignment should describe the following elements:
  • Random assignment sequence: What method was used to generate the random assignment sequence (include restrictions such as blocking and stratification)?

  • Random assignment concealment: Was random assignment concealed from the person enrolling participants (thereby reducing potential assignment bias) and from the outcome assessor (reducing potential assessment bias)?

  • Random assignment implementation: Who generated the assignment sequence, who enrolled participants, and who assigned participants to group conditions? To reduce bias, investigators responsible for creating the assignment sequence should be different from those enrolling and assigning participants to groups.
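As an illustration of sequence generation with blocking, the sketch below shows one common approach, permuted-block randomization, which keeps group sizes balanced throughout enrollment. The function name and parameters are hypothetical, and in practice the sequence should be generated by an investigator not involved in enrolling or assigning participants.

```python
import random

def block_randomize(n_participants, block_size=4, groups=("A", "B"), seed=None):
    """Generate a blocked random assignment sequence.

    Each block contains an equal number of assignments to every group,
    so group sizes never differ by more than half a block.
    """
    if block_size % len(groups) != 0:
        raise ValueError("block size must be a multiple of the number of groups")
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = list(groups) * (block_size // len(groups))
        rng.shuffle(block)  # randomize order within the block
        sequence.extend(block)
    return sequence[:n_participants]

# Example: 12 participants, two groups, blocks of 4
print(block_randomize(12, seed=42))
```

The seed is included only to make the sketch reproducible; concealment of the resulting sequence from enrollers and assessors is a separate procedural requirement.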

Studies using control groups but not random assignment should describe the assignment method. What method was used to assign participants to group conditions? What methods were used to help reduce potential bias resulting from nonrandomization of participants (e.g., matching)?
In addition, the research design section should clearly state whether intention-to-treat (ITT) analysis (described later) was used. A statement should also be made indicating that the study received Institutional Review Board approval and that participants provided informed consent. Children should provide assent.
Instruments
All instruments used to assess outcome measures should be described with regard to their intended purpose, the population for which the instrument was developed and tested, and established levels of reliability and validity. Clearly state whether an instrument was developed specifically for the study and whether its psychometric properties have yet to be established. References should be provided for each instrument used.
Participant Selection
Describe methods of recruitment. Specific inclusion and exclusion criteria should be stated. If participants were selected from one or more sites, a brief description of the facility type and geographical region should be specified.
Procedures
The procedures section should describe the following elements:
  • Intervention: Briefly describe the intervention, and indicate whether it was manualized or based on a written set of practice guidelines. State how many interveners (or therapists) were used.

  • Intervention administration schedule: Describe the administration schedule for each group, including specific time periods for each study phase (e.g., baseline, intervention, postintervention, follow-up).

  • Use of multiple interveners: If multiple interveners were used, describe the procedures used to train the interveners to administer intervention. What procedures were used to ensure that interveners provided intervention uniformly?

  • Blinding of interveners and participants: Were interveners and participants blinded to group assignment? What were the procedures for blinding, and how were they assessed?

Data Collection
The data collection section should describe the following elements:
  • Data collection schedule: Describe how and when each type of data was collected in each study phase.

  • Data collector training and rater reliability: Describe how data collectors were trained to collect data uniformly. If multiple raters were used to measure participant performance, was interrater reliability established for all raters?

  • Blinding of data collectors: Were data collectors blinded to participant group assignment?

  • Separation of data collectors and interveners: To reduce bias, were data collectors different from interveners?

Data Analysis
Describe the statistical methods used to compare groups on primary and secondary outcomes. Describe the methods used for any ancillary analyses such as subgroup and adjusted analysis. Justify the use of nontraditional statistical procedures or statistical methods that are not congruent with established rules of parametric and nonparametric data analysis. Make sure to cite all statistical methods and briefly describe those that are not well established. Indicate the statistical software program used.
Intention-to-Treat Analysis.
If a randomized or nonrandomized controlled trial was conducted, it is important to indicate whether ITT was used. ITT is particularly useful in pretest–posttest study designs (Salim, Mackinnon, Christensen, & Griffiths, 2008). In ITT analysis, all participants originally enrolled in a study are included in the final data analysis regardless of whether they actually received the treatment condition to which they were assigned. Because ITT analysis more accurately mirrors nonadherence and treatment changes that may occur in practice, ITT analysis reduces bias and provides a fuller understanding of the effects of treatment. Removal of participants who did not complete the study biases the analysis in favor of treatment effectiveness. Missing responses from such participants are instead handled through imputation; the particular imputation method chosen depends on the pattern of “missingness” in the data. Complete application of ITT principles typically accompanies more traditional analytic processes and should include a thorough description of missing responses and the imputation procedures (Hollis & Campbell, 1999).
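To make the imputation step concrete, the sketch below shows one simple (and relatively crude) strategy, last observation carried forward (LOCF), which lets every enrolled participant be retained in the final analysis. The function and data names are illustrative only; as noted above, the appropriate imputation method depends on the pattern of missingness.

```python
def locf_impute(scores):
    """Last-observation-carried-forward (LOCF) imputation.

    `scores` maps each participant to a list of repeated measures in
    chronological order, with None marking a missed assessment. Each
    missing value is replaced with the most recent observed value.
    """
    imputed = {}
    for participant, series in scores.items():
        filled, last = [], None
        for value in series:
            if value is not None:
                last = value
            filled.append(last)
        imputed[participant] = filled
    return imputed

# Hypothetical data: participant "p2" missed the posttest and follow-up
data = {"p1": [10, 14, 15], "p2": [12, None, None]}
print(locf_impute(data))  # p2's baseline score of 12 is carried forward
```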
Results
The statistical methods and results should be described in enough detail that readers can verify the reported findings. Although raw scores for participants should not be included in an intervention study (with the exception of single-subject designs and case reports), raw data should be made available through online university repositories or other supplemental archives. Results should be reported in absolute numbers rather than percentages. The absolute numbers from which percentages were derived should always accompany any given percentage.
Participant Flow
Describe the flow of participants through each study stage. Participant flow is a description of the number of participants who enrolled in the study, were assigned to each group, received the intervention or control–comparison condition, received follow-up measures, and were included in the final data analysis. Participant flow is often most clearly documented in chart form (see Figure 1) and allows readers to readily understand how many participants began the study and were included in the final analysis. Researchers should state whether ITT analysis was performed and clearly report the number of participants who withdrew (and for what reason, if known), who were terminated by the investigators (and for what reasons), who were lost to follow-up, and who did not adhere to the treatment protocol (and why, if known).
Figure 1.
Consort E-Flowchart (August 2005).
Note. From “The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials,” by D. Moher, K. F. Schulz, & D. G. Altman, 2001, Annals of Internal Medicine, 134, 657–662. Copyright © 2001 by the American College of Physicians. Retrieved January 10, 2010, from www.consort-statement.org/consort-statement/. Used with permission.
Participant Demographics
Minimal reporting standards require that the following demographic characteristics be reported for all participants: age, gender, race or ethnicity, education level, socioeconomic level, and topic-specific characteristics such as independent living status.
Sample Size Justification and Power Analysis
When stating the sample size, it is important to indicate whether a power analysis was performed to estimate the minimum required sample size to avoid a Type II error.
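A minimal sketch of such a power analysis, using the standard normal-approximation formula for comparing two group means, is shown below. The function name and default critical values are illustrative; the defaults correspond to a two-tailed alpha of .05 (z = 1.96) and 80% power (z = 0.84).

```python
import math

def two_group_sample_size(effect_size, z_alpha=1.96, z_beta=0.84):
    """Approximate per-group sample size for a two-group comparison of means.

    Uses the normal-approximation formula n = 2 * ((z_alpha + z_beta) / d)^2,
    where d is the standardized effect size (Cohen's d). Rounds up, because
    sample sizes must be whole participants.
    """
    n_per_group = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    return math.ceil(n_per_group)

# Detecting a medium effect (d = 0.5) requires about 63 participants per group
print(two_group_sample_size(0.5))  # 63
```

Larger effects need far fewer participants (d = 0.8 requires about 25 per group), which is why an honest estimate of the expected effect size matters so much at the design stage.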
Equivalence of Treatment and Control Groups
Describe the procedures used to determine whether treatment and control groups were statistically equivalent on baseline clinical measures and demographic variables. If differences were found between groups, what statistical methods were used to control for differences?
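One common way to compare groups on a continuous baseline measure is a two-sample t test. The sketch below computes the pooled t statistic by hand for illustration; the function name is hypothetical, and in practice a statistics package would supply the p value directly. It assumes roughly equal group variances (Welch's t is the usual alternative otherwise).

```python
import math
from statistics import mean, variance

def pooled_t_statistic(group_a, group_b):
    """Pooled two-sample t statistic for comparing baseline means.

    Returns (t, degrees_of_freedom); compare |t| against the critical
    value for the chosen alpha to judge baseline equivalence.
    """
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * variance(group_a) +
                  (n_b - 1) * variance(group_b)) / (n_a + n_b - 2)
    standard_error = math.sqrt(pooled_var * (1 / n_a + 1 / n_b))
    t = (mean(group_a) - mean(group_b)) / standard_error
    return t, n_a + n_b - 2

# Hypothetical baseline scores for two small groups
t, df = pooled_t_statistic([1, 2, 3], [2, 3, 4])
print(round(t, 4), df)
```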
Effect Size and Confidence Intervals
When reporting statistical significance, report effect size in addition to p values. Effect size (e.g., odds ratio, Cohen’s d, r) allows readers to determine whether statistically significant differences between groups are large enough to be clinically meaningful. In addition, confidence intervals should be provided for the estimated effect. A confidence interval expresses the uncertainty of the estimated effect, indicating the range of values within which the true intervention effect is likely to lie.
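As an illustration, the sketch below computes Cohen's d from two independent groups together with an approximate large-sample 95% confidence interval. The function name is hypothetical, and the standard-error formula used for d is a common large-sample approximation, not the only option.

```python
import math
from statistics import mean, variance

def cohens_d_with_ci(group_a, group_b, z=1.96):
    """Cohen's d for two independent groups with an approximate 95% CI.

    Uses the pooled standard deviation and the large-sample standard
    error of d: sqrt((n_a + n_b)/(n_a * n_b) + d^2 / (2 * (n_a + n_b))).
    """
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = math.sqrt(((n_a - 1) * variance(group_a) +
                           (n_b - 1) * variance(group_b)) / (n_a + n_b - 2))
    d = (mean(group_a) - mean(group_b)) / pooled_sd
    se = math.sqrt((n_a + n_b) / (n_a * n_b) + d ** 2 / (2 * (n_a + n_b)))
    return d, (d - z * se, d + z * se)

# Hypothetical outcome scores: a large effect, but a wide interval
d, (lower, upper) = cohens_d_with_ci([15, 18, 21, 24], [10, 13, 16, 19])
print(round(d, 2), round(lower, 2), round(upper, 2))
```

Note that with samples this small the interval spans zero even though d itself is large, which is exactly the kind of uncertainty the text asks authors to report alongside the point estimate.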
Missing Data
Missing data should be addressed, and the protocol for handling missing data in the analysis should be described.
Intervention Fidelity
Indicate whether the intervention was provided as intended and how this was confirmed. Methods through which intervention fidelity can be assessed include (1) video- or audiotaping the intervention to later analyze whether therapists provided intervention consistently and (2) gathering therapists for group meetings to discuss protocol issues throughout the intervention period (see Nelson & Mathiowetz, 2004, for further information).
Adverse Events
Any adverse reactions to intervention, either in the treatment or the comparison group, should be reported.
Discussion
Interpretation
Answer research questions on the basis of the study’s findings. Discuss whether the findings support previous work. Provide explanations for unexpected findings.
Clinical Application
Discuss the implications of the findings for clinical practice. Describe how these findings contribute to the consensus regarding best practice for the specific clinical problem addressed.
Limitations
Acknowledge any limitations that may have biased results, including confounding variables that could have reduced the study’s internal validity. Describe barriers that may have interfered with the administration of the intervention as intended. Discuss the generalizability of results and factors that may have reduced the external validity of the findings (address differences between the larger population and the study sample, the possible influence of incentives, and adherence to treatment protocols). Address the power of the sample size and whether it was sufficient to answer the research question(s). Was the follow-up long enough to understand whether intervention effects are long lasting?
Future Research
Identify how future studies can better address questions regarding the effectiveness of the targeted intervention. What limitations in the current research can be addressed in future studies? What questions regarding the intervention continue to be left unanswered (e.g., cost-efficiency, patient adherence and tolerance, use with other populations)?
Conclusion
The conclusion should be a short section summarizing the overall support for the intervention in the context of available evidence.
Acknowledgment
I thank Susan Murphy, ScD, OTR, for valuable guidance on drafts of these reporting standards.
References
American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: Author.
APA Publications and Communications Board Working Group on Journal Article Reporting Standards. (2008). Reporting standards for research in psychology: Why do we need them? What might they be? American Psychologist, 63, 839–851. doi:10.1037/0003-066X.63.9.839
Des Jarlais, D. C., Lyles, C., Crepaz, N., & TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94, 361–366. Retrieved January 10, 2010, from www.ajph.org/cgi/content/full/94/3/361
Hollis, S., & Campbell, F. (1999). What is meant by intention-to-treat analysis? Survey of published randomised controlled trials. British Medical Journal, 319, 670–674.
International Committee of Medical Journal Editors. (2007). Uniform requirements for manuscripts submitted to biomedical journals: Writing and editing for biomedical publication. Retrieved January 10, 2010, from www.icmje.org/
Moher, D., Schulz, K. F., & Altman, D. G. (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Annals of Internal Medicine, 134, 657–662. Retrieved January 10, 2010, from www.consort-statement.org/consort-statement/
Nelson, D. L., & Mathiowetz, V. (2004). Randomized controlled trials to investigate occupational therapy research questions. American Journal of Occupational Therapy, 58, 24–34.
Salim, A., Mackinnon, A., Christensen, H., & Griffiths, K. (2008). Comparison of data analysis strategies for intent-to-treat analysis in pre-test–post-test designs with substantial dropout rates. Psychiatry Research, 160, 335–345. doi:10.1016/j.psychres.2007.08.005
Sox, H. C., & Greenfield, S. (2009). Comparative effectiveness research: A report from the Institute of Medicine. Annals of Internal Medicine, 151, 203–205. Retrieved January 10, 2010, from www.annals.org/cgi/content/short/0000605-200908040-00125v1