Non-randomized controlled studies refer mainly to quasi-experimental studies. The Transparent Reporting of Evaluations with Nonrandomized Designs (TREND, 2004) statement guidelines and checklist (http://www.cdc.gov/trendstatement/), which were drawn up to enhance the reporting of non-randomized studies, can be accessed via the Table of Contents for the Author Guidelines. They do not, however, provide an organizing framework for your report. See the suggested CJNR framework below.
ORGANIZATIONAL STRUCTURE: KEY COMPONENTS
In this section you should address the following issues:
- the rationale for the study and the intervention
- the theoretical or conceptual framework underlying the design and the mechanisms (mediating variables) responsible for the intervention effects, specifying any moderating variables
- the distinguishing characteristics of the experimental intervention
- the theoretical or empirical evidence supporting key components of the experimental intervention in the context of the framework and the proposed moderators and mediators
- the evidence from prior pilot work or from the literature pertaining to similar interventions or the same intervention evaluated with similar target populations
- the preliminary steps that led to the study — previous feasibility and pilot studies that examined different components of the intervention
All of these elements culminate in the aim or purpose of the study.
State the overall study intention or research question as well as specific objectives or hypotheses that clearly emanate from the framework and the related literature.
Identify the type of non-randomized study, such as between-subject (e.g., parallel group or cohort, regression discontinuity) and within-subject (e.g., interrupted time series) designs.
In this section describe the following points:
- data-collection settings/locations (e.g., clinics, nursing homes)
- sampling frame
- inclusion/exclusion criteria and how they were determined
- sampling method
- recruitment method (self-selection, referral)
- how the sample size was calculated and the outcome on which it was based, with references
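When reporting the sample-size calculation, it helps to give the inputs (effect size, alpha, power) so readers can reproduce the figure. As an illustration only (not part of the CJNR framework, and with hypothetical values), a normal-approximation calculation for a two-group comparison of means can be sketched with the Python standard library:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Normal-approximation sample size per group for a two-sided,
    two-sample comparison of means (effect_size = Cohen's d)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # 1.96 for alpha = .05
    z_beta = z(power)            # 0.84 for 80% power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical inputs: a medium effect (d = 0.5), alpha = .05, 80% power:
print(n_per_group(0.5))  # 63 participants per group, before allowing for attrition
```

Reporting the assumed effect size and its source (pilot data or prior literature) lets reviewers judge whether the study was adequately powered.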
Describe how the participants were assigned to the experimental and comparison groups (e.g., alternate sequencing, naturally occurring).
State the unit of assignment (individual, group).
Describe the strategy used to minimize bias, such as a matching procedure, and the selected variables and rules used in the matching process.
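A matching procedure is easiest for readers to evaluate when the matching variables, ordering, and any caliper are stated explicitly. As a purely illustrative sketch (the function and data below are hypothetical, not a prescribed method), greedy 1:1 nearest-neighbour matching on a single variable such as age can be expressed as:

```python
def greedy_match(treated, controls, key=lambda x: x, caliper=None):
    """Greedy 1:1 nearest-neighbour matching on one variable (e.g., age).
    Each treated unit takes the closest remaining control; if a caliper is
    given, treated units with no control within the caliper stay unmatched."""
    pool = list(controls)
    pairs = []
    for t in treated:
        if not pool:
            break
        best = min(pool, key=lambda c: abs(key(c) - key(t)))
        if caliper is None or abs(key(best) - key(t)) <= caliper:
            pairs.append((t, best))
            pool.remove(best)  # each control is used at most once
    return pairs

# Hypothetical ages of treated and control participants:
print(greedy_match([34, 51, 62], [30, 33, 50, 58, 70]))  # [(34, 33), (51, 50), (62, 58)]
```

The report should state the equivalent rules used in the actual study: which variables were matched on, in what order, and how unmatched participants were handled.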
Describe the experimental and control/comparison interventions.
Describe the components of the intervention, including activities.
Identify the mode of delivery (e.g., individual or group, face-to-face, online), whether delivery was standardized or tailored, and the dose (e.g., number and duration of sessions) making up the intervention.
Describe the professional qualifications of the interventionists and the training provided to them.
Control or comparison treatment
State whether the participants in this group received no treatment at all (no-treatment control) or comparison treatment (e.g., care as usual, minimal treatment).
Describe the components, mode of delivery, and dose of comparison treatment as well as the qualifications of the health professionals who provided the treatment.
Procedures and Data Collection
Describe the data-collection procedures at each time point (before, during, and after treatment):
- location and setting of data collection
- data-collection types/methods, such as face-to-face interviews, self-report instruments, blood samples, timing and frequency of measures, and duration of data collection/session
- any patient follow-up between clinic visits
- any deviation from the original protocol/plan, and the reasons for it
- any monetary or other incentives offered to participants
- start and end dates of data collection
Identify the statistical operations used to prepare (clean) data for final analyses.
Describe how missing data were handled (e.g., imputation in support of an intention-to-treat analysis).
Identify statistics used to evaluate the effects of the interventions on outcomes, moderator and mediator effects, and the influence of adherence to the intervention on outcomes.
Describe the methods used for secondary and adjusted (in the case of baseline differences) analyses.
Give the software used and the specific version.
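Because the Results section (below) asks for estimated effect sizes with confidence intervals, it may help to show what such an estimate involves. The following sketch is illustrative only (hypothetical data, standard-library Python, normal-approximation interval), not a required analysis:

```python
from math import sqrt
from statistics import mean, stdev, NormalDist

def cohens_d_ci(group_a, group_b, confidence=0.95):
    """Cohen's d for two independent groups (pooled SD) with a
    normal-approximation confidence interval."""
    n1, n2 = len(group_a), len(group_b)
    s1, s2 = stdev(group_a), stdev(group_b)
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (mean(group_a) - mean(group_b)) / pooled
    # Standard large-sample approximation for the SE of d:
    se = sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return d, (d - z * se, d + z * se)

# Hypothetical outcome scores for experimental and comparison groups:
d, (low, high) = cohens_d_ci([12, 14, 11, 15, 13, 14], [10, 11, 9, 12, 10, 11])
print(round(d, 2), (round(low, 2), round(high, 2)))
```

Whatever software is actually used, reporting the point estimate together with its interval gives readers a sense of both the size and the precision of the effect.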
Define the variables of interest (moderators, mediators, adherence to treatment, outcomes) at the conceptual and operational levels.
Indicate if and how data on fidelity of intervention implementation were collected.
Identify the instruments used to measure each variable; for each instrument, give the number of items, type of scale, and direction and range of measurement.
Provide psychometric data to support the reliability and validity of each instrument.
Any instruments developed for the study (e.g., measures of adherence) or translated into another language must be tested for reliability and validity and the results reported in the paper.
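For instruments developed or translated for the study, internal consistency is commonly summarized with Cronbach's alpha. As an illustration of what that statistic involves (hypothetical data; the function below is a sketch, not a prescribed tool), alpha can be computed directly from per-item scores:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    `items` is a list of per-item score lists, one list per item,
    all the same length (one score per participant)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each participant's total
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Four items rated by five participants (hypothetical data):
items = [
    [3, 4, 3, 5, 4],
    [3, 5, 3, 4, 4],
    [2, 4, 3, 5, 5],
    [3, 4, 2, 5, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.91
```

The paper should report such coefficients for the study sample itself, not only values published for other populations.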
Use a flow diagram to provide the following elements:
- number of people screened, eligible/ineligible, declined to participate, enrolled in each group
- number who received the experimental and comparison interventions
- number in each group who completed each follow-up assessment
- number of participants in the final analyses
- rationale for dropouts and/or missing data (e.g., reasons given by participants; instrument failure)
Use tables to summarize the following elements:
- descriptive baseline, demographic, social, and clinical characteristics of the experimental and comparison groups
- statistical baseline comparisons of the characteristics of participants who dropped out and those who completed the study, including variables measured at pretest
- whether the primary analysis was based on an “intention to treat” or “per protocol” strategy
Present the findings related to each of the objectives or hypotheses:
- present results related to fidelity in terms of implementation and adherence to the intervention
- restate each hypothesis and corresponding findings, explaining the order in which variables were entered into the equation(s), including covariates
- explain how the final covariates were selected for the final analyses
- report mean, SD/SE, estimated effect size, and confidence intervals
- report negative or null findings
- present sub-analyses of pre-specified and justified objectives and describe the method used to counter the problem of multiple comparisons and the risk of a Type I error (e.g., use of a Bonferroni correction)
- report any side or adverse effects and clinical issues directly associated with the intervention
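To illustrate the Bonferroni correction mentioned above (an example only, with hypothetical p-values): each of m comparisons is tested at alpha/m, or equivalently each p-value is multiplied by m and capped at 1.

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for m comparisons: adjusted p-values
    are min(1, p * m); the per-comparison threshold is alpha / m."""
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    return adjusted, alpha / m

# Hypothetical raw p-values for three secondary outcomes:
adjusted, per_test_alpha = bonferroni([0.004, 0.020, 0.049])
print(adjusted)         # approximately [0.012, 0.06, 0.147]
print(per_test_alpha)   # 0.05 / 3, approximately 0.0167
```

Note that only the first comparison remains significant after adjustment; reporting both raw and adjusted p-values makes such decisions transparent.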
The discussion should cover three main areas: interpretation, generalizability, and overall evidence (Des Jarlais et al., 2004).
Identify the limitations that could contribute to a cautionary interpretation of the data (e.g., sample size and representativeness; low fidelity or adherence to intervention; outcome measures that are not sensitive to change).
Discuss the implications for clinical practice and the potential for generalizing the findings under particular conditions or in particular settings.
Propose recommended next steps in light of the explicated results and limitations.
References
Craig, P., Dieppe, P., MacIntyre, S., Michie, S., Nazareth, I., & Petticrew, M. (2013). Developing and evaluating complex interventions: The new Medical Research Council guidance: Commentary. International Journal of Nursing Studies, 50, 585–592.
Des Jarlais, D. C., Lyles, C., Crepaz, N., & TREND Group. (2004). Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: The TREND statement. American Journal of Public Health, 94(3), 361–366.
Schulz, K. F., Altman, D. G., & Moher, D., for the CONSORT Group. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. Annals of Internal Medicine, 152(11), 726–732.