Randomized controlled trials (RCTs) are the cornerstone of evidence-based medicine. RCTs are designed to determine whether an intervention is more effective than the currently offered treatment regimen. In an RCT, the control arm, or comparator, forms the baseline against which the relative effectiveness or safety of the intervention is judged in the study population. Comparator bias occurs when the intervention arm is given an unfair advantage, for example by selecting a control arm that does not reflect current practice, or by using lower or higher doses of the comparator.1 Avoiding comparator bias when selecting a control arm is critical; for example, using a lower dose of an available treatment than current evidence supports could make the intervention more likely to show positive results.

Standard of care is defined as the current/usual/normal management that a participant would receive at that site if they were not participating in a clinical trial. Standard of care varies between centers for several reasons, including budget, availability, administrative preferences, and the uptake rate of new therapies. In addition, standard of care is expected to change over time with new research, knowledge translation, and policy updates. Policy updates and clinical practice guidelines are informed by systematic literature reviews, which combine and compare previously conducted studies. A critical problem arises when standard of care is used as the control arm but is not well defined in the published research articles, making the trial difficult to replicate and contributing to research waste.2,3,4,5 In pediatrics, standard of care is increasingly variable between sites, including in definitions, diagnostic criteria, and available treatments.6 This raises the question: how well are standard of care control arms defined in pediatric clinical trials?

Our study, published in 2018, sought to compare the reporting of standard of care control arms with that of intervention arms within the same pediatric clinical trials. The full report is available at https://www.nature.com/articles/s41390-018-0019-7. We modified an existing reporting tool (the TIDieR: Template for Intervention Description and Replication) into a 12-item checklist covering items such as: the name describing the arm, references justifying it, procedures, materials, who provided the intervention, specific training provided, route of delivery, locations where the intervention was delivered, the number of interventions delivered, and any personalization or modification that occurred. We included 214 RCTs with participants <18 years of age. These studies were mostly behavioral, rehabilitation, and psychosocial interventions, and almost half of the trials were multisite. Our analysis revealed that studies reported fewer TIDieR checklist items for standard of care control arms than for intervention arms (mean 5.81 vs. 8.45). A larger number of study sites also predicted fewer reported TIDieR checklist items for the standard of care control arm. Interestingly, only 2/98 (2%) of the multicenter trials commented on limitations in ensuring equivalent care was provided across sites for their standard of care control arm. We did not evaluate the appropriateness of the “standard of care” that was reported.
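To make the within-trial comparison concrete, the sketch below shows one way such an analysis could be run in Python. The data, variable names, and statistical tests are illustrative assumptions only; they are not the analysis code or data from our published report.

```python
# Illustrative sketch only: invented TIDieR item counts for 214 trials,
# comparing reporting between arms and checking whether the number of
# study sites predicts control arm reporting.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 214

# Invented per-trial counts of reported TIDieR items (0-12) for each arm.
intervention_items = rng.integers(5, 13, size=n_trials)
control_items = np.clip(intervention_items - rng.integers(0, 6, size=n_trials), 0, 12)
n_sites = rng.integers(1, 21, size=n_trials)

# Paired comparison: each trial contributes one intervention and one control score.
t_stat, p_paired = stats.ttest_rel(intervention_items, control_items)
print(f"Mean items, intervention: {intervention_items.mean():.2f}, "
      f"control: {control_items.mean():.2f}, paired p = {p_paired:.3g}")

# Simple regression: do trials with more sites report fewer control arm items?
slope, intercept, r, p_slope, se = stats.linregress(n_sites, control_items)
print(f"Change in reported control items per additional site: {slope:.2f} (p = {p_slope:.3g})")
```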

We were unable to distinguish problems of study design (such as inconsistent use or definitions of standard of care) from deficiencies in reporting. Some of the missing detail may reflect journal word limits for clinical trials, which force authors to decide which details are most important to report. Either way, the lack of reporting reduces both the internal and external validity of trial results. Internal validity reflects how well a study was conducted; if the conduct of the control arm is not reported, it is hard to evaluate. External validity reflects how applicable the results are to the real world; again, if the care provided is not reported, applicability is impossible to judge. Clinical trials are expensive and time consuming, and are often funded with public tax dollars. Regardless of funding source, when the control arm is not well defined the research is essentially wasted, because it cannot be reproduced. Accountability for transparency and research quality in health research is critical for public support of science funding. Rigorous, reproducible evidence is especially important for the pediatric population, given that many prescribed medications are either off label or better researched in adults, putting children at increased risk of adverse events.

Fictitious examples

To illustrate why it is important to fully report the standard of care arm, consider two fictitious examples: a clinical trial for the treatment of chronic daily headaches in adolescents and a meta-analysis on the management of opioid withdrawal in newborns.

Example one: The intervention is a new pharmaceutical treatment compared with standard of care, and the measured outcomes are the frequency, duration, and pain of headaches. The design is a phase III randomized controlled trial enrolling participants at eight sites in three countries. In the fictitious results, three of the eight sites show significant improvement in the intervention arm. Upon further investigation, however, we discover that only one country universally covers the cost of cognitive behavioral therapy for adolescents with headaches, two sites do not mandate three meals a day, and five sites do not include the promotion of good sleep habits as part of standard care. This example demonstrates how inconsistent standard of care for headaches may have contributed to the variability in participant response: it becomes difficult to interpret whether the intervention was successful, or whether improvements were related to sleep, diet, or cognitive behavioral therapy.
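A small, hedged simulation can make this concrete. In the sketch below (all numbers invented), the drug's marginal benefit is assumed to be large at sites whose usual care is minimal and small at sites whose usual care already includes CBT coverage, regular meals, and sleep-hygiene advice; per-site results then look inconsistent even though the drug itself never changes.

```python
# Invented simulation for the fictitious headache trial: sites differ in what
# "standard of care" already provides, so the drug's marginal effect on monthly
# headache days (and per-site significance) varies by site.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 40  # participants per arm per site

# Assumed marginal drug effect at each of the eight sites: large where usual
# care is bare-bones, small where co-interventions already help.
site_drug_effect = [-3.0, -3.0, -3.0, -1.0, -1.0, -0.5, -0.5, -0.5]

for site, effect in enumerate(site_drug_effect, start=1):
    control = rng.normal(loc=15.0, scale=4.0, size=n)   # headache days/month on usual care
    treated = rng.normal(loc=15.0 + effect, scale=4.0, size=n)
    t, p = stats.ttest_ind(treated, control)
    print(f"Site {site}: estimated effect {treated.mean() - control.mean():+.1f} days, p = {p:.3f}")
```

Without a clear description of what each site's usual care included, a reader of the pooled result has no way to reconstruct a pattern like this.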

Example two: Researchers conduct a systematic review and identify three clinical trials published in the past ten years that evaluate two pharmacologic therapies used to manage withdrawal in newborns exposed to opioids during pregnancy. Treatment A is considered standard of care in all studies and has been used safely in neonates for decades. Two of the three studies identified significant improvements in length of hospital stay with treatment A, but the third (and largest) study showed no difference. Previous literature reports a strong protective association of rooming-in and breastfeeding with reduced neonatal withdrawal. All three studies reported the pharmacologic management plans at the institutions where they were conducted, but rooming-in protocols and breastfeeding support were not described in two of the studies, and screening and diagnosis protocols were not defined in any of the three. Further investigation identified that participants in the third study did not receive antenatal counseling with a lactation consultant, that infants were evaluated using a different withdrawal scoring tool, and that all neonates were managed in the NICU, separate from maternal wards. The systematic reviewer is left to consider whether these groups are comparable.
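As a hedged illustration of the reviewer's problem, the sketch below pools three invented mean differences in length of stay with inverse-variance weights and computes Cochran's Q and I². A high I² flags heterogeneity, but without reported co-interventions (rooming-in, breastfeeding support, scoring tools) the reviewer cannot tell whether it reflects the drugs or the surrounding care.

```python
# Invented effect sizes for the fictitious neonatal withdrawal meta-analysis:
# fixed-effect pooling of mean differences in length of stay (days) and a
# standard heterogeneity check (Cochran's Q, I^2).
import numpy as np

# (mean difference in days vs. comparator, standard error) -- illustrative only.
studies = {
    "Study 1": (-4.0, 1.5),
    "Study 2": (-3.5, 1.4),
    "Study 3 (largest)": (0.2, 0.8),
}

effects = np.array([e for e, _ in studies.values()])
se = np.array([s for _, s in studies.values()])
weights = 1.0 / se**2                                  # inverse-variance weights

pooled = np.sum(weights * effects) / np.sum(weights)
q = np.sum(weights * (effects - pooled) ** 2)          # Cochran's Q
df = len(effects) - 1
i_squared = max(0.0, (q - df) / q) * 100               # % of variation beyond chance

print(f"Fixed-effect pooled difference: {pooled:.2f} days")
print(f"Cochran's Q = {q:.1f} on {df} df, I^2 = {i_squared:.0f}%")
```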

These examples demonstrate the potential impact of an ill-defined standard of care in pediatric medicine. With inadequate reporting, it is difficult to ascertain whether each site within a trial provided equivalent care, how relevant the control arm is to the patient in front of you, and how care models evolve over time and across studies. Trial resources are precious; if sufficient detail is not available for readers to interpret “standard of care”, the trial becomes a contributor to research waste.

Researchers need to acknowledge that fully reporting the standard of care, especially when it is used as the control arm, is important. Omitting fundamental details about the study arms is avoidable, and many existing resources are available to guide researchers in reporting control arms. For example, the TIDieR checklist for intervention reporting (available at www.equator-network.org) should be implemented during the research design phase to ensure that all relevant information is collected from each site where participants will be recruited. Adherence to existing intervention reporting guidelines is an easy first step toward improving study quality, interpretation, and impact.
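As one possible way to operationalize this during the design phase, the sketch below shows a simple, hypothetical record of TIDieR-style details for a standard of care arm, to be completed for every site before recruitment. The field names and example values are assumptions loosely based on the TIDieR checklist, not an official template.

```python
# Hypothetical per-site record of the standard of care arm, loosely modelled on
# TIDieR items. Field names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ControlArmDescription:
    site: str
    brief_name: str
    rationale: str               # why this counts as usual care here, with references
    materials: list[str]         # drugs, devices, written information provided
    procedures: list[str]        # what is actually done, step by step
    provider: str                # who delivers the care and their training
    mode_of_delivery: str        # individual vs. group, in person vs. remote
    setting: str                 # where care is delivered
    dose_and_schedule: str       # how much, how often, for how long
    tailoring: str = "none"      # planned personalization
    modifications: str = "none"  # changes made during the trial

# Example entry for one (fictitious) site.
site_a = ControlArmDescription(
    site="Site A",
    brief_name="Usual outpatient headache care",
    rationale="Regional clinical guideline (hypothetical reference)",
    materials=["ibuprofen as needed", "sleep-hygiene leaflet"],
    procedures=["neurology clinic visit at baseline and at 12 weeks"],
    provider="pediatric neurologist",
    mode_of_delivery="individual, in person",
    setting="tertiary outpatient clinic",
    dose_and_schedule="two clinic visits over 12 weeks",
)
```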

All necessary measures should be taken to produce valuable research findings with high internal validity and strong external validity. Nowhere is this more important than in maternal–child health, where trials are mostly investigator initiated and must be conducted across multiple sites. Adequate reporting is an ethical obligation of those conducting clinical trials, ensuring that their results can be accurately interpreted, replicated, and used to contribute to policies that inform change at the bedside.