NREPP Concerns

Whose research is better for helping the American people whom SAMHSA is charged to serve?

Dennis Embry

First published August 18, 2018 by Children’s Mental Health Network Newsletter

At my first combined National Advisory Council meeting, the Assistant Secretary, Dr. McCance-Katz, was quite dismissive of any evidence-based practices on NREPP, both for prevention and treatment. I took scientific offense at her conclusion, as I work nationally and internationally with some of the smartest and most celebrated scientists doing this work, through my own guilds, the Society for Prevention Research and the National Prevention Science Coalition for Saving Lives. The prevention-science work of my colleagues and me is cited in multiple Institute of Medicine and Surgeon General reports, not to mention published in high-quality scientific journals.

During my very first meeting of the National Advisory Council, the Assistant Secretary opined that there were no evidence-based strategies on NREPP for serious addictions or serious mental illness. I knew that was factually incorrect and required correction. I spoke up, mentioning the contingency-management work of Nancy Petry on the “prize bowl,” NIDA’s single most scientifically proven strategy for treating addictions, and the work of Steven Hayes on Acceptance and Commitment Therapy (ACT), which significantly reduces rehospitalization from psychosis [1].

Dr. McCance-Katz tartly responded that people relapse as soon as Petry’s reinforcement system ends. I replied that this is factually incorrect based on high-quality experimental research in which contingency management is properly used [2]. For the life of me, I cannot understand why the Assistant Secretary would not be curious about why or how a scientifically proven strategy contributes to better outcomes. Are contingency management and Acceptance and Commitment Therapy perfect? No, but they are much better than treatment as usual, based on quite good science.

My scientific colleagues in the United States and I are increasingly concerned that the Assistant Secretary is blind to the incredible body of peer-reviewed research, with high-quality randomized controlled studies funded by NICHD, NIMH, NIAAA, CDC, IES, and others. Such dismissal is not consistent with her professed commitment to high-quality research.

Both Drs. McCance-Katz and Petry are powerful and productive women, with many publications indexed by the National Library of Medicine, and both have histories of work in Connecticut. Dr. McCance-Katz has 102 publications listed there, and Dr. Nancy Petry has 332. I feel completely dwarfed, having only 12 — about 3% of the publications of these two incredible women combined. I should have citation envy.

At the advisory council, I sensed that Dr. McCance-Katz was dismissive when it came to learning about gold-standard research conducted outside her specialty. She may well have been burned by some touted strategy in her past. Unfortunately, Dr. McCance-Katz’s cursory official statement about NREPP, and its lack of differentiation between the 2007 requirements and the 2015 change, is harming treatment, intervention, and prevention in America. I hope the Assistant Secretary has the psychological flexibility to learn about the science she does not know, found in the legacy section of NREPP. The article by Dr. Sharon Green-Hennessy brilliantly articulates the standards used by those reviewers. The Assistant Secretary would be even more stunned to sit down with the new review panel convened by the Institute of Medicine to create a report on scaling up population-level prevention of mental, emotional, and behavioral disorders among young people, a sequel to the 2009 IOM report [3].

Virtually every major scalable prevention, intervention, or treatment strategy highlighted by the CDC, Surgeon General reports, IOM reports, Blueprints, and European Union entities scored highly under the 2007 review criteria in the original NREPP. Most have gained even more potent validating studies since that time.

It is inconsistent with the historic mission of SAMHSA to obliterate, by flipping a switch, the accumulated scientific treatment, intervention, and prevention treasures of NIDA, NIMH, NIAAA, NICHD, IES, CDC, and foundations that are cataloged in the 2007 (Legacy) NREPP. The legacy NREPP represents hundreds of millions of dollars and countless human hours of labor on the finest research in the world, funded mainly by the U.S. government. This is a motivational puzzle.

In human history, people ascending to power have sometimes destroyed or banned books, scourging those who created the scientific knowledge. Now, people in power need only erase hard drives and disconnect URLs. Switching off NREPP destroyed knowledge for good people all over America who sought to use proven, scientific knowledge to better the lives of our children, adults, and their communities.

Thus, I’m left with a nagging, very uncomfortable question: Who benefits when proven, replicated scientific knowledge to prevent, intervene in, or treat mental, emotional, and behavioral disorders is less accessible?


1 Bach P, Hayes SC, Gallop R. Long-term effects of brief acceptance and commitment therapy for psychosis. Behav Modif. 2012;36(2):165–81.

2 Petry NM, Alessi SM, Rash CJ. Contingency management treatments decrease psychiatric symptoms. J Consult Clin Psychol. 2013;81(5):926–31.

3 O’Connell ME, Boat T, Warner KE, editors. Preventing Mental, Emotional, and Behavioral Disorders Among Young People: Progress and Possibilities. Washington, DC: Institute of Medicine; National Research Council; 2009.


Dennis Embry is a prominent prevention scientist in the United States and Canada, trained as a clinician and as a developmental and child psychologist. He is president and senior scientist at the PAXIS Institute in Tucson and co-investigator at Johns Hopkins University and the Manitoba Centre for Health Policy. His work and that of his colleagues is cited in the 2009 Institute of Medicine report on the prevention of mental, emotional, and behavioral disorders among young people. Clinically, his work has focused on children and adults with serious mental illnesses. In March 2014, his work and the work of several signatories was featured in a prime-time TV special on the Canadian Broadcasting Corporation on the prevention of mental illnesses among children, which have become epidemic in North America.

The suspension of the National Registry of Evidence-Based Programs and Practices: The importance of adhering to the evidence

Sharon Green-Hennessy, PhD

First published August 10, 2018 by Children’s Mental Health Network Newsletter

Recently, the United States Assistant Secretary for Mental Health and Substance Use disclosed having suspended the National Registry of Evidence-Based Programs and Practices, stating it was so deficient in both rigor and breadth that it must be replaced. However, a closer examination of her claims about the Registry indicates many of them to be inaccurate. Contrary to her assertions, the Registry is not devoid of medication-assisted treatments for opioid use; nor does it contain but a scant few interventions related to schizophrenia and psychosis. Moreover, many of her criticisms regarding rigor pertain to reviews completed since late 2015, when the Substance Abuse and Mental Health Services Administration altered key aspects of the Registry. In contrast to reviews generated under the 2007 rules, these newer reviews rely on fewer references, incorporate less expert input, are more likely to be based exclusively on gray literature, and are no longer required either to provide dissemination readiness information or to meet certain minimum research quality standards. However, only 123 (25.7%) of the 479 Registry interventions have been reviewed solely using the problematic 2015 criteria; the remaining 356 interventions have a review that uses the 2007 guidelines. Yet, rather than address the agency’s recent missteps and expand the Registry’s content coverage, the agency appears to have decided to invest considerable resources into replacing it, relying heavily on expert consensus rather than empirical data in its initial attempt to do so. This raises questions about the agency’s current commitment to evidence-based practice.

Any replacement [of the National Registry of Evidence-Based Programs and Practices (NREPP)] would be SAMHSA’s fourth attempt at providing evidence-based guidance, a move that could prove costly not just in dollars but credibility. Hence, before investing further in the creation of yet another system, it would appear prudent to examine the reported problems with NREPP.

The 2007 version of NREPP

SAMHSA began NREPP in 1997 as a ranking of expert-recognized substance prevention programs, but quickly concluded that a more rigorous process, inclusive of mental health and substance use treatments, was needed [2]. After extensive scientific and public input, a radically different NREPP was unveiled in 2007. Unlike its predecessor, the 2007 NREPP rejected the notion that a registry’s purpose was to tell its users what to do (a “forced fit” model); instead, it was designed as a decision support tool, integrating information into continuous quality and dissemination readiness ratings that could assist users in making informed decisions about which interventions best suited their specific needs (a “best fit” model) [2]. While not limiting itself to randomized controlled trials, the 2007 version required that reviewed research meet certain minimum methodological standards [2]. Of note was its unique dual emphasis on efficacy and community effectiveness, with the latter embodied in its dissemination ratings, adverse effects notations, cultural adaptation descriptions, etc. [3].

2015 Revision

In 2015, SAMHSA changed NREPP back into a categorical ranking system [4]. NREPP’s distinctive array of numerical ratings was replaced with a “stoplight” outcome ranking (green-Effective, yellow-Promising, red-Ineffective), although exactly how these color-coded designations were derived was unclear [5]. To increase rigor, a Hedges g effect size statistic was introduced which allowed for findings across studies to be combined quantitatively, but at the same time, rigor was decreased by removing the requirement that studies meet certain specific minimum methodological requirements and limiting reviews to short-term outcome studies comparing interventions to an inactive or known less effective control [4, 6]. Finally, NREPP’s dissemination rating was removed, as was information on adverse effects, replications, and cultural adaptations [7].
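The Hedges g introduced in the 2015 revision is a standardized mean difference with a small-sample bias correction. As a rough illustration of the textbook formula only (the exact aggregation procedure NREPP used is not described here, and the numbers below are hypothetical):

```python
import math

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Standardized mean difference with Hedges' small-sample correction.

    m1/s1/n1: mean, SD, and size of the intervention group;
    m2/s2/n2: the same for the comparison group.
    """
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    cohens_d = (m1 - m2) / pooled_sd
    # Hedges' correction factor J shrinks d to remove small-sample upward bias
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return cohens_d * j

# Hypothetical example: intervention group scores 0.5 points above control
print(round(hedges_g(10.5, 2.0, 30, 10.0, 2.0, 30), 3))  # → 0.247
```

Because the correction factor J is always slightly below 1, Hedges g is marginally smaller than the uncorrected Cohen's d, which matters most for the small trials common in this literature.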

At the time of its suspension, 479 interventions had been reviewed by NREPP. A total of 123 interventions (25.7%) had reviews using only the 2015 criteria, while 246 had only 2007 reviews (51.4%). As it had been SAMHSA’s intent to re-review all 2007 reviews, an additional 110 interventions (23.0%) had two reviews (both a 2007 and a 2015 one). Five hundred and seventy-seven of these reviews are currently posted on the NREPP website.

As Assistant Secretary McCance-Katz has claimed deficiencies in both rigor and breadth necessitate NREPP’s replacement [1], it is important to examine each of these stated concerns.

NREPP’s rigor

McCance-Katz appears to have relied heavily on Gorman’s recent analysis of NREPP [8] when stating her concerns about the amount and quality of literature contained in each review. However, in her summary McCance-Katz fails to clarify that Gorman’s criticisms were of the 2015 reviews, not of the 2007 ones [8]. The distinction is an important one because the 2007 and 2015 reviews differ on more than just methodological features; they embody opposing philosophies of what a registry’s purpose should be. The 2007 reviews eschewed overall rankings because their intent was not to tell stakeholders what to do, but to inform them in making their own decisions as to which intervention would be most effective in their setting. To this end, they provided users with both internal and external validity data. Such an approach is consistent with research showing providers are more likely to implement evidence-based practices if they have a role in the adoption decision process [9, 10]. The 2015 reviews, in contrast, were designed to rank programs as being more effective for the population based primarily on internal validity and effect size; contextual and individual patient factors play little role in its criteria. Thus, the 2015 version of NREPP was, in effect, telling users these were the “best” interventions for them to use irrespective of their goodness of fit [11, 12]. Therefore, it is ironic that, despite their emphasis on internal validity, it is the 2015 reviews which have been criticized for their rigor [8].

Beginning in 2015, NREPP staff determined what materials NREPP reviewers would receive to evaluate [5], a change SAMHSA claimed would ensure a more comprehensive representation of the literature [4, 13]. However, an examination of the 110 interventions which have both a 2007 and a 2015 review indicated that the 2015 quality assessments were based on significantly fewer references than the 2007 ones, M = 2.01 (SD = 1.40) vs. M = 3.02 (SD = 2.00), t(108) = −5.57, p < .001, d = −0.59. The 2015 reviews also contain fewer supplemental references (M = 2.29 (SD = 2.94) vs. M = 3.22 (SD = 2.87), t(103) = −2.61, p = .01, d = −0.32) and lacked the replications references which were a standard component of 2007 reviews (M = 1.75, SD = 2.82). Without considering the additional sources used in the 2007 dissemination ratings, the 2007 reviews, on average, were based on nearly twice as many references as the 2015 ones. Moreover, the 2015 reviews not only were based on less literature, but they were more likely to be based exclusively on “gray” literature than their 2007 counterparts (18.3% vs. 11.8%, Fisher’s exact test, p < .001) (S. Green-Hennessy, Ph.D., unpublished data, May 2018).
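The reference-count comparison above is a matched-pairs design (each intervention has both a 2007 and a 2015 review). A minimal pure-Python sketch of the paired t statistic and its effect size, using five hypothetical per-intervention counts rather than the actual NREPP data:

```python
import math

def paired_t(xs, ys):
    """Paired t statistic, degrees of freedom, and effect size d.

    xs, ys: per-intervention reference counts under two review schemes,
    listed in the same intervention order. d is the mean of the paired
    differences divided by the standard deviation of those differences.
    """
    assert len(xs) == len(ys)
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # Sample standard deviation of the paired differences
    sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1, mean_d / sd_d

# Hypothetical reference counts for five interventions: 2015 review vs. 2007 review
t, df, d = paired_t([1, 2, 2, 3, 1], [3, 3, 2, 5, 2])
print(round(t, 2), df)  # → -3.21 4
```

A negative t here, as in the published comparison, indicates the first set of reviews drew on fewer references than the second.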

SAMHSA’s 2015 revisions compromised NREPP’s rigor in other ways as well. SAMHSA reduced the number of expert reviewers from four to one (re-reviews) or two (new reviews) [2, 4, 5], which is lower than the 2 to 13 typically used with national registries [14]. Moreover, as supporting research was no longer required to meet certain specific minimum methodological standards, the agency needed to create an additional Inconclusive outcome category to denote interventions whose research lacked sufficient rigor to generate an effect size [5]. SAMHSA framed the Inconclusive rating as a positive addition, stating its existence would dissuade the public from equating NREPP membership with NREPP endorsement [13]. However, as several programs which received an Inconclusive rating a year ago are currently describing themselves as being “listed on NREPP” (Adult Self-Directed Learning Cognitive Lifeskills Program, Active Parenting of Teens: Families in Action, Reward and Reminder) [15, 16, 17], the Inconclusive category does not appear to have addressed this concern. Lastly, the impact of the Hedges g statistic would have been greater if 48.1% of the outcomes in 2015 reviews were not based on a single outcome measure. Of additional concern is that for 5.9% of the 2015 outcomes SAMHSA overtly cautions the user against using the Hedges g it supplies, noting significant study design weaknesses compromise its interpretability. Yet in 78.2% of those instances the intervention was still awarded a Promising rating.

Hence, McCance-Katz’ claim that NREPP reviews are regularly based on a single gray literature study [1] does not accurately characterize the 2007 reviews. While it is true that SAMHSA’s 2015 revision of NREPP was problematic, the majority of NREPP interventions have reviews using the 2007 criteria (n = 356; 74.3%).

NREPP’s breadth

Assistant Secretary McCance-Katz also cited her inability to locate any medication-assisted therapies (MAT) for opioid use, and only a few interventions for schizophrenia, as instrumental in her decision to suspend NREPP [1]. In reality, 26 (5.4%) of NREPP’s 479 interventions address opioid misuse and/or contain research demonstrating the intervention’s specific applicability to individuals with opioid use disorder (i.e., employment supports for those with this diagnosis); the majority of the 26 opioid interventions are medication-assisted ones (61.5%). Moreover, 35 (7.3%) of NREPP’s interventions specifically target individuals classified as seriously mentally ill or who have been diagnosed with schizophrenia, schizoaffective, or bipolar disorder.

This is not to say that NREPP does not possess coverage gaps. As with other voluntary review systems [18], NREPP has grown unevenly. Irregular funding from SAMHSA forced the Registry to rely on developer self-nominations to help populate it early on [19], leading it initially to become disproportionately weighted towards proprietary prevention programs [20]. Increased funding to permit staff-initiated reviews and NREPP’s growing influence has increased the diversity of programs on NREPP [21]. Nevertheless, McCance-Katz sharply criticized NREPP for permitting developer self-nominations, claiming the practice precluded the Registry from being evidence-based [1], even though a recent review of national evidence-based registries indicated that 45% allow nominations [14].

Still, addressing gap areas, as well as creating a process for regular updating, had been identified by its developer as NREPP’s most important challenges going forward (K. D. Hennessy, Ph.D., personal oral communication, January 15, 2015). Nevertheless, it is important to note that NREPP operates within certain constraints such as those from the Food and Drug Administration (FDA); as noted on the NREPP website, the Registry does not evaluate freestanding pharmacological interventions [4, 22], as medication safety and efficacy traditionally fall within FDA’s purview. This limitation disproportionately affects certain substance and mental health disorders where pharmacological therapies are heavily utilized.

SAMHSA’s Evidence-Based Practice Resource Center

Instead of improving and expanding NREPP, SAMHSA has chosen to invest in an Evidence-Based Practices Resource Center [23], which it states exemplifies the agency’s new approach to evidence-based practice [24]. Of the 138 resources listed on the Center’s site in April 2018, nearly half (n = 66, 47.8%) were classified by SAMHSA as being expert consensus/guidelines, as opposed to empirical evidence, despite a recent Cochrane review which did not find expert mental health guidance to significantly influence which interventions practitioners employed [25].

Even more concerning is the looseness with which SAMHSA is now applying the “evidence-based” moniker. For instance, 25% (n = 11) of the 44 identified mental health resources listed on the Center’s website consist of recent SAMHSA generated webpages on various psychiatric disorders [23]. These pages identify certain treatments as being evidence-based, but do not contain a single reference justifying why those treatments, as opposed to others, are identified as such [26].


In a recent commentary piece, Gorman [8] asked if NREPP had lost its way. Unfortunately, the answer is yes. When NREPP lost its developer, it lost its way.

Moreover, SAMHSA has not sought stakeholder input to help it find its way again, with the Assistant Secretary failing to notify users that she had suspended the Registry until months after the fact [27]. Instead, the Assistant Secretary has chosen to discard a well-established, influential evidence-based system.

Equally puzzling is SAMHSA’s decision to replace NREPP with its Evidence-Based Practices Resources Center which is heavily populated by guidelines and contains a series of agency generated webpages which lack a single reference to justify their assertion that various mental health treatments are evidence-based. Such actions appear to be at odds with the 21st Century Cures Act, which mandates that substance and mental health prevention and treatment keep pace with science and that the Assistant Secretary provide on the agency’s website a listing of evidence-based practices whose evaluation metrics have been made publicly available [28].

While expert consensus guidelines may fill the gap when there is no applicable empirical evidence [29], there is evidence available which can help inform our work in the substance and mental health prevention and treatment fields. SAMHSA’s recent decisions though give rise to serious questions as to what criteria the agency is using currently to identify that evidence and if its own recent actions live up to the term “evidence-based.”


1 McCance-Katz E. [press release]. Rockville: Substance Abuse and Mental Health Services Administration; 2018. Accessed 31 Jan 2018.
2 Hennessy KD, Finkbiner R, Hill G. The National Registry of Evidence-based Programs and Practices: a decision-support tool to advance the use of evidence-based services. Intl J Ment Health. 2006;35:21–34.
3 Paulsell D, Thomas J, Monahan S, Seftor NS. A trusted source of information: how systematic reviews can support user decisions about adopting evidence-based programs. Eval Rev. 2017;41:50–77.
4 U.S. Department of Health and Human Services. National Registry for Evidence-based Programs and Practices. Fed Regist. 2015;80(129):38716–8.
5 Substance Abuse and Mental Health Services Administration (SAMHSA). NREPP: Review Process. Web site. Updated August 9, 2017. Accessed 15 Feb 2018.
6 Substance Abuse and Mental Health Services Administration (SAMHSA). NREPP: The Open Submission Process. Web site. Updated August 9, 2017. Accessed 15 Feb 2018.
7 Substance Abuse and Mental Health Services Administration (SAMHSA). NREPP: Resources for Implementation and Dissemination. Web site. Updated August 9, 2017. Accessed 15 Feb 2018.
8 Gorman DM. Has the National Registry of Evidence-based Programs and Practices (NREPP) lost its way? Int J Drug Policy. 2017;45:40–1.
9 Williams JR, Blais MP, Banks D, Dusablon T, Williams WO, Hennessy KD. Predictors of the decision to adopt motivational interviewing in a community health setting. J Behav Health Serv Res. 2014;41:294–307.
10 Horwitz SM, Hurlburt MS, Goldhaber-Fiebert JD, Palinkas LA, Rolls-Reutz J, Zhang J, Fisher E, Landsverk J. Exploration and adoption of evidence-based practice by US child welfare agencies. Child Youth Serv Rev. 2014;39:147–52.
11 McCartney M, Treadwell J, Maskrey N, Lehman R. Making evidence based medicine work for individual patients. BMJ. 2016;353:i2452.
12 Greenhalgh T, Howick J, Maskrey N. Evidence based medicine: a movement in crisis? BMJ. 2014;348:g3725.
13 Roeber C, Dean C. Webinar presented at: National Resource Center for Mental Health Promotion & Youth Violence Prevention. Washington, DC; 2017. Accessed 31 Jan 2018.
14 Burkhardt JT, Schröter DC, Magura S, Means SN, Coryn C. An overview of evidence-based program registers (EBPRs) in behavioral health. Eval Program Plann. 2015;48:92–9.
15 ACCI’s Cognitive Life Skills. Corrections Lifeskills Programs and Courses: Adult Self-Directed Cognitive Lifeskills Program. Web site. Published 2017. Accessed 15 Feb 2018.
16 Active Parenting Publishers. Active Parenting programs listed on SAMHSA’s Registry of Evidence-Based Programs and Practices (NREPP). Web site. Published 2018. Accessed 15 Feb 2018.
17 The Paxis Institute. Reward and Reminder. Web site. Published 2018. Accessed 15 Feb 2018.
18 Green-Hennessy S. Coverage of mental health and substance misuse topics in the Cochrane review system. Epidemiol Psychiatr Sci. 2013;22:155–62.
19 U.S. Department of Health and Human Services. National Registry for Evidence-based Programs and Practices. Fed Regist. 2010;75(159):51075–7.
20 Hennessy KD, Green-Hennessy S. A review of mental health interventions in SAMHSA’s National Registry of Evidence-based Programs and Practices. Psychiatr Serv. 2011;62:303–5.
21 Gillen AC, Elefantis AB, Hodgson AB, Hennessy KD. The international reach of SAMHSA’s National Registry of Evidence-based Programs and Practices. Intl J Ment Health. 2013;42:78–94.
22 Substance Abuse and Mental Health Services Administration (SAMHSA). NREPP: Submission Process. Web site. Updated September 26, 2017. Accessed 29 May 2018.
23 Substance Abuse and Mental Health Services Administration (SAMHSA). Evidence-Based Practices Resource Center. Web site. Updated April 3, 2018. Accessed 27 Apr 2018.
24 Substance Abuse and Mental Health Services Administration (SAMHSA). About the Evidence-Based Practices Resource Center. Web site. Updated April 3, 2018. Accessed 10 Apr 2018.
25 Bighelli I, Ostuzzi G, Girlanda F, Cipriani A, Becker T, Koesters M, Barbui C. Implementation of treatment guidelines for specialist mental health care. Cochrane Database Syst Rev. 2016;12:CD009780.
26 Substance Abuse and Mental Health Services Administration (SAMHSA). Treatments for Mental Disorders. Web site. Updated April 5, 2017. Accessed 27 Apr 2018.
27 Sun LH, Eilperin J. Trump administration freezes database of addiction and mental health treatments. The Washington Post. January 11, 2018:A3. Accessed 15 Feb 2018.
28 21st Century Cures Act, Pub. L. No. 114–225, § 7002, 130 Stat 1033, 2016.
29 Minas H, Jorm AF. Where there is no evidence: use of expert consensus methods to fill the evidence gap in low-income countries and cultural minorities. Int J Ment Health Syst. 2010;4:33.

The complete article can be found here: Substance Abuse Treatment, Prevention, and Policy, 2018. 13:26