Frequently Asked Questions
We have recently added sections to Evidence for ESSA covering research on social-emotional learning (SEL) and attendance. Because there are many distinct measures of SEL and two of attendance, these sections have several unique features, as follows:
1. To allow for consistency and interpretability across the programs reviewed, we have organized SEL measures under four major categories, containing 17 individual variables in all. The categories and variables are as follows:
Academic
- Academic performance
- Academic engagement

Problem Behaviors
- Aggression/misconduct
- Bullying
- Disruptive behavior
- Drug/alcohol abuse
- Sexual/racial harassment or aggression
- Early/risky sexual behavior

Social Relationships
- Empathy
- Interpersonal relationships
- Pro-social behavior
- Social skills
- School climate

Emotional Well-Being
- Reduction of anxiety/depression
- Coping skills/stress management
- Emotional regulation
- Self-esteem/self-efficacy
When we review studies that use other names for these variables, we translate them to our variable names. For example, we report “externalizing” as “aggression/misconduct,” and “cooperative behavior” becomes “social skills.” We always list key references to articles on each included program, and readers interested in the original variable names can find them in those articles.
Attendance has just two categories: “average daily attendance” and “chronic absenteeism,” defined as the percentage of students absent more than 10% of school days each year. Both are common school accountability metrics that many states use under ESSA.
2. SEL and attendance variables include behaviors we want to see more of (such as emotional regulation and average daily attendance) and behaviors we want to see less of (such as bullying, anxiety/depression, and chronic absenteeism). We have reversed the +/- signs on negative behaviors, so that all effect sizes with a positive sign indicate desirable outcomes. Some articles list, for example, “absenteeism” as a negative number if it declined more in the experimental group than in the control group; we recode “less absenteeism” as a positive number, as sketched below.
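As an illustration, here is a minimal sketch of this recoding in Python (the variable set and function name are hypothetical, for illustration only):

```python
# Outcomes we want to see less of; for these, a reported negative
# effect size (e.g., less absenteeism) is actually a desirable result.
NEGATIVE_VALENCE = {"bullying", "anxiety/depression", "chronic absenteeism"}

def recode_effect_size(variable, effect_size):
    """Flip the sign for negative-valence outcomes so that a positive
    effect size always indicates a desirable outcome."""
    return -effect_size if variable in NEGATIVE_VALENCE else effect_size

# A study reporting ES = -0.15 on chronic absenteeism (absenteeism fell
# more in the experimental group) is recorded here as +0.15.
print(recode_effect_size("chronic absenteeism", -0.15))  # 0.15
```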
3. Because we have multiple, very different outcomes across SEL studies and two for attendance studies, our reports on these topics are more complicated than those for reading and mathematics. The summary pages for SEL show outcomes on the four categories. Following definitions in the ESSA law, the categories are marked in green if at least one variable in that category met the ESSA “Strong” criterion, in orange if at least one variable met the ESSA “Moderate” criterion, and in fuchsia if at least one variable met the ESSA “Promising” criterion. If a category contains variables that met different ESSA levels, the summary page shows the highest level reached across all variables measured. For example, if one variable under Social Relationships had a “Strong” rating and another had a “Promising” rating, the summary page would indicate social relationships in green (i.e., “Strong”).
The summary also shows the average effect size for the category, including all variables in that category. This means it is possible for a category to be marked in green (“Strong”) even though its mean effect size is very low. This could happen if a program had a statistically significant positive outcome on one variable but a zero or negative outcome on one or more other variables in that category. If the category average is zero or negative, however, we do not list the category as meeting ESSA standards, even if one or more of its outcomes met one of the three ESSA criteria.
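Taken together, the category logic can be summarized in a short sketch (names are hypothetical; the site’s actual implementation is not published in this form):

```python
# ESSA evidence levels, strongest first.
LEVELS = ["Strong", "Moderate", "Promising"]

def summarize_category(variables):
    """variables: list of (effect_size, essa_level) pairs, where
    essa_level is "Strong", "Moderate", "Promising", or None.
    Returns the rating shown on the summary page and the mean ES."""
    mean_es = sum(es for es, _ in variables) / len(variables)
    qualifying = [level for _, level in variables if level in LEVELS]
    # A zero or negative category average disqualifies the category,
    # even if individual variables met an ESSA criterion.
    if not qualifying or mean_es <= 0:
        return None, mean_es
    highest = min(qualifying, key=LEVELS.index)  # highest level reached
    return highest, mean_es

# One "Strong" variable plus one unrated, near-zero variable: the
# category shows green ("Strong") despite a modest average effect size.
print(summarize_category([(0.30, "Strong"), (0.00, None)]))  # ('Strong', 0.15)
```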
4. As in the reading and mathematics reviews, programs are ranked on the summary page from highest to lowest. These rankings are based on the number of categories rated “Strong,” “Moderate,” or “Promising,” the average effect sizes, the number of studies, and the number of students involved in the outcomes.
The exact ranking formula is (ΣES)(100)(1 + √N/100), where ES is each outcome’s effect size, N is the number of students involved in the outcomes, and effect sizes are quadrupled if “Strong,” tripled if “Moderate,” and doubled if “Promising.”
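Read literally, the formula might be implemented as follows. This is a sketch under our reading of the text; the treatment of unrated outcomes and the exact definition of N are assumptions:

```python
import math

# Multipliers applied to each effect size, by ESSA rating (assumed
# weight of 1 for outcomes without a rating).
WEIGHTS = {"Strong": 4, "Moderate": 3, "Promising": 2}

def rank_score(outcomes, n_students):
    """outcomes: list of (effect_size, essa_rating) pairs;
    n_students: total students involved in the outcomes (N)."""
    weighted_sum = sum(es * WEIGHTS.get(rating, 1) for es, rating in outcomes)
    return weighted_sum * 100 * (1 + math.sqrt(n_students) / 100)

# Example: a "Strong" outcome of +0.25 and a "Promising" outcome of
# +0.10 across studies totaling 2,500 students.
print(rank_score([(0.25, "Strong"), (0.10, "Promising")], 2500))  # 180.0
```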
5. The SEL and attendance sections can be filtered by specific outcome measures, student ages, demographic characteristics of study populations (e.g., race, English learners, percent free lunch, urban/rural), and other study features.
6. Program pages go into more detail, providing a narrative description of each program and its outcomes. Costs, contacts for program providers, and other practical information are listed if we have them from the developers or disseminators.
7. We search exhaustively for all SEL and attendance programs available for use in the U.S. (as we do in all sections of our website). If a program has no evidence that meets our inclusion standards, or if its evidence meets our inclusion standards but no study found effects that meet ESSA standards for “Strong,” “Moderate,” or “Promising” evidence of effectiveness for even a single variable, we still include the program, marked as “No studies met inclusion criteria” or “Qualifying studies demonstrated no significant positive outcomes.” To find programs in one of these categories, type the program name into the green box in the upper right corner. If you know of any SEL or attendance programs that do not appear when you type the program name into the green box, please tell us, and send us any studies or citations to studies. Our intention is to have every program currently in existence represented on our website.
The Every Student Succeeds Act, also known as ESSA, is the main federal law governing K-12 education. It replaced No Child Left Behind (NCLB).
ESSA recognizes four levels of evidence. The top three levels require findings of a statistically significant positive effect on student outcomes or other relevant outcomes.
- Strong evidence: At least one well-designed and well-implemented experimental (i.e., randomized) study.
- Moderate evidence: At least one well-designed and well-implemented quasi-experimental (i.e., matched) study.
- Promising evidence: At least one well-designed and well-implemented correlational study with statistical controls for selection bias.
The fourth level covers programs or practices that do not yet have evidence qualifying for the top three levels; these can be considered evidence-building and under evaluation.
This website identifies programs that meet the strong, moderate, or promising definitions under ESSA because they have demonstrated scientific evidence of effectiveness. All other programs, including those in the fourth level, are listed as either Not Proven (Qualifying Studies Found No Significant Positive Outcomes) or No Evidence (No Studies Met Inclusion Requirements) according to the ESSA evidence standards. Programs listed as “no evidence” or “not proven” can be located by typing the program name into the green search bar in the upper right corner of the homepage.
NCLB only referred to programs “based on scientifically based research.” However, any program can be said to be “based on” research. ESSA uses a higher standard, requiring programs to have been tested and found to be significantly more effective than standard practice.
The Center for Research and Reform in Education (CRRE) at Johns Hopkins University (JHU) has long worked to promote evidence-based reform in education, especially by publishing reviews of research on effective programs and practices in most subjects and grade levels, pre-k to 12.
We believe that using proven programs is morally and fiscally responsible, increasing the likelihood of student success and the return on financial investments in education.
While there are several databases with information about education programs, such as the What Works Clearinghouse and Best Evidence Encyclopedia, this is the only website explicitly tailored to making education leaders aware of specific programs that meet the ESSA evidence standards.
The website provides information on programs and practices that meet each of the top three ESSA standards in a given subject and grade level. So far, the website covers reading, mathematics, social-emotional learning, and attendance for grades PK-12. In 2020, sections on writing and science are due to be added. The website includes brief program descriptions; information on costs, availability, and other pragmatics; and links to program websites. You can refine a search to look for programs that have been successful with particular populations (e.g., English learners, special education), communities (e.g., urban or rural), and other special interest areas.
You can also search by program name, enabling you to find information about evidence for all programs, including those that have not yet been successfully evaluated. Enter the program name into the green search bar in the upper right corner of the homepage.
Additional topics will be added in the future, and the website will be continually updated to include new programs and to reflect new evaluations.
Schools applying for school improvement funding under ESSA are required to select and implement programs with strong, moderate, or promising evidence of effectiveness. Any of the programs on this website that are categorized as strong, moderate, or promising will meet this requirement. Other federal and state funding initiatives also encourage use of proven programs.
The website is produced by the Center for Research and Reform in Education (CRRE) at the Johns Hopkins University School of Education, in collaboration with a distinguished Technical Work Group and a Stakeholder Advisory Group. The information is intended solely to be useful to educators and the public and has no official status.
Start-up funding for Evidence for ESSA was provided by the Annie E. Casey Foundation. The SEL and attendance sections were funded by the Bill & Melinda Gates Foundation. We thank them for their support but note that the findings and conclusions presented on this site are those of the authors alone and do not necessarily reflect the opinions of the funders.
CRRE Team
The main development team for Evidence for ESSA:
Robert Slavin, Director of the Center for Research and Reform in Education at JHU
Cynthia Lake
Amanda Inns
Nancy Madden
Marta Pellegrini, University of Florence, Italy
Sooyeun Byun
Susan Davis
Chenchen Shi, Beijing Normal University
Other important contributors:
Jennifer Morrison
Betsy Wolf
Sharon Fox
Beth Comstock
Liz Kim
Nathan Storey
Ariane Baye, University of Liege, Belgium
Alan Cheung, Chinese University of Hong Kong
Bette Chambers, University of York, England
Chen Xie, East China Normal University
Mariette Hardiman, University of Groningen
A Technical Work Group (TWG) consulted on development of content, standards and review procedures, and key decisions for the website. For programs associated with the authors, TWG members made decisions independently.
Members of the TWG are distinguished professionals in the field of education with experience in evidence and evaluation. They also represent the types of professionals who serve as peer reviewers, and therefore understand the complexities faced by states, districts, and schools working to create plans that address local needs and satisfy the requirements established in ESSA.
More information about the TWG can be found here.
To ensure that we are meeting the needs of the field, we invited a Stakeholder Advisory Group (SAG) to provide ideas and feedback, with a particular focus on making the website useful to their constituents. The organizations represent a diverse group of professionals involved in education at the leadership, grassroots, and policy levels. SAG membership does not indicate endorsement of any particular program or intervention, but rather support for making available information on programs and practices that meet the ESSA evidence standards.
You may find a list of organizations that serve on the SAG, have endorsed the website, and recommend it to their members here.
Programs meeting ESSA evidence standards within a given topic show whether they meet the strong, moderate, or promising standards according to the law. Within categories, programs are listed in order beginning with the strongest, using a formula that includes average effect size, number of studies, quality of studies, and overall sample size of the studies. In general, the higher a program appears on each list, the stronger and more reliable its impact.
A program that has had at least two qualifying studies with an average effect size of at least +0.20 is marked by a badge with a star.
The search page contains filters that allow you to look at studies involving schools with specific characteristics. In addition, once you click on an individual program that meets one of the top three standards, the information provided on the program page will summarize use and outcome information for students and schools, if such information was available from the evaluations.
An effect size is a widely accepted number indicating how much of a difference a program made in student learning. An effect size of +0.10 is small, and larger effect sizes indicate larger impacts. (Reported effect sizes are an average of all qualifying studies of the program, not just the studies that caused the program to meet one of the top three ESSA standards.) When comparing effect sizes, you should compare apples to apples and oranges to oranges, but not apples to oranges. What we mean is that different kinds of programs may have different average effect sizes and still be strong. For example, effective tutoring interventions may have an average effect size of about +0.30, while effective classroom approaches for teaching math might have an effect size of +0.15. An effect size of +0.25 might therefore be considered “large” for classroom approaches but only “moderate” for tutoring.
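For readers who want the underlying arithmetic: an effect size is a standardized mean difference, the program-versus-control difference expressed in standard deviation units. A minimal sketch follows (the exact adjustment, e.g., for pretest covariates, varies by study):

```python
def effect_size(mean_program, mean_control, sd):
    """Standardized mean difference: the program-control difference
    divided by the standard deviation (control-group or pooled)."""
    return (mean_program - mean_control) / sd

# Example: program group averages 520, control group 510, with a
# standard deviation of 50 test-score points -> ES = +0.20.
print(effect_size(520, 510, 50))  # 0.2
```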
Every program currently in use should appear on the website. You can search for them by typing their name into the green search box in the upper right corner. If you cannot find a program, it may be listed under another name. If you think we have missed a program, please tell us about it by emailing us at EvidenceforESSA@jhu.edu.
CRRE is committed to keeping the site current. If we learn of a qualifying study, we will post it within a month.
Write to us at EvidenceforESSA@jhu.edu. We plan to update the website regularly based on new information and new research.
In most respects, Evidence for ESSA follows the WWC Standards and Procedures Guide, version 3.0, and therefore usually comes to similar or identical conclusions. However, there are differences in our standards, agreed to by the Evidence for ESSA Technical Work Group (TWG). For example, Evidence for ESSA does not accept outcome measures made by program developers or researchers, it does not accept after-the-fact (post-hoc) designs, and it requires a minimum study duration of twelve weeks. A link to the Evidence for ESSA Standards and Procedures can be found in the first FAQ.
Evidence for ESSA only considers programs’ evidence of outcomes on achievement or other objective outcomes, such as social-emotional learning and attendance. Educators may also take into account other factors, but the purpose of Evidence for ESSA is to help them select programs proven to make a difference with students.
Yes. We expect to continue to add topics indefinitely. In addition to the four topics we have released so far, we hope to release sections on writing and science in 2020, and will later add sections on English learners, special education, technology, and other topics as funding becomes available.
For all programs that meet the top three ESSA evidence standards, we are asking developers to provide information and tips for success, identify ambassador schools, and recruit principals and teachers to write about their experiences implementing the program. It takes time to gather this information, so please be patient and check back regularly.
There are many providers of professional development to state, district, and school leaders, including Regional Educational Laboratories, Comprehensive Centers, state departments of education, and other government sources. The Center for Research and Reform in Education at Johns Hopkins University will also work with state, district, and school leaders on a fee-for-service basis. Contact us at bcomstock@jhu.edu.
We also encourage you to visit our Integrated Process page that explains how selecting a proven program or multiple programs for your state, district, and/or school is one step in a larger, integrated process.
More detailed information on the research is available at these websites:
U.S. Department of Education:
- ESSA home page
- Non-Regulatory Guidance: Using Evidence to Strengthen Education Investments, issued September 16, 2016
There are also numerous materials developed outside the federal government that explain and discuss the ESSA evidence provisions and provide related resources and tools. For example, Results for America, a member of our Stakeholder Advisory Group (SAG), has put together an exhaustive list of every evidence provision in ESSA, including the section of the law where each occurs. Here are some other examples developed and distributed by SAG members:
- AASA: The School Superintendents Association
- Alliance for Excellent Education
- America Forward
- American Youth Policy Forum
- Chiefs for Change
- National Association of Elementary School Principals
- National Association of Secondary School Principals
- National Association of State Boards of Education
- National Education Association
- National PTA
- National School Boards Association
- Results for America