Research

Academic research transformed into accessible videos and podcast discussions. While copyright prevents republishing my journal articles, AI tools like NotebookLM allow me to create new ways to share what the research says—bringing findings to life for parents, educators, and policymakers who need them most.

Career and College Ready Graduates Pilot Study

Johnson, J. L., & White, C. C. (2019). Evaluation report: Career and College Ready Graduates pilot implementation in GEAR UP NC high schools. GEAR UP NC.

What if we could eliminate the need for math remediation in community college by addressing it in high school? This study examined the initial implementation of math remediation at the high school level, exploring what happens when we move the intervention earlier. The disconnect between high school and community college is one of education’s most damaging system mismatches—students graduate believing they’re ready, only to test into remedial courses that cost time and money but don’t count toward degrees.

Change Management in K-12 Education for Data-Driven Decisions

White, C. C., & Johnson, J. (2018). Change management in K-12 education for data-driven decisions: Moving from professional judgment to evidence. In E. G. Mense & M. Crain-Dorough (Eds.), Data leadership for K-12 schools in a time of accountability (1st ed., pp. 54–74). Hershey, PA: IGI Global.

For decades, educators used professional judgment to decide which students needed services and which were ready for advanced courses. “At-risk” meant low-income or minority—no one checked whether students actually struggled academically. Then federal accountability requirements changed everything. Programs suddenly needed measurable goals, data-driven identification of students, and evidence that services worked.

This was a fundamental shift in how education operates. It required a completely different mental model.

But no one managed the change. No meetings. No memos. No training explaining how different the new expectations were. Educators were left to figure it out on their own—and many still haven’t.

In this chapter, we describe two mental models: the traditional approach based on demographics and professional judgment, and the data-driven approach based on evidence. We document what we’ve seen across 32 years of evaluating federal grants: staff confidently describing students in data terms they had never actually verified, programs filled with students who didn’t need services while students who did were overlooked, and high-achieving students who regressed after being placed in remedial programs.

Change management research exists. Organizational learning frameworks exist. Education just didn’t use them. This chapter explains what went wrong and what effective change management could look like.

Opportunity to Learn

Stiff, L. V., & Johnson, J. L. (2011). Mathematical reasoning and sense making begins with the opportunity to learn. In M. E. Strutchens (Ed.), Focus on high school students: Making mathematical reasoning and sense making a reality for all. Reston, VA: National Council of Teachers of Mathematics.

In 2011, Lee V. Stiff and I published research showing that North Carolina schools were sorting students into math tracks based on race and income rather than actual achievement. We created a fruit metaphor to help educators see the difference between demographic characteristics and real performance data—and to expose the policies and assumptions blocking access for qualified students.

The numbers were stark. Qualified Black and Latino students were placed in advanced math at dramatically lower rates than equally qualified white and Asian students. When one principal intervened and placed 103 overlooked students in advanced classes, those students succeeded. The next year, under new leadership, the school went back to old practices.

North Carolina eventually acknowledged the problem. Legislation followed. Now all students who score at the highest level are guaranteed access to advanced math. But here’s what the accountability measures don’t tell us: half or more of the students in advanced classes don’t score at the highest level. Are those students still being placed by teacher recommendations? We don’t know. And the state changed the cut scores, making it significantly harder to reach that highest level in the first place.

Things have changed. We can’t say things have been fixed.

Dropouts: Finding the Needles in the Haystacks

Sparks, E., Johnson, J., & Akos, P. (2010). Dropouts: Finding the needles in the haystacks. Educational Leadership, 67(5).

Schools want to prevent dropouts, but resources are limited. How do educators decide which students need the most support?

For decades, many schools used demographic factors—race, family income, single-parent households—to identify students considered “at risk.” The logic seemed reasonable: if students from certain groups drop out at higher rates, then belonging to that group must signal risk.

But this approach has a fundamental flaw. Group statistics describe patterns across populations. They don’t tell you which individual student will struggle. Using demographics to target services can lead to stereotyping, misdirected resources, and missed opportunities to help the students who actually need it.

In 2010, I worked with school counselor Eric Sparks and researcher Patrick Akos to answer a different question: What factors in a student’s own educational record actually predict whether that individual will drop out?

We analyzed data on more than 17,000 ninth graders in a large southeastern school district. What we found changed how the district approached dropout prevention.

Three factors—what we called the “Big 3”—predicted dropout far more accurately than demographics: being held back a grade at any point in school, scoring below grade level in math or failing Algebra I, and receiving a long-term suspension. Students with one or more of these factors accounted for 84 percent of dropouts, yet represented only 23 percent of all ninth graders.

This transformed an impossible task into a manageable one. Instead of searching a haystack of nearly 18,000 students, counselors could focus on about 4,000 who had actual risk factors. And when they did, the results were striking. One school created a targeted tutoring program for 24 students—21 of them passed and moved on to 10th grade.
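For readers who think in code, the screen itself is simple enough to express in a few lines. The sketch below is purely illustrative: the field names (retained, below_grade_math, failed_algebra1, long_term_suspension) are hypothetical stand-ins, not the variables from the actual study data.

```python
# A minimal sketch of the "Big 3" screen described above.
# Field names are hypothetical, not the study's actual variables.

def has_big3_flag(student: dict) -> bool:
    """Return True if a student record shows any of the Big 3 risk factors."""
    return (
        student["retained"]                 # held back a grade at any point
        or student["below_grade_math"]      # scoring below grade level in math
        or student["failed_algebra1"]       # or failing Algebra I
        or student["long_term_suspension"]  # received a long-term suspension
    )

def watch_list(cohort: list[dict]) -> list[dict]:
    """Narrow a cohort of ~18,000 ninth graders to the ~23% with actual risk factors."""
    return [s for s in cohort if has_big3_flag(s)]
```

The point of the sketch is the shift it represents: the rule looks only at a student’s own record, never at demographics.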

The research required a mindset shift for educators. They had to move from assumptions about who might be at risk to trusting what the data revealed about individual students. That shift made all the difference.

This work reflects a principle at the heart of everything I do: data can make invisible patterns visible. When we let evidence guide our decisions instead of assumptions, we can reach the students who truly need us.

Inside a District’s Own Evaluation Reports

A review of a large NC school district’s own evaluation reports.

When a large school district reviewed its intervention programs, internal evaluations revealed a troubling pattern: students were being served based on demographics, not academic need. Programs designed for struggling students were filled with students already at grade level—and some were harmed by receiving unneeded remediation. The findings exposed what happens when assumptions replace data.