Six Myths in the New York Times Math Article by Elizabeth Green

The July 27, 2014 edition of the New York Times Sunday Magazine featured an article by Elizabeth Green entitled “Why Do Americans Stink at Math?” In this blog post, I identify six myths promulgated in that article.  Let me be clear at the outset.  I am an admirer of Elizabeth Green’s journalism and am sympathetic to the idea that improving teaching would raise American math achievement.  But this article is completely off base.  Its most glaring mistake is giving the impression that a particular approach to mathematics instruction—referred to over the past half-century as “progressive,” “constructivist,” “discovery,” or “inquiry-based”—is the answer to improving mathematics learning in the U.S.  That belief is not supported by evidence.


A Summary

Green asserts that American math reformers frequently come up with great ideas—the examples cited are the New Math of the 1960s, California’s 1985 Mathematics Framework, the National Council of Teachers of Mathematics (NCTM) math reforms of the 1980s and 1990s, and today’s Common Core—but the reforms are shot down because of poor implementation.  Green deserves credit for avoiding the common habit of attributing the failure of reform to “implementation” without defining the term.  In Green’s way of thinking, implementation of math reform hinges on changing the way teachers teach.[1] American teachers, Green argues, are wedded to the idea that learning mathematics is synonymous with memorization and practicing procedures.  They aren’t provided the training to teach in different ways.  Left on their own, teachers teach the way they themselves were taught, emphasizing, in her words, “mind numbing” routines—and perpetuating a cycle of failure.

Green believes that the 1980s math reforms failed in the U.S. but took root and flourished in Japan.  Over a 12-year span, she writes, “the Japanese educational system embraced this more vibrant approach to math.”  The two countries’ math classrooms are dramatically different, and readers are presented with a series of contrasts.  American classrooms are dull and oppressively quiet; Japanese classrooms are bursting with children “talking, arguing, shrieking about the best way to solve problems.” Japanese students “uncover math’s procedures, properties and proofs for themselves;” American students regurgitate rules spoon-fed to them by teachers.  When the innovations of the 1980s and 1990s were proposed, Japan “was able to shift a country full of teachers to a new approach.” American teachers dug in and clung to traditional teaching.  The upshot of all of this?  Japan scores at the top of international math tests; the U.S. scores near the middle of the pack or worse.

The story is wrong.  It goes wrong by embracing six myths.


Myth #1: Japan Scores Higher Than the U.S. on Math Tests Because Japanese Teachers Teach Differently

Green provides no evidence that instructional differences are at the heart of American and Japanese achievement differences.  Indeed, she provides no evidence, other than the assertions of advocates of the teaching practices extolled in the article, that Japanese teachers changed their instructional practices in the 1990s, or that American teachers did not change theirs, or that any change that occurred in math instruction in either country had an impact on student achievement. 

Green relies on the Trends in International Mathematics and Science Study (TIMSS) 1995 Video Study to document differences in Japanese and American teaching styles.[2] She fails to tell readers of a crucial limitation of that study.  The TIMSS video study did not collect data on how much kids learned during lessons.  Interesting differences were indeed apparent between Japanese and American instruction (German math teachers were also part of the study), but the study could not assess whether Japanese students learned more math as a result.  Within-country comparisons might have been especially revealing.  If Japanese kids who were exposed to “reform” teaching strategies learned more than Japanese kids exposed to traditional instruction, and if great care were taken to make sure the two groups were equal on characteristics related to learning, then that would suggest the choice of instructional regime might be driving the learning differences.  Given the study’s limitations, that analysis could not be conducted.

The 1995 TIMSS also collected survey data, separate from the video study, that can shed light on the question.  Eighth-grade math teachers were queried: how often do you ask students to do reasoning tasks?  Table 1 shows the frequency of teachers’ answers, along with the average TIMSS score of students for each response category (in parentheses).

Table 1. Teachers’ Reports on How Often They Ask Students to Do Reasoning Tasks
(percentage of teachers; average student TIMSS score in parentheses)

            Never or Almost Never    Some Lessons    Most Lessons    Every Lesson
Japan       0%                       7% (594)        55% (604)       37% (608)
U.S.        0%                       24% (495)       50% (498)       26% (514)

Source: Table 5.11, IEA Third International Mathematics and Science Study (TIMSS), 1994-1995, page 160.

Note that the data support the view that eighth-grade Japanese teachers emphasize reasoning more often than U.S. teachers.  But the data also suggest that the difference can explain only a trivial amount of the Japan-U.S. test score gap.  The gap hovers around 100 points across response categories, comparable to the overall gap reported in 1995 (Japan scored 605, the U.S., 500).  The within-country difference between teachers who include reasoning tasks in every lesson and teachers who present them in only some lessons is only 14 scale score points in Japan and 19 points in the U.S.  Indeed, even if 100% of U.S. teachers had said they emphasize reasoning in every lesson—and the 514 TIMSS score for that category held firm—the achievement gap between the two countries would shrink only from roughly 105 points to about 91.  This suggests the overall test score difference between the two countries is driven by other factors.
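
To make the arithmetic behind that counterfactual explicit, here is a minimal back-of-the-envelope sketch that uses only the figures in Table 1 and the 1995 country averages quoted above; the one assumption, as stated in the text, is that the 514 average for the “every lesson” category would hold even if every U.S. teacher fell into it.

```python
# Back-of-the-envelope check of the counterfactual discussed above,
# using the Table 1 figures (TIMSS 1995, eighth grade).

japan_avg = 605                               # Japan's 1995 average
us_avg = 500                                  # U.S. 1995 average
actual_gap = japan_avg - us_avg               # 105 points

# Sanity check: the U.S. average implied by Table 1's weights and
# category scores is close to the reported 500.
us_implied = 0.24 * 495 + 0.50 * 498 + 0.26 * 514   # about 501

# Counterfactual: every U.S. teacher reports reasoning tasks in every
# lesson, and the 514 average for that category holds firm.
counterfactual_gap = japan_avg - 514          # 91 points

print(actual_gap, round(us_implied, 1), counterfactual_gap)
```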


Myth #2: Factors Outside School Are Unimportant to Japanese Math Success

What are those other factors?  Green dismisses cultural differences and the contribution of instruction outside school to Japanese math achievement.  This is puzzling.  There is no discussion of Japanese parents drilling children in math at home or of the popularity of Kumon centers that focus on basic skills.[3] And juku gets not a single mention in Green’s article.  Juku, commonly known as “cram school,” is the private, after-school instruction that most Japanese students receive, especially during middle school as they prepare for high school entrance exams.  Jukus are famous for focusing on basic skills, drill and practice, and memorization.[4] Japanese public schools have the luxury of off-loading these instructional burdens to jukus.

An alternative hypothesis to Green’s story is this: perhaps because of jukus Japanese teachers can take their students’ fluency with mathematical procedures for granted and focus lessons on problem solving and conceptual understanding.  American teachers, on the other hand, must teach procedural fluency or it is not taught at all.


Myth #3: American Kids Hate Math, Japanese Kids Love It

Green’s article depicts American math classrooms as boring, unhappy places and Japanese classrooms as vibrant and filled with joy.  She cites no data other than her own impressions from classroom observations and the assertions of advocates of reform-oriented instruction.  It is odd that she didn’t examine the Program for International Student Assessment (PISA) or TIMSS data on enjoyment, because both assessments routinely survey students from randomly sampled classrooms and ask whether they enjoy learning mathematics.[5]

American students consistently report enjoying math more than Japanese students.  In response to the statement, “I look forward to my mathematics lessons,” posed on PISA, the percentage of U.S. 15-year-olds agreeing in 2012 was 45.4%, compared to 33.7% in Japan.  To the prompt, “I do mathematics because I enjoy it,” the percentage agreeing was 36.6% in the U.S. and 30.8% in Japan.[6] The differences between countries are statistically significant.

TIMSS asks younger students whether they like learning math.[7] Among 8th graders, the American results are pretty grim.  Only 19% say they enjoy learning math, while 40% do not like it, more than a 2-to-1 ratio disliking the subject.  But American students are downright giddy compared to students in Japan.  Only 9% of Japanese 8th graders say they like learning math and 53% do not like it, almost a whopping 6-to-1 ratio of disliking to liking.  Fourth graders in both countries like the subject more than eighth graders, but in the U.S. the like to dislike ratio is about 2-to-1 (45% to 22%) and in Japan it’s close to even (29% to 23%).[8]
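
The ratios quoted above can be computed directly from the reported percentages; the small sketch below uses only the figures in the text.

```python
# Dislike-to-like ratios from the TIMSS 2011 attitude items quoted above.
# Tuples are (% who like learning math, % who do not like it).
groups = {
    "U.S. grade 8":  (19, 40),
    "Japan grade 8": (9, 53),
    "U.S. grade 4":  (45, 22),
    "Japan grade 4": (29, 23),
}

for name, (like, dislike) in groups.items():
    print(f"{name}: dislike-to-like ratio = {dislike / like:.1f}")
# U.S. grade 8: 2.1,  Japan grade 8: 5.9,  U.S. grade 4: 0.5,  Japan grade 4: 0.8
```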

Green’s impressions are based on observations in non-randomly selected classrooms.  They suggest that American students dislike math and Japanese students love it.  But empirical evidence collected by more scientific methods finds exactly the opposite.


Myth #4: The History of International Test Scores Supports Math Reform

Japanese and American math scores are headed in opposite directions, but the trend is not what you’d guess after reading the New York Times article.  Japan’s scores are going down, and U.S. scores are going up.  The first international assessment of math, a precursor to today’s TIMSS test, took place in 1964.  Twelve nations took part.  Japan ranked second, the U.S. eleventh (outscoring only Sweden).[9] If the scores are converted to standard deviation units (SD), Japan scored 0.9 SD higher than the U.S. (all scores in this section refer to eighth grade).

Jump ahead about five decades.  On the 2011 TIMSS, Japan still outscored the U.S., but by a smaller amount: 0.61 SD.  Most of the narrowing occurred after 1995.  From 1995 to 2011, the average scale score for Japan’s eighth graders fell 11 points (from 581 to 570) while the U.S. eighth graders gained 17 points (from 492 to 509).  Japan’s decline and the U.S.’s increase are both statistically significant.
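
For readers who want to see where the 0.61 figure comes from, the conversion is a single division.  The sketch below assumes the TIMSS scale’s standard deviation of 100 points (the scale was set to a mean of 500 and an SD of 100 when it was established), which reproduces the figure cited above.

```python
# Converting the 2011 TIMSS point gap into standard deviation units,
# assuming the TIMSS scale SD of 100 points.
japan_2011 = 570
us_2011 = 509
scale_sd = 100

gap_in_sd = (japan_2011 - us_2011) / scale_sd
print(f"2011 Japan-U.S. gap: {gap_in_sd:.2f} SD")   # 0.61 SD
```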

This pokes a huge hole in Green’s story.  She attributes Japan’s high math proficiency to teaching reforms adopted in the 1980s and 1990s, but does not acknowledge that Japan was doing quite well—and even better than today, relative to the U.S.—on international math tests in the 1960s.  If Japan now outscores the U.S. because of superior teaching, how could it possibly have performed better on math tests in the 1960s?  According to Green, the 1960s were the bad old days of Japanese math instruction focused on rote learning.  And what about the decline in Japan’s math achievement since 1995?  Is this really the nation we should look to for guidance on improving math instruction?


Myth #5: The Failure of 1990s Math Reform Was the Failure to Change Teaching

Green blames the demise of American math reform in the 1990s on the failure to adequately prepare teachers for change.  She does not even mention “the math wars,” the intense political battles that were fought in communities across the country when new math programs were introduced.  California deserves attention since Green holds up the 1985 California math framework as an example of that era’s push towards “teaching for understanding.” 

The 1985 and 1992 California math frameworks were indeed crowning achievements of progressive math reformers, but as new textbooks and programs began trickling into schools, a coalition of parents and mathematicians arose in vehement opposition.  The charge was that the frameworks—and their close cousin, the 1989 NCTM standards—contained weak mathematical content.  One reform program, Mathland, attained notoriety for replacing textbooks with kits of manipulatives and for delaying or omitting the teaching of standard algorithms.[10]

In 1999, new state standards were written by four mathematicians from Stanford University.  The standards repudiated the previous state framework and the NCTM standards.  Although math reformers in California opposed the new standards, they could not claim that the authors lacked a conceptual understanding of mathematics or viewed math as the robotic execution of procedures.  The standards focused on clearly stated content objectives for each grade level, and avoided recommending instructional strategies.  They encouraged the development of computation skills, conceptual understanding, and problem solving. 

The notion that classroom teachers’ blind devotion to procedures or memorization led to the failure of 1990s math reform in the U.S. is ahistorical.  Indeed, Green cites no historical accounts of that period to support the claim.  Moreover, the suggestion that teachers were left on their own to figure out how to change their teaching is inaccurate.  Throughout the 1990s, the NCTM standards were used as a template for the development of standards and assessments in states across the land.  Education school professors in the late 1990s overwhelmingly supported math reform.[11]

The federal government deployed powerful resources to promote math reform, and the National Science Foundation spent hundreds of millions of dollars training teachers in three different systemic reform initiatives.  The National Assessment of Educational Progress (NAEP) rewrote its math framework and redesigned its math test to reflect the NCTM standards.  In 1999, the U.S. Department of Education endorsed several reform-oriented math programs.  But a petition signed by over 200 mathematicians, educators, and scientists appeared in the Washington Post on November 18, 1999, denouncing the list of recommended programs.

Math reform in the U.S. is typically the offspring of government power wedded to education school romanticism.  David Klein has written a succinct account of twentieth century American math reforms.  E.D. Hirsch’s intellectual history of curricular reform attributes the periodic rise of progressive movements to the ideological “thought world” that dominates education schools.  Contrary to Elizabeth Green’s account, these histories conclude that math reform movements have repeatedly failed not because of stubborn teachers who cling to tired, old practices but because the reforms have been—there are no other words for it—just bad ideas.

Myth #6: The Common Core Targets Changes in Teaching 

Algorithms are procedures.  When the Common Core states that elementary students will learn standard algorithms—the conventional methods for adding, subtracting, multiplying, and dividing numbers—it is saying students will learn procedures.  Fluency with basic facts (e.g., 6 + 7 = 13, 18 – 9 = 9) is attained through memorization.  Nothing in the Common Core discourages memorization.  The primary authors of the Common Core math standards, William McCallum and Jason Zimba, have been clear that the Common Core is neutral on pedagogy, with teachers free to choose the instructional strategies—traditional or progressive or whatever—that they deem best.[12] The Common Core is about content, not pedagogy.  As the Common Core State Standards (CCSS) website adamantly proclaims, “Teachers know best about what works in the classroom. That is why these standards establish what students need to learn, but do not dictate how teachers should teach. Instead, schools and teachers decide how best to help students reach the standards.”[13]

That does not mean the Common Core won’t be used to promote constructivist pedagogy or to suppress traditional instruction.  The protests of CCSS authors that the standards are being misinterpreted may not be enough.  The danger emanates from what I’ve previously described as “dog whistles” embedded in the Common Core.[14] The CCSS math documents were crafted to incorporate ideas (CCSS advocates would say the best ideas) from both traditional and progressive perspectives in the “math wars.”  That is not only politically astute, but it also reflects the current state of research on effective mathematics instruction.  Scholarly reviews of the literature have raised serious objections to constructivism.  The title of an influential 2006 review published in Educational Psychologist says it all: “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential and Inquiry-Based Teaching.”[15] Unfortunately, the Common Core—and in particular the Standards for Mathematical Practice—contains enough shorthand terms associated with constructivist pedagogy that true believers of inquiry-based math reform can take them as license to impose their ideology on teachers.

In its one-sided support for a particular style of math instruction, Elizabeth Green’s article acts as a megaphone for these dog whistles, the misguided notions that, although seemingly innocuous to most people, are packed with meaning for partisans of inquiry-based learning.  Green’s article is based on bad science, bad history, and unfortunate myths that will lead us away from, rather than closer to, the improvement of math instruction in the United States.



[1] Green’s choice of math reforms to list—all of which, except for the Common Core, tried to change how math is taught—is bound to mislead one into thinking that math reform’s implementation problems are primarily related to instruction.

[2] James W. Stigler and James Hiebert, The Teaching Gap (New York: Free Press, 1999).

[3] A 1994 Chicago Tribune article describes a local student who happily gets after-school Kumon lessons.  Note the reference to schools of that time “forging a path toward a greater understanding of math concepts.” 

[4] Ironically, an op-ed published in August 2014 in the New York Times on hagwons, the Korean version of jukus, attributes Korea’s high PISA scores to hagwon instruction.  It is inexplicable that hagwon instruction could mean so much to Korea’s test score success in this article but juku instruction does not even warrant mention in an article on Japan’s high scores.

[5] I devoted a section of the 2006 Brown Center Report to “the happiness factor” in education.

[6] OECD. PISA 2012 Results: What Students Know and Can Do. Student Performance in Mathematics, Reading and Science. Table III.3.4f.

[7] Ina V.S. Mullis, Michael O. Martin, Pierre Foy, and Alka Arora. TIMSS 2011 International Results in Mathematics. Chapter 8, Exhibits 8.1 (p. 330) and 8.2 (p. 332).

[8] The between-country differences in liking math are statistically significant.

[9] International Study of Achievement in Mathematics:  A Comparison of Twelve Countries (Vols. 1–2), edited by T. Husén (New York: John Wiley & Sons, 1967).

[10] At its peak, Mathland’s publisher claimed that the program was the most popular in California.  Today, it is no longer published.

[11] A snippet from a 1997 survey of education professors conducted by Public Agenda: The process of learning is more important to education professors than whether or not students absorb specific knowledge. Nearly 9 in 10 (86%) say when K-12 teachers assign math or history questions, it is more important for kids to struggle with the process of finding the right answers than knowing the right answer. “We have for so many years said to kids ‘What’s 7+5?’ as if that was the important thing. The question we should be asking is ‘Give me as many questions whose answer is 12…,’” said a Chicago professor who was interviewed for this study.

[12] McCallum on CCSS: “They just say what we want students to learn.” And Jason Zimba on misinterpreting the Practice Standards to diminish traditional content:  “I sometimes worry that talking about the practice standards can be a way to avoid talking about focus and specific math content. Until we see fewer topics and a strong focus on arithmetic in elementary grades, we really aren’t seeing the standards being implemented.”

[13] http://www.corestandards.org/about-the-standards/frequently-asked-questions/#faq-2316

[14] https://www.brookings.edu/research/podcasts/2014/04/the-common-core-state-standards

[15] Paul A. Kirschner, John Sweller, and Richard E. Clark. “Why Minimal Guidance During Instruction Does Not Work: An Analysis of the Failure of Constructivist, Discovery, Problem-Based, Experiential and Inquiry-Based Teaching.” Educational Psychologist 41, no. 2 (2006): 75–86.
