Anti-reformists expend tons of energy and rhetoric attacking Everyday Math, Investigations, and Connected Math. If only they spent the same time and energy supporting the programs and texts they believe are better for students. Instead, these anti-reformists post blogs, websites, and petitions to rid districts of NSF programs without regard for what replaces them, as if any replacement will do so long as it's not an NSF program. It is just as amazing how these critics consistently and conveniently ignore the facts that:
All three programs have been extensively studied, and all have been shown to have a positive impact on student achievement – particularly in the domains of conceptual understanding and problem solving; and
None of the four major brother-sister traditional programs (Houghton-Mifflin/McDougal-Littell, Harcourt/Holt, Scott Foresman/Prentice Hall, and McGraw Hill/Glencoe) have been subject to any kind of similar evaluation.
Robert Reys (Curators' Professor of Mathematics Education at the University of Missouri) writes:
It has been suggested that standards-based mathematics curricula don't have a research base of student mathematical performance to support their use. Clearly, much research remains to be done and reported. However, the statement implies that traditional programs, which still make up the overwhelming majority of the programs in use, have a sterling record of success in promoting mathematics learning. Moreover, it ignores decades of poor performance documented by the National Assessment of Educational Progress (NAEP) and by three international assessments, the latest being the Third International Mathematics and Science Study (TIMSS). Furthermore, the lack of knowledge and understanding of mathematics discussed by Liping Ma is the by-product of mathematics programs that were in place long before standards-based mathematics curricula existed.
People who demand research to document the effectiveness of reform curricula are either unaware of the history of student performance using the traditional curricula or choose to ignore more than 30 years of widely reported results. In fact, to assume that traditional mathematics programs have shown themselves to be successful is, according to James Hiebert, "ignoring the largest database we have." Hiebert goes on to say, "The evidence indicates that the traditional curriculum and instructional methods in the United States are not serving our students well."
In arguing against the use of standards-based, NSF-supported curricula, some have alleged that children were used as guinea pigs for untried programs. This argument has strong emotional appeal. Who wants a child to be used as a guinea pig? Critics have advocated "stricter controls to prevent schools from using untested programs without the informed consent of parents and students." This claim is ironic on at least two counts. First, the traditional mathematics curriculum supported by the critics has not been tested for effectiveness, unless international assessments are used as the measure, in which case these curricula fall far short. Second, there has been unprecedented field-testing of these NSF-supported curricula over the past decade. They have been piloted, revised, field-tested in real classrooms, and revised again prior to their commercial availability. Data continue to be systematically collected, and feedback from the field is reflected in later editions. Research reporting on student achievement in a variety of grades is beginning to emerge. To criticize these curricula because of the philosophy they embody or the mathematical content of the materials is one thing. To suggest that they have not been extensively field-tested with teachers and students is blatantly untrue and irresponsible.
In fact, with very few exceptions, most widely used mathematics textbooks are not carefully researched and field-tested with children before they are sold to school districts. I base this assertion on my personal experience as well as on years of interacting with textbook authors. My own experience in co-authoring a textbook series was that, while every effort was made to create a product of the highest quality, publication deadlines made it impossible to do extensive field-testing. Lessons were written and edited and occasionally focus-tested with teachers. However, because of the market-driven nature of the textbook business, it was not possible to have teachers use the materials for several years and revise them based on student performance and feedback from teachers.
Calls for testing and documenting the impact of mathematics curricula will surely continue, and they should. However, the bar should be set at the same height for all publishers. If the developers of standards-based mathematics curricula are required to document the impact of those materials on student performance, then the same criterion should be applied to all companies producing mathematics textbooks for the same market. Of course, the prospects for such a requirement are dim.