Written by: Mitchell Robinson
Primary Source: Keep Talking, February 2, 2016
There are three aspects of American life and culture about which I am unapologetically–and nearly equally–passionate: education, politics, and sports.
While I am sure these three areas share many similarities, it’s the differences among them that have attracted my enthusiasm. Education, specifically music education, has been my career path for over 30 years, and I’m deeply committed to making sure that everyone has access to music learning and music making opportunities throughout their lives. I also appreciate the intricacies of policy and politics, especially as they intersect with education, and in recent years have devoted an increasing amount of my professional energy to pursuing more equitable policy development in education and music education. And I love the excitement and teamwork of sports, whether as a player, coach, or fan.
One commonality that binds these three arenas, however, has become increasingly clear over the past several years: the preoccupation with “prediction” over “reflection” in each domain.
As an avid sports radio listener, I was struck by the failure of virtually every “expert” on my daily sports talk radio shows to accurately call the outcome of a recent football playoff game. In the run-up to the AFC Championship game, hardly a single prognosticator on ESPN’s slate of talk shows chose the Denver Broncos to defeat the New England Patriots, basing their picks on reams and reams of data, “measurables,” and statistics. The ages of the quarterbacks, their yards per attempt, defensive ratings and rankings, and head-to-head matchups that–allegedly–favored the dominant Pats were discussed endlessly. These predictions went on for the entire week leading up to the game, with 11 of the 13 “experts” repeating the “accepted” dogma surrounding the matchup, leaving virtually no room for dissenting viewpoints or competing analyses. By the day of the game, it seemed like a waste of time to actually play it–the outcome was assured. Except that no one told the Broncos they couldn’t win–which, of course, they did, 20-18, in a thriller decided by a failed two-point conversion attempt in the final seconds.
Tuning in to the Iowa caucuses the other night provided an eerily similar experience. The vast majority of polls leading up to Monday night showed that Donald Trump’s lead in Iowa was substantial and growing, and that he would win convincingly once the votes were tallied in the Republican race. On the Democratic side, the political “experts” were predicting that Hillary Clinton’s political “machine” and “strong ground game” would combine to produce a solid victory over the “disorganized” Bernie Sanders, who at one point in the campaign trailed Ms. Clinton by as many as 50 percentage points in the polls. Several million dollars have been spent on these polls, and the results have been bolstered by telephone marketing and door-to-door campaigns. As we have been lectured to by CNN, Fox, and MSNBC ad nauseam throughout the campaign “season,” modern political campaigns are “sophisticated, data driven operations” designed by well-paid campaign consultants and professional political “operatives.” And these operations don’t come cheap: Jeb Bush alone has raised over $100 million to date–which earned him exactly one delegate and less than 3% of the vote in Iowa–and some estimates suggest that between $5 billion and $7 billion will be spent on the 2016 campaign when all is said and done.
By Monday night the narrative in each race had been substantially altered, with Ted Cruz earning a solid victory over Mr. Trump for the Republicans, and Sec. Clinton and Sen. Sanders finishing in a virtual dead heat in the Democratic caucuses. How could so many pundits be so wrong? How could so many polls have been so inaccurate?
In each case, considerable time, money, and resources were expended in predicting the outcomes of events that are extraordinarily complicated, complex, and dependent on human actors, actions, and interactions. Try as we might, we know that our chances of accurately predicting which team will beat the other, or which candidate will emerge victorious from an eight-person field, are not much better than chance. Yet we persist in our belief that “we know better,” and that our powers of observation, or intuition, will somehow allow us to divine the results–and we even bet large sums of money on our hunches.
The analogy in education is our seemingly exclusive reliance on lesson plans as both the documentation of teaching and the evidence used to evaluate it. The typical lesson plan includes information on the proposed purpose of the lesson, a pedagogical sequence of steps to accomplish the lesson’s goals, and perhaps some assessment strategies to help determine whether the students actually learned what was taught. All of this would be well and good if teaching were a predictable set of actions and responses–but as anyone who has spent a day in a classroom can tell you, it’s not.
I often explain lesson planning and teaching to my students like this: The process of lesson planning is like playing tennis against a wall. You hit the ball against the wall, and can accurately predict the return path of the ball. You can practice your forehand, then your backhand, secure in the knowledge that the ball will come off the wall predictably and consistently, stroke after stroke.
Teaching, on the other hand, is like playing tennis against a wily opponent. You hit the ball across the net, expecting a nice, easy return that you can volley back to your opponent–but your opponent has other ideas, and slices the ball down the line, whistling past your outstretched racket for a winner. There’s nothing predictable or consistent about playing tennis this way–and there are no do-overs, or practice volleys, either. (Teaching middle school, by the way, is even more challenging–it’s like playing tennis against 30 opponents–each armed with a different piece of sporting equipment, and playing by different rules. You hit the ball across the net, one of your opponents grabs the ball out of the air, throws it to another opponent, and then both of them jump over the net to your side, steal the rest of the tennis balls and throw them over the fence and out of the court.)
[Disclaimer: none of this is to diminish the importance of preparation in teaching. Thorough preparation is critically important for effective teaching, and its importance cannot be overstated. Preparing for teaching, however, is a fundamentally different operation from lesson planning. Preparation is about deep knowledge, fluency of pedagogical strategies, and the development of a teaching “vocabulary,” or a pedagogical repertoire of “teaching moves,” that enables teachers to respond “in the moment” and to improvise in response to learners’ unpredictable actions and behaviors. Lesson planning is about the organization and delivery of content, and is informed by the teacher’s deep and thorough preparation for instruction.]
If we really want to know about a teacher’s effectiveness in the classroom, a better strategy than requiring that lesson plans be written and submitted before the actual teaching episode would be to ask the teacher to submit revised versions of her or his plans *after* the actual lesson was taught and observed. These revised plans would include updated information about what actually occurred during the lesson, any adjustments the teacher made to the intended sequence of teaching steps from the original plan, and a detailed reflection on what went well, what didn’t go as planned, and what the teacher would do differently if she or he could teach the lesson again.
Rather than placing the emphasis on “predicting” how the imagined lesson will go, our focus should be on how the actual lesson went, engaging teachers in critical reflection on their practice. Our current focus on “predictive” lesson plans is nothing more than an elaborate game we play with our evaluators, forecasting what “might” happen under hypothetical conditions. Shifting to a more reflective stance would allow teachers to work in tandem with their administrators to refine their teaching practice through reflective thinking, collaborative inquiry, and problem solving.
Lesson plans written before the lesson should be used by the teacher as an organizational guide–providing reminders about material to be covered during the lesson, specific strategies to be used, and repertoire selections–not as evaluative documentation to be used in determining a teacher’s rating or ranking.
This focus on prediction over reflection is why our current teacher evaluation system is so irrevocably broken:
- It’s why the use of VAM (Value-Added Measures) cannot contribute valid or reliable data to a teacher’s effectiveness rating–because VAM is a predictive model based on comparing a set of actual student test scores against a hypothetical group of scores.
- It’s why relationships between teachers and administrators are currently more about accountability than the improvement of teaching.
- It’s why so many veteran teachers are leaving the classroom–because edTPA, NOTE, and teacher evaluation systems are designed to “catch bad teachers,” not to help teachers improve their practice and continuously grow as practitioners.
- It’s why enrollment in teacher preparation programs is on the decline, and fewer young persons want to enter the profession–because they see teachers devalued and treated like educational “drones,” rather than as self-actualized professionals dedicated to a career in the classroom.
If we want the results in education to be better than those in sports and politics, it may be time to shift our focus from predicting outcomes to reflecting on our practice. One is a game of chance; the other is guaranteed to succeed.
I know which approach I’m betting on.