Monthly Archives: March 2015

Future Vision 2025 Teachers: No more “just” a Teacher

What future are we aiming at? This series of 6 posts, Future Vision 2025, describes some of my personal education mission milestones. These are not predictions; they are aspirational. They are framed as significant differences one could see or make by 2025. What's noticeably different in 2025 when one examines students, parents, teachers, learning, assessment, media & society? How and when these milestones are reached is not addressed. Some milestones are indicated by the emergence of something 'new' (at least at robust scale), others by the fading away of something familiar and comfortable.

Teachers 2025


In 2025, teaching as a profession is gaining respect.

It is gaining respect because the drumbeat of frustration with failing test scores has been stilled. The drumbeat has been stilled by clearly improved performance, both on domestic measures and in international comparisons. Key indicators have been markedly improving NAEP scores, as well as rising U.S. rankings in the international comparisons of PISA and TIMSS.

The drumbeat has also been stilled by an overall sense of progress and improvement: the educational playing field has been made more level through a smarter policy of enlightened self-interest. For example, government goals to provide quality early childhood education experiences, regardless of any parent’s economic ability to provide them, are by now as prevalent as health and nutritional programs were in 2015.

The beat has been stilled by data showing that the floor of the "achievement gap" is rising dramatically, at scale, across the U.S. Moreover, at the upper edge of the "gap", all is not flat: proficient and advanced students are also gaining, through deep learning that reaches far beyond merely good scores. All students are growing their talents more than ever before.

Teachers encourage their students' thirst for deeper learning via dramatically more engaging digital learning environments. The last ten years have, finally, empirically confirmed teachers' belief that all students can learn challenging material. The experience of teaching practice itself, with the latest digital tools, organically fills gaps in teachers' own understanding in real time. And the goals of school itself are more tangibly clear and relevant. In the area of mathematics, for example, teachers understand that the meta-purpose of math education is to provide children with flexible, powerful raw thinking machinery for future general learning and problem-solving.

Teachers as a group are more autonomous than ever, skillfully wielding powerful digital tools to productively engage every learner. Publishers' integrated content-and-tool suites have matured far beyond what any individual teacher would ever dream of assembling via Google. Teacher job satisfaction is markedly up, because teachers are achieving their own goals for more of their own students: positively influencing lives.

Teacher pre-service training and professional development programs of course assume that teachers will be provided with the requisite, powerful digital tools. So this training gives them the expectations and distinctions to recognize which tools are appropriate and effective for which purposes. Freshly-minted teachers are more quickly effective in the classroom. Experienced, creative teachers have more opportunities than ever before to focus on their highest level of value-add, via customization, enrichment, and knowing their individual students, having traded in all their prior low-level management of classroom, content, and data.

Teacher-practitioners have earned this newfound level of respect from their students, from parents, from administrators, from the community, and, importantly, feel it deeply within. No more, “I’m just a teacher.”

If you tried to take digital content and tools away from teachers, they would go on strike.


Hold Content Accountable Too: a scalable Method

This post was originally published on Tom VanderArk's "VanderArk on Innovation" blog on EdWeek. It was also published on GettingSmart. The following is an edited version.

Specific programs and content, not just teachers and ‘teacher quality’, must be held accountable for student outcomes. A recent study published by WestEd shows how, given certain program conditions, cost-effective and rigorous student test score evaluations of a digitally-facilitated program can now be pursued, annually, at any time in any state.

Historically, the glare of the student-results spotlight has been so intensely focused on teachers alone that the programs and content 'provided' to teachers have often not even been recorded. Making the case for the vital importance of paying attention is a scathing white paper, Choosing Blindly: Instructional Materials, Teacher Effectiveness, and the Common Core, from the Brown Center on Education Policy's Matthew Chingos and Grover Whitehurst. The good news is: digital programs operated by education publishers for schools organically generate a record of where and when they were used.

Today’s diversity of choices in digital content – choices about scope, vehicles, approaches & instructional design – is far greater than the past’s teacher-selection-committee picking among “Big 3” publishers’ textbook series. This wide variety means content can no longer be viewed as a commodity; as if it were merely a choice among brands of gasoline. Some of this new content may run smoothly in your educational system, yet some may sputter and stall, while others may achieve substantially more than normal mileage or power.

It is important to take advantage of this diversity, important to search for more powerful content. The status quo has not been able to deliver improved results at scale in a timely manner. And spearheaded by goals embodied in the Common Core, we are targeting much deeper student understanding, while retaining last decade's goals of demonstrably reaching all students. In this pursuit, year after year, the teachers and students stay the same. What can change are the content and programs they use; 'programs' including the formal training programs we provide to our teachers.

But how do you tell what works? This has been extremely challenging in the education field, due in equal measure to a likely lack of programs that do work significantly better, to the immense and hard-to-replicate variations in program use and school cultures, and to the high cost, complexity, and delay inherent in conventional rigorous, experimental evaluations.

But. There is a cost-effective, universally applicable way for a large swath of content or programs to be rigorously evaluated: do they add value versus business-as-usual? The method is straightforward, requires no pre-planning, can be applied in arrears, and is replicable across years, states, and program types. It can cover every school in a state, thus taking into account all real-world variability, and it is seamless across districts, aggregating up to hundreds of schools.

To be evaluated via this method, the program must be:

  1. able to generate digital records of where/when/how-much it was used at a grade
  2. in a grade-level and subject (e.g. 3-8 math) that posts public grade-average test scores
  3. a full curriculum program (so that summative assessments are valid)
  4. in use at 100% of the classrooms/teachers in each grade (so that grade-average assessment numbers are valid)
  5. new to the grade (i.e. evaluating the first one or two years of use)
  6. adopted at sufficient “n” within a state (e.g. a cohort of ~25 or more school sites)

Every program, in every state, every year, that meets the above criteria can be studied, whether for the first time or to validate continuing effectiveness. The data is waiting in state and NCES research files to be used, in conjunction with publisher records of school/grade program usage. This example illustrates a quasi-experimental study to high standards of rigor.

It may be too early for this level of accountability to be palatable for many programs just yet. Showing robust, positive results requires that the program itself actually be capable of producing differential efficacy. And of course some program outcomes are not measured via standardized test scores. There will be many findings of small effect sizes, many implementations that fail, and much failure to show statistical significance. External factors may confound the findings. Program publishers would need to report out failures as well as successes. But the alternative is to continue in ignorance, rely only on peer word-of-mouth recommendations, or make do with a handful of small 'gold-standard' studies in limited contexts.

The potential to start applying this method now for many programs exists. Annual content evaluations can become a market norm, giving content an annual seat at the accountability table alongside teachers, and stimulating competition to improve content and its implementation.

