Tag Archives: blended learning

Future Vision 2025 Assessments: It’s in the Practice

What future are we aiming at? This series of 6 posts, Future Vision 2025, describes some of my personal education mission milestones. These are not predictions; they are aspirations. They are framed as significant differences one could see or make by 2025. What’s noticeably different in 2025 when one examines students, parents, teachers, learning, assessment, media & society? How and when these milestones are reached is not addressed. Some milestones are indicated by the emergence of something ‘new’ (at least at robust scale), others by the fading away of something familiar and comfortable.

Assessment 2025

In the 1970s, I remember taking the Iowa Test of Basic Skills in math & English, in a few grades, for a few hours.

By 2015, a Council of the Great City Schools evaluation showed students undergo standardized testing for 20-25 hours per year, not to mention test-prep time. By the time they graduate, students have been administered about 112 exams. This is great fodder for the program evaluation work I now do: understanding what is working, how much, and for whom. It would be impossible at scale without plenty of universal standardized test data. But in the future, given digital content, the 20-25 hours per year of standardized testing can be eliminated while retaining the benefits of the information those tests used to provide. The non-learning time recovered is more than just the test hours; it includes the prep hours spent on the style of test. And most importantly, this implodes the paradigm that test scores are the purpose, and test day the culmination, of the school year’s efforts.

By 2025, “sitting tests” in March and April has been replaced by continual assessment of knowledge and ability throughout the school year, via organic student interaction with the digital learning activities themselves. These activities each week still include practicing solving many problems, aka “doing problem sets.” The information generated from the digital “Practice” IS the new “Assessment.” Indeed, summative standardized tests were essentially a review problem set, given in a huge dose at the end of the year. In 2025, each week every student’s use of digital content indicates mastery of that week’s content…or not. Gaps are identified as they occur, and are filled before moving on. You may ask, thinking back to cramming for a final: what about the retention that summative tests checked? In 2025, the digital content and practice adaptively checks retention of key prior knowledge for each individual student, intelligently spiraling problems back and forth to build fluency.
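This kind of “intelligent spiraling” resembles spaced-repetition scheduling. As a loose, hypothetical sketch of the idea – the skill names, box counts, and review gaps below are invented for illustration, not any real product’s design:

```python
from dataclasses import dataclass

# Hypothetical Leitner-style scheduler: skills a student retains get checked
# less often; skills a student misses get spiraled back into next week's set.
REVIEW_GAPS = {1: 1, 2: 2, 3: 4}  # box number -> weeks until next retention check

@dataclass
class Skill:
    name: str
    box: int = 1          # higher box = better retained, reviewed less often
    next_review: int = 1  # week number of the next spiraled check

def record_result(skill: Skill, week: int, correct: bool) -> None:
    """Promote well-retained skills; demote and re-spiral missed ones."""
    skill.box = min(skill.box + 1, 3) if correct else 1
    skill.next_review = week + REVIEW_GAPS[skill.box]

def due_for_review(skills, week):
    """Prior skills to mix into this week's practice problems."""
    return [s for s in skills if s.next_review <= week]

skills = [Skill("fractions"), Skill("place value")]
# Week 1: the student retains fractions but misses place value.
record_result(skills[0], week=1, correct=True)
record_result(skills[1], week=1, correct=False)
print([s.name for s in due_for_review(skills, week=2)])  # ['place value']
```

The missed skill resurfaces immediately, while the retained one is pushed further out – the gap is filled before moving on, without a separate test event.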

Moreover, beyond the conventional goal of “producing the right answer,” 2025’s digital device interfaces and pattern recognition assess student strategy. Tablets collect, and the backend cloud parses and interprets, student handwriting and diagrams. “Show your work” is digitized and thus comprehensively purposeful. The information gleaned evaluates methods and strategies, and yes, even productivity and speed. Insightful and actionable feedback on all of these is provided in real time to the teacher and especially to the student. Why a student “isn’t getting it” becomes detailed and transparent. In 2025 we haven’t just replaced “right answer” with “right strategy,” though; it’s a different paradigm. Mastery is not tied to one “right” strategy; it is about learning and applying strategies and methods that are productive – efficient in thought, effort, and time.

In 2025, comprehensive content breadth and mastery of all techniques – what used to be the summative test’s job – has been measured in this digital, formative way throughout the school year. Indeed, because of the continual feedback and intelligent spiraling, it has been not just measured, but refined and improved throughout the year towards fluency. There is still, however, a “final.” The benefits of a deadline to display one’s complete picture of a complex and broad topic are maintained. But because all that ongoing broad content mastery is already well known, the “final” can focus on a specific “narrow” area of interest. The final can be a performance – authentic, creative, rigorous, and very human – which shows off the learner’s ability to communicate, and to creatively transfer to different domains.

Yes, I mean that a middle schooler’s integrated math “final” in 2025 can be a performance, hard to make, challenging to deliver, but fun and maybe even beautiful to watch.

Authentic Performance


Future Vision 2025 Learning: The Actual Revolution!


Learning 2025

The Learning Revolution, it turns out, was about the Learners themselves. It was about their purpose and what they expect while learning. Yes, 21st Century tech was needed to catalyze and scale the Learning Revolution. But the revolution wasn’t about the delivery mechanisms; not about devices or Web X.0. It was about the process of learning not “feeling” the same. A student from 2015, if dropped into a 2025 learning situation, would likely be far out of their comfort zone.

What’s different for the Learner during the actual learning moments?

Learners expect that what they are learning should make sense to them. They have confidence in their ability to learn material, even if it seems incomprehensible at first. They’ve gained this confidence through personal experience of multiple successful learning breakthroughs in 21st Century learning environments. And they expect to be able to tell the difference between true, evidence-supported knowledge and unsupported conjecture or false conclusions.

Learning is consciously Learner-directed. Learners understand there are different depths of understanding. Learners decide to what depth they choose to learn any given item or area, based on their own individual purposes. Learner purposes range from immediate problem-resolution, to eager curiosity, to a desire for a professional, lifelong “ownership” of the content. Learners understand transferability and seek it: the agility to re-apply any bit of newly gained knowledge or skill to a different, non-routine scenario. Learners crave fluent and precise communication of knowledge. Learners can distinguish in themselves how well or deeply they have learned – and make adjustments, consciously trading off depth and speed.

Yet there is still a familiar, strong, formal educational structure and framework. It’s not just you 1:1 with Wikipedia, Khan, Google, Siri, Alexa, or Cortana. The support structure needed and sought varies with the learner’s desired depth, but it ensures appropriate range, breadth, comprehension, and connectedness. And it crucially provides a social mode. “School” is of course still required to lead 5-to-18-year-olds to an appropriately broad range and depth of domain literacies.

Learning has gone experiential (learning by doing). In every content area, learners are able to leverage their built-in sensory perception-action cycle. They test hypotheses, sometimes organically, sometimes consciously, via real-time, rigorously accurate feedback. The provision of this multitude of specific, experiential learning environments is where 21st Century tech has been crucial: enabling design of and access to animated simulations and informative feedback. Experiential learning environments provide concrete scenarios first, in every field and at every level. Every learning modality includes as much visually-presented information as publishers can figure out how to provide. Abstract symbolic representations follow in the wake of concrete conceptual grasp.

Learners expect deeper learning to be a lifelong, fun, and satisfying activity. The pleasure of achieving deeper, accurate understanding has become evident to “the masses.”

To many of those still hanging onto positions of power through demagoguery, confusion, lies, distractions, and fear-mongering, this gradual enlightenment of the masses is the ultimate subversive disruption.

Relativity Video

“Visualization of Einstein’s special relativity,” udiprod


Why Not: 3 Ingredients Enable Universal, Annual Digital Program Evaluations

This post originally appeared in an EdSurge guide, Measuring Efficacy in Ed Tech. Similar content, written from the perspective of sharing the accountability that teachers alone have shouldered, is in this prior post.

Curriculum-wide programs purchased by districts need to show that they work. Even products aimed mainly at efficiency or access should at minimum show that they can maintain status-quo results. Rigorous evaluations at the student level have been complex, expensive, and time-consuming. However, given a digital math or reading program that has reached a scale of 30 or more sites statewide, there is a straightforward yet rigorous evaluation method using public, grade-average proficiencies, which can be applied post-adoption. The method enables not only districts, but also publishers, to hold their programs accountable for results, in any year and for any state.

Three ingredients come together to enable this cost-effective evaluation method: annual school grade-average proficiencies in math and reading for each grade posted by each state, a program adopted across all classrooms in each using grade at each school, and digital records of grade average program usage. In my experience, school cohorts of 30 or more sites using a program across a state can be statistically evaluated. Once methods and state posted data are in place, the marginal cost and time per state-level evaluation can be as little as a few man-weeks.
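To make the arithmetic concrete, here is a toy sketch of the comparison at the heart of the method – not MIND’s or WestEd’s actual analysis, and with made-up numbers. Each value is one school’s year-over-year change in posted grade-average proficiency (in percentage points), for a cohort using a program versus comparison schools:

```python
import math
import statistics

# Hypothetical data: per-school changes in grade-average proficiency (pct pts).
cohort_changes = [4.1, 2.5, 5.0, 3.2, 1.8, 4.4, 2.9, 3.7]       # program schools
baseline_changes = [0.6, -0.8, 1.2, 0.1, -0.3, 0.9, 0.4, -0.5]  # comparison schools

def cohens_d(treated, control):
    """Standardized effect size using the pooled standard deviation."""
    n1, n2 = len(treated), len(control)
    pooled = math.sqrt(((n1 - 1) * statistics.variance(treated)
                        + (n2 - 1) * statistics.variance(control)) / (n1 + n2 - 2))
    return (statistics.fmean(treated) - statistics.fmean(control)) / pooled

print(f"cohort mean gain:   {statistics.fmean(cohort_changes):.2f} pts")
print(f"baseline mean gain: {statistics.fmean(baseline_changes):.2f} pts")
print(f"effect size (d):    {cohens_d(cohort_changes, baseline_changes):.2f}")
```

A real evaluation would use far more schools, control for demographics and prior scores, and test for statistical significance; the point is only that once grade-average proficiencies and usage records are in hand, the core computation is modest.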

A recently published WestEd study of MIND Research Institute’s ST Math, a supplemental digital math curriculum using visualization (disclosure: I am Chief Strategist for MIND Research) validates and exemplifies this method of evaluating grade-average changes longitudinally, aggregating program usage across 58 districts and 212 schools. In alignment with this methodological validation, in 2014 MIND began evaluating all new implementations of its elementary grade ST Math program in any state with 20 or more implementing grades (from grades 3, 4, and 5).

Clearly, evaluations of every program, every year have not been the prior market norm: it wasn’t possible before annual assessment and school proficiency posting requirements, and wasn’t possible before digital program usage measurements. Moreover, the education market has greatly discounted the possibility that curriculum makes all that much difference to outcomes, to the extent of not even trying to uniformly record what programs are being used by what schools. (Choosing Blindly: Instructional Materials, Teacher Effectiveness, and the Common Core by Matthew Chingos and Russ Whitehurst crisply and logically highlights this “scandalous lack of information” on usage and evaluation of instructional materials, as well as pointing out the high value of improving knowledge in this area.)

But publishers themselves are now in a position, in many cases, to aggregate their own digital program usage records from schools across districts and generate timely, rigorous, standardized evaluations of their own products, using any state’s posted grade-level assessment data. It may be too early or too risky for many publishers. Currently, even just one rigorous, student-level study can serve as sufficient proof for a product, so seeking more universal, annual product accountability is an unnecessary risk. It would be as surprising as a fitness company, had it the anonymized data, evaluating and publishing its overall average annual fitness impact on club-member cohorts, by usage. Judging by the health-club market, this level of accountability is neither a market requirement nor even dreamed of. There is no reason for those providers to take on extra accountability.

But while we may accept that member-paid health clubs are not accountable for average health improvements, we need not accept that digital content’s contribution to learning outcomes in public schools goes unaccounted for. And universal content evaluation, enabled for digital programs, can launch a continuous improvement cycle, both for content publishers and for supporting teachers.

Once rigorous program evaluations start becoming commonplace, there will be many findings which lack statistical significance, and even some outright failures. Good to know. We will find that some local district implementation choices, as evidenced by digital usage patterns, turn out to be make-or-break for any given program’s success. Where and when robust teacher and student success is found, and as confidence is built, programs and implementation expertise can also start to be baked into sustained district pedagogical strategies and professional development.


Future Vision 2025 Teachers: No more “just” a Teacher


Teachers 2025


In 2025, teaching as a profession is gaining respect.

It is gaining respect because the drumbeat of frustration with failing test scores has been stilled. The drumbeat has been stilled by clearly improved performance, both on domestic measures and in international comparisons. Key have been markedly improving NAEP scores, as well as rising U.S. rankings in the international comparisons of PISA and TIMSS.

The drumbeat has also been stilled by an overall sense of progress and improvement: the educational playing field has been made more level through a smarter policy of enlightened self-interest. For example, government goals to provide quality early childhood education experiences, regardless of any parent’s economic ability to provide them, are by now as prevalent as health and nutritional programs were in 2015.

The beat has been stilled by data showing that the floor of the “achievement gap” is rising dramatically, at scale, across the U.S. Moreover, for the upper edge of the “gap”, all is not flat. Proficient or advanced students are also gaining through deep learning which plumbs far beyond just good scores. All students are growing their talents more than ever before.

Teachers encourage their students’ thirst for deeper learning via dramatically more engaging digital learning environments. The last ten years have, finally, empirically confirmed teachers’ belief that all students can learn challenging material. The experience of teaching practice itself, with the latest digital tools, organically fills gaps in teachers’ own understanding in real time. And the goals of school itself are more tangibly clear and relevant. In the area of mathematics, for example, teachers understand that the meta-purpose of math education is to provide children with flexible, powerful raw thinking machinery for future general learning and problem-solving.

Teachers as a group are more autonomous than ever, skillfully wielding powerful digital tools to productively engage every learner. Publishers’ integrated content and tool suites have very obviously matured far beyond what any individual teacher would ever dream of putting together themselves via Google. Teacher job satisfaction is markedly up – because teachers are achieving their own goals for more of their own students: positively influencing lives.

Teacher pre-service training and professional development programs of course assume that teachers will be provided with requisite, powerful digital tools. So this training gives them the expectations and distinctions to recognize which tools are appropriate and effective for which purposes. Freshly-minted teachers become effective in the classroom more quickly. Experienced, creative teachers have more opportunities than ever before to focus on their highest level of value-add – customization, enrichment, and knowing their individual students – having traded in all their prior low-level management of classroom, content, and data.

Teacher-practitioners have earned this newfound level of respect from their students, from parents, from administrators, from the community, and, importantly, feel it deeply within. No more, “I’m just a teacher.”

If you tried to take digital content and tools away from teachers, they would go on strike.


Hold Content Accountable Too: A Scalable Method

This post was originally published on Tom VanderArk’s “VanderArk on Innovation” blog on EdWeek. It was also published on GettingSmart. The following is an edited version.

Specific programs and content, not just teachers and ‘teacher quality’, must be held accountable for student outcomes. A recent study published by WestEd shows how, given certain program conditions, cost-effective and rigorous student test score evaluations of a digitally-facilitated program can now be pursued, annually, at any time in any state.

Historically, the glare of the student-results spotlight has been so intensely focused on teachers alone that the programs and content ‘provided’ to teachers have often not even been recorded. Making the case for the vital importance of paying attention is the scathing white paper Choosing Blindly: Instructional Materials, Teacher Effectiveness, and the Common Core, from the Brown Center on Education Policy’s Matthew Chingos and Grover Whitehurst. The good news is: digital programs operated by education publishers for schools organically generate a record of where and when they were used.

Today’s diversity of choices in digital content – choices about scope, vehicles, approaches & instructional design – is far greater than the past’s teacher-selection-committee picking among “Big 3” publishers’ textbook series. This wide variety means content can no longer be viewed as a commodity; as if it were merely a choice among brands of gasoline. Some of this new content may run smoothly in your educational system, yet some may sputter and stall, while others may achieve substantially more than normal mileage or power.

It is important to take advantage of this diversity – important to search for more powerful content. The status quo has not been able to deliver results improvements in a timely manner at scale. And, spearheaded by goals embodied in the Common Core, we are targeting much deeper student understanding, while retaining last decade’s goals of demonstrably reaching all students. In this pursuit, year after year, the teachers and students stay the same. What can change are the content and programs they use – ‘programs’ including the formal training programs we provide to our teachers.

But how do you tell what works? This has been extremely challenging in the education field, due in equal measures to a likely lack of programs that do work significantly better, to the immense and hard-to-replicate variations in program use and school cultures, and to the high cost, complexity, and delay inherent in conventional rigorous, experimental evaluations.

But. There is a cost-effective, universally applicable way for a large swath of content or programs to be rigorously evaluated: do they add value versus business-as-usual? The method is straightforward, requires no pre-planning, can be applied in arrears, and is replicable across years, states, and program types. It can cover every school in a state, thus taking into account all real-world variability, and it’s seamless across districts, aggregating up to hundreds of schools.

To be evaluated via this method, the program must be:

  1. able to generate digital records of where/when/how-much it was used at a grade
  2. in a grade-level and subject (e.g. 3-8 math) that posts public grade-average test scores
  3. a full curriculum program (so that summative assessments are valid)
  4. in use at 100% of the classrooms/teachers in each grade (so that grade-average assessment numbers are valid)
  5. new to the grade (i.e. evaluating the first one or two years of use)
  6. adopted at sufficient “n” within a state (e.g. a cohort of ~25 or more school sites)

Every program, in every state, every year, that meets the above criteria can be studied, whether for the first time or to validate continuing effectiveness. The data is waiting in state and NCES research files to be used, in conjunction with publisher records of school/grade program usage. This example illustrates a quasi-experimental study to high standards of rigor.
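The six criteria amount to a mechanical screen a publisher could run over its own usage records. Here is a hypothetical sketch of that screen – the field names, the grade band, and the 25-site threshold are illustrative assumptions drawn from the criteria above, not any publisher’s actual schema:

```python
# Screen a publisher's per-site usage records against the six eligibility
# criteria. A "site" here is a dict describing one school grade's usage.
MIN_SITES = 25  # criterion 6: sufficient cohort size within a state

def eligible_cohort(sites, state):
    """Return the school sites in `state` meeting criteria 1-5,
    or [] if the resulting cohort is too small to evaluate."""
    qualifying = [
        s for s in sites
        if s["state"] == state
        and s["has_usage_records"]          # 1. digital usage was logged
        and 3 <= s["grade"] <= 8            # 2. grade posts public test scores
        and s["full_curriculum"]            # 3. summative assessments are valid
        and s["classroom_coverage"] == 1.0  # 4. all classrooms in the grade use it
        and s["years_of_use"] <= 2          # 5. program is new to the grade
    ]
    return qualifying if len(qualifying) >= MIN_SITES else []

site = dict(state="CA", grade=4, has_usage_records=True,
            full_curriculum=True, classroom_coverage=1.0, years_of_use=1)
sites = [dict(site) for _ in range(30)] + [dict(site, years_of_use=5)]
print(len(eligible_cohort(sites, "CA")))  # 30: one site fails criterion 5
```

Sites passing the screen would then feed the grade-average proficiency analysis; cohorts below the threshold simply wait for another adoption year.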

It may be too early for this level of accountability to be palatable for many programs just yet. Showing robust, positive results requires that the program itself actually be capable of generating differential efficacy. And of course some program outcomes are not measured via standardized test scores. There will be many findings of small effect sizes, many implementations which fail, and much failure to show statistical significance. External factors may confound the findings. Program publishers would need to report out failures as well as successes. But the alternative is to continue in ignorance, rely only on peer word-of-mouth recommendations, or make do with a handful of small ‘gold-standard’ studies in limited contexts.

The potential to start applying this method now for many programs exists. Annual content evaluations can become a market norm, giving content an annual seat at the accountability table alongside teachers, and stimulating competition to improve content and its implementation.



The Digital Learning Revolution is not Glossy. (Or LTE.)

First posted on Sums&Solutions blog.
Part one of a multi-part series

The true Digital Learning Revolution has not yet arrived. If you go into a classroom and see every student with an iPad on wifi, full 1:1, you are not necessarily seeing a Digital Learning Revolution. Counting what type and how glossy and how many are the digital devices is not how you tell.

Because the Digital Learning Revolution is not about digitizing conventional learning. Nor even about increasing access.

It’s not about digitized problem sets – even if they are gamified. Not even if the problems are scored instantly; nor even if the problem sequence can be varied based on responses (aka “adaptive learning”). Textbook-like problems presented digitally, no matter how entertainingly wrapped in back-story, music, interesting side-bar links, procedural hints and immersive 3-D exploration, are still just this: use previously memorized patterns and procedures to get THE right answer.

It’s not about digitized asynchronous lectures. By their nature they are not interactive. They are passive. Yes, even if talking heads and filmed overhead grease-pen scrawls have moved from VHS access in the ’70s to YouTube access 40 years later, lectures are not the Digital Learning Revolution.

And it’s especially not about the advent of the latest digital hardware vehicles. Tsunamis of digital hardware have washed into many classrooms, many times. From Apple IIes in the 1980s to iPad 2s in the teens. With interactive whiteboards somewhere in between. First off, the change in how most subjects were taught day to day was minimal. Worse, it did not become the “new normal” for students or teachers to even just use them day to day. There was no killer app. No deep penetration. No Digital Learning Revolution – yet.

Of course, revolutionizing the learning itself depends on the content IN the digital vehicles, a point powerfully made in the excellent white paper Choosing Blindly: Instructional Materials, Teacher Effectiveness, and the Common Core, from the Brown Center on Education Policy’s Chingos and Whitehurst. And if that content is just a digitization of the conventional, then no matter how glossy and retina-resolution the screen, no matter how anywhere or anytime or speedy the access, the learning will still be “conventional” learning. By the way, how well has a focus on conventional learning – a focus where the content is considered a commodity – done over the last four decades?

Note moreover, that a narrow view of digital content + student, without taking into account the teacher’s interaction with new content and a new learning process, is also not the Digital Learning Revolution. Because as Chingos and Whitehurst point out, the Digital Learning Revolution occurs at the intersection of the student, and the content, and the teacher. So new digital vehicles, even conveying radically different content (such as interactive videogames), or, rather, especially when carrying radically different content, will not achieve the Digital Learning Revolution … without a comprehensive re-tooling of teacher understandings, processes, and goals.

Beyond Hardware

What about the other major digital game-changer of the 21st Century, you say – what about digitized access? Searchable access to the world’s libraries of content? Anywhere anytime access to the cloud through cheap personal hand-held devices?

You are a participant in that access revolution. So, look around you, what is your experience? Have you experienced, or seen a Digital Learning Revolution? A communication revolution to be sure – connectivity is off the charts. And it’s certainly a revolution in “find something, cut, and paste”. A plethora of small, disconnected written nuggets delivering instant gratification for quick trivia questions. Consumption from the cloud is off the charts. But, when you are looking for depth, you have not yet seen a revolution of learning. As I blogged here re speed v. depth, and here re googling.

The digital access revolution did not bring the Digital Learning Revolution along for the ride.

Again, the key is content. And a Learning Revolution must involve three interacting components: student, content, and teacher. As I blogged here re blended learning. A Learning Revolution requires the teacher for social, evaluative, motivational, and yes, human communication. The Digital Learning Revolution will require humans. The best sort of humans: teachers who help others grow and improve.

In the next installment: well anyhow, we should expect digital content for free, right?


Is “Doogie Howser” acceleration through formal school our ideal?

An iconic TV series in the early ’90s features a teenager, Doogie Howser, who “earned a perfect score on the SAT at the age of six, completed high school in nine weeks at the age of nine, graduated from Princeton University in 1983 at age 10.” Then he slowed down and took four looong years to get through medical school – I guess he had to slow down because scripting a tween as the M.D. lead of a hospital sitcom didn’t hit the target advertiser demographic.

“Completed high school in nine weeks.” Let that sink in deep for, like, a millisecond. Did that just slip by? Did it strain credulity? Did it sound great? Are you excited that self-paced digital learning can make this accelerated ‘learning’ happen for more and more Doogies in the near future? If only we can break out of that 19th century assembly-line seat-time mentality…

Hey, earlier is just better, right? I clearly remember attending a LAAMP-sponsored community meeting in an auditorium at Occidental College in 2000, where attending teachers were excitedly being informed from the stage that new state standards meant their students would be “tested on high-level content they currently only get to in college.” I was stunned by this earlier-is-better strategy, thinking that we hadn’t really nailed most students’ learning of the good old high school stuff quite yet. By 2006 some unintended but damaging consequences of this value judgment that earlier is better were described by Tom Loveless of the Brookings Institution’s Center for Education Policy.

[In 2013 California dropped its earliest-in-the-nation requirement that all eighth graders take algebra. Here is the LAAMP final report on disappointing results from its 6 year $53M project in Los Angeles.]

Here is my rant: stop this one-dimensional “faster is better” talk. No more, “…so with self-paced online schooling Suzie could get her competency-based GED at age 12!” Just like Doogie, our hero and model for success.

Having started UCLA at age 15 myself, I feel quite the laggard compared to this ideal. But then I also keenly experienced the social downside risk to acceleration – no sports, no girlfriends, no prom. Too young + too academically successful = stick to your geek math club friends. Seriously folks, scrambling the social institution that is high school for 14-18 year olds needs to be a consideration when you hear “…well when they finish the 3rd grade software, just advance ’em right to 4th and then if they can to 5th.” And one does hear it. What’s that you say? What about the non-digital aspects of learning? “Nah, it doesn’t matter where the teacher or the rest of their peers are! Speed is a goal! Get with it!”

Yes it’s exciting that some types of digital content enable quicker uptake. I’m saying quicker uptake as a primary goal is dangerous.

Here’s an analogy about learning in another domain – a more physical domain I think we intuitively understand better. Suppose you were taking guitar lessons. But rather than once a week in a guitar shop with a teacher, and near-daily practice, you were watching lessons on YouTube. And say there was a 20-hour playlist of 40 half-hour video lessons. Wow! That is easily watchable in a week! What’s that you say? Sure, I passed the online ‘competency’ quizzes and final. It was easy, man! Bam! Done! I just learned guitar! High school in 9 weeks!

Really? What was the purpose of “learning” guitar? The learner’s purpose, your purpose? Were you supposed to be able to, like, really play it? Have your fingers fret clean notes with automaticity? Feel motivated to play; enjoy playing? Experience playing in a band and making music? ‘Own’ playing it as a lifelong skill? Learn to read sheet music so that you could play new songs and more easily learn some other instrument too? Learn more about music in general? Understand key guitar features and why they work how they do? Launch you on a quest to conquer more and more challenging guitar playing, way beyond your training?

Or was the purpose more just a ‘badge’ to display? “Got it in one week!” “Got through 3rd grade math online in one month!” Hey, I got a good score, a good grade, what more do you want?

What gets lost in the conversation about acceleration is GOING DEEP in learning. There is another dimension to accelerate besides calendar time: accelerate diving deep. Use digital content to dive deeper. Don’t promote 12 year old college students as an ideal. Please, the social costs are too high. If you are testing for competency, test DEEP. No badge until you successfully perform a duo gig and earn tips at the local coffee shop.

Take extra time you find to dive deeper into 3rd grade math. As MIND Research Institute co-founder Dr. Matthew Peterson said, if a 3rd grader gets quickly through 3rd grade digital content then, “let’s get them a Ph.D. in 3rd grade math! And then a Nobel Prize in 4th grade math!” The vertical dimension, depth of learning, is bottomless in every content area.  For example, fractions concepts can open up a deeper exploration of rationals and irrationals.

Are you satisfied with how ‘deep’ the learning currently demanded to beat the system is? How do you feel about the current pacing through content? Is what you want from digital tools just to rip through inch-deep learning much faster?

I’m here to say, personally as well as pedagogically, the future can’t be about getting American history dates and people and quadratic formulae crammed into some 9 week high school frenzy. It’s not about 14 year olds in Ph.D. programs. Let’s use digital tools in order to get more powerful learning. Aim to go deep, not fast.


Digital 1-2-3s Make Math Sense for Preschool Kids

Every parent can see that birth to 5 is a whirlwind of learning. Many parents strive to include informal learning activities like the ABC’s.  But you may be surprised to learn that no aspect of early education is more important to a child’s academic future than mathematics. Research from Greg Duncan at the University of California, Irvine shows that early math skills in 5 year-olds are the single greatest predictor of later achievement.

So at a recent early childhood education conference in Chicago, I was excited to see policy leaders, researchers, corporations and foundations rallying around the importance of supporting our youngest learners, including in math.  Their vision for accomplishing it … well, I found that less exciting, as the only presentation focused on digital content for 4 year olds was my own.

Understandably. The vast majority of digital content “out there” for kids is of low educational quality. I enjoy Sponge-Bob, if not Disney princesses, as much as anyone. But having a 4 year old gesture her way through random edutainment apps is hardly the “transformation” of learning you’ve been hoping for. And yet, digital content is ideal for rapid scale-up, and every year we “wait” for a non-digital solution to reach scale, we miss out on yet another cohort of 4 million more 4 year-olds in the U.S.

So how do you judge digital program quality? First, look for a program that is radically different. Second, look for early, consistent, rigorous results. At the K-5 level, there is a digital, neuroscience-based math program, MIND Research Institute’s ST Math, that has shown potential for radical transformation of learning. ST Math has successfully doubled and tripled annual growth in math proficiency for Grade 2-5 students on state tests, as it presents math concepts as a full in-school curriculum of visual, language-free puzzles built from virtual onscreen manipulatives.

If there exists a proven math program that teaches math visually, without requiring language proficiency or even reading skills, then what better age to apply it to than pre-readers – especially ones who don’t necessarily speak any English! ST Math is currently being piloted in select teacher-led, site-based Pre-K classrooms in Los Angeles. Imagine a teacher working with a 4-year old digital native, who is using a tablet to get literally “hands-on” with number sense.

If we want to level the education playing field before traditional schooling even starts, and lay a solid foundation across the nation for lifelong success in STEM fields, we need to start young and be bold. Digital, unconventional, deeper-learning tools like ST Math may be the transformation you’ve been looking for.

A version of this blog was published in the September issue of District Administration.


“There’s no achievement gap in videogames” – Quentin Lawson

I don’t want to learn how to play most videogames. By videogames I mean involved console games like Call of Duty, or MLB The Show. As a 50-something, that may not be surprising as I’m past the “shoot-em-up” or “race-car” ages (well maybe not real race cars). But truthfully, I would enjoy being able to give my teenage sons a decent playing partner. The thing is, I know there would be a long and challenging learning curve. Because the learning is discovery/exploratory. And it’s not trivial or short, there is a lot to pick up. For me, it would be both mentally challenging and take significant amounts of time. And I already feel I have enough mental challenge-per-week to sink a battleship. I’m not looking for more. And I can’t afford to have significantly more time, or energy, sucked out of my days.

My point is: learning a console videogame like these is not easy, it takes focus, it takes effort, it takes mental agility, it takes perseverance, it takes time. Sorta like learning anything complex.

And here’s the point of this blog post: it would be ludicrous for anyone to propose that there’s an achievement gap for children to learn videogames. It would also be ludicrous to say that there is an engagement issue – at least for males with the games I mentioned. And it would be beyond incredible to say that kids have a lack of perseverance at solving the game’s problem scenarios.

I attribute this observation to Quentin Lawson, Executive Director of NABSE, the National Alliance of Black School Educators. I was demo’ing for him how all math concepts could be introduced as visual puzzles on a computer, which students could interact with and animate to understand how to, for example, add fractions. He saw how this was like a videogame and, with young Black male students in mind, noted in an offhand way that “there’s no achievement gap in videogames,” so this could level the playing field. I have been quoting Quentin ever since.

Because how could anyone imagine that success for any child in learning any videogame could depend on:

  • their parents’ education level
  • their parents’ wealth
  • their neighborhood
  • the quality of their friends
  • how much their parents could “tutor” them on the game
  • their own success in school so far
  • the language they speak at home
  • or any other “subgroup” factor

It would be ludicrous; at the least I can’t imagine any such attributes being used by anyone as excuses why children couldn’t win at the game.

So, suppose productively engaging with challenging core content, like algebra, can be made into a videogame-like experience: deep and mathematically rigorous, driven by learner interaction and experiential learning, starting easy and gradually scaffolded, and developing problem-solving, perseverance, and confidence in the ability to “win.” Then teachers can build upon, cement and interconnect that mode of learning into deeper understanding and skills, without concern for any digital content achievement gap.
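The “starts easy and is gradually scaffolded” property is the part game designers engineer deliberately. As a minimal sketch (the step size, bounds, and trial sequence are entirely illustrative, not drawn from any real game or learning product), the core loop might look like this: difficulty ramps up after each success and eases off after each failure, keeping every learner in a winnable-but-challenging zone.

```python
# Hypothetical sketch of videogame-style difficulty scaffolding:
# move up one level on success, down one on failure, within fixed bounds.
def next_level(level, solved, max_level=10):
    step = 1 if solved else -1
    return min(max_level, max(1, level + step))

# A learner who solves three puzzles, misses one, then solves again
# climbs to level 4, dips back, and recovers:
level = 1
for solved in [True, True, True, False, True]:
    level = next_level(level, solved)
print(level)  # -> 4
```

The design choice worth noting is that failure never ejects the player; it only adjusts the challenge, which is one plausible mechanism behind the perseverance these games reliably produce.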


Instructional Software: Just a Cleanup Activity after Ineffective Teaching??

How instructional software is positioned in the minds of many educators and others is outdated, and misleading.

The conventional model of instructional software (I am thinking of a “core” subject like mathematics) began, naturally, as a digital extension of conventional teaching. So: practice problems, read onscreen, with multiple choice answers, instant grading and perhaps with some gamification of scores. But if you didn’t “need” the “extra teaching,” you didn’t “need” the instructional software. So it was optional: for some students, some of the time.

There were two logical models for deciding for whom and when to use instructional software: use it for remedial students, or give everyone periodic diagnostic tests (eventually also online) to determine which skills each student needed more practice in, then assign instructional software just for those specific skills. The metaphor is “filling the holes in the Swiss cheese.”

Implicit in those models was that some students did not need any instructional software: those who learned sufficiently from the standard, no-software-involved, teaching. The instructional software served a role of “cleaning-up” whatever gaps were left unfilled or incomplete after the normal teaching. By observation then, the regular teaching on its own was ineffective in achieving the learning goal for some students some of the time. (The reason it was ineffective could include many things outside of the teacher’s control, of course.)

Despite the recent emergence of “blended learning” as a desirable future model of combining digital content with teacher & chalkboard learning, at present the preponderance of students still use zero instructional software in their studies. And frequently, even in 2013’s “state-of-the-art” blended learning examples, the role of the digital content is still essentially more practice, like a digitization of homework reps, albeit with intelligent sourcing of problems and with instant scoring.

Similarly, in many 2013 RFP’s the instructional software is specified for RTI tier 2 interventions for struggling students only. This means that not only do the RTI tier 1 “OK” students not need any digital component in the normal course of their learning, it’s not even seen as a way to prevent “OK” students from slipping into tier 2.

All of the above makes sense if you see the role of instructional software as just enabling “more.” More of what teachers ideally, technically “could,” but in the real world can’t, deliver because of the constraints of scarce time, and thus the impossibility of differentiating teaching to productively engage each learner and suit the pace of each learner. So the instructional software provides more time for those students and situations that just didn’t get enough time from conventional teaching.

But consider: more time for students has been tried, and tried, and doesn’t get game-changing results. By game-changing I mean ensuring that every student understands and gains content mastery and confidence in a subject – like math. If more of the same did work for challenging situations, then the mere, but very expensive, application of additional teacher time (double-block, repeated courses, pull-outs) would be shown to “fix” the problem. Which in math, certainly, it doesn’t — not at a scale and cost which can be universal and sustained (i.e. beyond a one-on-one tutorial). So instructional software’s role to give “more of the same” is not a fix.

This pigeon-holing of instructional software as for “clean up” is too limiting. If that’s your model, you wouldn’t even think of buying — or making — instructional software that has fundamental and vital value for every student. Fundamental and vital is how we view… textbooks. Lawsuits are filed and won to ensure that every student has a textbook. When the day comes that a lawsuit is filed, fought and won to ensure that every student has effective instructional software we will know that the pigeon-holing is over.

Here’s an analogy of this positioning problem to the world of exercise and health. It’s as if instructional software is seen as physical therapy, rather than as physical conditioning. It’s as if it’s just for those who are in some way injured, or chronically weak, rather than for everyone who wants to get in shape. You get diagnosed for your injury, perhaps a shoulder tweak, you do your therapy reps with rubber bands, and one happy day you’re healthy enough to quit doing the P.T., forever.

The future, additional role of instructional software is as a vital component of the learning environment, for every student and teacher. It’s like joining and then diligently using a gym’s facilities and moreover its trainers, motivation and social aspects. Properly designed and trained and supported, it’s a gym program that gets everyone more fit. No one gets to “test out”. No one gets to work “just on their weaknesses.”

And it’s not implicit that “ineffective teaching” is the raison d’etre for instructional software. This is turned completely inside-out: instructional software, in the hands of a teacher, makes teaching and learning more powerful and effective generally, throughout the school year: differentiating to reach every student (including the strongest), engaging and motivating each student at an appropriate level and pace, and providing multiple opportunities for the teacher to assess, diagnose, and consolidate student learning.
