Tag Archives: ST Math

Why Not: 3 Ingredients Enable Universal, Annual Digital Program Evaluations

This post originally appeared in an EdSurge guide, Measuring Efficacy in Ed Tech. Similar content, framed around sharing the accountability that teachers alone have shouldered, appears in this prior post.

Curriculum-wide programs purchased by districts need to show that they work. Even products aimed mainly at efficiency or access should at minimum show that they can maintain status quo results. Rigorous evaluations at the student level have been complex, expensive and time-consuming. However, given a digital math or reading program that has reached a scale of 30 or more sites statewide, there is a straightforward yet rigorous evaluation method using public, grade-average proficiencies, which can be applied post-adoption. The method enables not only districts, but also publishers, to hold their programs accountable for results, in any year and for any state.

Three ingredients come together to enable this cost-effective evaluation method: annual school grade-average proficiencies in math and reading for each grade, posted by each state; a program adopted across all classrooms in each participating grade at each school; and digital records of grade-average program usage. In my experience, school cohorts of 30 or more sites using a program across a state can be statistically evaluated. Once methods and state-posted data are in place, the marginal cost and time per state-level evaluation can be as little as a few man-weeks.
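To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of aggregation and comparison this method implies. The file and column names are invented for illustration, and a real evaluation (such as the WestEd study described below) uses considerably more rigorous longitudinal, matched-comparison models; treat this as a back-of-the-envelope outline only.

```python
import pandas as pd
from scipy import stats

# Hypothetical input (file and column names invented for this sketch):
# one row per school-grade, joining the state's posted grade-average math
# proficiency rates with the publisher's own grade-average usage records.
df = pd.read_csv("state_grade_proficiency_with_usage.csv")
# Expected columns: school_id, grade, pct_proficient_pre, pct_proficient_post,
#                   avg_program_usage_hours

# Outcome: change in grade-average proficiency, post-adoption year vs. baseline.
df["proficiency_change"] = df["pct_proficient_post"] - df["pct_proficient_pre"]

# Split the statewide cohort into implementing and non-implementing school-grades.
implementers = df[df["avg_program_usage_hours"] > 0]
comparison = df[df["avg_program_usage_hours"] == 0]

print(f"Implementing grades: n={len(implementers)}, "
      f"mean change = {implementers['proficiency_change'].mean():.1f} pts")
print(f"Comparison grades:   n={len(comparison)}, "
      f"mean change = {comparison['proficiency_change'].mean():.1f} pts")

# A simple difference test; a real evaluation would also control for baseline
# proficiency and demographics, and needs a cohort of roughly 30+ sites.
t, p = stats.ttest_ind(implementers["proficiency_change"],
                       comparison["proficiency_change"], equal_var=False)
print(f"Welch t-test: t = {t:.2f}, p = {p:.3f}")

# Within implementers, relate grade-average usage to outcome change
# (implementation fidelity as a moderator).
r = implementers["avg_program_usage_hours"].corr(implementers["proficiency_change"])
print(f"Usage vs. proficiency change correlation: r = {r:.2f}")
```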

A recently published WestEd study of MIND Research Institute’s ST Math, a supplemental digital math curriculum using visualization (disclosure: I am Chief Strategist for MIND Research), validates and exemplifies this method of evaluating grade-average changes longitudinally, aggregating program usage across 58 districts and 212 schools. In alignment with this methodological validation, in 2014 MIND began evaluating all new implementations of its elementary-grade ST Math program in any state with 20 or more implementing grades (from grades 3, 4, and 5).

Clearly, evaluating every program, every year has not been the market norm: it wasn’t possible before annual assessment and school-proficiency posting requirements, and it wasn’t possible before digital measurement of program usage. Moreover, the education market has greatly discounted the possibility that curriculum makes much difference to outcomes, to the extent of not even trying to uniformly record which programs are being used by which schools. (Choosing Blindly: Instructional Materials, Teacher Effectiveness, and the Common Core by Matthew Chingos and Russ Whitehurst crisply and logically highlights this “scandalous lack of information” on usage and evaluation of instructional materials, as well as pointing out the high value of improving knowledge in this area.)

But publishers themselves are now in a position, in many cases, to aggregate their own digital program usage records from schools across districts, and to generate timely, rigorous, standardized evaluations of their own products using any state’s posted grade-level assessment data. It may be too early or too risky for many publishers. Currently, even just one rigorous, student-level study can serve as sufficient proof for a product, so seeking more universal, annual product accountability looks like an unnecessary risk. It would be as surprising as if a fitness company, given anonymized data, were to start evaluating and publishing its overall average annual fitness impact on club-member cohorts, by usage. Judging by the health club market, that level of accountability is neither a market requirement nor even dreamed of. There is no reason for those providers to take on extra accountability.

But while we may accept that member-paid health clubs are not accountable for average health improvements, we need not accept that digital content’s contribution to learning outcomes in public schools goes unaccounted for. And universal content evaluation, enabled for digital programs, can launch a continuous improvement cycle, both for content publishers and for supporting teachers.

Once rigorous program evaluations start becoming commonplace, there will be many findings which lack statistical significance, and even some outright failures. Good to know. We will find that some local district implementation choices, as evidenced by digital usage patterns, turn out to be make-or-break for any given program’s success. Where and when robust teacher and student success is found, and as confidence is built, programs and implementation expertise can also start to be baked into sustained district pedagogical strategies and professional development.


Is “Doogie Howser” acceleration through formal school our ideal?

An iconic TV series in the early ’90s features a teenager, Doogie Howser, who “earned a perfect score on the SAT at the age of six, completed high school in nine weeks at the age of nine, graduated from Princeton University in 1983 at age 10.” Then he slowed down and took four looong years to get through medical school – I guess he had to slow down because scripting a tween as the M.D. lead of a hospital sitcom didn’t hit the target advertiser demographic.

“Completed high school in nine weeks.” Let that sink in deep for, like, a millisecond. Did that just slip by? Did it strain credulity? Did it sound great? Are you excited that self-paced digital learning can make this accelerated ‘learning’ happen for more and more Doogies in the near future? If only we could break out of that 19th-century assembly-line seat-time mentality…

Hey, earlier is just better, right? I clearly remember attending a LAAMP-sponsored community meeting at an auditorium at Occidental College in 2000, where attending teachers were excitedly being informed from the stage that new state standards meant their students would be “tested on high level content they currently only get to in college.” I was stunned by this earlier-is-better strategy, thinking that we hadn’t really nailed most students’ learning of the good old high school stuff quite yet. By 2006, some unintended but damaging consequences of this value judgment that earlier is better were described by Tom Loveless of the Brookings Institution’s Brown Center on Education Policy.

[In 2013 California dropped its earliest-in-the-nation requirement that all eighth graders take algebra. Here is the LAAMP final report on disappointing results from its six-year, $53M project in Los Angeles.]

Here is my rant: stop this one-dimensional, faster-is-better talk. No more, “…so with self-paced online schooling Suzie could get her competency-based GED at age 12!” Just like Doogie, our hero and model for success.

Having started UCLA at age 15 myself, I feel quite the laggard compared to this ideal. But then I also keenly experienced the social downside risk to acceleration – no sports, no girlfriends, no prom. Too young + too academically successful = stick to your geek math club friends. Seriously folks, scrambling the social institution that is high school for 14-18 year olds needs to be a consideration when you hear “…well when they finish the 3rd grade software, just advance ’em right to 4th and then if they can to 5th.” And one does hear it. What’s that you say? What about the non-digital aspects of learning? “Nah, it doesn’t matter where the teacher or the rest of their peers are! Speed is a goal! Get with it!”

Yes, it’s exciting that some types of digital content enable quicker uptake. I’m saying quicker uptake as a primary goal is dangerous.

Here’s an analogy about learning in another domain, a more physical domain I think we intuitively understand better. Suppose you were taking guitar lessons. But rather than meeting once a week in a guitar shop with a teacher, and practicing near daily, you were watching lessons on YouTube. And say there was a 20-hour playlist of 40 half-hour video lessons. Wow! That is easily watchable in a week! What’s that you say? Sure, I passed the online ‘competency’ quizzes and final. It was easy, man! Bam! Done! I just learned guitar! High school in 9 weeks!

Really? What was the purpose of “learning” guitar? The learner’s purpose, your purpose? Were you supposed to be able to, like, really play it? Have your fingers fret clean notes with automaticity? Feel motivated to play; enjoy playing? Experience playing in a band and making music? ‘Own’ playing it as a lifelong skill? Learn to read sheet music so that you could play new songs and more easily learn some other instrument too? Learn more about music in general? Understand key guitar features and why they work how they do? Launch you on a quest to conquer more and more challenging guitar playing, way beyond your training?

Or was the purpose more just a ‘badge’ to display? “Got it in one week!” “Got through 3rd grade math online in one month!” Hey, I got a good score, a good grade, what more do you want?

What gets lost in the conversation about acceleration is GOING DEEP in learning. There is another dimension to accelerate besides calendar time: accelerate diving deep. Use digital content to dive deeper. Don’t promote 12 year old college students as an ideal. Please, the social costs are too high. If you are testing for competency, test DEEP. No badge until you successfully perform a duo gig and earn tips at the local coffee shop.

Take the extra time you find to dive deeper into 3rd grade math. As MIND Research Institute co-founder Dr. Matthew Peterson said, if a 3rd grader gets quickly through 3rd grade digital content then, “let’s get them a Ph.D. in 3rd grade math! And then a Nobel Prize in 4th grade math!” The vertical dimension, depth of learning, is bottomless in every content area. For example, fractions concepts can open up a deeper exploration of rationals and irrationals.

Are you satisfied with how ‘deep’ the learning currently demanded to beat the system is? How do you feel about the current pacing through content? Is what you want from digital tools to rip through inch-deep learning much faster?

I’m here to say, personally as well as pedagogically, the future can’t be about getting American history dates and people and quadratic formulae crammed into some 9 week high school frenzy. It’s not about 14 year olds in Ph.D. programs. Let’s use digital tools in order to get more powerful learning. Aim to go deep, not fast.


Digital 1-2-3s Make Math Sense for Preschool Kids

Every parent can see that birth to 5 is a whirlwind of learning. Many parents strive to include informal learning activities like the ABCs. But you may be surprised to learn that no aspect of early education is more important to a child’s academic future than mathematics. Research from Greg Duncan at the University of California, Irvine shows that early math skills in 5-year-olds are the single greatest predictor of later achievement.

So at a recent early childhood education conference in Chicago, I was excited to see policy leaders, researchers, corporations and foundations rallying around the importance of supporting our youngest learners, including in math. Their vision for accomplishing it … well, I found that less exciting, as the only presentation focused on digital content for 4-year-olds was my own.

Understandably. The vast majority of digital content “out there” for kids is of low educational quality. I enjoy SpongeBob, if not Disney princesses, as much as anyone. But having a 4-year-old gesture her way through random edutainment apps is hardly the “transformation” of learning you’ve been hoping for. And yet, digital content is ideal for rapid scale-up, and every year we “wait” for a non-digital solution to reach scale, we miss out on yet another cohort of 4 million more 4-year-olds in the U.S.

So how do you judge digital program quality? First, look for a program that is radically different. Second, look for early, consistent, rigorous results. At the K-5 level, one digital, neuroscience-based math program, MIND Research Institute’s ST Math, has shown potential for radical transformation of learning. ST Math has successfully doubled and tripled annual growth in math proficiency for Grade 2-5 students on state tests, as it presents math concepts as a full in-school curriculum of visual, language-free puzzles built from virtual onscreen manipulatives.

If there exists a proven math program that teaches math visually, without requiring language proficiency or even reading skills, then what better group to apply it to than pre-readers – especially ones who don’t necessarily speak any English! ST Math is currently being piloted in select teacher-led, site-based Pre-K classrooms in Los Angeles. Imagine a teacher working with a 4-year-old digital native who is using a tablet to get literally “hands-on” with number sense.

If we want to level the education playing field before traditional schooling even starts, and lay a solid foundation across the nation for lifelong success in STEM fields, we need to start young and be bold. Digital, unconventional, deeper-learning tools like ST Math may be the transformation you’ve been looking for.

A version of this blog was published in the September issue of District Administration.


Transforming the Education Market: Look to Non-Profits

This piece was originally published on the Huffington Post on May 3.

The convergence over the last 20 years of advancements in computer science, cognitive science and neuroscience has made game-changing educational programs a possibility. In the area of mathematics education, inventors can now see a path to give teachers powerful yet easy-to-use, radically different digital tools to get all students proficient in math and even algebra. The breakthroughs go beyond math for math’s sake, beyond proficiency on tests, to the fundamental purpose of math education for all students: every person possessing the powerful thinking, analytic and problem-solving skills that mathematical literacy promises. To put a number on it, a 2011 Organisation for Economic Co-operation and Development study estimated that even modestly increased student math skills would add over $40 trillion to the U.S. economy over current students’ lifetimes.

But the K-12 education market is a poster child for unhealthy markets. Products are developed to politicized market specifications, which are far below their potential. Digital solutions from the past 30 years have not worked, so expectations are low. Curriculum is treated as a commodity, and not even evaluated. Not only do most educators not seriously expect teacher tools or content to make a game-changing difference, but K-12 purchasers are looking for approaches similar to what they experienced in school. The market even lacks understanding of the indicators of program quality and effectiveness necessary to meet the market spec of standardized test success, so simple or secondary features hold sway instead.

In this market, I believe the leaders achieving radically higher educational goals are mission-driven, not-for-profit organizations. Non-profits, almost by definition, need support beyond current market forces. Visionary and savvy business social investments, though relatively small change for $1B businesses, can be and should be vital support for continuous invention and prove-out.

Many business leaders are looking to “help education” through their corporate social responsibility strategies. Unfortunately, based on my experience inside a non-profit education researcher and publisher, what most businesspeople will be looking for is just marginal improvement. There is an unconscious acceptance of the familiar, of “inch deep” learning rather than breakthrough deeper learning. There is a focus on increasing speed and reducing cost. There is a lack of appreciation for the vital role of the teacher.

So, for those visionary businesses and foundations interested in accelerating a quantum leap forward for all teachers and students in K-12 education outcomes (their future workforce/customers), I’d like to suggest that the following considerations are crucial:

1)  Look for non-profits pursuing radically different approaches. Digitizing existing content and approaches is just more of the same, even if ported onto a glossy-screened touch-tablet. People are actually inventing new learning tools, content and processes — like inventing powered flight. Real transformation is going to use teaching and learning models that seem and look and feel radically different from how you learned. Search for that.

2) Look for truly scalable approaches. These three questions will help you evaluate a program’s scalability:

i) Are they applicable to all teachers and students, from gifted to struggling to English learners?

ii) Are they aimed at the heart of the problem right now: at all schools and teachers and training processes as-is? Think Los Angeles, New York, Chicago, Houston and D.C. public schools.

Don’t just focus on and invest in the fringes of the school market, which will take years, if ever, to reach the majority of students in the community and the nation. Each year, 4 million more children are passing through an untransformed pipeline.

iii) Can they scale up fast and without limit, driven ultimately by non-philanthropic funds? That is, do the economics of the solution enable eventual demand and resources from the main market, i.e. government public schools, to adopt, scale and sustain?

While some non-profits aimed at breakthrough transformation will be in the early stages of research and invention, you can also look for others with solid evidence that their program delivers results, that there will be market demand, and that it is economically scalable.

There are examples of businesses following all of these principles in their social investment in education. Corporate foundations, CEOs and chairs of Cisco, Broadcom, Emulex, Microsemi, PwC, Bank of America, Chevron and others have come together to support a breakthrough math program using instructional software to tap students’ visual reasoning. You would not recognize math taught this way; it is that different.

The lowest performing elementary schools in Orange County were provided grants to launch this math program. Over 80% of those schools are now participating. A 45,000-student district serving predominantly economically disadvantaged English learners, Santa Ana Unified, went district-wide and closed its “achievement gap” with the California state average. And the targeted schools at a county-wide level are greatly outpacing similar schools in Academic Performance Index growth. The proven results have attracted district funding at over 1,000 additional sites. Over 500,000 students are being served, and scale-up is economical.

For any company interested in helping education, keep an eye out for the pioneering non-profits that fit this profile. Your social investment will then be poised to go beyond “help,” to transform and scale to millions.


“There’s no achievement gap in videogames” – Quentin Lawson

I don’t want to learn how to play most videogames. By videogames I am thinking of involved console games like Call of Duty or MLB The Show. That may not be surprising for a 50-something past the “shoot-em-up” or “race-car” ages (well, maybe not real race cars). But truthfully, I would enjoy being able to give my teenage sons a decent playing partner. The thing is, I know there would be a long and challenging learning curve, because the learning is discovery/exploratory. And it’s not trivial or short; there is a lot to pick up. For me, it would be both mentally challenging and take significant amounts of time. And I already feel I have enough mental challenge-per-week to sink a battleship. I’m not looking for more. And I can’t afford to have significantly more time, or energy, sucked out of my days.

My point is: learning a console videogame like these is not easy, it takes focus, it takes effort, it takes mental agility, it takes perseverance, it takes time. Sorta like learning anything complex.

And here’s the point of this blog post: it would be ludicrous for anyone to propose that there’s an achievement gap for children to learn videogames. It would also be ludicrous to say that there is an engagement issue – at least for males with the games I mentioned. And it would be beyond incredible to say that kids have a lack of perseverance at solving the game’s problem scenarios.

I attribute this observation to Quentin Lawson, Executive Director of NABSE, the National Alliance of Black School Educators. I was demo’ing for him how all math concepts could be introduced as visual puzzles on a computer, which could be interacted with and animated to understand how to, for example, add fractions. He saw how this was like a videogame and, with young Black male students in mind, noted in an offhand way that “there’s no achievement gap in videogames,” so this could level the playing field. I have been quoting Quentin ever since.

Because how could anyone imagine that success for any child in learning any videogame could depend on:

  • their parents’ education level
  • their parents’ wealth
  • their neighborhood
  • the quality of their friends
  • how much their parents could “tutor” them on the game
  • their own success in school so far
  • the language they speak at home
  • or any other “subgroup” factor

It would be ludicrous; at the least I can’t imagine any such attributes being used by anyone as excuses why children couldn’t win at the game.

So, suppose productively engaging with challenging core content like algebra can be made into a videogame-like experience: deep and mathematically rigorous, requiring learner interaction and experiential learning, starting easy and gradually scaffolded, developing problem-solving, perseverance, and confidence in the ability to “win.” Then teachers can build upon, cement and interconnect that mode of learning into deeper understanding and skills, without concern for any digital-content achievement gap.


Instructional Software: Just a Cleanup Activity after Ineffective Teaching??

How instructional software is positioned in the minds of many educators and others is outdated, and misleading.

The conventional model of instructional software (I am thinking of a “core” subject like mathematics) began, naturally, as a digital extension of conventional teaching. So: practice problems, read onscreen, with multiple-choice answers, instant grading and perhaps some gamification of scores. But if you didn’t “need” the “extra teaching,” you didn’t “need” the instructional software. So it was optional: for some students, some of the time.

There were two logical models for deciding for whom and when to use instructional software: use it for remedial students, or give everyone periodic diagnostic tests (eventually also online) to determine which skills each student needed more practice in, then assign instructional software just for those specific skills. The metaphor is “filling the holes in the Swiss cheese.”

Implicit in those models was that some students did not need any instructional software: those who learned sufficiently from the standard, no-software-involved teaching. The instructional software served a role of “cleaning up” whatever gaps were left unfilled or incomplete after the normal teaching. By observation, then, the regular teaching on its own was ineffective in achieving the learning goal for some students some of the time. (The reason it was ineffective could include many things outside of the teacher’s control, of course.)

Despite the recent emergence of “blended learning” as a desirable future model of combining digital content with teacher & chalkboard learning, at present the preponderance of students still use zero instructional software in their studies. And frequently, even in 2013’s “state-of-the-art” blended learning examples, the role of the digital content is still essentially more practice, like a digitization of homework reps, albeit with intelligent sourcing of problems and with instant scoring.

Similarly, in many 2013 RFPs the instructional software is specified for RTI tier 2 interventions for struggling students only. This means not only that the RTI tier 1 “OK” students are assumed not to need any digital component in the normal course of their learning, but that software isn’t even seen as a way to prevent “OK” students from slipping into tier 2.

All of the above makes sense if you see the role of instructional software as just enabling “more.” More of what teachers ideally, technically “could,” but in the real world can’t, deliver because of the constraints of scarce time, and thus the impossibility of differentiating teaching to productively engage each learner and suit each learner’s pace. So the instructional software provides more time for the students and situations that just didn’t get enough time from conventional teaching.

But consider: more time for students has been tried, and tried, and doesn’t get game-changing results. By game-changing I mean ensuring that every student understands and gains content mastery and confidence in a subject – like math. If more of the same did work for challenging situations, then the mere, but very expensive, application of additional teacher time (double-block, repeated courses, pull-outs) would be shown to “fix” the problem. Which in math, certainly, it doesn’t — not at a scale and cost which can be universal and sustained (i.e. beyond a one-on-one tutorial). So instructional software’s role to give “more of the same” is not a fix.

This pigeon-holing of instructional software as for “clean up” is too limiting. If that’s your model, you wouldn’t even think of buying — or making — instructional software that has fundamental and vital value for every student. Fundamental and vital is how we view… textbooks. Lawsuits are filed and won to ensure that every student has a textbook. When the day comes that a lawsuit is filed, fought and won to ensure that every student has effective instructional software we will know that the pigeon-holing is over.

Here’s an analogy of this positioning problem to the world of exercise and health. It’s as if instructional software is seen as physical therapy, rather than as physical conditioning. It’s as if it’s just for those who are in some way injured, or chronically weak, rather than for everyone who wants to get in shape. You get diagnosed for your injury, perhaps a shoulder tweak, you do your therapy reps with rubber bands, and one happy day you’re healthy enough to quit doing the P.T., forever.

The future, additional role of instructional software is as a vital component of the learning environment, for every student and teacher. It’s like joining and then diligently using a gym’s facilities and moreover its trainers, motivation and social aspects. Properly designed and trained and supported, it’s a gym program that gets everyone more fit. No one gets to “test out”. No one gets to work “just on their weaknesses.”

And it’s not implicit that “ineffective teaching” is the raison d’être for instructional software. This is turned completely inside-out: instructional software, in the hands of a teacher, makes teaching and learning more powerful and effective generally, throughout the school year: differentiating to reach every student (including the strongest), engaging and motivating each student at an appropriate level and pace, and providing multiple opportunities for the teacher to assess, diagnose, and consolidate student learning.


Assessments are the Ultimate Game-Changer

“Real change in teaching and learning requires real change in assessment” – Justin Reich.

This arcsparks post is 100% inspired by EdWeek’s Justin Reich’s excellent post “Everything Hinges on Assessment“, which powerfully backs up, and shows examples in support of, his quote above. My post below was also left as a comment on Justin’s blog.

In other words one might say, to profoundly change the game we need to change the scoring. Then, how people play the game to win has to change too.

And the only scoring in K-12 that is frequent enough, direct, clear, important to all the players, and timely enough to matter to anyone at all is tests, whether in-course or end-of-course.

I’ve been with a non-profit working for 11 years to provide a digital tool for a changed game that very few, if any, are seriously playing yet.

Specifically: math instructional software for blended learning, to ensure that students deeply understand math concepts. In the absence of hard-edged assessment that tests for the highest-order conceptual student outcomes we’re spec’ing into our instructional design, we rely on the soft evaluations made by educators on a personal level – i.e., on specific visionary educators, whether at the district, site, or class level, seriously insisting on more than is tested for.

Assuming that our program and the educator are actually successful together, this is still an unstable situation, susceptible to specific educators leaving the district or school and being replaced by others with de facto lower goals for student learning. To put it another way, sustained learning above a floor level set by the assessments is an unstable exercise in defying gravity.

At my shop, our experience is that a good program will show results on any level of quality assessment of the content, and we are perfectly happy to see results show up (and they do) on fill-in-the-bubble assessments. Nonetheless our strategy is to design for max conceptual understanding, regardless of the assessment.

Quality in instructional materials and programs is extremely expensive – much more expensive than the market thinks it is. Economics alone would dictate designing as cheaply as possible, to the lowest quality required to meet the assessment spec: designed for the test. Fortunately, my shop is a non-profit, so our bottom line is learning, not earning, and we avoid this temptation.

Bottom line: I couldn’t agree more with the points in this post; dramatically raising the assessment’s requirements on student learning will be a profound, irresistible, stable game-changer for educators, students, parents, and all publishers.


What’s that in your Blender? 5 Key Factors of Digital Content

This is the first of, I hope, many guest blog posts for my friends at Getting Smart. About me: I’m observing the education market conversation from the perspective of a non-profit digital content publisher with a focus on math. I’ve had the luxury for the past 10 years of laser focus on how to make what’s now known as “blended learning” work in just one subject area. While we’ve grown to serve over 450,000 students and 14,000 teachers, we’ve drilled down to a pretty deep perspective I would like to share.

What’s your main purpose for blended learning? Is it improving learning resources efficiency/cost and time/access? Or is it improving the learning itself? Much of the attention and excitement about blended learning is on the former, with time-and-motion descriptions of where the teacher, the student, and the computer exist during the day.

The addition of digital content to the mix of place and time is a rich area of innovation and practice. In 2012, the Innosight Institute’s Heather Staker and Michael B. Horn revised their pioneering taxonomy of blended learning, classifying blended learning implementations. The attributes include modality (digital or face-to-face), location (lab, class or home), time (fixed schedule/pace vs. fluid), and content (fixed or customized). The 11 derived types of blended learning are school-centric: labels are exemplified by specific school examples. Explanatory diagrams show physical layouts of computers, teachers and students. And from these diagrams the potential for raising efficiency in use of learning resources — clock time, student time, and teacher time — is readily apparent.

Yet as a digital content publisher, my organization is focused on the other potential for blended learning: dramatically improving the learning itself. I mean more comprehension and sense-making, better transfer of knowledge and higher retention of new information.

This requires us to add another perspective on what’s being blended, specifically on instructional interaction as described by Matthew M. Chingos and Russ Whitehurst in their recent report from the Brown Center on Education Policy. They succinctly remind us that where the rubber hits the road in learning is the student’s direct interaction with the teacher and/or instructional materials. The instructional materials used by the teacher greatly influence the teacher/student interaction. Here is where the digital ingredient in the blender can be a game-changer when it comes to the quality of learning. Curriculum is not a commodity; the quality and efficacy of curriculum are highly variable. Chingos and Whitehurst dramatically point out the “scandalous lack of information” at all user levels, as if the instructional materials used were irrelevant.

So, let me briefly introduce five key factors to consider for blended learning, from this learning-centric perspective of instructional interaction between teacher, student, and digital content.

Note that instructional interaction doesn’t “care” where or when it is. It’s about “what” it is. One modality is the student interacting directly with digital content (i.e. without the teacher). For web-delivered digital content, which I will assume, clock and location drop out of the picture – the interaction is the same whether the access is during or after school, in classroom or lab or home or library. Another modality is the student-to-teacher interaction, which could be either a conversation face-to-face during a scheduled time, or an ad hoc conversation over Skype. The point is the students and teachers are engaged in a conversation around learning, not the time or place.

Factor 1: By its 1:1 nature, student interaction with digital content is self-paced. Even essentially passive interactions, like studying a digital textbook on a tablet or viewing a video on YouTube, can be more valuable because they can be paused and reviewed by the student. Active interactions like games add a further self-pacing dimension: correctly solving a problem in order to proceed in the game.

Factor 2: Digital content can be much more than conventional-practice-on-a-computer of previously introduced procedures. It can be a way to introduce and explain concepts, whether in advance of, in parallel with, or even after they are introduced by a teacher. Yes, from my perspective of seeking better learning, there is always also a teacher ingredient in the mix. As Bob Wise, President of the Alliance for Excellent Education, said at the SIIA Ed Tech Business Forum last November, to get better learning, “High Tech requires High Teach.”

Factor 3: Digital content can be highly interactive. Of course interactive means more than clicking a “next scene” arrow. Interaction means the student needs to respond to some problem-solving scenario, then see the results of her response. For example, that could be solving a math puzzle. Given appropriate strategy and quality of the digital content, this is a “minds-on” interaction about the academic subject matter, not just gameplay.

Factor 4: Digital content can provide immediate feedback. The quality of that feedback can vary widely. At the low end, but still a quantum improvement over text/paper/pencil, is the standard “red x” wrong or “green checkmark” right. At the high end, digital content can be used to provide immediate instructive feedback – an explicit explanation of why a solution was wrong, or why it was right. This instructive feedback facilitates a student’s learning (whether confirming a solution or showing what-to-correct) from each posed answer.
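To make the low-end/high-end contrast concrete, here is a tiny hypothetical sketch (not any real product's logic) contrasting bare right/wrong feedback with instructive feedback for a fraction-addition item; the misconception it checks for is just one illustrative example.

```python
from fractions import Fraction

def bare_feedback(answer: Fraction, correct: Fraction) -> str:
    # Low end: the standard red-X / green-checkmark response.
    return "correct" if answer == correct else "incorrect"

def instructive_feedback(a: Fraction, b: Fraction, answer: Fraction) -> str:
    # High end: explain *why* the answer is right or wrong.
    correct = a + b
    if answer == correct:
        return (f"Correct: {a} + {b} = {correct}, using a common "
                f"denominator of {correct.denominator}.")
    # A common misconception: adding numerators and denominators separately.
    naive = Fraction(a.numerator + b.numerator, a.denominator + b.denominator)
    if answer == naive:
        return ("Not quite: you added the numerators and the denominators. "
                f"Rewrite both fractions over a common denominator first; the sum is {correct}.")
    return f"Not quite: the sum is {correct}. Try rewriting both fractions over a common denominator."

# Example: 1/2 + 1/3, with the classic wrong answer 2/5.
student_answer = Fraction(2, 5)
print(bare_feedback(student_answer, Fraction(1, 2) + Fraction(1, 3)))
print(instructive_feedback(Fraction(1, 2), Fraction(1, 3), student_answer))
```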

Factor 5: Digital content can provide an adaptive or custom sequence of learning objects for each individual. This can range from a beginning of year pre-assessment determining a grade-level syllabus, to real-time on-the-fly adjustment up or down of difficulty levels as needed, to longer term pattern recognition of student misconceptions, assigning specific corrective content.
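As a toy illustration of the middle of that range, real-time difficulty adjustment, here is a minimal sketch of one simple rule (not any specific product's algorithm): step the level up or down based on a rolling window of the student's recent answers.

```python
from collections import deque

class DifficultyAdjuster:
    """Toy adaptive rule: adjust difficulty from recent accuracy."""

    def __init__(self, levels: int = 5, window: int = 5):
        self.level = 1                       # start at the easiest level
        self.max_level = levels
        self.recent = deque(maxlen=window)   # rolling record of correct/incorrect

    def record(self, correct: bool) -> int:
        self.recent.append(correct)
        if len(self.recent) == self.recent.maxlen:
            accuracy = sum(self.recent) / len(self.recent)
            if accuracy >= 0.8 and self.level < self.max_level:
                self.level += 1              # mastering this level: move up
                self.recent.clear()
            elif accuracy <= 0.4 and self.level > 1:
                self.level -= 1              # struggling: move down, rebuild confidence
                self.recent.clear()
        return self.level

# Example: a student who answers five in a row correctly advances a level.
adjuster = DifficultyAdjuster()
for answer_was_correct in [True, True, True, True, True]:
    level = adjuster.record(answer_was_correct)
print(f"Current difficulty level: {level}")
```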

Finally, consider this recently released IES report about math problem-solving. Aimed at curriculum developers as well as educators, its recommendations emphasize the teacher’s role in promoting deeper learning. I agree. A vital ingredient in the digital blender, to raise learning quality, is the teacher. The same content students are using 1:1 can inform and be used by the teacher, at the point of instructional interaction. The potential impact is enormous. As Chingos and Whitehurst say, “We can expect both theoretically and based on existing research that instructional materials either reduce the variability in performance across teachers, raise the overall performance level of the entire distribution of teachers, or both.”

Along with all the excitement and buzz around blended learning, and to go beyond learning efficiencies, keep an eye out for the game-changing aspects of digital content, for both student and teacher use, that achieve deeper learning.
