Tag Archives: instructional software

Future Vision 2025 Assessments: It’s in the Practice

What future are we aiming at? This series of 6 posts, Future Vision 2025, describes some of my personal education mission milestones. These are not predictions, they are aspirational. They are framed as significant differences one could see or make by 2025. What’s noticeably different in 2025 when one examines students, parents, teachers, learning, assessment, media & society? How and when these milestones are reached are not addressed. Some milestones are indicated by the emergence of something ‘new’ (at least at robust scale), others by the fading away of something familiar and comfortable.

Assessment 2025

In the 1970s, I remember taking the Iowa Test of Basic Skills in math and English, in a few grades, for a few hours.

By 2015 a Council of the Great City Schools evaluation showed students undergo standardized testing for 20-25 hours per year, not to mention test-prep time. By the time they graduate, students have been administered about 112 exams. Now, this is great fodder for the program evaluation work I do, understanding what is working, how much, and for whom. It would be impossible at scale without plenty of universal standardized test data. But in the future, given digital content, the 20-25 hours per year of standardized testing can be eliminated while retaining the benefits of the information those tests used to provide. This reduction of non-learning time covers more than just the test hours; it includes eliminating the prep hours for that style of test. And most importantly, it implodes the paradigm that test scores are the purpose, and test day the culmination, of the school year’s efforts.

By 2025, “sitting tests” in March and April has been replaced by continual assessment of knowledge and ability throughout the school year, via organic student interaction with the digital learning activities themselves. These activities each week still include practicing solving many problems, aka “doing problem sets.” The information generated from the digital “Practice” IS the new “Assessment.” Indeed, summative standardized tests were essentially a review problem set, given in a huge dose at the end of the year. In 2025, each week every student’s use of digital content indicates mastery of that week’s content…or not. Gaps are identified as they occur, and are filled before moving on. You may ask, thinking back to cramming for a final: what about the retention that summative tests checked? In 2025, the digital content and practice adaptively check retention of key prior knowledge for each individual student, intelligently spiraling problems back and forth to build fluency.
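The “intelligent spiraling” described above can be pictured as a simple spaced-review scheduler. The sketch below is purely illustrative – every name and the interval-doubling rule are my own assumptions, not any real product’s algorithm:

```python
import datetime
from dataclasses import dataclass

@dataclass
class SkillRecord:
    """One student's history with one skill (hypothetical model)."""
    skill: str
    last_practiced: datetime.date
    interval_days: int = 1  # current review interval

def update_after_practice(rec: SkillRecord, correct: bool,
                          today: datetime.date) -> None:
    """After a problem-set item, widen or reset the skill's review interval."""
    rec.last_practiced = today
    if correct:
        rec.interval_days = min(rec.interval_days * 2, 60)  # spiral outward
    else:
        rec.interval_days = 1  # gap identified: spiral back immediately

def due_for_review(records: list[SkillRecord],
                   today: datetime.date) -> list[str]:
    """Skills whose review window has elapsed get mixed into this week's set."""
    return [r.skill for r in records
            if (today - r.last_practiced).days >= r.interval_days]
```

A skill answered correctly gets checked again at widening intervals (the retention check), while a miss pulls it straight back into the current week’s practice (the gap-filling).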

Moreover, beyond the conventional goal of “producing the right answer,” 2025’s digital device interfaces and pattern recognition assess student strategy. Tablets collect, and the backend cloud parses and interprets, student handwriting and diagrams. “Show your work” is digitized and thus comprehensively purposeful. The information gleaned evaluates methods and strategies, and yes, even productivity and speed. Insightful and actionable feedback on all of these is provided in real time to the teacher and especially to the student. Why a student “isn’t getting it” becomes detailed and transparent. In 2025 we haven’t just swapped “right answer” for “right strategy,” though; it’s a different paradigm. Mastery is not tied to one “right” strategy; it is about learning and applying strategies and methods that are productive – efficient in thought, effort, and time.

In 2025 comprehensive content breadth and mastery of all techniques, what used to be the summative test’s job, has been measured in this digital, formative way throughout the school year. Indeed, because of the continual feedback and intelligent spiraling, it has been not just measured, but refined and improved throughout the year towards fluency. There is still, however, a “final.” The benefits of a deadline to display one’s complete picture of a complex and broad topic are maintained. But because all that ongoing broad content mastery is already well known, the “final” can focus on a specific “narrow” area of interest. The final can be a performance – authentic, creative, rigorous, and very human – which shows off the learner’s ability to communicate, and to creatively transfer to different domains.

Yes, I mean that a middle schooler’s integrated math “final” in 2025 can be a performance, hard to make, challenging to deliver, but fun and maybe even beautiful to watch.

Authentic Performance


Future Vision 2025 Learning: The Actual Revolution!

What future are we aiming at? This series of 6 posts, Future Vision 2025, describes some of my personal education mission milestones. These are not predictions, they are aspirational. They are framed as significant differences one could see or make by 2025. What’s noticeably different in 2025 when one examines students, parents, teachers, learning, assessment, media & society? How and when these milestones are reached are not addressed. Some milestones are indicated by the emergence of something ‘new’ (at least at robust scale), others by the fading away of something familiar and comfortable.

Learning 2025

The Learning Revolution, it turns out, was about the Learners themselves. It was about their purpose and what they expect while learning. Yes, 21st Century tech was needed to catalyze and scale the Learning Revolution. But the revolution wasn’t about the delivery mechanisms; not about devices or Web X.0. It was about the process of learning not “feeling” the same. A student from 2015, if dropped into a 2025 learning situation, would likely be far out of their comfort zone.

What’s different for the Learner during the actual learning moments?

Learners expect that what they are learning should make sense to them. They have confidence in their ability to learn material, even if it seems incomprehensible at first. They’ve gained this confidence through personal experience of multiple successful learning breakthroughs in 21st Century learning environments. They also expect to be able to tell the difference between true, evidence-supported knowledge and unsupported conjecture or false conclusions.

Learning is consciously Learner-directed. Learners understand there are different depths of understanding. Learners decide to what depth they choose to learn any given item or area, based on their own individual purposes. Learner purposes range from immediate problem-resolution, to eager curiosity, to a desire for a professional, life-long “ownership” of the content. Learners understand transferability and seek it: the agility to re-apply any bit of newly gained knowledge or skill to a different, non-routine scenario. Learners crave fluent and precise communication of knowledge. Learners can distinguish in themselves how well or deeply they have learned – and make adjustments, consciously trading off depth and speed.

Yet there is still a familiar, strong, formal educational structure and framework. It’s not just you 1:1 with Wikipedia, Khan, Google, Siri, Alexa, or Cortana. The support structure needed and sought varies with the learner’s desired depth, but it ensures appropriate range, breadth, comprehension, and connectedness. And, crucially, it provides a social mode. “School” is of course still required to lead 5- to 18-year-olds to an appropriately broad range and depth of domain literacies.

Learning has gone experiential (learning by doing). In every content area, learners are able to leverage their built-in sensory perception-action cycle. They test hypotheses, sometimes organically, sometimes consciously, via real-time, rigorously accurate feedback. The provision of this multitude of specific, experiential learning environments is where 21st Century tech has been crucial: enabling design of and access to animated simulations and informative feedback. Experiential learning environments provide concrete scenarios first, in every field and at every level. Every learning modality includes as much visually-presented information as publishers can figure out how to provide. Abstract symbolic representations follow in the wake of concrete conceptual grasp.

Learners expect deeper learning to be a lifelong, fun, and satisfying activity. The pleasure of achieving deeper, accurate understanding has become evident to “the masses.”

To many of those still hanging onto positions of power through demagoguery, confusion, lies, distractions, and fear-mongering, this gradual enlightenment of the masses is the ultimate subversive disruption.

Relativity Video

“Visualization of Einstein’s special relativity,” udiprod


Why Not: 3 Ingredients Enable Universal, Annual Digital Program Evaluations

This post originally appeared in an EdSurge guide, Measuring Efficacy in Ed Tech. Similar content, from a perspective about sharing accountability that teachers alone have shouldered, is in this prior post.

Curriculum-wide programs purchased by districts need to show that they work. Even products aimed mainly at efficiency or access should at minimum show that they can maintain status quo results. Rigorous evaluations have been complex, expensive and time-consuming at the student level. However, given a digital math or reading program that has reached a scale of 30 or more sites statewide, there is a straightforward yet rigorous evaluation method using public, grade-average proficiencies, which can be applied post-adoption. The method enables not only districts, but also publishers, to hold their programs accountable for results, in any year and for any state.

Three ingredients come together to enable this cost-effective evaluation method: annual school grade-average proficiencies in math and reading for each grade, posted by each state; a program adopted across all classrooms in each using grade at each school; and digital records of grade-average program usage. In my experience, school cohorts of 30 or more sites using a program across a state can be statistically evaluated. Once methods and state-posted data are in place, the marginal cost and time per state-level evaluation can be as little as a few person-weeks.
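To make the arithmetic concrete, here is a minimal sketch of the comparison such an evaluation rests on. The function name, data shape, and use of Cohen’s d are my own illustration, not the actual methodology of any published study:

```python
from statistics import mean, stdev

def effect_size(program_changes: list[float],
                comparison_changes: list[float]) -> float:
    """Cohen's d on year-over-year changes in posted grade-average proficiency.

    Each list holds one value per school grade: post-adoption proficiency
    percentage minus prior-year percentage, from public state postings.
    """
    n1, n2 = len(program_changes), len(comparison_changes)
    s1, s2 = stdev(program_changes), stdev(comparison_changes)
    # pooled standard deviation across the two cohorts
    pooled_sd = (((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)) ** 0.5
    return (mean(program_changes) - mean(comparison_changes)) / pooled_sd
```

With cohorts of roughly 30 sites, even modest effects become distinguishable from noise; the same calculation run on a handful of schools mostly yields inconclusive results, which is why the cohort-size ingredient matters.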

A recently published WestEd study of MIND Research Institute’s ST Math, a supplemental digital math curriculum using visualization (disclosure: I am Chief Strategist for MIND Research) validates and exemplifies this method of evaluating grade-average changes longitudinally, aggregating program usage across 58 districts and 212 schools. In alignment with this methodological validation, in 2014 MIND began evaluating all new implementations of its elementary grade ST Math program in any state with 20 or more implementing grades (from grades 3, 4, and 5).

Clearly, evaluating every program, every year, has not been the market norm: it wasn’t possible before annual assessment and school proficiency posting requirements, and it wasn’t possible before digital program usage measurements. Moreover, the education market has greatly discounted the possibility that curriculum makes all that much difference to outcomes, to the extent of not even trying to uniformly record what programs are being used by what schools. (Choosing Blindly: Instructional Materials, Teacher Effectiveness, and the Common Core by Matthew Chingos and Russ Whitehurst crisply and logically highlights this “scandalous lack of information” on usage and evaluation of instructional materials, as well as pointing out the high value of improving knowledge in this area.)

But publishers themselves are now in a position, in many cases, to aggregate their own digital program usage records from schools across districts, and generate timely, rigorous, standardized evaluations of their own products, using any state’s posted grade-level assessment data. It may be too early or too risky for many publishers. Currently, even just one rigorous, student-level study can serve as sufficient proof for a product, so seeking more universal, annual product accountability is an unnecessary risk for publishers. It would be as surprising as a fitness company, given anonymized member data, evaluating and publishing its overall average annual fitness impact on club-member cohorts, by usage. By observation of the health club market, this level of accountability is neither a market requirement nor even dreamed of. There is no reason for those providers to take on extra accountability.

But while we may accept that member-paid health clubs are not accountable for average health improvements, we need not accept that digital content’s contribution to learning outcomes in public schools goes unaccounted for. And universal content evaluation, enabled for digital programs, can launch a continuous improvement cycle, both for content publishers and for supporting teachers.

Once rigorous program evaluations start becoming commonplace, there will be many findings which lack statistical significance, and even some outright failures. Good to know. We will find that some local district implementation choices, as evidenced by digital usage patterns, turn out to be make-or-break for any given program’s success. Where and when robust teacher and student success is found, and as confidence is built, programs and implementation expertise can also start to be baked into sustained district pedagogical strategies and professional development.


Future Vision 2025 Teachers: No more “just” a Teacher

What future are we aiming at? This series of 6 posts, Future Vision 2025, describes some of my personal education mission milestones. These are not predictions, they are aspirational. They are framed as significant differences one could see or make by 2025. What’s noticeably different in 2025 when one examines students, parents, teachers, learning, assessment, media & society? How and when these milestones are reached are not addressed. Some milestones are indicated by the emergence of something ‘new’ (at least at robust scale), others by the fading away of something familiar and comfortable.

Teachers 2025

Drumbeats

In 2025, teaching as a profession is gaining respect.

It is gaining respect because the drumbeat of frustration with failing test scores has been stilled. The drumbeat has been stilled by clearly improved performance, both on domestic measures and in international comparisons. Key have been markedly improving NAEP scores, as well as rising U.S. rankings in the international PISA and TIMSS comparisons.

The drumbeat has also been stilled by an overall sense of progress and improvement: the educational playing field has been made more level through a smarter policy of enlightened self-interest. For example, government goals to provide quality early childhood education experiences, regardless of any parent’s economic ability to provide them, are by now as prevalent as health and nutritional programs were in 2015.

The beat has been stilled by data showing that the floor of the “achievement gap” is rising dramatically, at scale, across the U.S. Moreover, for the upper edge of the “gap”, all is not flat. Proficient or advanced students are also gaining, through deep learning that goes far beyond just good scores. All students are growing their talents more than ever before.

Teachers encourage their students’ thirst for deeper learning via dramatically more engaging digital learning environments. The last ten years have, finally, empirically confirmed teachers’ belief that all students can learn challenging material. The experience of teaching practice itself, with the latest digital tools, organically fills gaps in teachers’ own understanding in real time. And the goals of school itself are more tangibly clear and relevant. In the area of mathematics, for example, teachers understand that the meta-purpose of math education is to provide children with flexible, powerful raw thinking machinery for future general learning and problem-solving.

Teachers as a group are more autonomous than ever, skillfully wielding powerful digital tools to productively engage every learner. Publishers’ integrated content-and-tool suites have very obviously matured far beyond what any individual teacher would ever dream of putting together themselves via Google. Teacher job satisfaction is markedly up – because teachers are achieving their own goals for more of their own students: positively influencing lives.

Teacher pre-service training and professional development programs of course assume that teachers will be provided with the requisite, powerful digital tools. So this training gives them the expectations and distinctions to recognize which tools are appropriate and effective for which purposes. Freshly-minted teachers are more quickly effective in the classroom. Experienced, creative teachers have more opportunities than ever before to focus on their highest level of value-add – customization, enrichment, and knowing their individual students – having traded in all their prior low-level management of classroom, content, and data.

Teacher-practitioners have earned this newfound level of respect from their students, from parents, from administrators, from the community, and, importantly, feel it deeply within. No more, “I’m just a teacher.”

If you tried to take digital content and tools away from teachers, they would go on strike.


Hold Content Accountable Too: a scalable Method

This post was originally published on Tom VanderArk’s “VanderArk on Innovation” blog on EdWeek. It was also published on GettingSmart. The following is an edited version.

Specific programs and content, not just teachers and ‘teacher quality’, must be held accountable for student outcomes. A recent study published by WestEd shows how, given certain program conditions, cost-effective and rigorous student test score evaluations of a digitally-facilitated program can now be pursued, annually, at any time in any state.

Historically, the glare of the student results spotlight has been so intensely focused on teachers alone that the programs and content ‘provided’ to teachers have often not even been recorded. Making the case for the vital importance of paying attention is this scathing white paper, Choosing Blindly: Instructional Materials, Teacher Effectiveness, and the Common Core, from the Brown Center on Education Policy’s Matthew Chingos and Grover Whitehurst. The good news is: digital programs operated by education publishers for schools organically generate a record of where and when they were used.

Today’s diversity of choices in digital content – choices about scope, vehicles, approaches & instructional design – is far greater than the past’s teacher-selection-committee picking among “Big 3” publishers’ textbook series. This wide variety means content can no longer be viewed as a commodity; as if it were merely a choice among brands of gasoline. Some of this new content may run smoothly in your educational system, yet some may sputter and stall, while others may achieve substantially more than normal mileage or power.

It is important to take advantage of this diversity, important to search for more powerful content. The status quo has not been able to deliver results improvements in a timely manner at scale. And spearheaded by goals embodied in the Common Core, we are targeting much deeper student understanding, while retaining last decade’s goals of demonstrably reaching all students. In this pursuit, year after year, the teachers and students stay the same. What can change are the content and programs they use; ‘programs’ including the formal training programs we provide to our teachers.

But how do you tell what works? This has been extremely challenging in the education field, due in equal measure to a likely lack of programs that do work significantly better, to the immense and hard-to-replicate variations in program use and school cultures, and to the high cost, complexity, and delay inherent in conventional rigorous, experimental evaluations.

But there is a cost-effective, universally applicable way for a large swath of content or programs to be rigorously evaluated: do they add value versus business-as-usual? The method is straightforward, requires no pre-planning, can be applied after the fact, and is replicable across years, states, and program types. It can cover every school in a state, thus taking into account all real-world variability, and it’s seamless across districts, aggregating up to hundreds of schools.

To be evaluated via this method, the program must be:

  1. able to generate digital records of where/when/how-much it was used at a grade
  2. in a grade-level and subject (e.g. 3-8 math) that posts public grade-average test scores
  3. a full curriculum program (so that summative assessments are valid)
  4. in use at 100% of the classrooms/teachers in each grade (so that grade-average assessment numbers are valid)
  5. new to the grade (i.e. evaluating the first one or two years of use)
  6. adopted at sufficient “n” within a state (e.g. a cohort of ~25 or more school sites)

Every program, in every state, every year, that meets the above criteria can be studied, whether for the first time or to validate continuing effectiveness. The data is waiting in state and NCES research files to be used, in conjunction with publisher records of school/grade program usage. This example illustrates a quasi-experimental study to high standards of rigor.
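Since the publisher already holds the usage records, the six criteria above amount to a screening query over those records. A hypothetical sketch (the schema and field names are invented for illustration, not any publisher’s actual data model):

```python
from dataclasses import dataclass

@dataclass
class GradeSite:
    """One school grade's program adoption record (hypothetical schema)."""
    state: str
    has_usage_logs: bool       # criterion 1: digital where/when/how-much records
    posts_grade_scores: bool   # criterion 2: state posts grade-average scores
    full_curriculum: bool      # criterion 3: full curriculum program
    classroom_coverage: float  # criterion 4: fraction of the grade's classrooms
    years_in_use: int          # criterion 5: first or second year only

def evaluable_cohorts(sites: list[GradeSite], min_n: int = 25) -> dict[str, int]:
    """Return states with enough qualifying sites to evaluate (criterion 6)."""
    counts: dict[str, int] = {}
    for s in sites:
        if (s.has_usage_logs and s.posts_grade_scores and s.full_curriculum
                and s.classroom_coverage == 1.0 and s.years_in_use <= 2):
            counts[s.state] = counts.get(s.state, 0) + 1
    return {state: n for state, n in counts.items() if n >= min_n}
```

Any state that clears the minimum-n bar yields a cohort whose grade-average score changes can then be compared against the rest of the state’s schools.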

It may be too early for this level of accountability to be palatable for many programs just yet. Showing robust, positive results requires that the program itself actually be capable of generating differential efficacy. And of course some program outcomes are not measured via standardized test scores. There will be many findings of small effect sizes, many implementations which fail, and much failure to show statistical significance. External factors may confound the findings. Program publishers would need to report failures as well as successes. But the alternative is to continue in ignorance, rely only on peer word-of-mouth recommendations, or make do with a handful of small ‘gold-standard’ studies in limited contexts.

The potential to start applying this method now for many programs exists. Annual content evaluations can become a market norm, giving content an annual seat at the accountability table alongside teachers, and stimulating competition to improve content and its implementation.

 


The Digital Learning Revolution is not Glossy. (Or LTE.)


First posted on Sums&Solutions blog.
Part one of a multi-part series

The true Digital Learning Revolution has not yet arrived. If you go into a classroom and see every student with an iPad on wifi, full 1:1, you are not necessarily seeing a Digital Learning Revolution. Counting what type and how glossy and how many are the digital devices is not how you tell.

Because the Digital Learning Revolution is not about digitizing conventional learning. Nor even about increasing access.

It’s not about digitized problem sets – even if they are gamified. Not even if the problems are scored instantly; nor even if the problem sequence can be varied based on responses (aka “adaptive learning”). Textbook-like problems presented digitally, no matter how entertainingly wrapped in back-story, music, interesting side-bar links, procedural hints and immersive 3-D exploration, are still just this: use previously memorized patterns and procedures to get THE right answer.

It’s not about digitized asynchronous lectures. By their nature they are not interactive; they are passive. Yes, even if talking heads and filmed overhead grease-pen scrawls have moved from VHS access in the ’70s to YouTube access 40 years later, lectures are not the Digital Learning Revolution.

And it’s especially not about the advent of the latest digital hardware vehicles. Tsunamis of digital hardware have washed into many classrooms, many times: from Apple IIes in the 1980s to Apple iPad 2s in the teens, with interactive whiteboards somewhere in between. First off, the change in how most subjects were taught day to day was minimal. Worse, it did not become the “new normal” for students or teachers to even just use them day to day. There was no killer app. No deep penetration. No Digital Learning Revolution – yet.

Of course, revolutionizing the learning itself depends on the content IN the digital vehicles, a point powerfully made in the excellent white paper Choosing Blindly: Instructional Materials, Teacher Effectiveness, and the Common Core, from the Brown Center on Education Policy’s Chingos and Whitehurst. And if that content is just a digitization of the conventional, then no matter how glossy and retina-resolution the screen, no matter how anywhere or anytime or speedy the access, the learning will still be “conventional” learning. By the way, how well has a focus on conventional learning – a focus where the content is considered a commodity – done over the last four decades?

Note moreover, that a narrow view of digital content + student, without taking into account the teacher’s interaction with new content and a new learning process, is also not the Digital Learning Revolution. Because as Chingos and Whitehurst point out, the Digital Learning Revolution occurs at the intersection of the student, and the content, and the teacher. So new digital vehicles, even conveying radically different content (such as interactive videogames), or, rather, especially when carrying radically different content, will not achieve the Digital Learning Revolution … without a comprehensive re-tooling of teacher understandings, processes, and goals.

Beyond Hardware

What about the other major digital game-changer of the 21st Century, you say – what about digitized access? Searchable access to the world’s libraries of content? Anywhere anytime access to the cloud through cheap personal hand-held devices?

You are a participant in that access revolution. So look around you: what is your experience? Have you experienced, or seen, a Digital Learning Revolution? A communication revolution, to be sure – connectivity is off the charts. And it’s certainly a revolution in “find something, cut, and paste”: a plethora of small, disconnected written nuggets delivering instant gratification for quick trivia questions. Consumption from the cloud is off the charts. But when you are looking for depth, you have not yet seen a revolution of learning – as I blogged here re speed vs. depth, and here re googling.

The digital access revolution did not bring the Digital Learning Revolution along for the ride.

Again, the key is content – and the fact that a Learning Revolution must involve three interacting components: student, content, and teacher (as I blogged here re blended learning). A Learning Revolution requires the teacher for social, evaluative, motivational, and, yes, human communication. The Digital Learning Revolution will require humans. The best sort of humans: teachers who help others grow and improve.

In the next installment: well anyhow, we should expect digital content for free, right?


Digital 1-2-3s Make Math Sense for Preschool Kids

Every parent can see that birth to 5 is a whirlwind of learning. Many parents strive to include informal learning activities like the ABCs. But you may be surprised to learn that no aspect of early education is more important to a child’s academic future than mathematics. Research from Greg Duncan at the University of California, Irvine shows that early math skills in 5-year-olds are the single greatest predictor of later achievement.

So at a recent early childhood education conference in Chicago, I was excited to see policy leaders, researchers, corporations and foundations rallying around the importance of supporting our youngest learners, including in math. Their vision for accomplishing it … well, I found that less exciting, as the only presentation focused on digital content for 4-year-olds was my own.

Understandably. The vast majority of digital content “out there” for kids is of low educational quality. I enjoy SpongeBob, if not Disney princesses, as much as anyone. But having a 4-year-old gesture her way through random edutainment apps is hardly the “transformation” of learning you’ve been hoping for. And yet digital content is ideal for rapid scale-up, and every year we “wait” for a non-digital solution to reach scale, we miss out on yet another cohort of 4 million more 4-year-olds in the U.S.

So how do you judge digital program quality? First, look for a program that is radically different. Second, look for early, consistent, rigorous results. At the K-5 level, there is a digital, neuroscience-based math program – MIND Research Institute’s ST Math – that has shown potential for radical transformation of learning. ST Math has successfully doubled and tripled annual growth in math proficiency for Grade 2-5 students on state tests, as it presents math concepts as a full in-school curriculum of visual, language-free puzzles of virtual onscreen manipulatives.

If there exists a proven math program that teaches math visually, without requiring language proficiency or even reading skills, then what better age to apply it to than pre-readers – especially ones who don’t necessarily speak any English! ST Math is currently being piloted in select teacher-led, site-based Pre-K classrooms in Los Angeles. Imagine a teacher working with a 4-year-old digital native, who is using a tablet to get literally “hands-on” with number sense.

If we want to level the education playing field before traditional schooling even starts, and lay a solid foundation across the nation for lifelong success in STEM fields, we need to start young and be bold. Digital, unconventional, deeper-learning tools like ST Math may be the transformation you’ve been looking for.

A version of this blog was published in the September issue of District Administration.


Instructional Software: Just a Cleanup Activity after Ineffective Teaching??

How instructional software is positioned in the minds of many educators and others is outdated and misleading.

The conventional model of instructional software (I am thinking of “core” subjects like mathematics) began, naturally, as a digital extension of conventional teaching. So: practice problems, read onscreen, with multiple-choice answers, instant grading, and perhaps some gamification of scores. But if you didn’t “need” the “extra teaching,” you didn’t “need” the instructional software. So it was optional: for some students, some of the time.

There were two logical models for deciding for whom and when to use instructional software: use it for remedial students, or give periodic diagnostic tests (eventually also online) to everyone to determine which skills each student needed more practice in, then assign instructional software just for those specific skills. The metaphor is “filling the holes in the Swiss cheese.”
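The “Swiss cheese” model boils down to a one-line mapping from diagnostic scores to assigned modules – sketched here with an invented mastery cutoff, purely to make the conventional logic explicit:

```python
def assign_remediation(diagnostic: dict[str, float],
                       mastery_cutoff: float = 0.8) -> list[str]:
    """Conventional 'fill the holes' logic: assign software modules only
    for skills scored below the mastery cutoff on the diagnostic test.

    diagnostic maps skill name -> fraction correct. A student above the
    cutoff on every skill gets assigned no software at all.
    """
    return sorted(skill for skill, score in diagnostic.items()
                  if score < mastery_cutoff)
```

Note the built-in assumption: students who pass every skill receive an empty assignment, which is exactly the limitation the rest of this post argues against.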

Implicit in those models was that some students did not need any instructional software: those who learned sufficiently from the standard, no-software-involved teaching. The instructional software served a role of “cleaning up” whatever gaps were left unfilled or incomplete after the normal teaching. By observation, then, the regular teaching on its own was ineffective in achieving the learning goal for some students some of the time. (The reason it was ineffective could include many things outside of the teacher’s control, of course.)

Despite the recent emergence of “blended learning” as a desirable future model of combining digital content with teacher & chalkboard learning, at present the preponderance of students still use zero instructional software in their studies. And frequently, even in 2013’s “state-of-the-art” blended learning examples, the role of the digital content is still essentially more practice, like a digitization of homework reps, albeit with intelligent sourcing of problems and with instant scoring.

Similarly, many 2013 RFPs specify instructional software for RTI tier 2 interventions, for struggling students only. This means that not only are the RTI tier 1 “OK” students assumed to need no digital component in the normal course of their learning, the software isn’t even seen as a way to prevent “OK” students from slipping into tier 2.

All of the above makes sense if you see the role of instructional software as just enabling “more”: more of what teachers ideally, technically “could” deliver, but in the real world can’t, because scarce time makes it impossible to differentiate teaching to productively engage each learner at that learner’s own pace. So the instructional software provides more time for the students and situations that just didn’t get enough from conventional teaching.

But consider: more time for students has been tried, and tried, and doesn’t get game-changing results. By game-changing I mean ensuring that every student understands and gains content mastery and confidence in a subject – like math. If more of the same did work for challenging situations, then the mere, but very expensive, application of additional teacher time (double-block, repeated courses, pull-outs) would be shown to “fix” the problem. Which in math, certainly, it doesn’t — not at a scale and cost which can be universal and sustained (i.e. beyond a one-on-one tutorial). So instructional software’s role to give “more of the same” is not a fix.

This pigeon-holing of instructional software as for “clean up” is too limiting. If that’s your model, you wouldn’t even think of buying — or making — instructional software that has fundamental and vital value for every student. Fundamental and vital is how we view… textbooks. Lawsuits are filed and won to ensure that every student has a textbook. When the day comes that a lawsuit is filed, fought and won to ensure that every student has effective instructional software we will know that the pigeon-holing is over.

Here’s an analogy of this positioning problem to the world of exercise and health. It’s as if instructional software is seen as physical therapy, rather than as physical conditioning. It’s as if it’s just for those who are in some way injured, or chronically weak, rather than for everyone who wants to get in shape. You get diagnosed for your injury, perhaps a shoulder tweak, you do your therapy reps with rubber bands, and one happy day you’re healthy enough to quit doing the P.T., forever.

The future, additional role of instructional software is as a vital component of the learning environment, for every student and teacher. It’s like joining a gym and then diligently using its facilities, and moreover its trainers, motivation and social aspects. Properly designed, supported, and used, it’s a gym program that gets everyone more fit. No one gets to “test out.” No one gets to work “just on their weaknesses.”

And it’s no longer implicit that “ineffective teaching” is the raison d’être for instructional software. That premise is turned completely inside-out: instructional software, in the hands of a teacher, makes teaching and learning more powerful and effective generally, throughout the school year: differentiating to reach every student (including the strongest), engaging and motivating each student at an appropriate level and pace, and providing multiple opportunities for the teacher to assess, diagnose, and consolidate student learning.


What’s that in your Blender? 5 Key Factors of Digital Content

This is the first of I hope many guest blog posts for my friends at Getting Smart. About me: I’m observing the education market conversation from the perspective of a non-profit digital content publisher with a focus on math. I’ve had the luxury for the past 10 years of laser-focus on how to make what’s now known as “blended learning” work on just one subject area. While we’ve grown to serve over 450,000 students and 14,000 teachers, we’ve drilled down to a pretty deep perspective I would like to share.

What’s your main purpose for blended learning? Is it improving the efficiency of learning resources (cost, time, and access)? Or is it improving the learning itself? Much of the attention and excitement about blended learning is on the former, with time-and-motion descriptions of where the teacher, the student, and the computer are during the day.

The addition of digital content to the mix of place and time is a rich area of innovation and practice. In 2012, the Innosight Institute’s Heather Staker and Michael B. Horn revised their pioneering taxonomy classifying blended-learning implementations. The attributes include modality (digital or face-to-face), location (lab, classroom or home), time (fixed schedule/pace vs. fluid), and content (fixed or customized). The 11 derived types of blended learning are school-centric: the labels are exemplified by specific school examples. Explanatory diagrams show physical layouts of computers, teachers and students, and from these diagrams the potential for raising efficiency in the use of learning resources (clock time, student time, and teacher time) is readily apparent.

Yet as a digital content publisher, my organization is focused on the other potential for blended learning: dramatically improving the learning itself. I mean more comprehension and sense-making, better transfer of knowledge and higher retention of new information.

This requires us to add another perspective on what’s being blended, specifically the instructional interaction described by Matthew M. Chingos and Russ Whitehurst in their recent report from the Brown Center on Education Policy. They succinctly remind us that where the rubber hits the road in learning is the student’s direct interaction with the teacher and/or instructional materials. The instructional materials used by the teacher greatly influence the teacher/student interaction. Here is where the digital ingredient in the blender can be a game-changer for the quality of learning. Curriculum is not a commodity; the quality and efficacy of curricula vary widely. Chingos and Whitehurst dramatically point out the “scandalous lack of information” at all user levels about which materials are used and how well they work, as if the choice of instructional materials were irrelevant.

So, let me briefly introduce five key factors to consider for blended learning, from this learning-centric perspective of instructional interaction between teacher, student, and digital content.

Note that instructional interaction doesn’t “care” where or when it happens; what matters is what it is. One modality is the student interacting directly with digital content (i.e. without the teacher). For web-delivered digital content, which I will assume, clock and location drop out of the picture: the interaction is the same whether the access is during or after school, in classroom, lab, home or library. Another modality is the student-to-teacher interaction, which could be a face-to-face conversation during a scheduled time or an ad hoc conversation over Skype. The point is that students and teachers are engaged in a conversation around learning; the time and place are secondary.

Factor 1: By its 1:1 nature, student interaction with digital content is self-paced. Even essentially passive interactions, like studying a digital textbook on a tablet or viewing a video on YouTube, can be more valuable because the student can pause and review them. Of course, active interactions like games add a further self-pacing dimension: the student must correctly solve a problem to proceed in the game.

Factor 2: Digital content can be much more than conventional-practice-on-a-computer of previously introduced procedures. It can be a way to introduce and explain concepts, whether in advance of, in parallel with, or even after they are introduced by a teacher. Yes, from my perspective of seeking better learning, there is always also a teacher ingredient in the mix. As Bob Wise, President of the Alliance for Excellent Education, said at the SIIA Ed Tech Business Forum last November, to get better learning, “High Tech requires High Teach.”

Factor 3: Digital content can be highly interactive. Of course interactive means more than clicking a “next scene” arrow. Interaction means the student needs to respond to some problem-solving scenario, then see the results of her response. For example, that could be solving a math puzzle. Given appropriate strategy and quality of the digital content, this is a “minds-on” interaction about the academic subject matter, not just gameplay.

Factor 4: Digital content can provide immediate feedback. The quality of that feedback can vary widely. At the low end, but still a quantum improvement over text/paper/pencil, is the standard “red x” wrong or “green checkmark” right. At the high end, digital content can provide immediate instructive feedback: an explicit explanation of why a solution was wrong, or why it was right. This instructive feedback helps the student learn (whether by confirming a solution or by showing what to correct) from each answer she poses.
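To make the two ends of that spectrum concrete, here is a minimal Python sketch. The addition example and the single misconception it checks for (multiplying instead of adding) are hypothetical illustrations of the idea, not any product’s actual diagnostic logic:

```python
# Low end: the "red x / green checkmark" model -- right or wrong, nothing more.
def check_only(answer, correct):
    return "correct" if answer == correct else "wrong"

# High end: instructive feedback that explains *why*, including a guess
# at a common misconception (here: multiplying instead of adding).
def instructive_feedback(a, b, answer):
    correct = a + b
    if answer == correct:
        return "Correct: {} + {} = {}".format(a, b, correct)
    if answer == a * b:
        return ("Not quite: it looks like you multiplied. "
                "Adding {} and {} gives {}.".format(a, b, correct))
    return "Not quite: {} + {} = {}. Try counting up from {}.".format(
        a, b, correct, a)

print(check_only(12, 7))                 # -> wrong
print(instructive_feedback(3, 4, 12))    # names the likely misconception
```

The low-end checker tells the student only that something went wrong; the high-end version turns the same wrong answer into a teaching moment.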

Factor 5: Digital content can provide an adaptive or custom sequence of learning objects for each individual. This can range from a beginning-of-year pre-assessment that determines a grade-level syllabus, to real-time, on-the-fly adjustment of difficulty up or down as needed, to longer-term pattern recognition of student misconceptions that assigns specific corrective content.
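The simplest of those three, on-the-fly difficulty adjustment, can be sketched in a few lines. The streak rule (two in a row up or down) and the level bounds below are assumptions for illustration only; real adaptive engines use far richer models:

```python
# Illustrative on-the-fly difficulty adjustment: step up after a streak
# of successes, step down after a streak of misses, else hold steady.

def next_level(level, recent_results, min_level=1, max_level=10):
    """Adjust difficulty from the last two answers (True = correct)."""
    if recent_results[-2:] == [True, True]:
        return min(level + 1, max_level)   # two right in a row: harder
    if recent_results[-2:] == [False, False]:
        return max(level - 1, min_level)   # two wrong in a row: easier
    return level                           # mixed results: stay put

level = 5
level = next_level(level, [True, True])    # two correct -> harder (6)
level = next_level(level, [False, False])  # two wrong -> easier again
print(level)  # -> 5
```

Even this toy version shows the key property: the sequence each student sees is a function of that student’s own history, not of a fixed syllabus.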

Finally, consider this recently released IES report about math problem-solving. Aimed at curriculum developers as well as educators, its recommendations emphasize the teacher’s role in promoting deeper learning. I agree. A vital ingredient in the digital blender, to raise learning quality, is the teacher. The same content students are using 1:1 can inform and be used by the teacher, at the point of instructional interaction. The potential impact is enormous. As Chingos and Whitehurst say, “We can expect both theoretically and based on existing research that instructional materials either reduce the variability in performance across teachers, raise the overall performance level of the entire distribution of teachers, or both.”

Amid all the excitement and buzz around blended learning, to go beyond mere efficiencies, keep an eye out for the game-changing aspects of digital content, used by both student and teacher, that achieve deeper learning.
