viernes, 28 de septiembre de 2012

ALEC and corporate fingerprints are all over national push for online learning

A proposed law would require all Minnesota high school students to take at least one online class before graduating.

Last week, this space carried a post recapping the strange brew of education-related measures that had survived their respective legislative committees and were headed to the floors of the state Senate and House. There is much more to be said about one of the more curious measures, House File 2127, sponsored by Burnsville Republican Pam Myhra.

A short refresher: The bill would require all Minnesota students, starting four years from now, to have taken at least one online-only course in order to graduate from high school. As first introduced, it would have allowed those courses to take place in virtual classrooms to be located anywhere and be operated by employees of for-profit companies who might or might not be licensed teachers.

As it moved through various committees, HF 2127 was amended to require all courses be taught by licensed Minnesota educators, offered by approved operators and include digital coursework done in schools. It was massaged into something Minnesota’s larger districts, most of which already offer digital courses, are now OK with, although it will pose myriad challenges in Greater Minnesota.

If it passes, Minnesota will become one of a handful of states to mandate student participation in online learning, which is, to put it mildly, a booming industry in search of customers.

Consider, for example, Tennessee’s adoption last year of the Virtual Public Schools Act, model legislation created by the American Legislative Exchange Council (ALEC), the super-secretive, super-conservative group that has birthed much of the nearly identical anti-labor legislation that has swept through statehouses nationwide over the last two years.

Yes, ALEC has made appearances in this space, too, but we think the best primer is the one produced by the education advocacy group Parents United. Corporations, foundations and think tanks pay thousands of dollars to join ALEC, which charges lawmakers — most of them Republicans — $50 a year to join.

Lawmakers are treated to expenses-paid policy confabs at ritzy resort destinations where they are given model bills drafted by private-sector participants.

The model law adopted in Tennessee was created by two ALEC committees chaired by executives from two large, for-profit corporate providers of virtual education, Connections Academy and K-12, according to Phi Delta Kappan, via Education Week.

Shortly after passage, K-12 won a no-bid contract from Union County School District to open a school that is in operation this year. Tennessee lawmakers also decided to shutter the state’s successful online education program.

Some 2,000 students applied for admission to the Tennessee Virtual Academy last fall, many of them homeschoolers. Others were recruited at meetings held in Chattanooga’s poorest neighborhoods. The school receives about $5,300 per pupil; K-12’s CEO was paid more than $2.6 million last year and its CFO $1.7 million.

There’s more. According to The New York Times, K-12 was founded by a former banker from Goldman Sachs and pundit William Bennett, Ronald Reagan’s secretary of education and the author of "The Book of Virtues: A Treasury of Great Moral Stories," and bankrolled by disgraced junk bond king Mike Milken.

How, you are wondering, does K-12 do, for about $5,000 a head in Tennessee, what Minnesota’s urban districts struggle to do with more than twice that tuition? One of its Arizona programs outsourced the correcting of student essays to India, a practice that apparently didn’t work well and was abandoned.

ALEC is at work on those cost-cutting measures, drafting model bills aimed at collective bargaining, teacher compensation, licensure, local school boards, vouchers, tax credits and a host of other “reforms” that incorporate privatization.

Last fall, Rep. Myhra confirmed to the late, lamented Minnesota Independent that she is an ALEC member. Indeed, she sits on its tax and fiscal policy task force.

K-12, meanwhile, operates four virtual public schools in Minnesota. Connections Academy operates one here.

Beth Hawkins writes Learning Curve, a blog about education, for MinnPost and also covers a variety of other public policy topics.

Taken From MinnPost

domingo, 23 de septiembre de 2012

The Internet? We Built That

Who created the Internet and why should we care? These questions, so often raised during the Bush-Gore election in 2000, have found their way back into the political debate this season — starting with one of the most cited texts of the preconvention campaign, Obama’s so-called “you didn’t build that” speech. “The Internet didn’t get invented on its own,” Obama argued, in the lines that followed his supposed gaffe.

“Government research created the Internet so that all the companies could make money off the Internet.” In other words: business uses the Internet, but government made it happen.

About a week after Obama’s speech, The Wall Street Journal’s Gordon Crovitz took on those lines, claiming it was an “urban legend” that the government built the Internet. Credit for the early networking innovations, Crovitz argued, belonged to private-sector companies like Xerox and Apple. It was no accident, he observed, that the Net languished in relative obscurity for two decades until private corporations and venture capitalists turned their focus to it.

So what had once seemed to be a relatively stable narrative grounding has in recent months erupted with all sorts of political tremors. For most of the past two decades, the story of the Internet’s origins followed a fairly standardized plot: the Internet was originally developed by computer scientists whose research was heavily financed by the federal government, most notably through Darpa, the research arm of the Defense Department. Some narratives emphasized the decentralized network architecture designed by Paul Baran to survive a nuclear strike; others gave credit to the British programmer Tim Berners-Lee, whose World Wide Web gave the Internet a more accessible hypertextual layer. And of course there were all those Al Gore jokes.

The renewed political stakes in the details of this origin story are obvious. If you believe Big Government built the most important communications platform of our time, then that success is a powerful riposte to all the standard claims about bureaucratic inefficiencies and incompetence. Government might be able to out-innovate the private sector, given the right focus and commitment (and freedom from being beholden to stockholders). But if you believe that the Internet’s success is largely attributable to the private sector, all the usual libertarian homilies remain untarnished.

So was the Internet created by Big Government or Big Capital? The answer is: Neither. This is what’s most notable about the debate over the Net’s origins: it misses the most interesting part of the story. We live in a world that assumes that the most important and original products in society — bridges, cars, iPads, hospitals, 787s, houses — are created either by states or by corporations. And yet, against all odds, the Internet came from somewhere else entirely.

Like many of the bedrock technologies that have come to define the digital age, the Internet was created by — and continues to be shaped by — decentralized groups of scientists and programmers and hobbyists (and more than a few entrepreneurs) freely sharing the fruits of their intellectual labor with the entire world. Yes, government financing supported much of the early research, and private corporations enhanced and commercialized the platforms. But the institutions responsible for the technology itself were neither governments nor private start-ups. They were much closer to the loose, collaborative organizations of academic research. They were networks of peers.

Peer networks break from the conventions of states and corporations in several crucial respects. They lack the traditional economic incentives of the private sector: almost all of the key technology standards are not owned by any one individual or organization, and a vast majority of contributors to open-source projects do not receive direct compensation for their work. (The Harvard legal scholar Yochai Benkler has called this phenomenon “commons-based peer production.”) And yet because peer networks are decentralized, they don’t suffer from the sclerosis of government bureaucracies. Peer networks are great innovators, not because they’re driven by the promise of commercial reward but rather because their open architecture allows others to build more easily on top of existing ideas, just as Berners-Lee built the Web on top of the Internet, and a host of subsequent contributors improved on Berners-Lee’s vision of the Web.

Now imagine, for the sake of argument, that some Dr. Evil invented a kind of targeted magnetic-pulse device that could home in on peer-produced software; one push of the button, and every single line of code that had been created through open-source collaborative networks would instantly vanish. What would happen if that button were pushed?

For starters, the Internet and the Web would instantly evaporate. Every Android smartphone, every iPad, iPhone and Mac would go dark. A massive section of our energy infrastructure would cease to function. The global stock markets would go offline for weeks, if not longer. Planes would drop out of the sky. It would be an event on the scale of a world war or a pandemic.

In other words, it’s impossible to overstate the importance of peer production to the modern digital world. Peer networks created and maintain the Linux operating system on which Android smartphones are based; the UNIX kernel that Mac OS X and iOS devices use; and the Apache software that powers most Web servers in the world (not to mention the millions of entries that now populate Wikipedia). What sounds on the face of it like the most utopian of collectivist fantasies — millions of people sharing their ideas with no ownership claims — turns out to have made possible the communications infrastructure of our age.

It’s not enough to say that peer networks are an interesting alternative to states and markets. The state and the market are now fundamentally dependent on peer networks in ways that would have been unthinkable just 20 years ago.

Why is this distinction worth making? Why should we avoid the easy explanations of a government-built Internet versus one animated by private-sector entrepreneurs?

One reason is that there is a growing number of individuals and organizations who believe the digital success of peer networks can be translated into the “real” world. Peer networks laid the foundation for the scientific revolution during the Enlightenment, via the formal and informal societies and coffeehouse gatherings where new research was shared. The digital revolution has made it clear that peer networks can work wonders in the modern age. New organizations are using peer-network approaches to attack low-tech problems.

Consider the way Kickstarter has used networks of smaller funders to help solve the problem of supporting creative projects. Only three years old, Kickstarter is now on track to distribute more money this year than the National Endowment for the Arts.

But there is another, more subtle reason to stress the peer-network version of the Internet’s origins. We have an endless supply of folklore about heroic entrepreneurs who changed the world with their vision and their force of will. But as a society we lack master narratives of creative collaboration.

When we talk about change being driven by mass collaboration, it’s often in the form of protest movements: civil rights or marriage equality. That’s a tradition worth celebrating, but it’s only part of the story. The Internet (and all the other achievements of peer networks) is not a story about changing people’s attitudes or widening the range of human tolerance. It’s a story, instead, about a different kind of organization, neither state nor market, that actually builds things, creating new tools that in turn enhance the way states and markets work.

In the lines that followed his “you didn’t build that” comment, Obama managed to champion a collaborative ethos in much more eloquent terms: “The point is, is that when we succeed, we succeed because of our individual initiative, but also because we do things together. There are some things, just like fighting fires, we don’t do on our own. I mean, imagine if everybody had their own fire service. That would be a hard way to organize fighting fires. So we say to ourselves, ever since the founding of this country: you know what, there are some things we do better together.”

Obama is right, of course; life is full of things we do better together. But what the Internet and its descendants teach us is that there are now new models for doing things together, success stories that prove convincingly that you don’t need bureaucracies to facilitate public collaboration, and you don’t need the private sector to innovate.

That should be the story we tell our kids when they ask who invented the Internet. Yes, we should tell them about the long-view government spending that paid the initial salaries, and the entrepreneurs who figured out a way to make the new medium commercially viable. But we shouldn’t bury the lead. The Internet was built, first and foremost, by another network, this one made up not of servers but of human minds: open, decentralized, peer.

Steven Johnson is the author of “Future Perfect: The Case for Progress in a Networked Age,” published this month.
A version of this article appeared in print on September 23, 2012, on page MM48 of the Sunday Magazine with the headline: The More We Get Together.

jueves, 20 de septiembre de 2012

On-line education is using a flawed Creative Commons license

-- Richard Stallman

Prominent universities are using a nonfree license for their digital educational works. That is bad already, but even worse, the license they are using has a serious inherent problem.

When a work is made for doing a practical job, the users must have control over the job, so they need to have control over the work. This applies to software, and to educational works too. For the users to have this control, they need certain freedoms, and when a work grants them we say the work is "free" (or "libre," to emphasize we are not talking about price). For works that might be used in commercial contexts, the requisite freedom includes commercial use, redistribution and modification.

Creative Commons publishes six principal licenses. Two are free/libre licenses: the ShareAlike license (CC-BY-SA) is a free/libre license with copyleft, and the Attribution license (CC-BY) is a free/libre license without copyleft. The other four are nonfree, either because they don't allow modification (ND, NoDerivs) or because they don't allow commercial use (NC, NonCommercial).

In my view, nonfree licenses are ok for works of art/entertainment, or that present personal viewpoints (such as this article itself). Those works aren't meant for doing a practical job, so the argument about the users' control does not apply. Thus, I do not object if they are published with the CC-BY-NC-ND license, which allows only noncommercial redistribution of exact copies.

Use of this license for a work does not mean that you can't possibly publish that work commercially or with modifications. The license doesn't give permission for that, but you could ask the copyright holder for permission, perhaps offering a quid pro quo, and you might get it. It isn't automatic, but it isn't impossible.

However, two of the nonfree CC licenses lead to the creation of works that can't in practice be published commercially, because there is no feasible way to ask for permission. These are CC-BY-NC and CC-BY-NC-SA, the two CC licenses that permit modification but not commercial use.

The problem arises because, with the Internet, people can easily (and lawfully) pile one noncommercial modification on another. Over decades this will result in works with contributions from hundreds or even thousands of people.

What happens if you would like to use one of those works commercially? How could you get permission? You'd have to ask all the substantial copyright holders. Some of them might have contributed years before and be impossible to find. Some might have contributed decades before, and might well be dead, but their copyrights won't have died with them. You'd have to find and ask their heirs, supposing it is possible to identify those. In general, it will be impossible to clear copyright on the works that these licenses invite people to make.
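The clearance problem compounds multiplicatively. As a rough sketch (the reachability figure is an invented assumption, not from the essay): if each contributor can be independently located and asked with some probability, the chance of clearing permission from every one of them shrinks exponentially with the number of contributors.

```python
# Hypothetical illustration of why clearing an NC-licensed collaborative
# work becomes infeasible: assume each contributor can be found and asked
# with probability p (the value 0.95 is an assumption for illustration).

def clearance_probability(n_contributors: int, p_reachable: float) -> float:
    """Probability that ALL n contributors can be found and asked."""
    return p_reachable ** n_contributors

for n in (5, 50, 500):
    print(n, clearance_probability(n, 0.95))
```

Even with 95 percent of contributors reachable, a work with 500 contributors is effectively impossible to clear.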

This is a form of the well-known "orphan works" problem, except exponentially worse; when combining works that had many contributors, the resulting work can be orphaned many times over before it is born.

To eliminate this problem would require a mechanism that involves asking _someone_ for permission (otherwise the NC condition turns into a nullity), but doesn't require asking _all the contributors_ for permission. It is easy to imagine such mechanisms; the hard part is to convince the community that one such mechanism is fair and reach a consensus to accept it.

I hope that can be done, but the CC-BY-NC and CC-BY-NC-SA licenses, as they are today, should be avoided.

Unfortunately, one of them is used quite a lot. CC-BY-NC-SA, which allows noncommercial publication of modified versions under the same license, has become the fashion for online educational works. MIT's "Open Courseware" got it started, and many other schools followed MIT down the wrong path. Whereas in software "open source" means "probably free, but I don't dare talk about it so you'll have to check for yourself," in many online education projects "open" means "nonfree for sure".

Even if the problem with CC-BY-NC-SA and CC-BY-NC is fixed, they still won't be the right way to release educational works meant for doing practical jobs. The users of these works, teachers and students, must have control over the works, and that requires making them free. I urge Creative Commons to state that works meant for practical jobs, including educational resources and reference works as well as software, should be released under free/libre licenses only.

Educators, and all those who wish to contribute to on-line educational works: please do not let your work be made non-free. Offer your assistance and text to educational works that carry free/libre licenses, preferably copyleft licenses so that all versions of the work must respect teachers' and students' freedom. Then invite educational activities to use and redistribute these works on that freedom-respecting basis, if they will. Together we can make education a domain of freedom.

Copyright 2012 Richard Stallman. Released under the Creative Commons Attribution-NoDerivs 3.0 license.

Taken From Richard Stallman

martes, 18 de septiembre de 2012

A Robot With a Delicate Touch

Rodney A. Brooks with Baxter, a robot he developed with an array of safety mechanisms and sensors.

BOSTON — If you grab the hand of a two-armed robot named Baxter, it will turn its head and a pair of cartoon eyes — displayed on a tablet-size computer-screen “face” — will peer at you with interest.

The sensation that Baxter conveys is not creepy, but benign, perhaps even disarmingly friendly. And that is intentional.

Baxter, the first product of Rethink Robotics, an ambitious start-up company in a revived manufacturing district here, is a significant bet that robots in the future will work directly with humans in the workplace.

That is a marked shift from today’s machines, which are kept safely isolated from humans, either inside glass cages or behind laser-controlled “light curtains,” because they move with Terminator-like speed and accuracy and could flatten any human they encountered.

By contrast, Baxter, which comes encased in plastic and has a nine-foot “wingspan,” is relatively slow and imprecise in the way it moves. And it has an elaborate array of safety mechanisms and sensors to protect the human workers it assists.

Here in a brick factory that was once one of the first electrified manufacturing sites in New England, Rodney A. Brooks, the legendary roboticist who is Rethink’s founder, proves its safety by placing his head in the path of Baxter’s arm while it moves objects on an assembly line.

The arm senses his head and abruptly stops moving with a soft clunk. Dr. Brooks, unfazed, points out that the arm is what roboticists call “compliant”: intended to sense unexpected obstacles and adjust itself accordingly.
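The stop-on-contact behavior Dr. Brooks demonstrates can be caricatured as a force-threshold check inside the arm's control loop. Everything below (names, the threshold value, the interface) is invented for illustration and is not Rethink's actual control code.

```python
# Toy sketch of "compliant" motion: if any measured joint force exceeds a
# threshold, halt instead of pushing through the obstacle.
# All names and numbers here are hypothetical illustrations.

FORCE_LIMIT_N = 20.0  # assumed contact-force threshold, in newtons

def step_arm(target_positions, force_readings):
    """Return the next commanded joint positions, or None to halt on contact."""
    if max(force_readings) > FORCE_LIMIT_N:
        return None  # unexpected obstacle sensed: stop with a "soft clunk"
    return target_positions  # path is clear: keep moving

print(step_arm([0.1, 0.2], [3.0, 5.0]))   # clear path: motion continues
print(step_arm([0.1, 0.2], [3.0, 25.0]))  # contact: motion halts
```

A real compliant controller would back off and re-plan rather than simply stop, but the core idea is the same: treat unexpected force as a signal, not something to overpower.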

The $22,000 robot that Rethink will begin selling in October is the clearest evidence yet that robotics is more than a laboratory curiosity or a tool only for large companies with vast amounts of capital. The company is betting it can broaden the market for robots by selling an inexpensive machine that can collaborate with human workers, the way the computer industry took off in the 1980s when the prices of PCs fell sharply and people without programming experience could start using them right out of the box.

“It feels like a true Macintosh moment for the robot world,” said Tony Fadell, the former Apple executive who oversaw the development of the iPod and the iPhone.

Baxter will come equipped with a library of simple tasks, or behaviors — for example, a “common sense” capability to recognize it must have an object in its hand before it can move and release it.

Although it will be possible to program Baxter, the Rethink designers avoid the term. Instead they talk about “training by demonstration.” For example, to pick up an object and move it, a human will instruct the robot by physically moving its arm and making it grab the object.

The robot’s redundant layers of safety mechanisms include a crown of sonar sensors ringing its head that automatically slows its movements whenever a human approaches. Its computer-screen face turns red to let workers know that it is aware of their presence.

And each robot has a large red “e-stop” button, causing immediate shutdown, even though Dr. Brooks says it is about as necessary as the Locomotive Acts, the 19th-century British laws requiring that early automobiles be preceded by a walker waving a red flag.

Soon, Dr. Brooks predicts, robots will be mingling with humans, routinely and safely. “With the current standards, we have to have it,” he said of the e-stop button. “But at some point we have to get over it.”

What kind of work will Baxter and its ilk perform? Rethink, which is manufacturing Baxter in New Hampshire, has secretly tested prototypes at a handful of small companies around the country where manufacturing and assembly involve repetitive tasks. It estimates that the robots can work for the equivalent of about $4 an hour.

“It fit in with our stable of equipment and augmented the robots we already have,” said Chris Budnick, president of Vanguard Plastics, a 30-person company in Southington, Conn., that makes custom-molded components.

Employees whose menial tasks are done by robots are not being laid off, he said, but assigned to jobs that require higher-level skills — including training the robots to work on manufacturing lines with short production runs where the tasks change frequently.

“Our folks loved it and they felt very comfortable with it,” Mr. Budnick said. “Even the older folks didn’t perceive it as a threat.”

Other efforts are under way to design robots that interact safely with human workers. Universal Robots, a Danish firm, has introduced a robot arm that does not need to be put in a glass cage — though the system requires a skilled programmer to operate.

And late last year Javier Movellan, director of the Machine Perception Laboratory at the University of California, San Diego, traveled to Tijuana, Mexico, where he took videos of workers in factories where jobs have been outsourced from the United States.

He wanted to study how the workers used their hands in an array of tasks, from woodworking to making automobile parts. After he returned to the United States, Mr. Movellan analyzed the videos with other scientists and realized that assembly workers used their hands in ways fundamentally different from those of today’s grasping robots.

“For humans it is very difficult to repeat the same movement twice,” Mr. Movellan said. “If they grasp an object, they will do it differently each time.”

In contrast to the fixed repetitive tasks performed by today’s robot arms and hands, scientists at the University of California, San Diego, and the University of Washington have built several prototype hands with pliable fingers that can move as quickly as a human’s.

The research group has set up collaborative arrangements with the Mexican factories, known as maquiladoras, where they will be able to test their new robots.

“Despite decades of automation, there are relatively few types of tasks that have been automated,” said Emanuel Todorov, a cognitive scientist at Washington.

This is now changing rapidly as a new wave of manufacturing robots appears, driven by the collapsing cost of computing and the rapid emergence of inexpensive sensors that give robots new powers of vision and touch.

“The big hot button in the robotics industry is to get people and robots to work together,” said David Bourne, a roboticist at Carnegie Mellon University. “The big push is to make robots safe for people to work around.”

Rethink itself has made a significant effort to design a robot that mimics biological systems. The concept is called behavioral robotics, a design approach that was pioneered by Dr. Brooks in the 1990s and was used by NASA to build an early generation of vehicles that explored Mars.

Dr. Brooks first proposed the idea in 1989 in a paper titled “Fast, Cheap and Out of Control: A Robot Invasion of the Solar System.” Rather than sending a single costly explorer with a traditional, expensive artificial-intelligence-based control system, NASA could dispatch fleets of inexpensive robots to explore like insects. It helped lead to Sojourner, an early Mars vehicle.

The next generation of robots will increasingly function as assistants to human workers, freeing them for functions like planning, design and troubleshooting.

Rethink’s strategy calls for the robot to double as a “platform,” a computerized system to which other developers can add both hardware devices and software applications for particular purposes. It is based on open-source software efforts — including the Robot Operating System, or ROS, developed by the Silicon Valley company Willow Garage, and a separate project called OpenCV, or Open Source Computer Vision Library.

That will make it possible for independent developers to extend the system in directions that Rethink hasn’t considered, much in the same way the original Apple II computer had slots for additional peripheral cards.

“We will publish an interface for the end of the wrist,” Dr. Brooks said. That means that while Baxter comes with a simple hand, or “end effector,” users will be able to adapt the system with more complex and capable hands to perform tasks that require greater dexterity.

A version of this article appeared in print on September 18, 2012, on page D1 of the New York edition with the headline: A Robot With a Reassuring Touch.


Innovations on Campus (infographic; image not preserved)

Taken From Online Universities

domingo, 16 de septiembre de 2012

How Computerized Tutors Are Learning to Teach Humans

Neil Heffernan was listening to his fiancée, Cristina Lindquist, tutor one of her students in mathematics when he had an idea. Heffernan was a graduate student in computer science, and by this point — the summer of 1997 — he had been working for two years with researchers at Carnegie Mellon University on developing computer software to help students improve their skills. But he had come to believe that the programs did little to assist their users. They were built on elaborate theories of the student mind — attempts to simulate the learning brain. Then it dawned on him: what was missing from the programs was the interventions teachers made to promote and accelerate learning. Why not model a computer program on a human tutor like Lindquist?

Over the next few months, Heffernan videotaped Lindquist, who taught math to middle-school students, as she tutored, transcribing the sessions word for word, hoping to isolate what made her a successful teacher. A look at the transcripts suggests the difficulties he faced. Lindquist’s tutoring sessions were highly interactive: a single hour might contain more than 400 lines of dialogue. She asked lots of questions and probed her student’s answers. She came up with examples based on the student’s own experiences. She began sentences, and her student completed them. Their dialogue was anything but formulaic.

Lindquist: Do you know how to calculate average driving speed?

Student: I think so, but I forget.

Lindquist: Well, average speed — as your mom drove you here, did she drive the same speed the whole time?

Student: No.

Lindquist: But she did have an average speed. How do you think you calculate the average speed?

Student: It would be hours divided by 55 miles.

Lindquist: Which way is it? It’s miles per hour. So which way do you divide?

Student: It would be 55 miles divided by hours.
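The arithmetic Lindquist is steering the student toward is just unit analysis: "miles per hour" means miles divided by hours, so the distance goes on top. A one-line check (the numbers extend the 55-miles example in the dialogue):

```python
def average_speed(miles: float, hours: float) -> float:
    """Average speed = distance / time, in miles per hour."""
    return miles / hours

print(average_speed(55.0, 1.0))   # 55 miles in 1 hour  -> 55.0 mph
print(average_speed(110.0, 2.0))  # 110 miles in 2 hours -> 55.0 mph
```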

As the session continued, Lindquist gestured, pointed, made eye contact, modulated her voice. “Cruising!” she exclaimed, after the student answered three questions in a row correctly. “Did you see how I had to stop and think?” she inquired, modeling how to solve a problem. “I can see you’re getting tired,” she commented sympathetically near the end of the session. How could a computer program ever approximate this?

In a 1984 paper that is regarded as a classic of educational psychology, Benjamin Bloom, a professor at the University of Chicago, showed that being tutored is the most effective way to learn, vastly superior to being taught in a classroom. The experiments headed by Bloom randomly assigned fourth-, fifth- and eighth-grade students to classes of about 30 pupils per teacher, or to one-on-one tutoring. Children tutored individually performed two standard deviations better than children who received conventional classroom instruction — a huge difference.
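To see how large a two-standard-deviation gain is, assume test scores are roughly normally distributed: an average student (50th percentile) moved up two standard deviations lands near the 98th percentile. A quick check with the standard normal CDF:

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Percentile reached by a student moved from z = 0 to z = 2:
print(round(normal_cdf(2.0) * 100, 1))  # ≈ 97.7th percentile
```

The normality assumption is mine, not Bloom's, but it conveys the scale: a gain that moves a typical student past nearly 98 percent of a conventionally taught class.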

Affluent American parents have since come to see the disparity Bloom identified as a golden opportunity, and tutoring has ballooned into a $5 billion industry. Among middle- and high-school students enrolled in New York City’s elite schools, tutoring is a common practice, and the most sought-after tutors can charge as much as $400 an hour.

But what of the pupils who could most benefit from tutoring — poor, urban, minority? Bloom had hoped that traditional teaching could eventually be made as effective as tutoring. But Heffernan was doubtful. He knew firsthand what it was like to grapple with the challenges of the classroom. After graduating from Amherst College, he joined Teach for America and was placed in an inner-city middle school in Baltimore. Some of his classes had as many as 40 students, all of them performing well below grade level. Discipline was a constant problem. Heffernan claims he set a school record for the number of students sent to the principal’s office. “I could barely control the class, let alone help each student,” Heffernan told me. “I wasn’t ever going to make a dent in this country’s educational problems by teaching just a few classes of students at a time.”

Heffernan left teaching, hoping that some marriage of education and technology might help “level the playing field in American education.” He decided that the only way to close the persistent “achievement gap” between white and minority, high- and low-income students was to offer universal tutoring — to give each student access to his or her own Cristina Lindquist. While hiring a human tutor for every child would be prohibitively expensive, the right computer program could make this possible.

So Heffernan forged ahead, cataloging more than two dozen “moves” Lindquist made to help her students learn (“remind the student of steps they have already completed,” “encourage the student to generalize,” “challenge a correct answer if the tutor suspects guessing”). He incorporated many of these tactics into a computerized tutor — called “Ms. Lindquist” — which became the basis of his doctoral dissertation. When he was hired as an assistant professor at Worcester Polytechnic Institute in Massachusetts, Heffernan continued to work on the program, joined in his efforts by Lindquist, now his wife, who also works at W.P.I. Together they improved the tutor, which they renamed ASSISTments (it assists students while generating an assessment of their progress). Seventeen years after Heffernan first set up his video camera, the computerized tutor he designed has been used by more than 100,000 students, in schools all over the country. “I look at this as just a start,” he told me. But, he added confidently, “we are closing the gap with human tutors.”

Grafton Middle School, a public school in a prosperous town a few miles outside Worcester, has been using ASSISTments since 2010. Last spring, I visited the home of Tyler Rogers, a tall boy with reddish-blond hair who was just finishing seventh grade at Grafton and who used the program to do his math homework. (While ASSISTments has made a few limited forays into tutoring other subjects, it is almost entirely dedicated to teaching math.) His teachers described him as “conscientious” and “mature,” but he had struggled in his pre-algebra class that year. “Sometime last fall, it started to get really hard,” he said as he opened his laptop.

Tyler breezed through the first part of his homework, but 10 questions in he hit a rough patch. “Write the equation in function form: 3x-y=5,” read the problem on the screen. Tyler worked the problem out in pencil first and then typed “5-3x” into the box. The response was instantaneous: “Sorry, wrong answer.” Tyler’s shoulders slumped. He tried again, his pencil scratching the paper. Another answer — “5/3x” — yielded another error message, but a third try, with “3x-5,” worked better. “Correct!” the computer proclaimed.

ASSISTments incorporates many of the findings made by researchers who, spurred by the 1984 Bloom study, set out to discover what tutors do that is so helpful to student learning. First and foremost, they concluded, tutors provide immediate feedback: they let students know whether what they’re doing is right or wrong. Such responsiveness keeps students on track, preventing them from wandering down “garden paths” of unproductive reasoning.

The second important service tutors provide, researchers discovered, is guiding students’ efforts, offering nudges in the right direction. ASSISTments provides this, too, in the form of a “hint” button. Tyler chose not to use it that evening, but if he had, he would have been given a series of clues to the right answer, “scaffolded” to support his own problem-solving efforts. For the answer “5-3x,” the computer responded: “You need to take a closer look at your signs. Notice there is a minus in front of the ‘y.’ ”
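The feedback-and-hints loop described above can be sketched in a few lines. This is not ASSISTments' actual code; the problem data, function names, and the second and third hints are illustrative (the first hint echoes the one quoted above), and a real system would track state per student rather than pass it in.

```python
# A minimal sketch of immediate feedback with "scaffolded" hints.
# Illustrative only: problem structure and hint text beyond the first
# are invented, not taken from ASSISTments itself.

PROBLEM = {
    "prompt": "Write the equation in function form: 3x - y = 5",
    "answer": "3x-5",  # i.e., y = 3x - 5
    "hints": [  # revealed one at a time, most general first
        "You need to take a closer look at your signs. "
        "Notice there is a minus in front of the 'y'.",
        "Move the y term by itself: -y = 5 - 3x.",
        "Multiply both sides by -1: y = 3x - 5.",
    ],
}

def check(problem, response, hints_used=0):
    """Return (is_correct, feedback). Feedback is immediate: either a
    confirmation, or the next scaffolded hint if any remain."""
    if response == problem["answer"]:
        return True, "Correct!"
    if hints_used < len(problem["hints"]):
        return False, "Sorry, wrong answer. Hint: " + problem["hints"][hints_used]
    return False, "Sorry, wrong answer."

ok, msg = check(PROBLEM, "5-3x")   # wrong answer -> first hint
ok, msg = check(PROBLEM, "3x-5")   # correct answer -> confirmation
```

Each wrong attempt advances `hints_used`, so the student gets progressively more specific guidance while still doing the work.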

Tyler’s father, Chris Rogers, who manages complex networks of computers for a living, is pleased that his son’s homework employs technology. “Everyone works with computers these days,” he told me later. “Tyler might as well get used to using them now.” But his mother, Andrea, is more skeptical. Andrea is studying for a master’s in education and plans to become an elementary-school teacher. She is not opposed to the use of educational technology, but she objects to the flat affect of ASSISTments. In contrast to a human tutor, who has a nearly infinite number of potential responses to a student’s difficulties, the program is equipped with only a few. If a solution to a problem is typed incorrectly — say, with an extra space — the computer stubbornly returns the “Sorry, incorrect answer” message, though a human would recognize the answer as right. “In the beginning, when Tyler was first learning to use ASSISTments, there was a lot of frustration,” Andrea says. “I would sit there with him for hours, helping him. A computer can’t tell when you’re confused or frustrated or bored.”
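The brittleness Andrea describes, a right answer rejected over an extra space, is what raw string comparison produces. A sketch of how light normalization narrows the gap (function names are illustrative; a production system would go further, parsing the expression so that, say, "3x-5" and "-5+3x" compare equal):

```python
# Comparing raw strings marks "3x - 5" wrong against "3x-5".
# Stripping whitespace and case before comparing fixes that narrow case.

def naive_match(expected, response):
    # Exact string equality: an extra space means "Sorry, wrong answer."
    return expected == response

def normalized_match(expected, response):
    # Ignore spacing and letter case before comparing.
    canon = lambda s: "".join(s.split()).lower()
    return canon(expected) == canon(response)

naive_match("3x-5", "3x - 5")       # False: rejected over spacing alone
normalized_match("3x-5", "3x - 5")  # True
```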

Heffernan, as it happens, is working on that. Dealing with emotion — helping students regulate their feelings, quelling frustration and rousing flagging morale — is the third important function that human tutors fulfill. So Heffernan, along with several researchers at W.P.I. and other institutions, is working on an emotion-sensitive tutor: a computer program that can recognize and respond to students’ moods. One of his collaborators on the project is Sidney D’Mello, an assistant professor of psychology and computer science at the University of Notre Dame.

“The first thing we had to do is identify which emotions are important in tutoring, and we found that there are three that really matter: boredom, frustration and confusion,” D’Mello said. “Then we had to figure out how to accurately measure those feelings without interrupting the tutoring process.” His research has relied on two methods of collecting such data: applying facial-expression recognition software to spot a furrowed brow or an expression of slack disengagement; and using a special chair with posture sensors to tell whether students are leaning forward with interest or lolling back in boredom. Once the student’s feelings are identified, the thinking goes, the computerized tutor could adjust accordingly — giving the bored student more challenging questions or reviewing fundamentals with the student who is confused.

Of course, as D’Mello puts it, “we can’t install a $20,000 butt-sensor chair in every school in America.” So D’Mello, along with Heffernan, is working on a less elaborate, less expensive alternative: judging whether a student is bored, confused or frustrated based only on the pattern of his or her responses to questions. Heffernan and a collaborator at Columbia’s Teachers College, Ryan Baker, an expert in educational data mining, determined that students enter their answers in characteristic ways: a student who is bored, for example, may go for long stretches without answering any problems (he might be talking to a fellow student, or daydreaming) and then will answer a flurry of questions all at once, getting most or all correct. A student who is confused, by contrast, will spend a lot of time on each question, resort to the hint button frequently and get many of the questions wrong.

“Right now we’re able to accurately identify students’ emotions from their response patterns at a rate about 30 percent better than chance,” Baker says. “That’s about where the video cameras and posture sensors were a few years ago, and we’re optimistic that we can get close to their current accuracy rates of about 70 percent better than chance.” Human judges of emotion, he notes, reach agreement on what other people are feeling about 80 percent of the time.

Heffernan is also experimenting with ways that computers can inject emotion into the tutoring exchange — by flashing messages of encouragement, for example, or by calling up motivational videos recorded by the students’ teachers. The aim, he says, is to endow his computerized tutor “with the qualities of humans that help other humans learn.”

But is humanizing computers really the best way to supply students with effective tutors? Some researchers, like Ken Koedinger, a professor of human-computer interaction and psychology at Carnegie Mellon University, take a different view from Heffernan’s: computerized tutors shouldn’t try to emulate humans, because computers may well be the superior teachers. Koedinger has been working on computerized tutors for almost three decades, using them not only to help students learn but also to collect data about how the learning process works. Every keystroke a student makes — every hesitation, every hint requested, every wrong answer — can be analyzed for clues to how the mind learns. A program Koedinger helped design, Cognitive Tutor, is currently used by more than 600,000 students in 3,000 school districts around the country, generating a vast supply of data for researchers to mine. (The program is owned by a company called Carnegie Learning, which was sold to the Apollo Group last year for $75 million; Apollo also owns the for-profit University of Phoenix.)

Koedinger is convinced that learning is so unfathomably complex that we need the data generated by computers to fully understand it. “We think we know how to teach because humans have been doing it forever,” he says, “but in fact we’re just beginning to understand how complicated it is to do it well.”

As an example, Koedinger points to the spacing effect. Decades of research have demonstrated that people learn more effectively when their encounters with information are spread out over time, rather than massed into one marathon study session. Some teachers have incorporated this finding into their classrooms — going over previously covered material at regular intervals, for instance. But optimizing the spacing effect is a far more intricate task than providing the occasional review, Koedinger says: “To maximize retention of material, it’s best to start out by exposing the student to the information at short intervals, gradually lengthening the amount of time between encounters.” Different types of information — abstract concepts versus concrete facts, for example — require different schedules of exposure. The spacing timetable should also be adjusted to each individual’s shifting level of mastery. “There’s no way a classroom teacher can keep track of all this for every kid,” Koedinger says. But a computer, with its vast stores of memory and automated record-keeping, can. Koedinger and his colleagues have identified hundreds of subtle facets of learning, all of which can be managed and implemented by sophisticated software.

Yet some educators maintain that however complex the data analysis and targeted the program, computerized tutoring is no match for a good teacher. It’s not clear, for instance, that Koedinger’s program yields better outcomes for students. A review conducted by the Department of Education in 2010 concluded that the product had “no discernible effects” on students’ test scores, while costing far more than a conventional textbook, leading critics to charge that Carnegie Learning is taking advantage of teachers and administrators dazzled by the promise of educational technology. Koedinger counters that “many other studies, mostly positive,” have affirmed the value of the Carnegie Learning program. “I’m confident that the program helps students learn better than paper-and-pencil homework assignments.”

Heffernan isn’t susceptible to the criticism that he is profiting from school districts, because he gives ASSISTments away free. And so far, the small number of preliminary, peer-reviewed studies he has conducted on his program support its value: one randomized controlled trial found that the use of the computerized tutor improved students’ performance in math by the equivalent of a full letter grade over the performance of pupils who used paper and pencil to do their homework.

But Heffernan does face one serious hurdle: any student who wishes to use ASSISTments needs a computer and Internet access. More than 20 percent of U.S. households are not equipped with a computer; about 30 percent have no broadband connection. Heffernan originally hoped to try ASSISTments out in Worcester’s mostly urban school district, but he had to scale back the program when he found that few students were consistently able to use a computer at home. So ASSISTments has mainly been adopted by affluent suburban schools like Grafton Middle School and Bellingham Memorial Middle School in Massachusetts — populated, Heffernan said ruefully, by students who already have the advantages of high-functioning schools and educated, involved parents. But, he told me brightly, he recently received a grant from the Department of Education to supply ASSISTments to almost 10,000 public-school students in Maine — a largely poor, largely rural state in which all schoolchildren nonetheless own a laptop, thanks to a state initiative. Heffernan hopes that by raising the Maine students’ test scores with ASSISTments, he will inspire more officials in states around the country to see the virtue of making tutoring universal.

The morning after I watched Tyler Rogers do his homework, I sat in on his math class at Grafton Middle School. As he and his classmates filed into the classroom, I talked with his teacher, Kim Thienpont, who has taught middle school for 10 years. “As teachers, we get all this training in ‘differentiated instruction’ — adapting our teaching to the needs of each student,” she said. “But in a class of 20 students, with a certain amount of material we have to cover each day, how am I really going to do that?”

ASSISTments, Thienpont told me, made this possible, echoing what I heard from another area math teacher, Barbara Delaney, the day before. Delaney teaches sixth-grade math in nearby Bellingham. Each time her students use the computerized tutor to do their homework, the program collects data on how well they’re doing: which problems they got wrong, how many times they used the hint button. The information is automatically collated into a report, which is available to Delaney on her own computer before the next morning’s class. (Reports on individual students can be accessed by their parents.) “With ASSISTments, I know none of my students are falling through the cracks,” Delaney told me.

After completing a few warm-up problems on their school’s iPod Touches, the students turned to the front of the room, where Thienpont projected a spreadsheet of the previous night’s homework. Like stock traders going over the day’s returns, the students scanned the data, comparing their own grades with the class average and picking out the problems that gave their classmates trouble. (“If you got a question wrong, but a lot of other people got it wrong, too, you don’t feel so bad,” Tyler explained.)

Thienpont began by going over “common wrong answers” — incorrect solutions that many students arrived at by following predictable but mistaken lines of reasoning. Or perhaps not so predictable. “Sometimes I’m flabbergasted by the thing all the students get wrong,” Thienpont said. “It’s often a mistake I never would have expected.” Human teachers and tutors are susceptible to what cognitive scientists call the “expert blind spot” — once we’ve mastered a body of knowledge, it’s hard to imagine what novices don’t know — but computers have no such mental block. Highlighting “common wrong answers” allows Thienpont to address shared misconceptions without putting any one student on the spot.

I saw another unexpected effect of computerized tutoring in Delaney’s Bellingham classroom. After explaining how to solve a problem that many got wrong on the previous night’s homework, Delaney asked her students to come up with a hint for the next year’s class. Students called out suggested clues, and after a few tries, they arrived at a concise tip. “Congratulations!” she said. “You’ve just helped next year’s sixth graders learn math.” When Delaney’s future pupils press the hint button in ASSISTments, the former students’ advice will appear.

Unlike the proprietary software sold by Carnegie Learning, or by education-technology giants like Pearson, ASSISTments was designed to be modified by teachers and students, in a process Heffernan likens to the crowd-sourcing that created Wikipedia. His latest inspiration is to add a button to each page of ASSISTments that will allow students to access a Web page where they can get more information about, say, a relevant math concept. Heffernan and his W.P.I. colleagues are now developing a system of vetting and ranking the thousands of math-related sites on the Internet.

For all his ambition, Heffernan acknowledges that this technology has limits. He has a motto: “Let computers do what computers are good at, and people do what people are good at.” Computers excel in following a precise plan of instruction. A computer never gets impatient or annoyed. But it never gets excited or enthusiastic either. Nor can a computer guide a student through an open-ended exploration of literature or history. It’s no accident that ASSISTments and other computerized tutoring systems have focused primarily on math, a subject suited to computers’ binary language. While a computer can emulate, and in some ways exceed, the abilities of a human teacher, it will not replace her. Rather, it’s the emerging hybrid of human and computer instruction — not either one alone — that may well transform education.

Near the end of my visit to Worcester, I told Heffernan about a scene I witnessed in Barbara Delaney’s class. She had divided her sixth graders into what she called “flexible groups” — groupings of students by ability that shift daily depending on the data collected in her ASSISTments report. She walked over to the group that struggled the most with the previous night’s homework and talked quietly with one girl who looked on the brink of tears. Delaney pointed to the girl’s notebook, then to the ASSISTments spreadsheet projected on a “smart” board at the front of the room. She touched the girl’s shoulder; the student lifted her face to her teacher and managed a crooked smile.

When I finished recounting the incident, Heffernan sat back in his chair. “That’s not anything we put into the tutoring system — that’s something Barbara brings to her students,” he remarked. “I wish we could put that in a box.”

Annie Murphy Paul is the author of “Origins: How the Nine Months Before Birth Shape the Rest of Our Lives” and is at work on a book about the science of learning.

Editor: Sheila Glaser

A version of this article appeared in print on September 16, 2012, on page MM50 of the Sunday Magazine with the headline: The Machines Are Taking Over.