Does the Quest to Program an Intelligent Robot Provide a Model for 21st Century Education?


Processing

Learning is a system with inputs, processes and outputs. There has been much discussion in recent years around how we need to improve the inputs so that the outputs become more relevant and meaningful in the modern world. We know we need to change how we teach so that our students can become confident, innovative, independent and high-functioning individuals in a globalized world. We call these initiatives twenty-first century education.

But seldom do we discuss the processes which take place between the inputs and outputs in this systems view of education. We presume that if the inputs are teaching methodologies which are child-centered, relevant and focused, then the outputs will be kids who embody all those wonderful outcomes we all aim for (see image immediately below). But as to the actual processes involved, we say very little beyond touching on the importance of metacognition and trying to apply the lessons modern neuroscience holds for twenty-first century learning. But there is more to the process of cognition than this. Much, much more. And I think one of the best ways to get at what these processes involve is to look at the fields of artificial intelligence and robotics.

How to Create an AI

The holy grail for working roboticists is the development of artificial intelligence. They have become extremely clever at writing sophisticated code as input, and have developed amazing hardware through which a robot can output the resulting commands, but it is to the arcane world of processing that they have turned their attention in order to make their robots function more intelligently. Put simply, roboticists are having to look ever more closely at the nature of human intelligence in order to replicate it in their machines. This is no easy task. To date, no machine has convincingly passed what is known as the Turing test for artificial intelligence – a test in which a panel of evaluators tries to distinguish a machine’s responses from a human being’s.

I have no doubt that these scientists will succeed at creating an artificial intelligence in my lifetime. Whether it will be a Skynet or a Multivac is a matter for philosophers, writers, movie makers and government advisors. What fascinates me, and what should fascinate every other teacher, is how, in their quest to replicate human intelligence, researchers are pulling it apart piece by piece in order to understand it better.

Because these researchers are concerned with what works, they draw their inspiration from far and wide. They are not interested in fluffy theories – they want practical results, and so they immediately discard what has no practical use in favor of what does. And they draw from a wide range of fields, including neuroscience, philosophy, psychology, mathematics, economics and educational research. After distilling all of this down, roboticists are arriving at some beautifully simple insights into human intelligence – specifically, how we best process inputs to turn them into useful outputs. These are lessons twenty-first century teachers would do well to keep abreast of because, although we do not see our jobs as analogous to programming robots (at least in the old mechanical sense), we are very much concerned with intelligence, and how to improve it.

 

What the Quest to Develop AI Can Teach Us About Education

What follows draws on a recent talk I attended by a young South African roboticist named Benjamin Rosman. Benji obtained his Ph.D. at the Institute of Perception, Action and Behaviour (IPAB) in the School of Informatics at the University of Edinburgh. His research interests include artificial intelligence, decision theory and machine learning. Or, as he puts it:

My research interests focus on aspects of Intelligent Robotics, particularly intelligent decision making and learning deep structure in strategies for surviving in nonstationary and adversarial worlds. This includes investigating approaches to automated discovery of novel concepts, through state and action abstraction, as well as representation learning. A major theme of my work is transfer learning. (http://www.benjaminrosman.com/research.html)

Benjamin currently works as a Senior Researcher in the Mobile Intelligent Autonomous Systems Group at the Council for Scientific and Industrial Research (CSIR), South Africa.

In his talk to teachers, Benjamin simplified a lot of what he does and showed how it is relevant to teaching. What follows is what I took away from his talk. (Any errors in understanding are mine and mine alone – remember, I am but a lowly Arts graduate!)

1. Adaptive Learning

A big part of trying to get robots to think like people is trying to develop what’s called adaptive learning. In essence, this means that there is a feedback loop where outputs are fed back as inputs and processed to provide new, adjusted outputs. In an ideal world, the human mind adjusts and processes according to new information so that we don’t simply respond in the same ways when situations, contexts, feedback from our senses and other information change. We adjust and adapt on the fly. For machines, this is much more difficult, and it is one of the key foci of AI research. And in education, adaptive learning is just as essential. As teachers, we need to adjust our rigid learning plans, syllabi, assessments and delivery methods in accordance with feedback from our students. We should also realise that one size does not fit all: different students learn at different paces, and they need a customized, fuzzy learning plan which adjusts as they do.
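The feedback loop described above can be sketched in a few lines of code. Everything here – the function name, the 0.7 mastery target, the 0.1 step size – is an invented illustration, not anything from actual AI research:

```python
# A minimal sketch of an adaptive feedback loop: the learner's latest output
# (a quiz score) is fed back as an input that adjusts the next round's
# difficulty. Threshold and step size are arbitrary illustrative values.

def adjust_difficulty(difficulty, score, target=0.7, step=0.1):
    """Nudge difficulty up when the learner beats the target, down otherwise."""
    if score >= target:
        return min(1.0, difficulty + step)
    return max(0.0, difficulty - step)

difficulty = 0.5
for score in [0.9, 0.8, 0.4, 0.6, 0.85]:  # simulated feedback from a student
    difficulty = adjust_difficulty(difficulty, score)
    print(f"score={score:.2f} -> next difficulty={difficulty:.2f}")
```

The point of the sketch is simply that the plan is never fixed: each output changes the next input, which is exactly what a rigid syllabus fails to do.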

2. Pattern Recognition

Pattern recognition is what makes great chess players so exceptional. Contrary to popular belief, they are not able to see scores of moves into the future. But they are able to recognize a particular configuration on the chess board that they’ve seen before – and to play the best sequence of following moves to maximize their advantage. We mere mortals do the same thing every day – it’s a large part of how our brains work, and it has a great deal to do with how they deceive us. Pattern recognition involves making connections and identifying broad themes before making an intensive analysis. It also involves ‘chunking’ clusters of information together in order to make memorizing them easier. Getting a robot to do this more effectively is a big part of current research (and involves developing the ability to form associative memories). Teaching pattern recognition in class is tantamount to teaching kids to get a broad, connected view of a topic. Pattern recognition also teaches kids case-based reasoning – where the solution to one problem can provide some of the information required to solve the next. Finally, it teaches them the twin skills of synthesis and analysis, as well as the ability to think on a ‘meta level’, all of which provides them with an essential scaffolding for their own learning.
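The ‘chunking’ idea is easy to make concrete. A minimal sketch (the function and the sample digits are my own invention, purely for illustration):

```python
# 'Chunking' groups a long stream of information into memorable clusters,
# the way a ten-digit phone number is easier to recall as three or four
# groups than as ten separate digits.

def chunk(items, size):
    """Split a flat sequence into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

digits = "0825550199"
print(chunk(digits, 3))  # → ['082', '555', '019', '9']
```

Three or four chunks are far easier to hold in working memory than ten loose digits – the same trick our brains pull on a chess board.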

3. Signal Filtering

Discarding information which is not useful is as important as retaining and using information which is. Our brains mostly do this automatically to prevent being flooded with irrelevant information. But sometimes this can come at a price – as when we miss important information our brains dismissed as irrelevant at the time. Machines need to be taught to discriminate between what is important and useful and what is not – and often, the choice as to what information to ignore is as important as the stuff they will use. In the age of information overload, teaching kids to think critically and discriminately about the information which comes at them is an essential skill. But we also need to teach them to open their minds to questions and problems they may have heretofore ignored. Teaching kids the value of questioning their own assumptions regarding what is important and what is not is thus a core aspect of a twenty-first century education.
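As a toy illustration of signal filtering – the items, relevance scores and threshold below are entirely made up:

```python
# Signal filtering: keep what crosses a relevance threshold, discard the rest.
# Deciding where to set the threshold is itself the hard (and risky) part:
# set it too high and important information gets dismissed as noise.

def filter_signal(items, threshold=0.5):
    """Keep only the items whose relevance score meets the threshold."""
    return [name for name, relevance in items if relevance >= threshold]

inbox = [("exam timetable", 0.9), ("spam offer", 0.1), ("project brief", 0.8)]
print(filter_signal(inbox))  # → ['exam timetable', 'project brief']
```

Note that the filter is only as good as its threshold – which is precisely why questioning our own assumptions about what counts as relevant matters so much.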

4. Procedural Expression

Simplifying complex information so that it can be effectively processed is another important aspect of AI research. Posing complex information as simply as possible makes the processing easier and is more likely to result in better outputs. This does not mean that information gets dumbed down, but rather that it is presented in a clean, concise way, so that it can be processed more meaningfully. The lesson here for teachers and students is to focus on making learning digestible by tidying up vague and ‘dirty’ data and emphasizing logical, clear and crisp procedural instruction.
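Tidying ‘dirty’ data before processing it can be sketched like this (the messy subject list and the clean-up steps are invented for illustration):

```python
# 'Tidying dirty data': normalise messy, inconsistent input before the real
# processing step, so that downstream work operates on clean, uniform values.

def tidy(raw_values):
    """Strip stray whitespace, drop blanks, standardise case, deduplicate."""
    return sorted({value.strip().lower() for value in raw_values if value.strip()})

messy = ["  Maths", "maths ", "", "Science", "  "]
print(tidy(messy))  # → ['maths', 'science']
```

Five messy entries collapse into two clean ones – nothing is dumbed down, but the processing that follows becomes far simpler.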

5. Output Focus

AI research is focused on having robots and computers do something useful in the world. It isn’t about learning for the sake of learning. Teaching and learning in the classroom needs to be the same. There is little value in nurturing creativity and independence if it is not aimed (at least in some small way) at making a better world. Any learning process should thus make real world application a priority.

6. Connectedness

Connected, collaborative research is the hallmark of modern science, and robotics research is no different. Sharing ideas and findings enriches everyone and speeds up progress in the field. The analogies to the processes of learning and teaching are obvious.

7. Lifelong Learning

Getting a robot to be intelligent at a single task or in a single situation is not true AI. For it to be genuinely intelligent, a robot must continue learning in both a broader and deeper sense. Schools which focus on test, exam and term results are similarly not nurturing true intelligence. Stimulating curiosity and a desire to figure things out is true learning because it is pitched at long term success instead of short term benchmarking.

8. Failing and Prototyping

Robotics engineers are not afraid of getting things wrong. Eliminating possibilities is as essential to the process of developing AI as pursuing what works. Additionally, most failures will point to a new avenue of research. It is very much the scientific method in action: science advances as much because of its failures as its successes. Schools have become so paranoid about allowing failure that they limit the plethora of learning opportunities which students might have had. And the students themselves are so paranoid about being stigmatized as a ‘failure’ that they are not prepared to tinker and experiment and try alternative answers. Twenty-first century schools need to provide opportunities for kids to fail safely, and to encourage them to reflect more often and in greater depth on the lessons they have learnt in the process.

A Note on the Three Laws of Robotics

In many of his stories, the prolific sci-fi writer Isaac Asimov puts forward his three laws of robotics:

    • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    • A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
    • A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
    • (A fourth law called the ‘Zeroth Law’ was added later and reads: A robot may not harm humanity, or by inaction, allow humanity to come to harm.)

Contrary to what many people who have not read Asimov might believe, these rules were not set up as the one perfect, inflexible code of ethics an artificially aware being would need. Instead, almost all of Asimov’s robot stories (and there are many) explore what happens when one or more of the laws contradict one another, or prove insufficient – and the moral issues and paradoxes that result.

For me, the laws of robotics provide a vital analogy to developing human minds in that an essential part of a modern education needs to address moral issues. That is not to say that kids need to be given hard and fast rules, but rather that they get to explore controversial and difficult moral issues in order to develop their own code of ethics. Hence, modern teaching must target hearts as well as minds, and the ‘zeroth’ law of teaching must be to address issues of morality (in a non-religious sense) and the students’ own emotional and personal well-being.

Peace.

Sean.

(All images are screenshots from the motion picture A.I. Artificial Intelligence, directed by Steven Spielberg)

     

     

     
