
The Rise of Intelligent Machines: Part 1

Revolutionary algorithms now let robots mimic the human brain. They can diagnose disease, drive cars and fly planes. But as computers leap forward, are we on the verge of creating a new life-form – or something much darker?

“Welcome to robot nursery school,” Pieter Abbeel says as he opens the door to the Robot Learning Lab on the seventh floor of a sleek new building on the northern edge of the UC-Berkeley campus. The lab is chaotic: bikes leaning against the wall, a dozen or so grad students in disorganised cubicles, whiteboards covered with indecipherable equations. Abbeel, 38, is a thin, wiry guy, dressed in jeans and a stretched-out T-shirt. He moved to the U.S. from Belgium in 2000 to get a Ph.D. in computer science at Stanford and is now one of the world’s foremost experts on the challenge of teaching robots to think intelligently. But first, he has to teach them to “think” at all. “That’s why we call this nursery school,” he jokes.

He introduces me to Brett, a six-foot-tall humanoid robot made by Willow Garage, a high-profile Silicon Valley robotics manufacturer that is now out of business. The lab acquired the robot several years ago to experiment with. Brett, which stands for “Berkeley robot for the elimination of tedious tasks”, is a friendly-looking creature with a big, flat head and widely spaced cameras for eyes, a chunky torso, two arms with grippers for hands and wheels for feet. At the moment, Brett is off-duty and stands in the centre of the lab with the mysterious, quiet grace of an unplugged robot. On the floor nearby is a box of toys that Abbeel and the students teach Brett to play with: a wooden hammer, a plastic toy airplane, some giant Lego blocks.

Brett is only one of many robots in the lab. In another cubicle, a nameless 45-centimetre-tall robot hangs from a sling on the back of a chair. Down in the basement is an industrial robot that plays in the equivalent of a robot sandbox for hours every day, just to see what it can teach itself. Across the street in another Berkeley lab, a surgical robot is learning how to stitch up human flesh, while a graduate student teaches drones to pilot themselves intelligently around objects. “We don’t want to have drones crashing into things and falling out of the sky,” Abbeel says. “We’re trying to teach them to see.”

Industrial robots have long been programmed with specific tasks: Move arm 15 centimetres to the left, grab module, twist to the right, insert module into PC board. Repeat 300 times each hour. These machines are as dumb as lawn mowers. But in recent years, breakthroughs in machine learning – algorithms that roughly mimic the human brain and allow machines to learn things for themselves – have given computers a remarkable ability to recognise speech and identify visual patterns. Abbeel’s goal is to imbue robots with a kind of general intelligence – a way of understanding the world so they can learn to complete tasks on their own. He has a long way to go. “Robots don’t even have the learning capabilities of a two-year-old,” he says. So far, Brett has learned to do simple tasks, such as tying a knot or folding laundry. Things that are simple for humans, such as recognising that a crumpled ball of fabric on a table is in fact a towel, are surprisingly difficult for a robot, in part because a robot has no common sense, no memory of earlier attempts at towel-folding and, most important, no concept of what a towel is. All it sees is a wad of colour. To get around this problem, Abbeel created a self-teaching method inspired by child-psychology tapes of kids constantly adjusting their approach as they solve tasks. Now, when Brett sorts through laundry, it does a similar thing: grabbing the wadded-up towel with its gripper hands, trying to get a sense of its shape and how to fold it. It sounds primitive, and it is. But then you think about it again: A robot is learning to fold a towel.

“The rise of smart machines raises serious questions we need to consider about who we are as humans,” Elon Musk says, “and what kind of future we are building for ourselves.”

All this is spooky, Frankenstein-land stuff. The complexity of tasks that smart machines can perform is increasing at an exponential rate. Where will this ultimately take us? If a robot can learn to fold a towel on its own, will it someday be able to cook you dinner, perform surgery, even conduct a war? Artificial intelligence may well help solve the most complex problems humankind faces, like curing cancer and tackling climate change – but in the near term, it is also likely to empower surveillance, erode privacy and turbocharge telemarketers. Beyond that, larger questions loom: Will machines someday be able to think for themselves, reason through problems, display emotions? No one knows. The rise of smart machines is unlike any other technological revolution because what is ultimately at stake here is the very idea of humanness – we may be on the verge of creating a new life form, one that could mark not only an evolutionary breakthrough, but a potential threat to our survival as a species. However it plays out, the revolution has begun.

Last U.S. summer, the Berkeley team installed a short-term-memory system into a simulated robot. Sergey Levine, a computer scientist who worked on the project, says they noticed “this odd thing”. To test the memory program in the robot, they gave it a command to put a peg into one of two openings, left or right. As a control, they tried the experiment again with no memory program – and to their surprise, the robot was still able to put the peg in the correct hole. Without memory, how did it remember where to put the peg? “Eventually, we realised that, as soon as the robot received the command, it twisted the arms toward the correct opening,” Levine says. Then, after the command disappeared, it could look at how its body was positioned to see which opening the peg should go to. In effect, the robot had figured out a way on its own to correctly execute the command. “It was very surprising,” says Levine. “And kinda unsettling.”

Abbeel leads me to his office, a windowless cubicle where he talks about a recent breakthrough made by DeepMind, an AI start-up that was purchased by Google for an estimated $400 million in 2014. A few years ago, DeepMind stunned people by teaching a computer to play Atari video games like Space Invaders far better than any human. But the amazing thing was it did so without programming the computer to understand the rules of the game. This was not like Deep Blue beating a human at chess, in which the rules of the game were programmed into it. All the computer knew was that its goal was to get a high score. Using a method called reinforcement learning, which is the equivalent of saying “good dog” whenever it did something right, the computer messed around with the game, learning the rules on its own. Within a few hours, it was able to play with superhuman skill. This was a major breakthrough in AI – the first time a computer had “learned” a complex skill by itself. Intrigued, researchers in Abbeel’s lab decided to try an experiment with a similar reinforcement-learning algorithm they had written to help robots learn to swim, hop and walk. How would it do playing video games? To their surprise, the algorithm, known as Trust Region Policy Optimisation, or TRPO, achieved results almost as good as the DeepMind algorithm. In other words, TRPO exhibited an ability to learn in a generalised way. “We discovered that TRPO can beat humans in video games,” Abbeel says. “Not just teach a robot to walk.” Abbeel pulls up a video. It’s a robot simulator.
In the opening frames, you see a robot collapsed on a black-and-white checkered floor. “Now remember, this is the same algorithm as the video games,” he says. The robot has been given three goals: Go as far as possible, don’t stomp your feet very hard and keep your torso above a certain height. “It doesn’t know what walking is,” Abbeel says. “It doesn’t know it has legs or arms – nothing like that. It just has a goal. It has to figure out how to achieve it.” Abbeel pushes a button, and the simulation begins. The robot flops on the floor, no idea what it’s doing. “In principle, it could have decided to walk or jump or skip,” Abbeel says. But the algorithm “learns” in real time that if it puts its legs beneath it, it can propel itself forward. It allows the robot to analyse its previous attempts, decipher which actions led to better results, and change its future behaviour accordingly. Soon it’s doddering around, swaying like a drunk. It plunges forward, falls, picks itself up, takes a few steps, falls again. But gradually it rises, and begins to stumble-run toward the goal. You can almost see it gaining confidence, its legs moving beneath it, now picking up speed. The robot doesn’t know it’s running. It was not programmed to run. But nevertheless, it is running. It has figured out by itself all the complex balance and limb control and physics. It is beyond surprising; it is magical. It’s like watching a fish evolve into a human being in 40 seconds. “The way the robot moves and begins to walk – it almost looks alive,” I say. Abbeel smiles. “Almost.”
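The reward-driven loop described above can be sketched in a few lines of code. The toy corridor world, the reward values and the tabular Q-learning method below are illustrative assumptions, not Berkeley’s TRPO or DeepMind’s Atari system; the sketch only shows how an agent that is never told the rules of its world can still learn a task from a reward signal, the coded equivalent of “good dog”.

```python
# A minimal reinforcement-learning sketch (tabular Q-learning on a toy corridor).
# The agent starts knowing nothing and is rewarded only when it reaches the far end.
import random

N_STATES = 6          # positions 0..5 in a corridor; the reward waits at position 5
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration rate

# Q[state][action] holds the agent's estimate of how good each action is; all zeros at first.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for _ in range(500):                      # 500 practice episodes
    state = 0
    while state != N_STATES - 1:
        # Mostly pick the best-looking action, but explore occasionally.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = max((0, 1), key=lambda a: Q[state][a])
        next_state = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0   # the "good dog" signal
        # Nudge the estimate toward the reward plus the discounted value of what follows.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

# After practice, the greedy choice at every position is "step right" (action index 1).
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```

Nothing in the code names the goal explicitly; the agent discovers the walk-to-the-right policy purely from which actions eventually led to reward, the same principle, scaled down enormously, that lets the Atari player and the simulated walker improve on their own.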

Learning to Be Human: Brett (Berkeley robot for the elimination of tedious tasks) is a humanoid robot that has taught itself how to build and sort objects. Credit: Spencer Lowell

Despite how it’s portrayed in books and movies, artificial intelligence is not a synthetic brain floating in a case of blue liquid somewhere. It is an algorithm – a set of mathematical instructions that tells a computer what to do (think of it as a cooking recipe for machines). Algorithms are to the 21st century what coal was to the 19th century: the engine of our economy and the fuel of our modern lives. Without algorithms, your phone wouldn’t work. There would be no Facebook, no Google, no Amazon. Algorithms schedule flights and then fly the airplanes, and help doctors diagnose diseases. “If every algorithm suddenly stopped working, it would be the end of the world as we know it,” writes Pedro Domingos in The Master Algorithm, a popular account of machine learning. In the world of AI, the Holy Grail is to discover the single algorithm that will allow machines to understand the world – the digital equivalent of the Standard Model that lets physicists explain the operations of the universe.

Mathematical algorithms have been around for thousands of years and are the basis for modern computing. Data goes in, the computer does its thing, and the algorithm spits out a result. What’s new is that scientists have developed algorithms that reverse this process, allowing computers to write their own algorithms. Say you want to fly a helicopter upside down: You write an algorithm that gives the computer information about the helicopter’s controls (the input data), then you tell it how you want to fly the helicopter, and at what angle (the result), and then, bingo, the computer will spit out its own algorithm that tells the helicopter how to do it. This process, called machine learning, is the idea behind AI: If a machine can teach itself how to fly a helicopter upside down, it may be able to teach itself other things too, like how to find love on Tinder, or recognise your voice when you speak into your iPhone, or, at the outer reaches, design a Terminator-spewing Skynet.

“Artificial intelligence is the science of making machines smart,” Demis Hassabis, co-founder of DeepMind, has said. We are, of course, surrounded by smart machines already. When you use Google Maps, algorithms plot the quickest route and calculate delays based on real-time traffic data and predictive analysis. When you talk to Google Voice, the ability to recognise your speech is based on a kind of machine learning called neural networks that allows computers to transform your words into bits of sound, compare those sounds to others, and then understand your questions. Facebook keeps unwanted content off the site by scanning billions of pictures with image-recognition programs that spot beheading videos and dick pics.

Where is the acceleration of smart machines heading? It took life on Earth 3 billion years to emerge from the ooze and achieve higher intelligence. By contrast, it took the computer roughly 60 years to evolve from a hunk of silicon into a machine capable of driving a car across the country or identifying a face in the crowd. With each passing week, new breakthroughs are announced: In January, DeepMind revealed it had developed a program named AlphaGo that beat the European champion of Go, an ancient Chinese board game that is far more complex than chess. Of course, humans had a hand in this rapid evolution, but it’s hard not to think we have reached some kind of inflection point in the evolution of smart machines. Are we on the verge of witnessing the birth of a new species?

How long until machines become smarter than us? Ray Kurzweil, Google’s resident futurist, has popularised the idea of “the singularity”, which is roughly defined as the moment that silicon-based machines become more intelligent than carbon-based machines (humans) and the evolutionary balance shifts toward the former. “In the coming years, we’ll be doing a lot of our thinking in the cloud,” he said at a technology conference a few years ago. He has even predicted an exact date for this singularity: 2045.

In an offhand comment at a recent conference, Elon Musk, founder of Tesla and SpaceX, called the development of AI “summoning the demon”. Although he later told me his remarks were an overstatement, he says, “The rise of smart machines brings up serious questions that we need to consider about who we are as humans and what kind of future we are building for ourselves.” As he points out, our dependence on machines is here now: “We are already cyborgs. Just try turning off your phone for a while – you will understand phantom-limb syndrome.” It’s not like superintelligent machines have to be superevil to pose a threat. “The real risk with AI isn’t malice but competence,” physicist Stephen Hawking argued recently. “A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.”
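Doom scenarios aside, the “reversal” at the heart of machine learning (example inputs and desired outputs go in, a rule comes out) can be made concrete with a toy. The hidden rule, the one-variable model and the tiny training loop below are illustrative assumptions rather than any company’s production system; the point is simply that the programmer supplies data and a goal, and the computer produces its own “algorithm”.

```python
# A minimal machine-learning sketch: the computer fits its own rule from examples.
# The hidden rule here is y = 3x + 2; the program is never told that.
examples = [(x, 3 * x + 2) for x in range(-5, 6)]   # (input, desired output) pairs

w, b = 0.0, 0.0          # the machine's "algorithm" starts out knowing nothing
learning_rate = 0.01

for _ in range(2000):                 # many passes over the examples
    for x, y in examples:
        prediction = w * x + b
        error = prediction - y
        # Nudge the parameters in whichever direction shrinks the error.
        w -= learning_rate * error * x
        b -= learning_rate * error

print(f"learned rule: y = {w:.2f} * x + {b:.2f}")   # prints something close to y = 3.00 * x + 2.00
```

The same principle, with millions of parameters instead of two and photographs or audio instead of single numbers, is what lets a computer learn to recognise a voice or hold a helicopter upside down.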

Algorithms that enable AI are to the 21st century what coal was to the 19th – they are the engine of our economy: “If they stop working, it will be the end of the world.”

Despite advances like smarter algorithms and more capable robots, the future of superintelligent machines is still more sci-fi than science. Right now, says Yann LeCun, the director of Facebook AI Research, “AIs are nowhere near as smart as a rat.” Yes, with years of programming and millions of dollars, IBM built Watson, the machine that beat the smartest humans at Jeopardy! in 2011 and is now the basis for the company’s “cognitive computing” initiative. It can read 800 million pages a second and can digest the entire corpus of Wikipedia, not to mention decades of law and medical journals. Yet it cannot teach you how to ride a bike because its intelligence is narrow – it knows nothing about how the world actually works. Aristo, one of the most sophisticated AI programs, developed at the Allen Institute for Artificial Intelligence in Seattle, cannot understand a sentence like “People breathe air.” To comprehend this, you need a general knowledge of the world – which it does not have. Even if it could define the words, the program does not know whether breathing air is what people do in order to live, or whether people breathe air once a minute or once in their lives. Impressive feats, such as Skype Translator (still in preview), which allows users to have real-time conversations in two different languages, also have a long way to go. In one conversation with a person in Italy, my comments about the weather were translated into comments about the Bible.

This is not to say that the risk of a rise of smart machines isn’t real, or that one day a Skynet won’t emerge from some collection of data points we can hardly imagine. Autonomous weapons, such as killer drones that can assassinate people on their own based on facial-recognition technology and other data, are indeed a real danger. But they are not a threat to the survival of the human species. Nor is it likely that some hacker in North Korea is going to suddenly create a new algorithm that gives Kim Jong-un the ability to launch an attack of Terminators on the world. AI is not like an iPhone, where you write a new app and you’re done. It’s more like building the Internet itself – something that can only be done over time, and with a huge number of incremental advances. As Andrew Ng, the U.S.-based chief scientist at Baidu, which is China’s Google, told me recently, “Worrying about killer robots is like worrying about overpopulation on Mars – we’ll have plenty of time to figure it out.”

In fact, the problem with the hyperbole about killer robots is that it masks the real risks we face from the rise of smart machines – job losses as workers are replaced by robots, the escalation of autonomous weapons in warfare, and the simple fact that the more we depend on machines, the more we are at risk when something goes wrong, whether from a technical glitch or a Chinese hacker. There is also the alienation that will come when we live in a world where we talk to machines more than humans, and when art becomes just a harmonious algorithmic output. The age of AI will bring profound privacy challenges, too, not just from smart drones watching you from above, but also from corporations that track your every move in order to sell you stuff. As Marcelo Rinesi, the chief technology officer at the Institute for Ethics and Emerging Technologies, has put it, “The future isn’t a robot boot stamping on a human face forever. It’s a world where everything you see has a little telemarketer inside it, one that knows everything about you and never, ever stops selling things to you.”

The hyperbole also masks the benefits that could come from a deeper alliance with machines. Most researchers, like DeepMind’s Demis Hassabis, believe that if we give machines intelligence, they may be able to help us solve big problems in disease and health care, as well as help scientists tackle big questions in climate change and physics. Microsoft’s Eric Horvitz sees the quest for AI in even grander terms: “The big question for humanity is, is our experience computational? And if so, what will a better understanding of how our minds work tell us about ourselves as beings on the planet? And what might we do with the self-knowledge we gain about this?”

Technological revolutions inspire fear – sometimes justifiably and sometimes not. During the Industrial Revolution, British textile workers smashed machines they worried would take their jobs (they did). When the age of electricity began, people believed wires might cause insanity (they didn’t). And in the 1950s, appliance manufacturers thought nuclear-powered vacuum cleaners were just around the corner.

AI has long been plagued by claims that run far ahead of the actual science. In 1958, when the “perceptron”, the first so-called neural-network system, was introduced, a newspaper suggested it might soon lead to “thinking machines” that could reproduce and achieve consciousness. In the 1960s, when John McCarthy, the scientist who coined the term “artificial intelligence”, proposed a new research project to Pentagon officials, he claimed that building an AI system would take about a decade. When that did not happen, the field went through periods of decline in the 1970s and 1980s known to scientists as the “AI winters”. But those winters are now over. For one thing, the continued increases in computer power along with drops in prices have provided the horsepower that sophisticated AI algorithms need to function. A new kind of chip, called the graphics processing unit – which was originally created for video-game processing – has been particularly important for running neural networks that can have hundreds of millions of connections between their nodes.

Playing God From 9 to 5: Researchers at the Berkeley Robot Learning Lab work to create machines that can learn on their own and may one day achieve human intelligence. Credit: Spencer Lowell

The second big change is the arrival of big data. Intelligence in machines, like intelligence in humans, must be taught. A human brain, which is genetically primed to categorise things, still needs to see real-life examples before it can distinguish between cats and dogs. That’s even more true for machine learning. DeepMind’s breakthrough with Go and Atari games required the computer to play thousands of games before it achieved expertise. Part of the AI breakthrough lies in the avalanche of data about our world, which provides the schooling that AIs need. Massive databases, terabytes of storage, decades of search results and the entire digital universe have become the teachers making AI smart.

In the past, the attempt to create a thinking machine was largely an exercise carried out by philosophers and computer scientists in academia. “What’s different today is the stuff actually works,” says Facebook’s LeCun. “Facebook, IBM, Microsoft – everybody is deploying it. And there’s money in it.”

Today, whatever company has the best learning algorithms and data wins. Why is Google such a successful ad platform? Better algorithms that can predict ads you will click on. Even a 0.5 per cent difference in click-through rates can mean enormous amounts of money to a company with $50 billion in revenues. Image recognition, which depends on machine learning, is one place where the competition is now fierce between Apple, Microsoft, Google and cloud services like Dropbox. Another battleground is perfecting speech recognition. The company that can figure that out first – making talking to a machine as natural as talking to a person – will have a huge advantage. “Voice interface is going to be as important and transformative as touch,” says Baidu’s Ng. Google and Apple are buying up AI start-ups that promise smarter assistants, and AI is crucial to the success of self-driving cars, which will have a tremendous impact on the auto industry and could change the look and feel of cities once we no longer need to devote space to parking private vehicles. “AI is the new buzzword,” says Jason Calacanis, an entrepreneur in San Francisco. “You just use the phrase ‘artificial intelligence’ in your business plan and everyone pays attention. It’s the flavour of the month.”

That kind of scepticism is justified. AI can spot a cat in a photo and parse words when you talk. But perception is not reasoning. Seeing is not thinking. And mastering Go is not like living in the real world. Before AI can be considered intelligent, much less dangerous, it must be taught to reason, or at least to have some common sense. And researchers still have a long way to go in achieving anything that resembles human intelligence or consciousness. “We went through one wall, we know how to do vision now, and that works,” says LeCun. “And the good news is we have ideas about how to get to the next step, which hopefully will work. But it’s like we’re driving 50 mph on the highway in the fog and there is a brick wall somewhere that we’ve not seen. Right now we are just happy driving until we run out of fuel.”

MIT physicist Max Tegmark, 48, has a bowl haircut and a boyish eagerness that make him seem younger than he is. In his two-storey suburban house near Boston, the living room is sparsely furnished, with pictures of ducks and woodchucks on the wall. As a physicist and cosmologist, Tegmark has a wacky side. He’s best known for exploring the idea of parallel universes, suggesting that there may be a vast number of universes, not all of which obey our laws of physics. It’s an idea he acknowledges is on the fringes of accepted science. But Tegmark (on his website, he rates the biggest goofs in his life on a zero-to-20 point scale) embraces it with giddy enthusiasm. In recent years, he has also become one of the most outspoken voices about the dangers of runaway AI.

This past U.S. summer, we sat in his dining room to discuss the risks of AI and his work with the Future of Life Institute, which he co-founded and which describes itself as a “volunteer-run research and outreach organisation working to mitigate existential risks facing humanity”. Although the institute includes luminaries like Hawking on its advisory panel, it’s mostly just an ad-hoc group of Tegmark’s friends and colleagues who meet every few months in his living room. The institute, financed by the Open Philanthropy Project and a $10 million gift from Musk, funds studies into how best to develop AI and educates people about the risks of advanced technology. A few days after our dinner, the institute published an open letter, which was picked up by The New York Times and The Washington Post, warning about the dangers of autonomous weapons. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,” the letter read. “Autonomous weapons will become the Kalashnikovs of tomorrow.” The letter has been signed by more than 20,000 people, including scientists and entrepreneurs like Hawking, Musk, Apple co-founder Steve Wozniak and Nobel laureate Frank Wilczek.

“If any military power pushes forward with AI weapon development, a global arms race is inevitable — autonomous weapons will become the Kalashnikovs of tomorrow.”

In January 2015, Tegmark organised the first major conference on the risks of AI. (It’s worth noting that Tegmark is a physicist, not a computer scientist. In fact, it’s mostly entrepreneurs, philosophers, sci-fi writers and scientists in fields outside of AI research who are sounding the alarm.) The three-day event in Puerto Rico brought together many of the top researchers and scientists in the field, as well as entrepreneurs like Musk. It was modelled after the Asilomar Conference on Recombinant DNA in 1975, which is remembered as a landmark discussion of the dangers of genetic engineering. According to several attendees, one of the central ideas discussed at the 2015 conference was how long it would take before machine intelligence met or surpassed human intelligence. On one side of the argument, AI pioneers like Ng claimed it would be hundreds of years before AI surpassed human intelligence; others, like Musk and Stuart Russell, a professor of computer science at UC-Berkeley, said it could be much sooner. “The median in Puerto Rico was 40 years,” Tegmark says.

Like Hawking, Tegmark doesn’t believe superintelligent machines need to be evil to be dangerous. “We want to make machines that not only have goals but goals that are aligned with ours,” he says. “If you have a self-driving car with speech recognition and you say, ‘Take me to the airport as fast as possible’, you’re going to get to the airport, but you’re going to get there chased by helicopters and covered in vomit. You’ll say, ‘That’s not what I wanted.’ And the car will reply, ‘That’s what you told me to do.’ ”

Tegmark believes it’s important to think about this now, in part because it’s not clear how fast AI will progress. It could be 100 years before machines gain anything like human intelligence. Or it could be 10. He uses the nuclear analogy. “Think about what happened with the nuclear bomb,” he says. “When scientists started working on it, if they would have thought ahead about what it was going to mean for the world and took precautions against it, wouldn’t the world be a better place now? Or would it have made a difference?”

Wherever you go, assume a camera is pointing at you. They are on street corners, in drones and in most of the 4 billion or so cellphones on the planet. In 2012, the FBI launched the $1 billion Next Generation Identification system, which uses algorithms to collect facial images, fingerprints, iris scans and other biometric data on millions of Americans and makes them accessible to 18,000 law-enforcement agencies.

None of this would be possible – or at least not as effective – without the work of Yann LeCun. In the world of AI, LeCun is the closest thing there is to a rock star, having been one of a trio of early AI researchers who developed the algorithms that made image recognition possible. LeCun has never worked for law enforcement and is committed to civil rights, but that doesn’t matter – technology, once it is invented, finds its own way in the world.

These days, you can find LeCun at the Facebook office in downtown Manhattan. In an open space the size of a basketball court, rows of people stare at monitors beneath fractals on the walls. LeCun’s AI lab is off in a corner of the room, its 20 or so researchers indistinguishable from the rest of the Facebook worker bees. (His lab employs another 25 AI researchers between offices in Silicon Valley and Paris.) LeCun sits at a long row of desks, shoulder-to-shoulder with his team. If he looks out the window, he can almost see the building where IBM’s Watson is housed.

Wearing jeans and a polo shirt, LeCun shows me around with a calm, professorial air. He grew up outside Paris, but only a trace of an accent remains. “I am everything the religious right despises: a scientist, an atheist, a leftist (by American standards, at least), a university professor and a Frenchman,” he boasts on his website. He has three kids and flies model airplanes on the weekends.

LeCun was a pioneer in deep learning, a kind of machine learning that revolutionised AI. While he was working on his undergraduate degree in 1980, he read about the 1958 “perceptron” and the promise of neural-network algorithms that allow machines to “perceive” things such as images or words. The networks, which mimic the structure of the neural pathways in our brains, are algorithms that use a network of neurons, or “nodes”, to perform a weighted statistical analysis of inputs (which can be anything – numbers, sounds, images). Seeing the networks’ potential, LeCun wrote his Ph.D. thesis on an approach to training neural networks to automatically “tune” themselves to recognise patterns more accurately – ultimately creating the algorithms that now allow ATMs to read cheques. In the years since, refinements in neural networks by other programmers have been the technological underpinning of virtually every advance in smart machines, from computer vision in self-driving cars to speech recognition in Google Voice. It’s as if LeCun largely invented the nervous system for artificial life.
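The 1958 perceptron that set LeCun on this path can be written out in a handful of lines. The AND-gate task, the weights and the update rule below are an illustrative reconstruction of that classic single-neuron idea, not LeCun’s cheque-reading networks, which stack many such units into layers and tune them with far more sophisticated training methods.

```python
# A minimal perceptron sketch: one artificial "node" computing a weighted sum of its
# inputs and "tuning" its weights whenever it gets an example wrong.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # learn logical AND

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Weighted statistical analysis of the inputs, squashed through a hard threshold.
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

for _ in range(20):                       # a few passes over the data
    for x, label in examples:
        error = label - predict(x)        # +1, 0 or -1
        # The classic perceptron rule: shift each weight toward correcting the mistake.
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])   # expect [0, 0, 0, 1]
```

A single neuron like this can only draw a straight line through its inputs; the “deep” in deep learning comes from wiring many such units into layers, so the network can tune itself to recognise far messier patterns, such as the digits on a cheque.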

Despite the name, LeCun says that neural networks are not an attempt to mimic the brain. “It’s not the latest, greatest, most recent discoveries about neuroscience,” he says. “It’s very classic stuff. If you are building airplanes, you get inspired by birds because birds can fly. Even if you don’t know much about birds, you can realise they have wings and they propel themselves into air. But building an airplane is very different from building a bird. You have to derive generic principles – but you cannot derive generic principles by studying the details of how biology works.”

In LeCun’s view, this is the flaw in much brain research being done, including Europe’s touted Human Brain Project, a 10-year, $1.3 billion initiative to unravel the mysteries of the mind by essentially simulating the brain’s 86 billion neurons and 100 trillion synapses on a supercomputer. “The idea is that if you study every detail of how neurons and synapses function and somehow simulate this on big enough networks, somehow AI will emerge,” he says. “I think that’s completely crazy.”

After a stint at Bell Labs in New Jersey, LeCun spent a decade as a professor at New York University. In 2013, Mark Zuckerberg lured him to Facebook, in part by letting him keep his post part-time at NYU. “Mark said to me, ‘Facebook is 10 years old – we have to think about the next 20 years: What is communication between people and the digital world going to look like?’ ” LeCun recalls. “He was convinced that AI would play a very big role in this, and that it will be very important to have ways to mediate interactions between people and the digital world using intelligent systems. And when someone tells you, ‘Create a research organisation from scratch’, it’s hard to resist.”

LeCun won’t say how much money Facebook has invested in AI, but it’s recognised as one of the most ambitious labs in Silicon Valley. “Most of our AI research is focused on understanding the meaning of what people share,” Zuckerberg wrote during a Q&A on his website. “For example, if you take a photo that has a friend in it, then we should make sure that friend sees it. If you take a photo of a dog or write a post about politics, we should understand that so we can show that post and help you connect to people who like dogs and politics. In order to do this really well, our goal is to build AI systems that are better than humans at our primary senses: vision, listening, etc.” In January, Zuckerberg announced that his personal challenge for 2016 is to write a simple AI to run his home and help him with his work. “You can think of it kind of like Jarvis in Iron Man,” he wrote.

LeCun says that one of the best examples of AI at Facebook is Moments, a new app that identifies friends through facial recognition and allows you to send them pictures. But less-advanced AI is deployed everywhere at the company, from scanning images to tracking viewing patterns to determining which of your friends’ statuses to show you first when you log in. It’s also used to manage the insane amount of data Facebook deals with. Users upload 2 billion photos and watch 8 billion videos every day. The company uses a technique called AI Encoding to break the files down by scene and shrink their size. The gains are not monumental, but they add up to big savings in storage and efficiency.

Fight for AI’s Future: Facebook’s Yann LeCun (left) pioneers AI; Tesla founder Elon Musk warns of its dangers.

Despite all the progress, LeCun knows these are only baby steps toward general intelligence. Even image recognition, which has seen dramatic advances, still has problems: AI programs are confused by shadows, reflections and variations in pixelation. But the biggest barrier is what’s called “unsupervised learning”. Right now, machines mainly learn by supervised learning, where the system is shown thousands of pictures of, say, a cat, until it understands the attributes of cats. The other, less common method is reinforcement learning, where the computer is given information to identify, makes a decision and is then told whether it’s correct or not. Unsupervised learning uses no labels or feedback at all, relying on what you could call artificial intuition. “It’s the way humans learn,” LeCun says. We observe, draw inferences and add them to our bank of knowledge. “That’s the big nut we have to crack,” he says.
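The distinction LeCun is pointing at can be sketched with a toy. In the unsupervised setting, the program receives raw numbers with no labels and no reward and has to find structure on its own; the two-cluster data and the simple k-means loop below are illustrative assumptions, a far humbler cousin of the “big nut” he describes.

```python
# A minimal unsupervised-learning sketch: 1-D k-means finds two groups in unlabeled data.
data = [1.0, 1.2, 0.8, 1.1, 5.0, 5.3, 4.8, 5.1]   # measurements, no labels attached

centres = [0.0, 10.0]                  # two rough initial guesses for group centres
for _ in range(10):
    groups = [[], []]
    for x in data:
        # Assign each point to its nearest centre; no one tells the program the answer.
        nearest = min((0, 1), key=lambda i: abs(x - centres[i]))
        groups[nearest].append(x)
    # Move each centre to the average of the points assigned to it.
    centres = [sum(g) / len(g) if g else c for g, c in zip(groups, centres)]

print(centres)   # roughly [1.03, 5.05]: two clusters discovered without supervision
```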

An idea floating around is that unsupervised learning should be about prediction. “If I show you a short movie and then ask what’s going to happen in the next second, you should probably be able to guess the answer,” LeCun says. An object in the air will fall – you don’t need to know much about the world to predict this. “But if it’s a complicated murder mystery and I ask you who is the killer and then to describe what is going to happen at the end of the movie, you will need a lot of abstract knowledge about what is going on,” he says. “Prediction is the essence of intelligence. How do we build a machine that can watch a movie and then tell us what the next frame is going to be, let alone what’s going to happen half an hour from now, where are the objects going to go, the fact that there are objects, the fact that the world is three-dimensional – everything that we learn about the world’s physical constraints?”

One solution that LeCun is working on is to represent everything on Facebook as a vector, which allows computers to plot a data point in space. “The typical vectors we use to represent concepts like images have about 4,000 dimensions,” he says. “So, basically, it is a list of 4,000 numbers that characterises everything about an image.” Vectors can describe an image, a piece of text or human interests. Reduced to lists of numbers, these things become easy for computers to search and compare. If the interests of a person, represented by a vector, match the vector of an image, the person will likely enjoy the image. “Basically, it reduces reasoning to geometry,” he says.
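Reducing reasoning to geometry can be illustrated in a few lines. The four-dimensional vectors, the made-up interest scores and the choice of cosine similarity below are illustrative assumptions, not Facebook’s actual representation, which uses thousands of learned dimensions; the sketch only shows how matching a person to a photo becomes a matter of measuring how closely two lists of numbers point in the same direction.

```python
# A minimal vector-matching sketch: people and items become lists of numbers,
# and "will this person like this item?" becomes a geometry question.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)   # 1.0 means pointing the same way, 0.0 means unrelated

# Hypothetical dimensions: dogs, politics, hiking, cooking.
person_interests = [0.9, 0.1, 0.8, 0.0]
dog_photo        = [0.8, 0.0, 0.3, 0.1]
politics_post    = [0.0, 0.9, 0.1, 0.2]

print(f"dog photo:     {cosine_similarity(person_interests, dog_photo):.2f}")     # about 0.92
print(f"politics post: {cosine_similarity(person_interests, politics_post):.2f}") # about 0.15
```

Because every comparison reduces to the same arithmetic, the approach scales: a computer can score billions of person-item pairs without knowing anything about dogs or politics beyond the numbers.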

As for the dangers of AI, LeCun calls them “very distant”. He believes the notion that intelligent machines will evolve with the trappings of human intelligence and emotion is a fallacy: “A lot of the bad things that come out of human behaviour come from those very basic drives of wanting to survive and wanting to reproduce and wanting to avoid pain. There is no reason to believe robots will have that self-preservation instinct unless we build it into them. But they may have empathy because we will build it into them so they can interact with humans in a proper way. So the question is, what kind of low-level drives and behaviours do we build into machines so they become an extension of our intelligence and power, and not a replacement for it?”

On my way out of Facebook, I’m struck by how densely packed everyone is in the office – this is an empire of human beings and machines working together. It’s hard to imagine the future will be any different, no matter how sophisticated our robots become. “Algorithms are designed and built by humans, and they reflect the biases of their makers,” says Jaron Lanier, a prominent computer scientist and author. For better or worse, whatever future we create, it will be the one we design and build for ourselves. To paraphrase an old adage about the structure of the universe: It’s humans all the way down.

From issue #774, available now. Top photograph by Philip Toledano.

Part Two will explore how artificial intelligence will impact the world of self-driving cars and the future of warfare. Find it in Issue #775, available Thursday, 5th May.