
Q&A with Great Debate Panelists on AI


Posted on February 24, 2017 - 4:28pm

Written by Colton Smith

What makes humans so special? From a scientific point of view, one could say that we’re special because we have brains capable of immense computational power. From a philosophical point of view, one could say we’re special because of our ability to reason, make decisions, and feel emotions. Whichever explanation you choose to subscribe to, I’m sure we can all agree that human beings are wonderfully complex and will always be at the top of the food chain. Right?

It’s hard to imagine an entity possessing a brain more powerful than our own, but it may be possible one day if artificial intelligence continues to advance at its current rate. We humans tend to think that we’re special, that we alone are capable of complex phenomena such as consciousness, but why is that? Each of us is essentially a walking computer that analyzes and interacts with the world around it. This computer, our brain, is made of billions of neurons that communicate with one another in complex ways to produce our conscious reality. Who’s to say that we can’t program a computer to do the same thing?

Nowadays, many of the most visible technological advancements are coming in the form of artificial intelligence. Complex algorithms are being designed to let technology harness the power of networks similar to the neural networks in our brains. For example, self-driving cars are able to operate independently of human interaction because of complex algorithms that allow them to interact with their environments and choose the best possible outcomes.

Think about it this way: imagine you’re driving your outdated human-operated car in heavy traffic, and the car next to you decides to come into your lane without properly signaling. Your visual system would see this car coming into your lane and alert you to the potential threat. This would activate a motor response from your motor cortex that quickly sends information down your spinal cord and activates your leg muscles, allowing you to brake before the car hits you. Now let’s imagine how a self-driving car would deal with the same situation. As the oblivious driver begins to merge into your lane, the car’s built-in sensors (similar to your eyes) would detect the threat. The self-driving car would then calculate the best way to deal with the situation, either slowing down or speeding up. This scenario illustrates how a self-driving car operates much like a human brain, albeit a much more simplistic one.
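To make that decision step concrete, here is a minimal sketch in Python of the kind of rule a car might evaluate when another vehicle drifts into its lane. Everything here is illustrative: the function name, inputs, and thresholds are invented for this post, and a real autonomous-driving stack is vastly more sophisticated.

```python
def respond_to_merge(gap_ahead_m, gap_to_merger_m, closing_speed_mps):
    """Toy decision rule for a car drifting into our lane.

    gap_ahead_m: free road ahead of us, in meters
    gap_to_merger_m: current distance to the merging car, in meters
    closing_speed_mps: how fast the merging car is approaching us, in m/s
    """
    # Rough time until the merging car reaches us; avoid division by zero.
    time_to_contact = gap_to_merger_m / max(closing_speed_mps, 0.1)

    if time_to_contact > 3.0:
        return "hold speed"   # plenty of margin, no action needed
    elif gap_ahead_m > 20.0:
        return "speed up"     # escape forward into open road
    else:
        return "brake"        # no room ahead, so slow down

print(respond_to_merge(gap_ahead_m=5.0, gap_to_merger_m=4.0, closing_speed_mps=3.0))
# -> brake
```

Real systems fuse radar, lidar, and camera data and weigh thousands of such factors at once, but the shape of the decision is the same: sense, predict, act.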

But what about more complex abilities, like learning and memory? As crazy as it sounds, researchers have already begun to design algorithms that allow machines to learn by harnessing the power of neural networks. The logic is fairly straightforward: program the machine to perform basic functions in its environment, and then allow it to form connections as time goes by. Forming connections is precisely what allows these machines to learn. For example, take a look at the video below.

(source: https://vimeo.com/79098420)

Here we have a computer algorithm that displays a model of a fictional bipedal organism. As you can see from generation 1, the organism doesn’t know how to walk. It takes a few steps and then falls flat on its face. It has all the necessary tools to walk, such as an anatomically sound skeletal structure and a functional muscular system, but it hasn’t yet formed a network that allows it to walk properly. What makes this algorithm truly an example of artificial intelligence is that over subsequent generations the organism “learns” to walk. Through trial and error, our bipedal dinosaur friend “evolves” the ability to walk: the algorithm lets the organism discover which specific movements lead to a functional walk cycle. The programmers didn’t teach generation 999 how to walk; it was given the tools to accomplish the task by itself.
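The learning loop behind the video boils down to a simple evolutionary pattern: score each candidate walker, keep the best, mutate them, and repeat. Here is a minimal sketch of that pattern; the `simulate_walk` function is a stand-in I’ve invented for the actual physics simulation, which would score how far a creature walks before falling over.

```python
import random

def simulate_walk(genome):
    """Stand-in for the physics simulation: scores a set of muscle-timing
    parameters. Here we just reward genomes close to an arbitrary target,
    standing in for 'distance walked before falling over'."""
    target = [0.2, 0.8, 0.5, 0.3]
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rate=0.1):
    """Randomly nudge each parameter a little."""
    return [g + random.gauss(0, rate) for g in genome]

# Generation 1: random muscle timings; the creature falls on its face.
population = [[random.random() for _ in range(4)] for _ in range(20)]

for generation in range(1000):
    # Score every candidate and keep the best walkers as survivors.
    population.sort(key=simulate_walk, reverse=True)
    survivors = population[:5]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=simulate_walk)
print(f"best fitness after 1000 generations: {simulate_walk(best):.5f}")
```

Nothing in this loop knows what walking is; the only signal is the fitness score, and the population drifts toward whatever scores well, which is exactly the “generation 1 falls, generation 999 walks” behavior in the video.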

Another example of artificial intelligence at work comes from an algorithm called MarI/O.

In the first video you will see a familiar face. It’s the beloved Italian plumber, Mario! But he isn’t acting quite right. He’s running across the screen, but he isn’t avoiding any of the enemies. He appears to be completely oblivious to his surroundings. That’s because he hasn’t learned how to play the game yet. MarI/O is an algorithm that uses artificial intelligence to give Mario all the tools he needs to beat the level without actually instructing him on how to do so. Through a process known as neuroevolution, Mario eventually learns how to beat the level by constructing a neural network. Mario goes from running across the level blindly to navigating his world flawlessly.
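MarI/O is based on an approach called NEAT (NeuroEvolution of Augmenting Topologies), which evolves both the weights and the wiring of a network whose inputs are the tiles on screen and whose outputs are controller buttons. A full NEAT implementation is beyond a blog post, but the core loop, evolving network parameters against a fitness score, can be sketched as below. All of the names, the tiny fixed topology, and the toy fitness function are my own illustrations, not MarI/O’s actual code.

```python
import random

N_INPUTS, N_OUTPUTS = 16, 3   # toy screen-tile inputs; jump/left/right buttons

random.seed(0)
# Fixed fake "screens" to evaluate every network against.
TEST_SCREENS = [[random.choice([0, 1]) for _ in range(N_INPUTS)]
                for _ in range(8)]

def decide(weights, screen):
    """Fixed one-layer network: each button fires when its weighted sum
    of screen inputs exceeds zero."""
    presses = []
    for o in range(N_OUTPUTS):
        total = sum(weights[o * N_INPUTS + i] * screen[i] for i in range(N_INPUTS))
        presses.append(total > 0)
    return presses  # [jump, left, right]

def fitness(weights):
    """Stand-in for running the emulator: crudely reward networks that
    press 'right' and avoid 'left' on the test screens. In MarI/O the
    real score is roughly how far right Mario travels before dying."""
    score = 0
    for screen in TEST_SCREENS:
        jump, left, right = decide(weights, screen)
        score += (1 if right else 0) - (1 if left else 0)
    return score

# Start from random networks and evolve toward higher fitness.
population = [[random.gauss(0, 1) for _ in range(N_INPUTS * N_OUTPUTS)]
              for _ in range(30)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]   # keep the fittest networks
    population = parents + [
        [w + random.gauss(0, 0.2) for w in random.choice(parents)]
        for _ in range(20)      # fill out the population with mutants
    ]

best = max(population, key=fitness)
print("best fitness:", fitness(best))
```

Real NEAT goes further: it also mutates the network topology itself, adding nodes and connections over time, and protects new structures through speciation so they have time to improve.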

We’ve talked about some of the capabilities of artificial intelligence, but what about the practicality of this technology? What will our world look like if this technology keeps advancing? What happens if we lose control of it? These are exactly the types of questions being tackled this weekend in Tempe, Arizona, where artificial intelligence experts from around the world will assemble at a closed Origins Scientific Workshop to discuss and debate the future of artificial intelligence. By bringing together some of the greatest minds of our time, we hope to foster new growth in the field and predict what the future may bring.

In conjunction with this closed Origins Scientific Workshop, there will be an Origins Project Great Debate at Gammage Auditorium on Saturday, February 25th. The debate will feature a panel well versed in the topic of artificial intelligence. I asked the panelists to answer a few questions before the event so that people can get a feel for the types of issues that will be discussed. The panel includes Eric Horvitz, Jaan Tallinn, Subbarao Kambhampati, Lawrence Krauss, and, most recently added, Kathleen Fisher.

 

Eric Horvitz is an American computer scientist, a technical fellow at Microsoft, and the managing director of Microsoft Research’s main Redmond lab. Horvitz received his PhD and MD degrees at Stanford University and has continued his research in areas that span theoretical and practical challenges of machine learning and inference, human-computer interaction, artificial intelligence, and more. He is a fellow of numerous associations and academies, has received numerous awards, has given technical lectures and presentations for diverse audiences, and has been featured in the New York Times and Technology Review.

Smith: Will AI emerge slowly or will it come in the form of a breakthrough?

Horvitz: Research in AI has led to a stream of contributions and to increasing competencies over the decades since the phrase “artificial intelligence” was first used, in a research proposal in 1956. Ongoing advances in such areas of AI research as learning, perception, natural language understanding, and decision making and planning have led to new kinds of applications over time. At times, we’ve seen faster jumps in abilities. I believe that we’re now at an inflection point in AI research, where there’s been an uptick in progress. The inflection is based on several factors. One factor is the rising availability of large amounts of data. We didn’t have large amounts of data to learn from until the relatively recent large-scale “digitalization” of much of life with the rise of the web. Along with the data, we’ve had jumps in raw computing power, and also advances in algorithmic methods, particularly in machine learning: methods that transform data into run-time systems that can do such critical tasks as recognize objects and patterns, perform diagnoses, and make predictions. The recent advances are showing up in our lives in different places, such as Tesla’s driving features, faces being recognized on Facebook, speech recognition that just works, and real-time translation among languages in Skype.

However, the march to the kind of AI folks see in movies has been slow. We’ve made little progress on some core capabilities that we take for granted in people. For example, researchers are still baffled about how babies and toddlers learn so much with such ease, simply by observing and engaging in the open world. And we don’t yet understand how to endow AI systems with the kind of commonsense reasoning that we take for granted and depend on in our daily lives.

Back to the question, it’s not clear whether we’ll achieve richer, more powerful and more general intelligences in an incremental way or via one or more breakthroughs. If history is a guide, we will most likely achieve increasing competencies over time in a largely incremental manner.  However, there may be surprising jumps in our knowledge and insights about the computational foundations of intelligence. We need to keep that possibility in mind.

Smith: Where do you stand on optimism versus concerns about AI advances?

Horvitz: I would say I’m optimistic and that I also have concerns. Along the way to leveraging the fruits of AI, we are going to have to address the potential rough edges and downsides of its uses, side effects, and societal influences. We need to continue working to envision and address potentially costly outcomes. We need to identify where we can take proactive efforts to diminish the probability of poor outcomes, and we also have to be prepared to respond in real time.

On the optimistic side, AI developments will clearly be valuable to people and society in numerous ways. Achievements in AI have been useful in many realms for years, often working behind the scenes. For example, for nearly two decades, automated handwriting recognition has been used by the US Postal Service to route hundreds of billions of letters. As another example, right beneath my fingertips as I type, machine learning and decision making are being used in my laptop to guess what I’m going to do next and to compute and fetch things in advance so the computer responds quickly; we worked with colleagues to create this capability more than a decade ago. Other deployed systems from our team and from AI efforts by colleagues employ machine learning and reasoning to help doctors understand patient outcomes in advance of poor outcomes.

There’s a great deal of low-hanging fruit where even today’s AI technologies are well positioned to help. Sticking with healthcare for a bit, a recent study showed that nearly 1,000 people per day are dying in the US because of preventable errors made in hospitals. I believe that AI technologies could be employed to provide new kinds of safety nets, via error detection, alerting, and decision support, that could save hundreds of thousands of lives per year. Other opportunities that jump out include putting a dent in the more than a million deaths per year worldwide on roadways, and the many more millions of life-changing injuries. We don’t need to wait until we have fully self-driving vehicles; even uses of AI methods to power smart braking and alerting systems could make our roadways much safer.

Over the long term, I believe that AI will provide value, and even sets of breakthroughs, across multiple realms, including education, healthcare, transportation, and governance. Folks may find the recent report of the One Hundred Year Study on AI of interest in reflecting on near-term applications of AI in society.

Turning to concerns, efforts in AI are pushing into the realm of human intellect, and the methods will touch our lives in numerous ways. We’ll need to be cautious and proactive to ensure that AI applications are safe, inclusive, and fair. On fairness, it’s important to be aware that AI systems can actually amplify implicit biases in society through the training data they learn from. So we need to work to ensure that systems perform accurately and fairly for all constituencies and stakeholders, and that they can explain what they are doing. Where there are tradeoffs and uncertainties, we have to understand how to encode and communicate the guiding values of AI systems so that they align with the expectations and deeper ethics of people and society. On another set of concerns, we need to expect malicious uses of AI by people, states, or organizations. We have to be alert that AI can be used in support of selfish or frankly malevolent goals. We have to be mindful that new technologies and applications of AI bring with them new “attack surfaces” and new pathways, like “machine learning attacks,” that provide new opportunities for exploitation. Beyond criminal uses of AI, pathways to poor and costly outcomes might come from the rising powers of AI and from the inadvertent, unintended side effects of AI systems and actions. We also have to look ahead and address potential long-term, undesirable psychosocial influences of AI, including slow, insidious changes in our society based on the widespread use of AI systems.

A couple of specific concerns bubble to the top of my mind. First, I’ve long been concerned with the pressure to employ AI technologies in military applications as nations push to harness the most advanced automation available. Relying on sophisticated sensing, assessment, and decision-making action on all sides of a potential military engagement can reduce both the time available for people to take part in decision making and the ability of people to understand situations. It’s not hard to envision how errors and misjudgments in AI systems relied upon for fast-paced assessments and actions might lead to new kinds of instabilities, and to imagine how undesired hostilities might be sparked. Beyond accidents, one can imagine how knowledge of these systems could lead to deliberate attempts by parties to spoof systems on one or more sides to spark hostilities. In another area of challenge, I’ve been concerned about attempts to leverage AI in new, powerful technologies aimed at persuasion, used to influence the beliefs and actions of people. The concern is that people will harness AI methods to generate personalized sequences of information over time to shift people’s beliefs. Used on a wide scale, such systems could be used to influence voting and elections. We need to work to envision potential adverse outcomes of AI systems, understand the possibilities, and do our best to address them.

Smith: Does information processing in AI resemble the way that human brains process information? In other words, is AI being designed to operate like a human brain?

Horvitz: My sense is that the human brain relies on computational principles of intelligence, and that these principles will be the same principles we harness in AI systems. To say a bit more about this, I believe that the rich competencies we see in people rely critically on a fine coordination of multiple, interwoven computational competencies, including learning, perception, reasoning, and decision making, along with sets of specially honed skills such as social competencies and language understanding and generation. These principles apply both to naturally evolved nervous systems and to the computer-based intelligent systems we might one day build. More on this perspective can be found in a recent article co-authored with Josh Tenenbaum and Sam Gershman.

Moving beyond principles, it is quite likely that human brains implement things quite differently from the ways we will ultimately implement core principles of intelligence within our future computing platforms. However, understanding how human brains, and the brains of other animals, perform their magic will likely be useful in understanding steps forward in AI, including the “how” of implementation. At the same time, we are continuing to learn and build insights on the computational side. I believe that the methods and principles coming into focus in AI will be valuable to neurobiologists pursuing an understanding of nervous systems, and that we will come to know ourselves better via the pursuit of insights about the computational principles underlying different aspects of our human intelligence. As a final comment on links, the “existence proofs” provided by the brains of people and other animals are quite motivating for AI researchers: the magic of human intelligence energizes many in AI to pursue a deeper understanding of the scientific foundations of cognition.

 

Jaan Tallinn is an Estonian programmer, investor, and physicist, and a co-founder of Skype. He is a partner and co-founder of the development company Bluemoon, a Board of Sponsors member of the Bulletin of the Atomic Scientists, and one of the founders of the Centre for the Study of Existential Risk and the Future of Life Institute. He strongly promotes the study of existential risk and artificial intelligence, and the long-term planning and mitigation of potential challenges.

Smith: What do you think the greatest benefit of AI will be for humanity?

Tallinn: I sometimes refer to AI research as humanity’s search for the best possible future, with the important caveat that we will be irrevocably committing to the first result that we find, because once we create superhuman agent(s), the future will likely be shaped by them, not us. If things go well, though, humanity will likely be able to use AI to unlock our cosmic endowment: take the universe that currently seems to be filled with dead stars and planets, and make it flourish with sentient life and other things we consider valuable.

Smith: Many people are losing their jobs to automation. The robots that replace these humans are much more efficient and cost significantly less than human workers. What will be the hardest jobs for robots to replace?

Tallinn: Jobs that require a lot of one-on-one physical human interaction seem at least tricky to automate. However, I’m not confident at all, given the impressive advances in robotics. I also hope that the jobs of scientists and, crucially, AI researchers turn out to be hard to automate, as taking humans out of the loop there might result in self-propelled AI development switching to “silicon speed” and spinning out of our control.

Smith: In the long run, do you believe that artificial intelligence will supplant humans? If so, what can we do to prevent that from happening?

Tallinn: I’m not sure the word “supplant” is correct here; we humans did not supplant the species we drove extinct. We either literally hunted them down or, more commonly, messed with their environment until it became uninhabitable for them. Similarly, I believe that the biggest long-term risks from AI have to do with sudden AI-initiated changes in the environment that make the planet uninhabitable for biological life forms. To prevent such catastrophes we need to invest orders of magnitude more in AI safety research, and steer the world away from AI arms races, both figurative and literal, that would result in corners being cut on the relevant safety measures.

 

Subbarao Kambhampati is a professor of Computer Science at ASU and the current president of the Association for the Advancement of Artificial Intelligence (AAAI). His research focuses on automated planning and decision making, especially in the context of human-aware AI systems. He is an award-winning teacher and spends significant time pondering the public perceptions and societal impacts of AI. He was an NSF Young Investigator and is a fellow of AAAI. He received his bachelor’s degree from the Indian Institute of Technology, Madras, and his PhD from the University of Maryland, College Park.

Smith: What is the Association for the Advancement of Artificial Intelligence and what is its main goal?

Kambhampati: Founded in 1979, AAAI is an international, nonprofit scientific society devoted to promoting research in, and responsible use of, artificial intelligence.

Smith: If you had a message for the public in regards to what artificial intelligence is, what would that message be?

Kambhampati: Our lives are already surrounded by automation. AI technologies aim to transform this automation from mere tools into capable partners by providing them with perceptual, cognitive, emotional, and social intelligence. The resulting ecosystem will both augment our own intelligence and enhance our collective ability to understand our world.

Smith: What is currently the biggest challenge for the artificial intelligence community?

Kambhampati: After decades of sustained research, AI is now progressing at a rapid pace, with the formidable promise of helping to solve some of the biggest problems facing us in the century ahead. As we prepare for this future with AI, we must also take deliberate and nuanced care, with deep ethical consideration for the people and society our technology impacts. It is equally important to make sure that the public has a balanced and educated view of the impacts of our technology.

 

Lawrence Krauss is an author, professor, physicist, public intellectual and Director of the Origins Project at Arizona State University, where he is also Foundation Professor in the School of Earth and Space Exploration and the Department of Physics.

Smith: What do you think the greatest benefit of AI will be for humanity?

Krauss: If we are smart about it, AI could relieve many people in the world from the burdens of work and survival, and provide them time to enhance their cultural experiences. If we aren’t smart about it, AI could leave many people without gainful employment. I prefer the former. Beyond that, I am particularly excited by thinking about the kind of science that AI could do.  Would AI think of different physics questions to ask? Would quantum computers understand quantum mechanics better than we do?

Smith: In the future, it’s possible that robots will take over most of the jobs in the world. If this does happen, how will that change the way humans live their lives? On one hand people won’t be burdened by meaningless jobs, on the other hand humans may feel a lack of purpose. What are your thoughts on the matter?

Krauss: As I described above, this can cut one of two ways. It could produce a better future for all, with less poverty, less hunger, and fewer people doing awful jobs, but how society as a whole decides to manage this is crucial. Handled one way, automation could simply remove those jobs from the pool, producing massive unemployment; handled another, it could allow all of us to read more, socialize more, think more, cook more, and so on.

Smith: What is your greatest concern about AI?

Krauss: My greatest concern is overconfidence in AI’s programming, particularly in military situations. Autonomous systems that take over battles have a host of potential problems, and when it comes to nuclear weapons, the possibilities are terrifying. In my experience, there is a tendency in military circles to rely on technology beyond its state of competence. Take lie detectors, for example, and the neural imaging that is currently replacing them in some circumstances, even though it is just as unreliable. We need to ensure that, when it comes to military applications, AI is not in a decision-making role until we have thoroughly tested that it makes better decisions than humans across a broad set of circumstances. Certainly one of the biggest pressures will be to weaponize AI. That concerns me a great deal.

Learn more about the Origins Great Debate, “The Future of Artificial Intelligence: Who’s in Control?”, set for Saturday, February 25, 2017. The event will be live streamed.