Published on Nov 19, 2018
Your AI versus my AI: Zachary Lipton on the dangers of the AI and machine learning hype cycle

Is what we commonly call artificial intelligence really that, or is it something else being called AI simply to push the idea of forward progress? The lurching between visions of utopia and doomsday scenarios, instead of focusing on what the technology itself is, says Lipton, is a deeply problematic trend.

Times Square — a series on jobs, automation and anxiety from the world's public square.


A heady combination of excitement and ignorance is pockmarking the field of artificial intelligence (AI) with muddled definitions and sending it hurtling down a dangerous path, warns Zachary Chase Lipton, a rising star in the AI firmament and assistant professor at Carnegie Mellon University who helped create Amazon’s deep learning framework MXNet.

Lipton, widely regarded as a voice of sobriety in a turbocharged AI climate, questions the taxonomy switch from machine learning (ML) to artificial intelligence. Why the rebranding if the research hasn’t changed and what are the dangers to scholarship and society if there’s such a rush to maximize buzz?

Lipton’s research spans core machine-learning methods and their social impact, with a concentration on deep learning for time series data and sequential decision-making. He is the founding editor of the blog Approximately Correct and the lead author of Deep Learning: The Straight Dope, an open-source interactive book teaching deep learning through Jupyter notebooks.

Nikhila Natarajan spoke with Lipton on the lexicon and “suitcase words” at the heart of the AI hype cycle. A summary of the conversation is below:

Nikhila Natarajan: If AI is not what (all) it’s being drummed up to be, then what is it?

Zachary Lipton: Part of the problem with the discourse these days is that there’s very little discussion about what the technology itself is; instead it’s all this utopia or doomsday kind of scenario. It’s easier to talk about what it (AI) is than what it’s not, because what it’s not is vague and amorphous. We’re really good at what’s called function fitting: a bunch of inputs that map to a set of outputs. You’re familiar with this if you’ve done some kind of high school science class: you measure the pressure, measure the temperature, plot the observations on a 2D plot and look for the best-fit line. There, you’re fitting a one-dimensional input to a one-dimensional output. Things get more complicated when you take even a 200x200 pixel photograph. That’s straightaway 40,000 pixels, and each one has a red, a green and a blue value, which means we have 120,000 numbers to correlate to a certain image or celebrity face. The fact that we’re able to do this is really exciting, because that’s the technology that’s able to learn behaviour we don’t know how to hard code into computers. If you’re given 10 million sentences in English and each one fits with one particular sentence in Hindi... this is the technology behind voice recognition, it’s behind picture recognition, but it’s misleading to think that function fitting is behind everything and to make wild claims about things. There are a lot of things that are not necessarily within that paradigm. A really good example is causal reasoning. You see these articles floating around saying your next boss will be an AI... so for a very specialised task which can be distilled down to nothing more than pattern recognition, yes. But if you come up with a different policy, how is the world going to react? That requires causal reasoning. If you see a rainbow-coloured dog, you know it’s a dog because you know how the world works and you know what a rainbow looks like. We’re able to answer these kinds of questions; machines still cannot.
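(As a rough sketch of the “function fitting” Lipton describes, the snippet below fits a best-fit line to a handful of made-up pressure and temperature readings; the data values and the use of NumPy are illustrative assumptions, not something taken from the interview.)

```python
import numpy as np

# Made-up pressure readings (inputs) and temperatures (outputs),
# standing in for the high-school experiment described above.
pressure = np.array([0.8, 0.9, 1.0, 1.1, 1.2])               # e.g. atmospheres
temperature = np.array([268.0, 280.5, 293.0, 305.0, 318.0])  # e.g. kelvin

# Function fitting in its simplest form: find the best-fit line
# mapping a one-dimensional input to a one-dimensional output.
slope, intercept = np.polyfit(pressure, temperature, deg=1)

def predict(p):
    return slope * p + intercept

print(f"T ~ {slope:.1f} * P + {intercept:.1f}")
print("Predicted temperature at P = 1.05:", round(predict(1.05), 1))

# An image classifier is the same idea at scale: the input is a
# 200 x 200 x 3 array of pixel values, i.e. 120,000 numbers per example.
```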

Natarajan: Why has it gone from AI to ML to AI?

Lipton: It’s hard to pinpoint a single actor. The level of progress on specific things has changed, but what hasn’t changed is the nature of the research. It’s important to have some historical context. AI referred to a pretty expansive field. The real dominant systems at the time were expert systems, which distilled expertise into a long and complicated set of hard coded rules. The Deep Blue system that beat Garry Kasparov had absolutely no machine learning involved. It was simply a tree search. Soon enough, people began giving systems names that suggested they were going to have human-like characteristics and made sensational projections. For the most part, the field started getting affected by terms lacking scientific rigour... that’s when people started avoiding the word AI.

Natarajan: You say impostors may stake a claim to some piece of the AI market. How is that possible in such a deeply technical area?

Lipton: It’s possible because of the supply and demand conditions, and because the people making decisions don’t know any better. The combination of excitement and ignorance leads to misinformation. The excitement speaks to demand, and the ignorance speaks to the fact that not so many people really know much about it. It’s very easy, when you have something perceived as a revolutionary technology, to have a lot of dumb money (chasing it). How many people are real experts in ML, people who have the right combination of the math, the engineering and real data? Maybe a few thousand people, a number growing really fast, have knowledge of the state of the art in the field. Suddenly there’s so much money and only a few thousand people who know the subject matter. Where are the investors getting their information from? The asymmetry is striking.

Natarajan: On suggestive definitions, suitcase words and troubling trends in research papers?

Lipton: We also have weaknesses inside the academic community itself, bad patterns that we would rather try to reduce. I have lots of peeves about the way people sometimes write papers. Suggestive definitions: people in the academic community, just like the investing community, get carried away with the excitement of incentives from the outside. One way to make a claim that can slip past peer reviewers is that, rather than make a statement about what your model can or cannot do, you give it a misleading name. Instead of saying you are trying to do passage-based question answering, you say you’re doing reading comprehension, which suggests that the machine is actually understanding what it’s doing! Instead of calling it slot filling, we call it natural language understanding! The danger with attributing cool names to machine learning is that you sneak in a very strong connotation without having to prove it. If you say we call this model the ‘consciousness model’ or a ‘thought vector,’ that’s terrible!

Suitcase words: these are words so overloaded that when you try to unpack them, you don’t know what’s going to fall out. If you talk about fairness, there are a lot of interpretations. The real danger is when you use the word as if it has a scientific meaning and allow people to come away with whatever interpretation they want. That’s not scientific communication!

There’s so much excitement about AI, so many people trying to get into a PhD program, and there’s pressure to have a paper before you even start your PhD. What ends up happening is that people go as fast as possible and you start getting papers published that use technical words in the wrong way to mean the wrong things. There’s this word, deconvolution. It’s very precise: it means to reverse a convolution. But when you see this term now in a deep learning paper, I don’t know whether they mean deconvolution in the real mathematical sense or whether they just mean going from a low dimension to a high dimension. This began about three years ago. It’s come to a point where the words have become soupy and big parentheses are required to explain which version is meant.

It’s time for researchers to acknowledge the role we play in this. The hype cycle is not the entire story but it’s part of it. The cycle of misinformation is snowballing from the research community to corporate blogs. Writing a scientific paper should be a generous act and the idea must be to benefit others.
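(To illustrate the terminology drift Lipton mentions: in modern deep learning libraries, the layer that maps a low-resolution feature map to a higher-resolution one is a transposed convolution, yet papers often label it “deconvolution.” The PyTorch sketch below is an illustrative assumption, not a tool cited in the interview; mathematical deconvolution, i.e. inverting a convolution, is a different operation.)

```python
import torch
import torch.nn as nn

# A transposed convolution, the layer that papers often loosely call a
# "deconvolution": it maps a small feature map to a larger one.
upsample = nn.ConvTranspose2d(in_channels=16, out_channels=8,
                              kernel_size=4, stride=2, padding=1)

x = torch.randn(1, 16, 50, 50)   # low-resolution feature map
y = upsample(x)                  # higher-resolution feature map
print(tuple(x.shape), "->", tuple(y.shape))   # (1, 16, 50, 50) -> (1, 8, 100, 100)

# Note: this is learned upsampling, not deconvolution in the strict
# mathematical sense of inverting a convolution, which is why the
# shared word invites confusion.
```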

Natarajan: What’s the fix?

Lipton: We need more adults. If I write a blog post, several thousand people may read it. If it turns up on the front page of Hacker News for a day, maybe 50,000 people will read it, but the excitement about AI and the number of people reading about it is a lot more. Scientific journalism is very different from other kinds of journalism. As a writer, your job isn’t to be the arbiter of what’s science. That’s very different from how to cover the economy or politics. You act as a conduit and synthesise the opinions, but you’re not the authority on the subject. What’s happening now with AI is that the technology is in a stage where corporates are hawking stories directly to the press by posting them to their blogs with the specific agenda of profit or PR, and that in turn gets paraphrased and percolates even further. This is work that hasn’t passed peer review! People need to distinguish between science, opinions and prophecy. It’s your job as a policy maker or a journalist not to forget these distinctions. You can’t convene a special session of Congress just because someone said robots would march on Washington.

Natarajan: When is it AI and when is it ML?

Lipton: There’s a sort of category error in comparing the two. AI is a very broad, aspirational term. The best definition comes from Andrew Moore, who was our dean of computer science at Carnegie Mellon: “Artificial intelligence is the science and engineering of making computers behave in ways that, until recently, we thought required human intelligence.”

AI speaks to a moving target and not to a specific technology. You can have an AI community that was working on one set of tools 10 years ago and is working on a very different set of things now. If you’re talking specifically about the technology, it’s better to use fewer words and be more specific; that’s generally a good idea. It’s not so much that the term AI is so horrible per se. Many reasonable people would argue that we shouldn’t have abandoned the term AI at all. But I think the challenge is to always ask ourselves why we’re using the words we are using. I know (and the danger is) that the terms AI and, increasingly, AGI are being used to convey a feeling of forward progress. Every AI startup is dot AI and not dot ML because this affects how they are perceived and valued. If this is what we’re chasing, that’s when you know it’s going to result in a whole lot of disappointment.

The views expressed above belong to the author(s).

Contributor

Nikhila Natarajan

Nikhila Natarajan is Senior Programme Manager for Media and Digital Content with ORF America. Her work focuses on the future of jobs, current research in ...
