Expert Speak Digital Frontiers
Published on Aug 30, 2018

Human roles that are all about prediction are not going to be important, but it’s the other aspects — judgement and action — that will gain in value.

“Thinking about AI as a drop in the cost of prediction is transformational” — Avi Goldfarb

Times Square — a new series on jobs, automation and anxiety from the world's public square.


“Transformational” is how Professor Avi Goldfarb, co-author of Prediction Machines, describes the effect of thinking about AI as a drop in the cost of prediction, in a 30-minute masterclass on how the main themes in his recent book apply to four broad sets of people: business leaders, students and recent graduates, policy makers and the catchall category of professionals. Prediction Machines, co-authored with Ajay Agrawal and Joshua Gans, has earned widespread acclaim from the world’s leading names in academia, business and artificial intelligence, including Erik Brynjolfsson of MIT, former US Treasury Secretary Lawrence Summers, Vinod Khosla of Khosla Ventures and Hal Varian, chief economist of Google.

“AI may transform your life. And Prediction Machines will transform your understanding of AI. This is the best book yet on what may be the best technology that has come along,” says Summers, in praise.

“Prediction is increasingly going to be done by machines, so if your job and your industry are all about prediction and a particular human is helping resolve uncertainty, then that’s a problem,” Goldfarb tells us on the theme du jour for AI junkies: the future of work.

Avi Goldfarb is the Rotman Chair in Artificial Intelligence and Healthcare and a professor of marketing at the Rotman School of Management, University of Toronto. Goldfarb is also Chief Data Scientist at the Creative Destruction Lab, Senior Editor at Marketing Science, and a Research Associate at the National Bureau of Economic Research.

As of September 2017, the Creative Destruction Lab, for the third year in a row, had the greatest concentration of AI startups of any program on the planet. With the phenomenal brainpower that the CDL represents and the proximity to AI applications that it affords, Goldfarb, Gans and Agrawal found themselves at “the right place at the right time to form a bridge between the technologist and the business practitioner.” The result is Prediction Machines, and this interview is a summary of the five dominant themes within it.

ORF Fellow Nikhila Natarajan sits down for a conversation with Goldfarb.

Nikhila Natarajan: When is AI simply an input and when is it strategic? When is it strategic even as an input (even if it’s not yet cheap enough)? What’s the rule of thumb for thinking clearly about the choices?

Avi Goldfarb: The first thing to recognise is this: the reason we’re talking about AI in 2018, and we weren’t talking about it so much in 2008 or 1998, is a particular aspect of AI called machine learning, which is prediction technology. And prediction, in the technical sense, is taking information you have to fill in information you don’t have. The reason prediction is transformational is that it’s a critical input into decision making. So, to think through organisational transformation, you have to know where uncertainty is becoming a key constraint on decision making. Prediction reduces uncertainty, and uncertainty is everywhere. The example I really love is the airport lounge. Airport lounges exist because you don’t know how long it’s going to take to get to the airport, and once you’re there, you don’t know how long it takes to get through security. There’s a lot of uncertainty, so the airports compromise and give their best customers this nicer place to sit. But that’s a compromise. What would really be ideal is that you just get to the airport and walk right onto the plane. The only reason airport lounges exist is uncertainty. Better prediction would mean there’s no need for an airport lounge at all, no need for that reduced quality of experience, no need to invest to make things better.
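Goldfarb’s definition of prediction, taking information you have to fill in information you don’t have, can be made concrete with a toy sketch of his airport example. This is not from the book; the data and the hour-matching rule are entirely hypothetical, just a minimal illustration of how a prediction shrinks the buffer time that uncertainty forces on a traveller.

```python
# A minimal sketch of "prediction as filling in missing information",
# using the airport-lounge example. All numbers are hypothetical.

def predict_travel_minutes(history, hour_of_day):
    """Predict travel time by averaging past trips that left at the same hour."""
    similar = [minutes for (hour, minutes) in history if hour == hour_of_day]
    if not similar:  # no comparable trips: fall back to the overall average
        similar = [minutes for (_, minutes) in history]
    return sum(similar) / len(similar)

# Hypothetical past trips: (hour of departure, minutes to the airport)
history = [(8, 55), (8, 60), (8, 65), (14, 30), (14, 35)]

# Without a prediction, a traveller budgets for the worst case
# and spends the slack waiting in the lounge.
worst_case = max(minutes for (_, minutes) in history)

# With a prediction, the buffer shrinks to the expected time at that hour.
expected = predict_travel_minutes(history, hour_of_day=14)

print(worst_case)  # 65
print(expected)    # 32.5
```

The gap between the worst case and the prediction is the slack the lounge exists to absorb; as the prediction improves, that slack, and the lounge with it, becomes unnecessary.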

Natarajan: You’ve said “reframing a technological advance as a shift from costly to cheap or from scarce to abundant is invaluable for thinking about how it will affect your business.” In that context, you’ve created an AI canvas, a kind of rubric. How does that work?

Goldfarb: Once you understand that prediction is an aspect of decision making, you’ve got to think through the other parts of decision making: how is prediction going to be useful, and what are the other assets that are going to be important? Besides prediction, we identify three different aspects of decision making. The first is data. It’s not news that data is increasingly valuable; data is the core input to machine learning and to the recent advances in AI. Second is what we call an action. Decisions are not worth spending a lot of time and money on unless you can do something with them. So the ability to take an action, and the ownership of the action, end up being key to getting a benefit out of the prediction. The third aspect, and the one that captures the imagination the most, is what we call judgement. That’s knowing which predictions to make and what to do with those predictions once you have them. More formally, it’s figuring out the payoffs to those decisions: how happy or unhappy will you be in different situations? But the intuitive way to think about it is: what predictions should you make, and what should you do with them once you have them?

Natarajan: Determining the best approach requires estimating the ROI of each type of data: training data, input data and feedback data. Assuming that a company is starting from zero, where should it begin? What are those crucial decisions around the scale and scope of data acquisition?

Goldfarb: The first thing you need to have is a problem somebody cares about solving! So, before you even think about what kind of data you need, you need to make sure what you’re investing in is something that people will pay for. There’s data you need to train your AI: before you even take the AI into the field, you need an AI that works. Then there’s the data you need, once it’s in the field, to have it run. Then comes the third piece, which is the most important one: feedback data, which drives the continual learning process. Once your AI is out in the field, it can continue to get better and better and create sustainable competitive advantage. These different kinds of data all feed into AI. What we’re emphasising in the book is where the technology is today and where it’s going to be in the next few years. Really intelligent AI — we’re not there yet, but we will be.

Natarajan: You quote Donald Rumsfeld about there being known knowns, known unknowns and unknown unknowns. Where does this fit in when we think about firm-level use of prediction machines?

Goldfarb: Decisions are easiest when you have things you know and you know that you know them: these are Rumsfeld’s known knowns. This is where you have a good prediction, you can use it and do things better. Known unknowns are when you know there are certain things you don’t know, so how do you adjust to that? Amazon makes these predictions about what you may want to buy, but the known unknown is that they don’t know exactly what you want to buy, so they give you a whole bunch of options. They understand their predictions are not ‘good enough’, there’s some variance, and they adapt their systems for it. The big danger is the unknown unknowns: things you don’t know you don’t know, especially if you think you know them but you don’t. A lot of things go wrong when you are confident that you have a good prediction but it turns out you don’t have it at all. The book The Black Swan is about this idea: before the Europeans came to Australia, they thought all swans were white and couldn’t imagine the existence of a swan that wasn’t. It turns out the existence of black swans wasn’t transformational to the economy and wasn’t a big deal, but he (Nassim Nicholas Taleb) uses it as a metaphor for how we didn’t really appreciate what could go wrong, and that’s how the financial crisis (happened). The model said nothing could go wrong, and we trusted the model so much that we didn’t know what could go wrong: an unknown unknown. That is where prediction machines fail. The key is to convert those unknown unknowns into known unknowns and have a human managing that process.

Natarajan: You write about that time when Steve Jobs introduced the iPhone to the world and not a single person was able to divine that it spelled the end of the taxi industry. The suggestion, clearly, is that today’s AI foretells the end of some businesses which may be thriving.

Goldfarb: ..and we don’t know what it is. There are a few things we do know. We know that industries that rely on using humans to help resolve uncertainty are going to be in trouble. Prediction is increasingly going to be done by machines, and if your job and your industry are all about prediction and a particular human is helping resolve uncertainty, then that’s a problem. Exactly how that’s going to play out is a real open question. We know the forces at work, but how it’s going to affect this industry versus that industry is as uncertain as, say, recognising in 2007 that perhaps the biggest impact of the mobile phone would be on taxis.

Natarajan: Automation that eliminates a human from a task does not necessarily eliminate them from the job. The risk for employees is that someone else may be doing that task. So, who is this prototype worker who will do well in the changed set of circumstances?

Goldfarb: So, the human roles that are all about prediction are not going to be important, and it’s the other aspects in particular, judgement and action, that gain in value. The action part is less important and worth getting out of the way quickly. There are certain things we just like better when they’re done by a human. Whether that’s entertainment (celebrities, performance, athletics) or some aspects of personal care, comfort and social settings, there are some things just better done by humans, even if they’re scripted by a prediction machine. The more interesting aspect of this is judgement: knowing the key aspects an organisation cares about. There are an increasing number of roles for that, and that includes social skills and an understanding of the humanities and social sciences to know what matters. Take the school bus driver. What does a school bus driver do? We think of them as driving a bus. That is a key part of their job, and because people are transforming driving into a prediction problem, you might think we don’t need as many school bus drivers anymore. But you do need someone on the bus to protect the kids from other people and, more importantly, from each other. It’s unlikely there will be no jobs for people on school buses, but the skill set is going to be very different. There’s also going to be a whole bunch of roles in training machines and coding machines that stay with humans; we’re a long way away from the machines coding themselves.


Natarajan: There’s a chess analogy you’ve used in the book — where a programme developed its pieces, launched an attack, and immediately sacrificed its queen! “Reverse causality remains a challenge for prediction machines.” Can you explain?

Goldfarb: Prediction technology uses the data you feed it to essentially say: these two things tended to happen together in the past, so they are likely to happen together in the future. The chess analogy is that when somebody sacrifices their queen, it’s because they were very confident they were a few moves away from winning. So if you look at chess boards naively, at people deliberately sacrificing their queen, it suggests that the sacrifice causes winning. But it causes winning only in a very particular set of circumstances, and you need to be clear how to get there. Now, to be clear, computer chess has solved that problem; that was an old issue and it doesn’t happen anymore. But the underlying point is that you need to understand where the data is coming from and the biases that might be in it. You need to know what’s causing what, not just the correlations, and prediction machines are not good at knowing what’s causing what. If you really care about what’s causing what, you need a different set of tools.
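The queen-sacrifice trap can be simulated in a few lines. This is a toy illustration, not from the book: the probabilities are invented, and a hidden confounder (being a few moves from a forced win) drives both the sacrifice and the victory, so the naive correlation between sacrificing and winning is misleading.

```python
# Toy simulation of why "queen sacrifice correlates with winning" does not
# mean sacrificing your queen causes winning. All probabilities are made up.
import random

random.seed(0)

games = []
for _ in range(10_000):
    # Hidden confounder: is the player already a few moves from a forced win?
    near_forced_win = random.random() < 0.05
    # Strong players only sacrifice the queen when the win is already in hand.
    sacrificed = near_forced_win and random.random() < 0.5
    # Winning depends on the position, not on the sacrifice itself.
    won = near_forced_win or random.random() < 0.4
    games.append((sacrificed, won))

def win_rate(rows):
    return sum(won for _, won in rows) / len(rows)

with_sacrifice = [g for g in games if g[0]]
without_sacrifice = [g for g in games if not g[0]]

# Observationally, sacrificing "predicts" winning perfectly here (rate = 1.0),
# while games without a sacrifice are won far less often (roughly 0.4).
print(win_rate(with_sacrifice))
print(win_rate(without_sacrifice))
```

A prediction machine trained on these games would score the correlation correctly, yet intervening, sacrificing the queen in a random position, would not raise the win rate at all, because the sacrifice is a marker of winning positions, not a cause. That is the gap between prediction and causal tools that Goldfarb points to.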

Natarajan: When prediction is cheap, there will be more prediction and more complements to prediction. But at what point does a prediction machine become so reliable that it changes how an organisation does things?

Goldfarb: The reason it’s valuable is because it’s cheaper (quality adjusted) than the alternatives. The first thing to recognise is that when something becomes cheap, we do more of it. This is economics 101: demand curves slope downward; when coffee becomes cheap, we buy more coffee. The next point to recognise is that when coffee becomes cheap, we also buy more cream and sugar. These are the complements. When prediction becomes cheap, what do we buy more of? That’s judgement, that’s action, that’s data. The next thing to think about is that if something becomes really, really cheap, we start to think of new applications for it that we may not have thought of before. My coffee analogy breaks down here, but the key thing is that prediction became cheap enough that we came to think about new applications for it, like driving now. Go back decades and you have computers and arithmetic. Computers doing arithmetic were like artillery tables to figure out where cannonballs were going to land, and over time, as computers became cheaper, we began to use them for music and games and movies and pictures: new applications for arithmetic, because of cheap arithmetic. We should expect the same pattern with prediction as it gets cheaper. How does all this apply to organisational transformation? First is identifying where uncertainty is a bottleneck in your organisation. What are the things you do which you’re not doing as well as you should be? What are the constraints in your organisation because of bad prediction? I brought up the Amazon example before; that’s a known unknown now. But if their prediction got really, really good, closer to a known known, they could ship a package to your door, you open it, see what’s inside, and then shop at your door, or maybe take everything. In that scenario, Amazon’s entire business model would change. They would no longer be a catalogue company and would instead become a ship-then-shop business. The predictions have to be good enough, and the cost of managing returns low enough, for it to be worth a bigger share of your wallet.

Natarajan: You’ve come up with the AI canvas, a seven-box rubric that introduces discipline and structure into determining the exact use of predictive AI in an organisation. Erik Brynjolfsson and Tom Mitchell have come up with a 21-question rubric. So, would you say that if we use both, firms can come pretty close to a sensible framework?

Goldfarb: The two nest and complement each other. What we’re saying is: if you identify a place where there’s uncertainty in your organisation, here’s the rubric for making prediction happen. First you need to think through what it is you’re trying to predict. Then you need to figure out what the judgement and payoffs look like. What data do you need, and what outcomes do you want to influence? Brynjolfsson and Mitchell’s framework is about how professions look today. Skills and professions change over time. While there are many high-skill professions at risk of changing dramatically because of machine learning, I am less worried about the people in those professions. Compare, for example, the truck driver to the radiologist: radiologists are good at learning things, and that’s how they became radiologists. Yes, the field is going to change, and many of them are going to be able to keep up with the new skills they need, maybe investing a few years to do that, but I’m less worried because they’ll land on their feet; they’re good at learning. Truck driving needs less learning, so the people in that profession are likely to be affected more by machine learning and AI.


Natarajan: With information retrieval, anything over 80% recall and precision is okay, but not with assistance. If that needs to be much better than 80%, and that’s what’s called AI-first, what are the complements that gain value?

Goldfarb: When Google talks about AI-first, they’re saying their AI has to be really, really good. But if you really dig deep, what AI-first means is that the AI people moved offices to be near the CEO, and that means there’s a whole bunch of people who used to be near the CEO who are no longer near the CEO. AI-first means something else comes second. AI-first doesn’t only mean your predictions become good; it means you’re willing to do that at the expense of other things. It means investing in better data at the expense of short-term customer experience.

Natarajan: You say owning the actions affected by prediction can be a source of competitive advantage that allows traditional businesses to capture some of the value from AI. Can you give us a real world example?

Goldfarb: We had a startup in the CDL that did a great job predicting sales in a grocery store, particularly in the dairy category. Where products have quick sell-by dates, knowing your inventory is really valuable, and there are lots and lots of products in that category. This startup had a much better prediction than anybody else, and they tried to build a business around that. But they struggled because, at the end of the day, it was the retailer who owned the customer and the customer relationship. The retailer therefore had control over the feedback data, and the retailer was the one who was in the end able to profit. Because the grocery retailer owned the action and the customer relationship, they were the ones able to benefit from the prediction, which meant that being just a prediction technology without any ownership of the action was a very limited business model for the startup.

Natarajan: Any closing thoughts? And what are you telling your students to learn?

Goldfarb: Just to summarise: the reason we are talking about AI today is because of the advances in machine learning over the last 10-20 years, and that’s prediction technology. In trying to understand how this is going to evolve and affect an organisation over the next 5-10 years, you should be thinking about it as a drop in the cost of prediction, which means it’s all about reducing uncertainty. The opportunities are going to be in two places. First, in places where humans are already doing prediction, that’s increasingly going to be done by machines. And second, we’ll recognise a new set of predictions we’ve not thought of before; that’s where the real transformation is going to be. What we’re telling (our) students is to go understand the technology and understand what it can do. Understand what the organisation really cares about. It’s this combination of computer science and social science that’s going to be the core skill.

The views expressed above belong to the author(s).

Author

Nikhila Natarajan

Nikhila Natarajan is Senior Programme Manager for Media and Digital Content with ORF America. Her work focuses on the future of jobs, current research in ...