For 30 years, Tom Davenport has been one of the most influential advisors globally on the business impact and application of information technology. He made the topic of big data and analytics a boardroom issue with his groundbreaking 2006 Harvard Business Review article (“Competing on Analytics”1) and a 2007 book2 of the same name. In 2016, Davenport and Julia Kirby published a book on AI (Only Humans Need Apply: Winners and Losers in the Age of Smart Machines3). Since the late 1980s, he has conducted research and published dozens of Harvard Business Review articles and 18 books, many of which have been bestsellers. Davenport is a professor at Babson College near Boston, a Fellow of the MIT Initiative on the Digital Economy, a co-founder of the International Institute for Analytics, and a senior advisor to Deloitte’s analytics practice.
TCS: You, more than anyone else, made executives aware of the opportunities of big data and analytics more than 10 years ago with your Harvard Business Review article and the book that followed. Now the market’s attention seems to have shifted away from analytics to artificial intelligence. Since both depend on digital data, there must be a big connection between the two—one that may be getting lost in the marketplace. How would you describe that connection?
Davenport: The fact that we have so much digital data today—in some domains, a massive amount—means AI can succeed now in areas in which it was struggling. One example is image and facial recognition. Yes, I realize that recognizing cat photos on the internet is not one of humankind’s greatest advances (laughs).
But, seriously, the distinction between big data and analytics and AI is a little bit artificial, because so much of AI is based on big data and analytics. AI makes analytics more autonomous and, in some cases, is a more sophisticated and complex form of analytics. Some forms of AI are more semantic in orientation, trying to understand human language and so on. But the largest activity in AI, and the most sophisticated, is machine learning. Its new forms, like deep learning, are among the biggest changes in the current round of AI, and it is statistical in nature.
“The largest activity in AI, and the most sophisticated, is machine learning.”
You need a lot of data and statistical algorithms to analyze this data, no matter which of the four areas of analytics I’ve written about (descriptive, predictive, prescriptive, and automated). But with AI, you get industrial-scale analytics by automating the analysis.
TCS: So does that mean you’d advise companies that haven’t mastered big data and analytics not to go right to mastering AI?
Davenport: That’s right. Any company that would skip analytics and go straight to AI is not likely to be successful with AI. I was talking recently with an Asian insurance company, which is interested in automating some aspects of its underwriting. But the problem is, the business unit that proposed this doesn’t use analytics to any great extent for underwriting. It continues to rely on the judgment and experience of its people who make decisions on business insurance. I think it will be very difficult to make that jump without having done the analytics and accumulated the data you need for the analytics.
“Any company that would skip analytics and go straight to AI is not likely to be successful with AI.”
TCS: In your book last year you talked about automation vs. augmentation. Talk to us briefly about each. Are most companies choosing automation over augmentation, and if so, why?
Davenport: Automation, of course, is the replacement of human workers by machines. Augmentation is when machines and people work alongside each other. Perhaps because I am a human, I am partial to augmentation. My book with Julia Kirby lays out five approaches to augmentation—three that involve humans working closely with smart machines, and two that involve humans prospering by avoiding machines. My research since the book suggests that a great majority of companies implementing AI are pursuing augmentation. There has been very little job loss thus far. I am sure some jobs will be lost to AI in the future, but I think they will be fewer in number—and it will take longer for the jobs to go away—than most other observers predict.
TCS: Do you believe every industry has big opportunities from cognitive technologies? And which ones do you believe have some of the biggest opportunities?
Davenport: The opportunities are there for every industry. But certainly some will move more slowly than others—the same ones that moved more slowly with analytics. The availability of data is the key gating factor. B2B companies, for example, just don’t tend to have as much data as B2C companies, at least not customer data.
In the example I mentioned earlier, business insurance will move more slowly with AI than personal lines. Any industry where you don’t have much data is going to be challenged by doing really sophisticated work in this area. Also, smaller businesses are moving much more slowly than big businesses with AI, as they have with other technologies.
The nature of what a company sells also will determine whether it moves quickly or slowly. Those offering a purely digital product are going to explore AI much faster. Banking is basically all digital now, and the rapid adoption of robo-advice is evidence of that. AI has also been pretty pervasive in asset management and trading for a while, and it is becoming more so.
In medicine, digital specialties like radiology and pathology are likely to be the first to embrace AI. Medical activities that involve a lot of face-to-face contact with patients or physical manipulation of patients will probably proceed more slowly.
The media industry—and especially online media such as Facebook and Google that were born digital—has huge opportunities with AI. Google and Facebook are among the world leaders in using AI, and yet they are still finding their way on just how automated they can be. Facebook has had some false starts in terms of not being able to easily identify fake news vs. real news, and so it had to back away a little bit. It also wasn’t able to fully use automated approaches to identify offensive images, so it still has a large number of people doing that.
But they’re trying AI all over the place. For the most part, they have been quite successful at automating key aspects of figuring out what ads to run on what sites. That’s been very lucrative for both Google and Facebook.
TCS: What jobs do you believe are at greatest risk in companies because of AI?
Davenport: The way I think about it is this: If you’re doing something that is the same as what everyone else does, you will have two problems. The first is someone can probably identify the structure of your job, and if they can identify the structure they can probably write some code to automate it. The second problem is that if you have a job that many other people are doing, there is a lot of economic incentive to go to the trouble of automating it.
If you have a niche job that few people do, why would anybody bother automating it? My favorite example of this comes from a newspaper story I read a couple of years ago. It talked about a person who connects buyers and sellers of Dunkin Donuts franchises (a U.S. restaurant chain). He makes an incredible living and drives around in a Rolls Royce. And connecting buyers and sellers is something that machines do all the time. But I think that is such a niche that nobody would view it as economical to automate.
TCS: Should companies that invest, or plan to invest, heavily in AI be transparent with employees about what they are doing and why they are doing it? Should they address their worry about jobs?
Davenport: In the first place, cutting jobs through automation is generally not a good strategy. Your costs will go down, but you should expect competitors to do the same thing. That means everybody’s margins in your industry will drop, and it will be harder to innovate. Unless you already make a highly commoditized product, it is a race to the bottom.
“We have to increase our productivity, which in the U.S. has been growing very slowly or even falling in 2016.”
What’s more, in the transition from humans doing the work to computers doing the work, you need a lot of help from the people doing the work. You’re far more likely to get that help if you give them some assurances that they will be able to keep their jobs.
I think we have to get some economic benefit from all of this technology. We have to increase our productivity, which in the U.S. has been growing very slowly or even falling in 2016. To boost productivity, companies must produce more at lower cost. But the best way to achieve that is through attrition, not through layoffs. Implementing AI is going to be a slow process anyway, so you might as well earn some loyalty from your employees and say you’re only going to reduce jobs through attrition.
TCS: Do you come across companies that are thinking carefully about how these smart machines will affect employee jobs, and how employees might begin to prepare for the transition?
Davenport: It’s very early days for that. But General Electric has started to do it. They’ve created these personas of the types of jobs that are likely to be threatened, new types of jobs that will be created, and the jobs that will be largely augmented by machines. That was one of the topics they addressed at a big strategy meeting recently in Boston.
“The key for thought leaders will be in coming up with new ideas rather than describing numbers about old ideas.”
But I would still say it’s pretty early for that kind of conversation. It should be happening more frequently, but it’s not—and for several reasons. Beyond not creating a workforce where fear of automation takes over, the fact is it will take a while to train people on the new skills they’ll need to work effectively alongside the machines. This is the “augmentation” model for AI. These people may hold back their knowledge and their support if they are worried that they are going to lose their jobs. So for a variety of reasons it makes sense to be upfront and start early.
TCS: Do you think thought leaders soon will be facing competition from AI?
Davenport: Well, not the good ones (laughs). But I do think certain aspects of what professional services firms have referred to as thought leadership could well be at least partially automated. For example, many of them do surveys; you could easily imagine systems that automatically generated text about the results of those surveys, just like you have systems that automatically generate write-ups of baseball games and company earnings reports. Now, I don’t think any such application is available yet for surveys.
But if and when there is, I think it might be able to do a better job at writing about the numbers than a human could. For example, an AI writing application could make comparisons to other surveys, and provide analysis that a human might not notice because he or she might get bored poring over survey numbers.
This kind of writing application has worked quite well at the Associated Press and other organizations that have been automating the writing of numbers-driven reports. In fact, several companies have automated the writing of monthly investment reports, and Suspicious Activity Reports for money laundering in banking. Writing about numbers is increasingly going to be automated.
The key for thought leaders will be in coming up with new ideas rather than describing numbers about old ideas.
1. “Competing on Analytics,” Harvard Business Review, January 2006, accessed August 4, 2017, https://hbr.org/2006/01/competing-on-analytics
2. Competing on Analytics: The New Science of Winning, Amazon.com, https://www.amazon.com/Competing-Analytics-New-Science-Winning/dp/1422103323
3. Only Humans Need Apply: Winners and Losers in the Age of Smart Machines, Amazon.com, https://www.amazon.com/Only-Humans-Need-Apply-Machines/dp/0062438611