AI collaboration: Three jobs of the future
The future of AI in the workplace will not be one of full automation but rather a collaboration between humans and machines.
AI in the workplace
Few technologies have captured the societal zeitgeist the way that ChatGPT did when it launched in November of 2022.
Within five days of its release, OpenAI’s breakthrough platform accumulated more than a million users. Only a month later, Microsoft announced a multi-year, multi-billion-dollar investment, alongside plans to “responsibly advance cutting-edge AI research and democratize AI as a new technology platform.”
Practical and ethical concerns regarding the use of artificial intelligence remain. ChatGPT, for instance, can be tricked into providing false, vulgar, and illegal information. However, there’s little doubt that AI is here to stay. Further iterations of the technology will allow firms to automate and optimize business processes in ways once considered unthinkable – leading some to proclaim “the end of work for humans” is near.
Such pronouncements are premature. The future of AI in the workplace will not be one of full automation but rather a collaboration between humans and machines. In spite of its awesome algorithmic power, AI will not (and cannot) accomplish many of the most important tasks for businesses.
Let’s look at three jobs that humans will continue to perform for many years to come.
The project starter
In this job, humans serve as the visionary behind AI-enabled projects.
Just like other chatbots, every conversation with ChatGPT begins the same way—with an empty box for text entry. If no initial text is entered, nothing will happen.
The reason behind this, setting aside all the bells and whistles, is that complex AI models are nothing more than fancy calculators. They are machines designed to receive and process inputs (according to a series of very complicated equations) to produce outputs. If there are no inputs, there will be no outputs. Nothing in, nothing out.
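The "nothing in, nothing out" idea can be made concrete with a toy sketch. The function below is a hypothetical stand-in for a real model (the name and canned responses are invented for illustration); the point is simply that, like any AI system, it is a function from inputs to outputs and produces nothing until a human supplies an input.

```python
# A toy "model": a stand-in for billions of learned parameters,
# here reduced to a simple lookup. Purely illustrative.
def toy_model(prompt: str) -> str:
    canned = {
        "sales": "Try bundling products.",
        "hiring": "Widen the candidate pool.",
    }
    return canned.get(prompt, "I need more context.")

# Nothing in, nothing out: until a human supplies a prompt,
# the function is never called and no output exists.
print(toy_model("sales"))  # a human had to choose "sales" first
```

However sophisticated the internals, the shape is the same: no human-initiated input, no output.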
AI models are a means to an end—increase sales, enhance efficiency, and improve services—but they have no preference about which ends they achieve. They can be used to boost customer success, to invent new products, to promote equality, or to reduce greenhouse gas emissions, but they will be used for nothing if a human does not initiate their use in the first place.
In time, AI models may be fed inputs from other, even more complicated AI models—like when a self-driving car changes course independently after a routing AI informs it of a crash up ahead. Even then, however, humans will have chosen the AI’s initial objective of arriving safely at a destination. Humans set the eventual cascade in motion.
Moreover, as anyone who’s worked for a large organization knows, nothing happens without buy-in. Perhaps it’s senior management, investors, team leads—or all three—who must be convinced of a project’s merits. AI, for its part, cannot argue for a project unless a human has programmed it to do so. Consequently, companies will need humans to act as their purpose-led project starters.
The rule setter
In this job, humans help put in place rules for AI.
A few years ago, AlphaZero, Google’s AI chess prodigy, defeated two other AI chess programs to become the world's highest rated chess player. This was noteworthy because AlphaZero famously received no instructions about chess strategy in its initial programming. It played against itself millions of times to learn how to win.
AlphaZero’s success would seem to indicate that, in the future, humans need not train the most powerful AIs – after all, humans had nothing to teach AlphaZero. This conclusion misses the point. Even in chess – especially in chess – there are rules which cannot be broken.
When AlphaZero plays a game and notices that its king is in danger, it does not attempt to teleport the piece off the board to safety. Its programming forbids it from doing so. Similarly, when facing a tough opponent, it does not attempt to hack into the opponent’s systems to disrupt its ability to calculate strong moves – this, too, is understood to be out-of-bounds.
In business, as in life, there will always be rules and constraints. For example, projects may need to adhere to budgets, make use of specific resources, or avoid certain consequences. This is where future human workers must play a role.
By programming the rules under which AIs can operate and setting the limits of what can and cannot be done, humans create the kind of bounded problem spaces that allow AIs to flourish. Without these guardrails in place, AIs will be unhelpful at best, and problematic at worst (like in 2016, when Twitter users caused Microsoft's chatbot 'Tay' to spew hate speech because proper restrictions had not been put in place).
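One way to picture the rule setter's job is as a human-authored validation layer that every AI-proposed action must pass. The sketch below is a hypothetical illustration (the action names and budget figures are invented), not a description of any real system: the AI may propose whatever it likes, but only proposals inside the human-defined bounds are allowed through.

```python
# Hypothetical sketch: human-authored rules bound what an AI may do.
# The allowed actions and budget cap stand in for a chess-like rulebook
# and a business constraint, respectively.
ALLOWED_ACTIONS = {"advance", "retreat", "capture"}

def within_rules(proposed_action: str, budget_spent: float, budget_cap: float) -> bool:
    """Return True only if the AI's proposal respects the human-set constraints."""
    return proposed_action in ALLOWED_ACTIONS and budget_spent <= budget_cap

print(within_rules("capture", 900.0, 1000.0))        # legal action, under budget
print(within_rules("teleport_king", 900.0, 1000.0))  # the rulebook forbids it
```

The AI optimizes freely inside the box; the human decides where the walls of the box are.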
Hence, humans will have an essential role to play as the AI rule setter.
The edge-case specialist
In this job, humans serve as subject matter experts.
Last year, Ford and Volkswagen decided to pull the plug on their plans to develop fully autonomous self-driving vehicles. Given the pace at which AI capabilities are growing, this move seems counterintuitive. After all, weren’t fully autonomous vehicles supposed to be just around the corner?
On the contrary, Ford and Volkswagen have determined that fully autonomous vehicles will be more difficult to achieve than once thought. This difficulty stems from the ubiquity of edge cases.
An ‘edge case’ is an unexpected or unforeseen situation. AIs excel at what they do because they have been exposed to vast amounts of historical data. Sometimes, however, changes in the present can cause historical data to lose its relevance. In the context of fully autonomous vehicles, this could mean a pedestrian acting unpredictably or a freak weather event.
In business settings, edge cases could refer to a firm’s unique exposure to changes in the multinational tax code, or to nascent employee wellness movements that affect worker productivity. Anytime the business landscape changes, new edge-case scenarios will arise.
Some of these changes will be close enough in spirit to historical events that AIs will be able to account for them. Others, however, will be novel enough to cause AIs to produce misleading or harmful outputs.
This is why businesses will, for many years to come, rely on human oversight to review AI outputs. Edge-case specialists with domain expertise will be in demand long after AIs take over many business operations because AIs, while brilliant, are not infallible.
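The oversight loop described above can be sketched as an escalation rule: automate the cases that resemble historical data, and hand everything else to a human specialist. This is a deliberately simplified illustration (the numeric range and function names are invented); real systems would use far richer out-of-distribution checks.

```python
# Hypothetical sketch: escalate "edge cases" to a human specialist when an
# input falls outside the range observed in historical data.
HISTORICAL_RANGE = (0.0, 100.0)  # invented bounds standing in for training data

def handle(value: float) -> str:
    lo, hi = HISTORICAL_RANGE
    if lo <= value <= hi:
        return "automated decision"          # looks like the past: let the AI act
    return "escalate to edge-case specialist"  # novel input: a human reviews it

print(handle(42.0))   # within historical experience
print(handle(250.0))  # an edge case the model has never seen
```

The threshold itself is a human judgment call, which is precisely why the edge-case specialist's domain expertise stays in demand.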
A new beginning
AIs will transform how companies do business, but they will not drive this transformation.
Behind every machine will be an army of humans working to initiate, guide, and oversee AI functions. For organizations, the rise of AI will not herald the end of work, but rather, the beginning of a new cornucopia of human-led possibilities.