Could artificial intelligence be your next strategist?

Learning machines have achieved some noteworthy victories in the last two years. In 2015 IBM’s Watson corrected an erroneous cancer diagnosis for a Japanese patient, cross-referencing the world’s entire stock of oncology knowledge against the patient’s genetic data – in less than 10 minutes.

In 2016, Google’s AlphaGo artificial intelligence beat the reigning human grandmaster in the ancient game of Go – often referred to as the most strategic and difficult game ever invented – using moves that nobody had taught the machine.

Processing vast quantities of unstructured data to solve difficult problems? Anticipating an opponent’s reactions and coming up with creative solutions? It sounds a lot like strategy work, which raises the question: how far are we from a functioning strategy machine?

Current artificial intelligence (AI) applications are examples of so-called ‘weak AI’: they solve a narrow, predefined problem by analysing a predefined set of data. Even under these limitations, weak AI can outperform humans by doing what computers do best: making a vast number of calculations in the blink of an eye, free from cognitive biases and without ever getting tired or jaded.

In a strategy context, there are plenty of tasks suited to current AI applications: recognising patterns in customer or competitor behaviour, predicting future raw material prices, calculating the probabilities of various future scenarios, and developing bias-free implementation plans – to name a few.

 

Traditionally “human” aspects of strategy can be automated 

Not surprisingly, researchers and consultants are recommending that such lower-order strategic tasks should be delegated to learning machines as soon as they become more widely available. 

However, the same experts convey a reassuring message for strategists worried about their livelihoods: higher-order strategic tasks, such as defining organisational objectives, coaching people, or reframing problems creatively, will remain firmly in human hands, now and in the future.

Yet this proposed division of labour between human strategists and their computer counterparts is likely to become obsolete as soon as someone develops so-called ‘strong AI’. Such advanced AI could think creatively and in abstract terms, and thus define the questions worth answering – and select the most appropriate information sources for each question.

This kind of super-intelligence would make the domain of human strategists very small indeed – but that would not necessarily be a bad thing. After all, human strategists have some widely acknowledged weaknesses: M&A deals can destroy shareholder value, new product launches can fail, and most employees cannot even remember their organisations’ strategies.

 

Problematic business model 

Human strategists, however, may be needed for longer than technology optimists suggest – and this is likely to apply even to the lower-order tasks that should be easy to automate.

What is missing from the current “strategy machine” discourse is the realisation that creating an AI to solve commercial problems is rather different from harnessing learning computers to improve the medical treatment of patients.

Healing patients as fast as possible is in the interest of every stakeholder, and one oncology AI could be enough for the entire world. The situation is markedly different for strategy machines. Would you trust the advice given by an AI if you knew that your competitor was using the same machine?

This need for unique strategies and competitive advantage is likely to discourage IBM from teaching Watson the basics of strategy – there simply wouldn’t be enough customers interested in potentially “me-too” strategies.

Large consulting companies are already investing heavily in their own software capabilities, so they are the most likely candidates to bring the first strategy AIs to market.

But developing several competing strategy algorithms instead of one will inevitably spread development resources more thinly – and thus slow progress down.

So, there is no need to worry about your strategy job in 2018. But it might make sense to keep a close eye on the AI development front, regardless of how “non-tech” your sector or background is.

 

Associate Professor Suvi Nenonen works at the University of Auckland Business School’s Graduate School of Management and teaches in the MBA programmes. Her research focuses on business model innovation and market innovation. 

 
