The long game: Re-engineering your business for artificial intelligence

There’s something of a battle raging around the world, with the giants of the technology sector lining up on opposing sides over how society should deal with the mass adoption of artificial intelligence. In New Zealand, those at the coalface of AI say business leaders should be preparing now: understanding the downstream impacts on their businesses and putting in place a dedicated investment stream to re-engineer them for the future.

The tech billionaires have plenty to say about AI. Tesla and SpaceX chief executive Elon Musk has been widely reported as saying that artificial intelligence is one of the most pressing threats to the survival of the human race and, like the scientist Stephen Hawking, has pushed for proactive regulation. Facebook’s Mark Zuckerberg calls Musk’s viewpoint negative and “pretty irresponsible”, pointing to the advantages for medical diagnosis and driverless cars. Microsoft’s Bill Gates has reportedly stated that robots which take human jobs should be taxed, and last month saw reports that Facebook had shut down two artificial intelligence agents that appeared to be chatting to each other in a strange language.
In New Zealand the debate is also underway. While it’s early days, the New Zealand Artificial Intelligence Forum was launched in June and the NZ Law Foundation is funding a major study into the legal ramifications of AI. Even at this stage, local academics and technology thinkers say there is plenty that New Zealand business leaders should be addressing around the adoption of AI, both the opportunities it offers and the risks associated with it.
As Joanna Batstone, IBM’s chief technology officer for Australia and New Zealand, told Management, cognitive computing is already here. IDC forecasts that worldwide revenues for cognitive and artificial intelligence systems will reach $12.5 billion in 2017, an increase of 59.3 percent over 2016.

Batstone says IBM, which is at the forefront of AI with its Watson platform, firmly believes it needs to be front and centre of the dialogue around the responsibilities associated with AI.

She says IBM is taking a leadership role, and points to an IBM blog on transparency and trust in AI which lays out that these cognitive systems will soon “touch every facet of work and life – with the potential to radically transform them for the better. This is because these systems can ingest and understand all forms of data, which is being produced at an unprecedented rate. Cognitive systems like IBM’s Watson can reason over this data, forming hypotheses and judgments… These systems are not simply programmed, they learn – from their own experiences, their interactions with humans and the outcomes of their judgments.”

This technology carries major implications, and the blog acknowledges that many of the questions it raises cannot yet be answered, and will require time, research and open discussion.

Batstone says IBM views AI as a tremendous business and economic opportunity for society, and that to take advantage of it the focus must be on the technology’s intent and purpose.
IBM is also looking at the skills, education and training that the workers of the future will need.
She also points to trust and the rights of citizens – “People have a right to know how AI is making decisions” – as well as transparency and data governance.

The areas IBM is focusing on are, firstly, the intent and purpose of AI to transform systems; secondly, the growth opportunities for education driven by that evolution; and thirdly, being responsible stewards of the technology.
Batstone says that a developer building Watson writes code with a specific use in mind, developing a set of algorithms for a specific purpose.
The end-user might be an oncologist, someone in the entertainment industry, or somebody in oil and gas using it to look for new insights in their data.

So who is in charge? Batstone says there is very active dialogue between government and industry around the world about the opportunities and about whether regulation is required.
As to the ethical implications, Batstone says society has to be able to trust the technology, and that requires transparency. The technology industry has a responsibility to take part in that dialogue, she says, and it needs to be guided by societal norms.

That also means there is a need to be able to demonstrate how AI is making decisions, to show how it came to a certain conclusion.

Watson is already working in oncology, helping doctors decide on the right treatment for individual patients. It is being used in oncology centres around the world, and in each case the doctor involved wants to be able to trust the recommendations from the Watson platform. The computer needs to be able to show the doctor the medical articles, medical evidence and logical processes it used to recommend a certain schedule of treatment.

As to what CEOs should be doing as AI grows, Batstone says many CEOs in large companies are already embarking on cognitive projects. “It is here today and rapidly growing and expanding. Companies around us are using the competitive advantages it offers.”

She would encourage all leaders to engage in dialogue – one that embraces the technology industry, government and citizens, and is multi-disciplinary and spans all sectors.

New Zealand’s newly established AI Forum aims to help grow New Zealanders’ understanding of artificial intelligence, how it works and how it may affect lives in the future. The forum is carrying out a broad piece of research to articulate the size and current state of AI capability in New Zealand (benchmarked against offshore capability) and the risks and opportunities that we should be looking to address.

The New Zealand AI Forum, which is supported by NZTech, has government support with Communications Minister Simon Bridges and Science and Innovation Minister Paul Goldsmith saying it’s a good example of government and industry working together to share knowledge and build capability around AI.

Bridges said at its launch that AI presents exciting opportunities for New Zealand and the world. “I appreciate that some people may have some concerns about AI, which is why it’s critical that we collaborate with industry and across the sector to address the opportunities and challenges that AI brings.”

Forum executive director, Ben Reid, says the AI Forum is also one of the earliest partners of the global Partnership on AI – which was established with support from all major industry players to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

In New Zealand there has been interest in the forum from a broad spectrum of the business, government and education sectors. Large technology firms including Google and Microsoft have lent support, while big corporates in banking, engineering and agriculture, and the large legal and consulting firms, are all engaged and encouraged to join. “It comes back to growing the fundamental level of understanding of the applications and impacts of AI and how we leverage it to achieve the best outcomes for New Zealand.”

Reid says that there is a sense of urgency to act now or New Zealand may fall behind as other countries seize the AI-driven opportunities. “AI is a platform technology which cuts across nearly every area of innovation today, yet AI adoption in New Zealand businesses seems low compared to some of our international competition. New Zealand needs to actively engage with AI now in order to secure our future prosperity.”

Fundamentally, he says, the AI Forum is optimistic but aware of the potential downsides of AI – and, in particular, falling behind the international competition.

With the size and scale of AI there are risks as well. “Issues such as AI safety and job automation are frequently covered sensationally in mainstream media but the reality is more nuanced. There is plenty of evidence emerging now that AI will create many new jobs in future but the focus of the public debate is always on future job losses due to robots.
“One of our roles is to raise the conversation and develop the public’s awareness of the facts about AI.”
The forum is aiming to support the development of a national AI strategy for New Zealand so it can maximise the upside opportunities and also address the downside risks of AI.

Reid says the forum is not seeking over-regulation – the technology is still emerging – and often the best way to arrive at good policy is through agreement among all the parties involved.

He says business leaders should be starting to prepare for AI and related technology now, and to understand what the downstream impacts on their business will be. It is easy to focus on the short term, but businesses need to put in place a dedicated investment stream so that they can re-engineer themselves for the future. This applies to any business in any sector.

Concurrently, leaders can start building up knowledge at governance and board level. “They have to understand, at a national and business level, the digital economy and what part AI plays in that as an enabler.”
Reid says boards have a fiduciary responsibility, and he would argue that in this climate of accelerating technology-driven change they must be thinking further out. In some cases the very existence of the business could be at risk.

And if your business depends on an existing software platform, understanding the implications of AI for that platform is important. One aspect at the fore of thinking about machine learning is auditability: businesses need to ensure that AI is fully accountable and transparent, and that they can explain why an AI decision was made.

The flip side, he says, is that using machine learning can deliver far superior customer experiences and deliver superior bespoke solutions. There are some amazing examples of customer service “digital employees” being developed by New Zealand AI businesses such as Soul Machines which raised US$7.5 million of investment when it was spun out of Auckland University last year.

Reid says AI capability has to be integrated into the DNA and culture of a business, particularly one operating in a technology-driven environment like financial services or insurance.

Businesses need to start to think about how they will address the rise of AI and allocate resources to cope with it. As with any emerging technology, “you need to monitor and build up a strategy to address it”.

So are any businesses doing this already? He says there are examples of machine learning happening now in New Zealand in fields as diverse as animal genetics, health, accounting and smart cities, and that there is a general understanding that it is going to be an important development, but many are still working out how to respond quickly enough.
“That’s part of the objective of the forum: New Zealand has unique characteristics around our agricultural and forestry, tourism and service industry bases. We have an agile workforce and do not have the exposure to manufacturing that other countries do. We also have a continually growing technology export sector. So arguably New Zealand will be in a better position to be able to respond to what comes providing we start acting now.”

Dr Colin Gavaghan of Otago University is leading a New Zealand Law Foundation study looking at the possible implications of AI innovations for law and public policy in New Zealand.

As the project was launched Gavaghan pointed to a current example of AI in PredPol, the technology now used by police in American cities to predict where and when crime is most likely to occur. PredPol has been accused of reinforcing bad practices such as racially-biased policing. Some US courts are also using predictive software when making judgments about likely reoffending.

“Predictions about dangerousness and risk are important, and it makes sense that they are as accurate as possible,” Gavaghan says. “But there are possible downsides – AI technologies have a veneer of objectivity, because people think machines can’t be biased, but their parameters are set by humans. This could result in biases being overlooked or even reinforced.

“Also, because those parameters are often kept secret for commercial or other reasons, it can be hard to assess the basis for some AI-based decisions. This ‘inscrutability’ might make it harder to challenge those decisions, in the way we might challenge a decision made by a judge or a police officer.”

He says the research project is in partnership with Otago University’s Computer Science and Philosophy departments and his research collaborators are Associate Professor Ali Knott, Department of Computer Science, and Associate Professor James Maclaurin of the Department of Philosophy.

Gavaghan told Management the first question the research will ask is: what are we talking about regulating here? What do we mean by AI? He says a room full of computer scientists, academics and others will produce multiple definitions of AI.
Another challenge is whether special oversight of AI is needed and, if so, what it would have jurisdiction over, and whether special rules around AI are required. Gavaghan says some feel there are plenty of existing laws out there that will suffice, such as contract law and product liability. Then there is the question of who should be doing the regulating, and when.
Gavaghan says there is a risk that when you act upstream you make rules that will not fit the technology. In the 1990s the United Kingdom tried to ban cloning, outlining what could not be done. Some years later Dolly the sheep was born, created using a different technology to that outlined in the legislation.

He also notes that the threat of automation is not new, but perhaps this is different because it is white collar workers and the professional classes that will be impacted.

AI also creates an interesting tension for business: on the one hand, companies will want to leverage as much from the technology as they can; on the other, there is a social responsibility to reassure the public about what it all means.
Another question, he says, is whether we need rules about transparency. If you are subject to a decision by an algorithm, do you have the right to a human decision if you don’t agree with the AI’s conclusion?

Gavaghan also notes that if businesses want to avoid the public over-reacting to this technology, they need to take responsibility for making it clear they are aware of the risks. “Take steps when needed and don’t only push the advantages and leave others to address the risk.”

The New Zealand Law Foundation’s project manager for the Information Law and Policy Project, Richman Wee, says several grants have been awarded to research studies focusing on the social change that follows technological change.
“This project aims to help build New Zealand’s digital capability and preparedness. Technological developments present opportunities, and the wrong sort of regulation or processes can stifle these opportunities. So it is important to find a way to navigate a balance that leads to innovation.”

 
