By Isuru Fernando.
The rise of artificial intelligence (AI) is directly tied to the explosion of data, in which lies the potential to defeat cancer, reverse climate change and manage the complexities of the global economy. AI holds the promise of addressing these challenges and unleashing a new age of knowing.
While 80 percent of all data today is “unstructured” and unreadable by traditional computing systems, AI is changing that. Organisations like IBM are making big bets to create systems that learn and reason – shining light on that previously unusable data. Research firm IDC forecasts worldwide spending on AI systems to grow from US$8 billion in 2016 to US$57.6 billion in 2021.
This surge in AI adoption has been met with excitement, amazement and, for some, fear. Recognising that AI is already part of our lives and will profoundly affect our economy and society in future, industry group the AI Forum of New Zealand has been posing some hard questions about our readiness as a country for AI.
The recently released report Artificial Intelligence: Shaping a Future New Zealand, prepared with the support of industry stakeholders including IBM, is an important read for organisational leaders, providing real evidence of AI at work in New Zealand today.
Topics covered include the level of adoption and understanding of AI in New Zealand relative to overseas, economic opportunities and impacts, creating an AI-skilled workforce, regulation and ethics.
The report found that 75 percent of New Zealand firms believe AI will be a game-changer for their organisation. However, among organisations considering AI, only 28 percent are having board-level discussions about it.
Directors should be leading a conversation around three important ethical issues: how do we make sure that AI is used for the right purpose, that people have the skills to partner with it effectively, and that it is applied in a way that is trusted across society?
As a developer, I believe the topic of ethics is critical, because there are inherent responsibilities that come with designing and deploying AI systems. New technologies such as driverless cars, crime prediction software and ‘robo-advisors’ pose challenges which require interdisciplinary solutions. Companies, universities and government agencies of all sizes are deploying AI systems, and robust principle-led policies must guide their design, ownership and usage.
IBM’s own Principles for the Cognitive Era guide our approach with clients:
• Purpose: Technology, products, services and policies should be designed to enhance and extend human capability, expertise and potential. They are intended to augment human intelligence, not replace it.
• Transparency: Developers must be transparent about when, and for what purpose, AI is deployed, about the major sources of data used to train and inform their systems and about the algorithms that fashion the data into insight.
• Skills: Developers of AI applications should accept the responsibility of enabling students, workers and citizens to take advantage of every opportunity with AI.
As AI supplements human decision-making, unintended bias becomes a potential concern. But it is humans who create bias, not AI. The creators of an artificial intelligence system feed it training data to help it learn, which requires developers to probe that data for sources of potential bias. AI actually provides the means to mitigate bias, or even engineer it out of information technology systems, as well as an unprecedented opportunity to shed light on existing biases.
AI systems make visible the bias that already exists, and such bias can be algorithmically detected and corrected.
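As a concrete illustration, one widely used screen for this kind of bias is the “four-fifths rule” for disparate impact: if one group’s rate of favourable outcomes falls below 80 percent of another’s, the data deserves scrutiny. The sketch below applies that check to a small, entirely hypothetical set of records; it is one simple screen among many, not a complete fairness audit.

```python
# A minimal sketch of one common bias check: the "four-fifths rule"
# (disparate impact ratio). The records below are hypothetical; in
# practice this would run over real training data or model outputs.

from collections import defaultdict

# Each record: (group, favourable_outcome) - e.g. loan approved or not.
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_a", True), ("group_b", True), ("group_b", False),
    ("group_b", False), ("group_b", False),
]

# Count favourable outcomes per group.
totals = defaultdict(int)
favourable = defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    favourable[group] += outcome

# Selection rate per group, then the ratio of the lowest to the highest.
rates = {g: favourable[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: selection rate {rate:.2f}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the conventional four-fifths threshold
    print("warning: possible disparate impact - investigate the data")
```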
AI developers will be asked to make difficult choices, considering questions of rights, duties and conflicting values. As the use of AI becomes more widespread, the ethical expectations are clear – we must know how an AI system comes to one conclusion over another and have confidence in that process.
Companies must be able to explain the foundation of their algorithm’s decision-making process. Transparency and trust are critical to public acceptance and adoption of AI; if they can’t provide that transparency, their systems shouldn’t be in use.
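One way to give a decision an explainable foundation is to use a model whose score decomposes into per-feature contributions a human can inspect. The following sketch is a minimal, hypothetical illustration of that idea: the feature names and weights are invented for the example, and production systems typically layer dedicated explanation tooling over far more complex models.

```python
# A minimal sketch of per-decision explainability using an
# interpretable linear model. The features and weights are assumed
# for illustration; the point is that each prediction decomposes
# into per-feature contributions that can be inspected and audited.

feature_names = ["income", "debt_ratio", "years_employed"]
weights = [0.8, -1.5, 0.4]   # hypothetical learned coefficients
bias = -0.2

def explain(features):
    """Return the decision score and each feature's contribution."""
    contributions = [w * x for w, x in zip(weights, features)]
    score = bias + sum(contributions)
    return score, contributions

applicant = [1.2, 0.9, 0.5]  # standardised feature values (assumed)
score, contribs = explain(applicant)

print(f"decision score: {score:+.2f}")
# Show the features that drove the decision, largest effect first.
for name, c in sorted(zip(feature_names, contribs),
                      key=lambda nc: -abs(nc[1])):
    print(f"  {name:>15}: {c:+.2f}")
```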
Transparency of data ownership is also critical. Data represents competitive advantage, and whoever holds an organisation’s data is responsible for protecting it. Our model for data and privacy with Watson allows businesses to train their own AI models rather than contributing their data to a central knowledge graph. Users can keep their own critical information private and proprietary. Your data stays your data.
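In architectural terms (and as a generic sketch, not a description of Watson’s actual implementation), the pattern is that training runs where the data lives, and only the resulting model artefact ever crosses the organisational boundary:

```python
# A minimal sketch (not IBM's Watson implementation) of the design
# principle described above: training happens inside the organisation,
# so raw records never leave it - at most, model parameters are shared.

def train_locally(private_records):
    """Fit a trivial one-parameter model; raw data stays in scope."""
    mean = sum(private_records) / len(private_records)
    return {"threshold": mean}  # only this parameter leaves the function

# Hypothetical in-house data - never transmitted to a central service.
in_house_data = [4.1, 5.0, 3.8, 4.6]

model = train_locally(in_house_data)
print(f"shareable model artefact: {model}")
# A central platform only ever sees 'model', never 'in_house_data'.
```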
The AI Forum recommends establishing a working group to advocate for and provide a locus of expertise in applying principle-based ethics to AI to help companies navigate the societal impacts of the type of AI they are deploying. Globally, the Partnership on AI (a collaboration of IBM, Amazon, Apple, Google, Facebook and Microsoft) is leading the conversation, charged with guiding the development of AI to the benefit of society.
Finally, organisations will also need to assist employees with the coming changes to the workforce. The report indicates that the number of people displaced by AI will be modest in the context of current labour market shifts, and that AI will also create new jobs, both directly in the AI-producing sector and indirectly in other sectors.
It’s difficult to predict what new jobs will emerge over the next 10 or 20 years. Two decades ago, few could have foreseen the current demand for social media managers, web analysts or search engine optimisation specialists. Like the internet before it, AI will do more than redefine how we work. Over time, it will redefine what we are able to work on, opening up entirely new avenues of exploration, discovery and industry. What is clear is that organisations need to review how they develop new skills and recognise non-traditional career paths, and to start planning now.
The adoption of AI carries major implications, and many of the questions it raises cannot be answered today; they will require time, research and open discussion. The business community and wider society benefit if New Zealand expects those working with AI to be responsible stewards of public and private data. I encourage everyone to download the full report from https://aiforum.org.nz/research/ and lead a conversation within their organisation to ensure New Zealand engages with AI effectively to shape a prosperous and thriving future.
Isuru Fernando is the AI and analytics leader at IBM New Zealand and a member of the AI Forum of New Zealand executive council.