Using AI safely: A global balancing act

As New Zealanders and New Zealand institutions and organisations come to grips with AI and generative AI systems, there is much afoot globally to rein in, or at least put regulations around, the technology. As Australia’s Minister for Industry and Science noted in June, using AI safely and responsibly “is a balancing act the whole world is grappling with at the moment”. By Annie Gray.

In mid-June, the European Parliament was negotiating the first-ever rules for safe and transparent AI. It had adopted its negotiating position on the Artificial Intelligence (AI) Act with 499 votes in favour, 28 against and 93 abstentions, ahead of talks with EU member states on the final shape of the law.

A statement on the European Parliament website says the rules would ensure that “AI developed and used in Europe is fully in line with EU rights and values including human oversight, safety, privacy, transparency, non-discrimination and social and environmental well-being.”

It says the rules aim to promote the uptake “of human-centric and trustworthy AI and protect the health, safety, fundamental rights and democracy from its harmful effects.”

In essence, the legislation aims for:
•    A full ban on the use of AI for biometric surveillance, emotion recognition and predictive policing.
•    A requirement that generative AI systems like ChatGPT disclose that content was AI-generated.
•    The classification of AI systems used to influence voters in elections as high-risk.

Prohibited AI practices
The European Parliament statement says that the rules follow a risk-based approach and establish obligations for providers and those deploying AI systems depending on the level of risk the AI can generate.

“AI systems with an unacceptable level of risk to people’s safety would therefore be prohibited, such as those used for social scoring (classifying people based on their social behaviour or personal characteristics). Members of the European Parliament expanded the list to include bans on intrusive and discriminatory uses of AI, such as:
•    ‘Real-time’ remote biometric identification systems in publicly accessible spaces.
•    ‘Post’ remote biometric identification systems, with the only exception of law enforcement for the prosecution of serious crimes and only after judicial authorisation.
•    Biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation).
•    Predictive policing systems (based on profiling, location or past criminal behaviour).
•    Emotion recognition systems in law enforcement, border management, the workplace, and educational institutions.
•    Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and right to privacy).”

The statement also highlighted high-risk AI, with Parliament ensuring that the classification of high-risk applications will now include AI systems that pose significant harm to people’s health, safety, fundamental rights or the environment.

“AI systems used to influence voters and the outcome of elections and in recommender systems used by social media platforms (with over 45 million users) were added to the high-risk list.”

Obligations for general purpose AI
The statement says too that providers of foundation models – a new and fast-evolving development in the field of AI – “would have to assess and mitigate possible risks (to health, safety, fundamental rights, the environment, democracy and rule of law) and register their models in the EU database before their release on the EU market.”

In turn, generative AI systems based on such models, like ChatGPT, “would have to comply with transparency requirements (disclosing that the content was AI-generated, also helping distinguish so-called deep-fake images from real ones) and ensure safeguards against generating illegal content.

“Detailed summaries of the copyrighted data used for their training would also have to be made publicly available.”

Complaints
Finally, MEPs wanted to boost citizens’ right to file complaints about AI systems and receive explanations of decisions based on high-risk AI systems that significantly impact their fundamental rights. MEPs also reformed the role of the EU AI Office, which would be tasked with monitoring how the AI rulebook is implemented.

Here in New Zealand
In July 2023 Management asked Privacy Commissioner Michael Webster whether he thought New Zealand needed to follow the EU example.

He says that a month earlier he had issued guidance setting out his expectations for how public and private agencies can uphold their privacy obligations by taking a responsible approach to generative AI.

“At the same time, I am always considering whether our regulatory settings are adequate in light of changing technologies.”

Webster says New Zealand should take lessons from proposals in the EU and elsewhere and “consider which models will work best in our context. We will need to consider a range of options, including aspects of the EU model, and looking at proposals for domestic AI-specific legislation alongside moves to update and strengthen New Zealand’s Privacy Act”.

“This will include considering Māori cultural perspectives and te Tiriti in co-operation with Māori.”

Webster says his office will be developing specific proposals to ensure New Zealand’s privacy law is fit-for-purpose and will be looking to advance these in 2024, while continuing to monitor AI issues.

So how urgent does he see it as being?

“New Zealand needs to act promptly and in a coordinated way. People are making decisions about these technologies now, and so I do feel a sense of urgency.”

In April, he called for regulators, both domestically and internationally “to come together to determine how best to protect our rights and create the space for this conversation to happen”.

In May he also set out his expectations on the responsible use of generative AI in New Zealand, which he says is an area of continuing focus for his office.

So, is that something he would approach the NZ Government about? 

“After the election I will be approaching the Government about proposals to strengthen the Privacy Act, and I will continue calling for domestic regulators to come together to determine how to best protect the rights of New Zealanders.”

Asked if he agrees with the tenor of the EU’s proposal, including its ban on biometric use and predictive policing, and the need to disclose when something is AI-generated, Webster says the EU approach is a useful signal as to some of the key issues to consider.

“There is no global consensus on how to regulate AI, so we should be learning from countries and regions that are leading the way, and then consider what is best for our country.

“I will be seeing how proposals in the EU and elsewhere can inform that approach. I note that EU data protection legislation has been very influential globally. Prior to the EU GDPR, 10 percent of countries had national data protection or privacy legislation. Five years later the proportion of countries with national privacy or data protection legislation stands at 75 percent.”

On the biometrics point, Webster says he is considering whether a Code of Practice that goes further than the Privacy Act “is necessary to regulate the use of biometrics in New Zealand. Overseas developments are useful to see how regulation in this area can be approached, but I will be making sure any rules about the use of biometrics are tailored to work best for New Zealand”.

The Australian stance
In Australia, the Government is taking further steps to ensure the growth of artificial intelligence technologies is safe and responsible.

In June it released two papers to begin a discussion on ensuring appropriate safeguards are in place in relation to “these critical technologies”.

Its Safe and Responsible AI in Australia discussion paper canvasses existing regulatory and governance responses in Australia and overseas, identifies potential gaps and proposes several options to strengthen the framework governing the safe and responsible use of AI.

The National Science and Technology Council’s paper Rapid Response Report: Generative AI assesses potential risks and opportunities in relation to AI, providing a scientific basis for discussions about the way forward.

While Australia already has some safeguards in place in relation to AI, its Minister for Industry and Science, Ed Husic, said in a June statement announcing the papers that it is appropriate that Australia “consider whether these regulatory and governance mechanisms are fit for purpose”.

Australia became one of the first countries in the world to adopt AI Ethics Principles in 2019.

The Government’s recent budget invested $41 million in the responsible development of AI through its National AI Centre and a new Responsible AI Adopt program for SMEs.

Husic also noted in the same statement that using AI safely and responsibly “is a balancing act the whole world is grappling with at the moment. The upside is massive, whether it’s fighting superbugs with new AI-developed antibiotics or preventing online fraud.

“But … there needs to be appropriate safeguards to ensure the safe and responsible use of AI.”

Published in Management Magazine July/August 2023
