Privacy Commissioner outlines expectations around AI use

Privacy Commissioner Michael Webster says he would expect all agencies using systems that can take New Zealanders' personal information to create new content to think about the consequences of using generative AI before they start.

Webster has outlined his expectations around New Zealand agencies, businesses, and organisations using generative artificial intelligence, noting that AI’s use of New Zealanders’ personal information is regulated under the Privacy Act 2020.

It’s the Commissioner’s role to ensure New Zealanders’ privacy rights are protected, which is why he is calling for businesses and organisations to check their obligations around generative AI use before they begin.

In a statement Webster outlined seven points of advice to help businesses and organisations engage with the potential of AI in a way that respects people’s privacy rights.

1. Have senior leadership approval: Businesses and organisations must involve their senior leaders and privacy officer in deciding whether, or how, to implement a generative AI system.

2. Review whether a generative AI tool is necessary and proportionate: Given the potential privacy implications, review whether it is necessary and proportionate to use a generative AI tool or whether an alternative approach could be taken.
 
3. Conduct a Privacy Impact Assessment: Assess the privacy impacts before implementing any system. This should include seeking feedback from impacted communities and groups, including Māori. Ask the provider for information and evidence about how privacy protections have been designed into the system.

4. Be transparent: Be clear and upfront: tell your customers and clients that you’re using generative AI and how you are managing the associated privacy risks. Generative AI is a new technology, and many people will be uncomfortable with its use or won’t understand the risks to them. Giving them plain-language information about the generative AI system you’re using will be essential to maintaining consumer trust and your organisation’s social licence to use AI.

5. Develop procedures about accuracy and access by individuals: Develop procedures for how your agency will take reasonable steps to ensure that the information is accurate before use or disclosure and how you will respond to requests from individuals to access and correct their personal information.

6. Ensure human review prior to acting: Having a human review the outputs of a generative AI tool prior to your agency taking any action because of that output will mitigate the risk of acting based on inaccurate information. Any review of output data should also assess the risk of re-identification of the inputted information.

7. Ensure that personal or confidential information is not retained or disclosed by the generative AI tool: Do not input personal or confidential information into a generative AI tool unless the provider has explicitly confirmed that inputted information is not retained or disclosed. An alternative could be stripping input data of any information that enables re-identification (a minimal illustration follows below). We would strongly caution against using sensitive or confidential data for training purposes.
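The stripping approach mentioned in point 7 can be sketched in a few lines of Python. The patterns, placeholder labels, and strip_identifiers helper below are hypothetical examples and are not part of the Commissioner's guidance; a production system would rely on a vetted PII-detection tool and its own privacy review rather than a handful of regular expressions.

import re

# Hypothetical patterns for illustration only. Decide for your own context
# what counts as identifying information and use a vetted PII-detection tool.
REDACTION_PATTERNS = [
    ("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")),
    ("NZ_IRD", re.compile(r"\b\d{2,3}-\d{3}-\d{3}\b")),  # common IRD number layout
    ("PHONE", re.compile(r"\+?\d[\d\s-]{7,}\d")),
]

def strip_identifiers(text: str) -> str:
    """Replace obviously identifying tokens with placeholders before the text
    is sent to an external generative AI tool."""
    for label, pattern in REDACTION_PATTERNS:
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = ("Summarise this complaint from jane.doe@example.co.nz, "
              "phone 021 555 0192, IRD 123-456-789.")
    print(strip_identifiers(prompt))
    # Summarise this complaint from [EMAIL REDACTED], phone [PHONE REDACTED],
    # IRD [NZ_IRD REDACTED].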
 
Webster says he would “expect agencies to do their due diligence and privacy analysis to assess how they comply with the law before stepping into using generative AI”.

“Generative AI is covered by the Privacy Act 2020 and my office will be working to ensure that it is being complied with, including investigating where appropriate.”

The Commissioner has previously sent a letter to government agencies outlining his caution around prematurely jumping into using generative AI without a proper assessment and signalling the need for a whole-of-government response to the growing challenges posed by this tool. 

 
