While small AI implementations are possible with the right tech and security settings, technology can’t govern itself (nor should it), writes Microsoft’s Sarah Carney.
We’re just approaching a year since generative AI started flooding our headlines, and many businesses are pushing to be among the first movers to adopt AI, particularly in areas like workforce productivity.
They see the opportunity to fundamentally change how their employees work and to focus effort on the tasks that drive the greatest impact. We are all ready for change: across the ditch, 68% of Australian workers say they struggle to find enough time and energy to get their work done, while two-thirds of Australian leaders say they are concerned about a lack of innovation or breakthrough ideas.
But among all the eagerness to adopt these tools, there is understandably a tone of caution.
Business and technology leaders alike are concerned about whether their corporate data might be shared with the world through platforms like OpenAI's, and about the steps they need to take to bring these tools into their workforce in a methodical and secure way.
First, where can my data go?
Many of the conversations I have with customers are around the use of large language models (LLMs) in a business environment, like what we see with Microsoft 365 Copilot. Even though Copilot has OpenAI's GPT-4 model embedded in it, an organisation's data (including prompts, responses, and the business data Copilot uses to formulate its response) isn't used to train the foundation LLMs that Copilot uses. That said, organisations can 'ground' Copilot in local information so it develops 'skills' and learns from an organisation's information.
Once customers have an understanding of this, the next question is often: what’s the minimum amount of data we can include in a data lake (or data repository) to get going on AI securely? What functions should we automate or integrate AI into first, to best manage the risks?
These are important questions as an organisation charts its path forward in bringing AI into their workforce.
Can you start small with AI?
The short answer is yes, it’s possible to dip your toes in the AI ‘lake’ by ring-fencing certain functions, such as the healthcare benefits available to your employees.
This starts with getting internal access right. For example, no company wants their employees to be able to see everyone’s salary, or other personal HR information. Most organisations already have default privacy settings in place to ensure information is shared ‘just in time’ with the right people, without going too far into full lockdown mode.
Bringing generative AI technology within existing access, permissions and data security policies ensures that organisations can embrace AI without worrying about sharing something broadly that they don’t want to.
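As a rough illustration of that principle (a toy sketch, not how Copilot or any real product is implemented, and all names here are hypothetical), a permission-aware retrieval layer simply never shows the AI anything the asking user couldn't already open:

```python
# Hypothetical sketch: enforcing existing access permissions when an AI
# assistant retrieves documents to ground its answers. The Document class,
# field names and corpus are illustrative only.

from dataclasses import dataclass, field

@dataclass
class Document:
    title: str
    content: str
    allowed_users: set = field(default_factory=set)  # who may read it

def retrieve_for_prompt(user: str, query: str, corpus: list) -> list:
    """Return only the matching documents this user is already allowed to see."""
    visible = [doc for doc in corpus if user in doc.allowed_users]
    # A real system would rank by relevance; here we just match a keyword.
    return [doc for doc in visible if query.lower() in doc.content.lower()]

corpus = [
    Document("Salary bands", "salary data for all staff", {"hr_manager"}),
    Document("Benefits FAQ", "healthcare benefits and salary sacrifice info",
             {"hr_manager", "employee"}),
]

# An ordinary employee asking about salary only gets back what they could
# already access: the benefits FAQ, not the HR salary file.
results = retrieve_for_prompt("employee", "salary", corpus)
print([doc.title for doc in results])  # ['Benefits FAQ']
```

Because the filter runs on the organisation's existing permissions, tightening or loosening access for people automatically tightens or loosens it for the AI too.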
One thing I have found that many organisations haven't yet considered is how they're managing access to their IP. This is essential both to maintain robust security and control of business data, and to ensure AI tools generate the best quality results.
A lot of businesses still have duplicated documents and records, with multiple people saving different versions. When generative AI tools search through all that information for answers, there's a risk they return the wrong version.
Addressing this is often a culture change: teaching the workforce how to manage documents, emails and other content more effectively, so they get the most value out of tools like Copilot.
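Version sprawl can also be tackled mechanically. A toy sketch (file names and dates are made up for illustration) of keeping only the most recently modified copy of each document, so stale versions never reach a search index or an AI tool grounding on the corpus:

```python
# Hypothetical sketch: when several saved copies of the same document exist,
# keep only the most recently modified one per name.

from datetime import date

files = [
    {"name": "policy.docx", "modified": date(2023, 1, 10)},
    {"name": "policy.docx", "modified": date(2023, 6, 2)},
    {"name": "handbook.docx", "modified": date(2023, 3, 5)},
]

latest = {}
for f in files:
    current = latest.get(f["name"])
    if current is None or f["modified"] > current["modified"]:
        latest[f["name"]] = f  # newer copy wins

print(sorted(latest))  # ['handbook.docx', 'policy.docx']
print(latest["policy.docx"]["modified"])  # 2023-06-02
```

In practice, deciding which copy is authoritative is as much a people question as a technical one, which is why the culture change above matters.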
Creating an ‘AI-ready’ data environment
At Microsoft, this is a lesson we learned first-hand.
The traditional top-down method Microsoft was using for its own data governance wasn’t scalable, and that was becoming a real issue for us as an organisation. Statistics show the amount of data being generated globally each year has almost tripled in just the past five years, with IoT sensors, Covid-fuelled digital transformation and, of course, the popularity of video sharing driving huge growth.
And it’s expected to go up another 150 percent within the next two years.
As explained in this blog, it left us little time to do more than reactively address data issues as they occurred. For a company that succeeds on the strength of its data analytics capabilities, that wasn't good enough.
We needed a scalable approach with baked-in automated controls, to address the root causes of data issues. The idea was to democratise data management rather than have a gatekeeper blocking the process, and follow a ‘governance by design’ approach.
That's why we developed formalised data standards and programmed them into every new app we build, and we're using smart technologies to automatically flag when something isn't compliant, making our data team's jobs easier and catching issues before they spread.
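To make the 'governance by design' idea concrete, here is a minimal sketch of an automated compliance check. The required fields and classification labels are invented for illustration; they are not Microsoft's actual standards:

```python
# Hypothetical sketch: automatically flagging records that don't meet
# formalised data standards before they land in a shared repository.

REQUIRED_FIELDS = {"owner", "classification", "retention_period"}
ALLOWED_CLASSIFICATIONS = {"public", "internal", "confidential"}

def compliance_issues(record: dict) -> list:
    """Return human-readable issues; an empty list means compliant."""
    issues = [f"missing field: {f}"
              for f in sorted(REQUIRED_FIELDS - record.keys())]
    cls = record.get("classification")
    if cls is not None and cls not in ALLOWED_CLASSIFICATIONS:
        issues.append(f"unknown classification: {cls}")
    return issues

good = {"owner": "finance", "classification": "internal", "retention_period": "7y"}
bad = {"owner": "finance", "classification": "secret-ish"}

print(compliance_issues(good))  # []
print(compliance_issues(bad))
# ['missing field: retention_period', 'unknown classification: secret-ish']
```

Running a check like this at the point of creation, rather than during a later clean-up, is what turns governance from a gatekeeping step into a default.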
Building a data governance culture
While small AI implementations are possible with the right tech and security settings, technology can’t govern itself (nor should it). Even in an AI world, people are the ones using and shaping the platform. To get the full benefits of AI across your entire organisation, while maintaining the best data standards, it’s essential to create a clear strategy and expectations around how your people are treating documents, emails and other assets.
This is where it's just as important to focus on your culture as on your technology. At Microsoft we've put a lot of emphasis on communicating across all our teams to drive early adoption of good data standards and create a true 'data governance culture'.
To achieve buy-in, bring everyone along on your ‘why’.
We've found it really useful to provide evidence of why adopting certain data hygiene behaviours, such as automatically marking documents with the right information and maintaining strict version control, can avoid a lot of headaches and clean-up later.
Proof points that illustrate enhanced productivity, cost savings and boosted morale quickly transform perceptions of the Chief Data Officer: from a police officer enforcing unpopular rules to the person who makes everything possible.
It’s also important to note that retrofitting is a lot tougher than starting out on the right path. It’s essential to build the right governance and data foundations even when you’re just testing the waters, which will make it a lot easier to adopt tools like Microsoft’s own Purview and Fabric to keep things flowing smoothly and accelerate transformation later.
In today's world, people rarely get excited by new tech developments, having seen so many iterations on the same old thing. What I'm hearing, over and over, is that generative AI is so far beyond anything they'd expected. Customers, partners and seasoned IT professionals are talking about the time they're saving, and they're surprised.
But avoiding surprises of another nature means getting on board with governance first. With the right data access and security settings and the right mindset and culture, the world of AI will be your smart oyster.
Sarah Carney is the chief technology officer, commercial enterprise at Microsoft ANZ.
Microsoft and AI: In January 2023, Microsoft announced the third phase of its long-term partnership with artificial intelligence company OpenAI, through a multi-year, multibillion-dollar investment. Microsoft is continuing to rapidly innovate for this era of AI – from AI systems, models, and tools in Azure, to introducing Microsoft Copilot and building it into the latest Windows 11 update and Surface devices. Microsoft is committed to advancing AI in a way driven by ethical principles of safety, security and trust that put people first.