Microsoft Execs: Partners ‘Critical’ To Achieving Responsible AI, Security

‘Security is a team sport,’ Microsoft CVP Vasu Jakkal said on a panel this week.


Microsoft solution providers are “critical” to the vendor’s plans for achieving and maintaining responsible artificial intelligence and security, Microsoft executives told CRN this week.

During a panel on responsible AI and security held the week of Microsoft Build 2024, CRN asked Vasu Jakkal – Microsoft corporate vice president of security, compliance, identity, management and privacy – about the role of managed security services providers (MSSPs) and other services-led Microsoft partners in providing greater security through AI and protecting users from AI-powered threats.

Jakkal said that partners are “very important” to educating customers on AI, how to use it and how to develop playbooks. “Security is a team sport,” she said. “We have to make sure that our partners are on the journey.”

[RELATED: Nadella To Microsoft: Prioritize Security Over New Features]


Microsoft has about 400,000 channel partners worldwide, and its executives are represented on CRN’s 2024 Channel Chiefs list.

Answering the same question, Sarah Bird, Microsoft’s chief product officer of responsible AI, told CRN that “partners are critical in helping more people understand what they need to do to implement” AI.

Microsoft has also built its AI interfaces to help educate users, with an onboarding experience, opportunities for users to dip their toes in with lower-stakes data and “going with trusted providers who are transparent about how they built the system and what it does and what it's doing with your data,” Bird said.

She pointed to services partners in the Microsoft Responsible AI (RAI) Partner Initiative, which launched last year to share information on AI best practices.

Members include Accenture, Avanade, Capgemini, Cognizant, Kyndryl and PwC, according to Microsoft. When Microsoft announced the initiative, it promised a team of dedicated AI legal and regulatory experts in regions worldwide to aid with AI governance system implementation.

“This is, I think, a huge part of how do we actually make all of this scale,” Bird said. “As much as I would like to work with everyone individually, I think that's not possible. And so they (partners) are really important.”

In April, Microsoft Copilot for Security became generally available, bringing generative AI to the work of speeding up cybersecurity professionals. Users are more than 20 percent faster with the copilot regardless of their level of expertise, Jakkal told the crowd.

More than 100 partners, including MSSPs, participated in the Copilot for Security early access program, she said.

Identity attacks have “increased by an order of 10x over the same period year over year,” and attackers take an average of 72 minutes or less to gain access to user data after a phishing link is clicked, Jakkal said. Compliance bodies can issue around 250 regulatory updates a day. And the security industry has 4 million unfilled jobs.

Copilots are a way for the security industry to keep pace and close those gaps, Jakkal said. “Generative AI offers us a superpower.”

Bird told the crowd that Microsoft has worked with academics and experts in sociolinguistics, facial morphology and other areas to ensure a diversity of inputs for responsible AI. The vendor has used red teaming, automated testing, safe- and secure-by-design architecture and other techniques when developing AI-related technology such as Recall.

“It's been a mix of ensuring that we have more people who have a seat at the table [and] are driving the future of technology, but also ensuring that the technology understands what are the different people that are going to be using it and that it works effectively for them,” she said.

Microsoft has also looked at how to bring traditional security processes to responsible AI and how to adapt those processes to new dangers from AI, such as harmful output by an AI program.

She also said that Microsoft wasn’t sure at first that developers would like the vendor’s first large-scale GenAI release, GitHub Copilot. “We’ve seen, actually, that developers say that they are able to go 40 percent faster, but they're also 75 percent more satisfied,” Bird said.

Jakkal said that improving quality and accuracy with copilots is as important as making professionals faster. Based on Microsoft testing, less experienced users are 35 percent more accurate with Copilot, while seasoned professionals are 7 percent more accurate.

One of Microsoft’s earliest lessons with Copilot, Bird said, was that people thought the tools were making mistakes when in actuality “we weren’t getting the right data to the system.” That has led to more work on data engineering and cleaning, much of it provided by Microsoft solution providers.

“We're surprised at what data people have access to inside our organizations and they shouldn't,” she said. “They're going now and having to clean the data … AI is driving a lot of practices that should have been happening anyway, but maybe you didn't have that forcing function.”

Another panelist, Chitra Gopalakrishnan, who leads governance, risk and compliance efforts for Microsoft Windows and devices, told the crowd that the new Recall feature in Copilot+ PCs follows responsible AI principles by keeping customer content local to the device and “having the customer in control all the time.”

Those controls allow users to “very easily turn off saving of these snapshots … and they can even block websites or application lists.” IT administrators and commercial customers can also “disable Recall through the group policy,” she said.
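For administrators wondering what disabling Recall through policy looks like in practice, the sketch below sets the registry value behind Microsoft’s documented “Turn off saving snapshots for Windows” policy. The policy path and the DisableAIDataAnalysis value name reflect Microsoft’s published guidance at the time of writing and should be verified against current documentation; this is an illustrative sketch, not Microsoft’s recommended tooling.

```python
# Minimal sketch: disable Recall snapshot saving machine-wide by writing
# the policy registry value Microsoft documented for the
# "Turn off saving snapshots for Windows" Group Policy setting.
# Assumes Windows, an elevated (administrator) session, and that the
# documented path/value name below are still current.
import winreg

POLICY_PATH = r"SOFTWARE\Policies\Microsoft\Windows\WindowsAI"

def disable_recall_snapshots() -> None:
    """Set DisableAIDataAnalysis=1 under HKLM (requires admin rights)."""
    # CreateKeyEx opens the key if it exists or creates it if it doesn't.
    key = winreg.CreateKeyEx(
        winreg.HKEY_LOCAL_MACHINE, POLICY_PATH, 0, winreg.KEY_SET_VALUE
    )
    try:
        # 1 = snapshot saving off; deleting the value restores the default.
        winreg.SetValueEx(key, "DisableAIDataAnalysis", 0, winreg.REG_DWORD, 1)
    finally:
        winreg.CloseKey(key)

if __name__ == "__main__":
    disable_recall_snapshots()
```

In managed environments the same result would typically come from the equivalent Group Policy or MDM setting rather than a script; the registry write above simply shows the machine-level state those tools manage.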