Pax8 On Turning AI Security Into A Leadership Opportunity For MSPs

Editor’s note: Responses have been edited for clarity and length.

AI security is quickly moving from a technical concern to a business-critical priority for MSPs. The opportunity is no longer just about managing risk. It is about leading the security conversation.

In this exclusive CRNtv interview, Pax8’s Matt Lee and Eric Stevens discuss how the company is helping partners make that shift by moving from reacting to AI risk to confidently guiding their customers and delivering new value.

What Partners Are Seeing Today

For many partners, the first challenge starts with data.

Sydney Neely: Matt, let’s start with what partners are running into right now. When it comes to AI security and compliance, what are MSPs and their customers seeing most today, and where are the biggest challenges showing up first?

Matt Lee: It’s unfortunate that AI challenges really come down to data management and classification—areas where MSPs and their clients have not always had the strongest practices. The early challenges are around establishing data governance before even moving forward with AI, let alone connecting critical business systems.

When you look at compliance and governance in this space, it’s uncharted territory. Most protocols, standards or frameworks are a year or two behind, so there isn’t much to reference. MSPs are having to make extrapolated decisions about how to even begin assessing risk.

Getting started is really the biggest hurdle. There’s a gap in knowing where to begin when it comes to AI.

From Risk to Differentiation

As AI adoption grows, partners are starting to see that security is not just a challenge. It is also a way to stand out.

Sydney Neely: Eric, we hear a lot about the risks around AI, but we’re also starting to hear about opportunity. How is AI security evolving beyond a risk conversation and becoming an area where partners can really differentiate and lead?

Eric Stevens: There are a lot of things users will get wrong by default. When teams start experimenting with agentic workflows, the biggest trap is treating agents like features instead of high-privileged identities.

In many pilots, agents are given broad API access across Microsoft 365, RMM, PSA and line-of-business applications, without tracking which tenant data they’re touching. From a managed intelligence perspective, that becomes the new insider risk. One bad prompt can create an autonomous user that changes policy, moves data and even moves money at machine speed.

The shift is simple but powerful. Treat every agent as a managed non-human identity with least privilege, time-bound access and action-level approvals for sensitive actions. Pair that with inventory, ownership and logging so you can answer three key questions: Which agents do we have? What can they do? And who are they acting on behalf of?
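As one illustration of the pattern Eric describes (a sketch, not Pax8's implementation), an agent can be registered like any other managed identity: an owner, a least-privilege scope set, a time-bound credential, and a log of every action check. The names and scope strings below are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class AgentIdentity:
    """A non-human identity: an AI agent managed like any other account."""
    name: str
    owner: str                 # the human the agent acts on behalf of
    scopes: frozenset          # least privilege: only explicitly granted actions
    expires: datetime          # time-bound access: credentials lapse automatically
    audit_log: list = field(default_factory=list)

    def can(self, action: str) -> bool:
        """Action-level check: allowed only if in scope and not expired."""
        allowed = action in self.scopes and datetime.now(timezone.utc) < self.expires
        self.audit_log.append((datetime.now(timezone.utc), action, allowed))
        return allowed

# The inventory answers the three questions: which agents exist,
# what they can do, and who they act on behalf of.
registry = [
    AgentIdentity(
        name="ticket-triage-agent",
        owner="svc-desk@example-msp.com",          # hypothetical owner
        scopes=frozenset({"psa.read_tickets", "psa.update_status"}),
        expires=datetime.now(timezone.utc) + timedelta(hours=8),
    ),
]

agent = registry[0]
print(agent.can("psa.read_tickets"))   # granted and unexpired: True
print(agent.can("m365.move_mailbox"))  # never granted: False
```

The point of the sketch is the default-deny posture: an agent that was never granted a scope simply cannot act, and every decision leaves an audit trail.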


Trust, Governance and Expectations

With AI becoming embedded in everyday workflows, customers are asking more detailed questions about how their data is used and protected.

Sydney Neely: As AI becomes more embedded in everyday business workflows, customers are asking more questions, especially around data use, governance and responsible AI. How are those expectations changing what MSPs are responsible for when it comes to security and trust?

Matt Lee: MSPs have always wanted a seat at the table. With AI, they are now being pulled deeper into business processes. They are helping determine where AI should be implemented, what the risks are and how to manage those decisions.

This changes the role of the MSP. They are becoming more involved in educating clients on risk-based decisions and helping ensure businesses reduce risk, improve efficiency and create growth.

You’re also going to see non-human identities—essentially new digital ‘hires’—become part of the workforce. MSPs will be responsible for sourcing, managing and governing these resources as part of a managed intelligence model.

That creates a powerful advantage, but it also introduces new challenges. How do you govern a non-human identity that doesn’t operate with the same constraints as a human? These are new problems to solve at a time when many traditional services are becoming commoditized.

This is one of the biggest opportunities for MSPs to deliver real security across the business, from quote to cash. MSPs are in a position to lead these conversations in a way they haven’t been before.

Helping Partners Take the Lead

For partners still early in their AI journey, the focus is shifting from experimentation to building repeatable, scalable services.

Sydney Neely: This is still a newer space for many partners, and it’s evolving quickly. How is Pax8 helping MSPs make AI security practical and scalable so they can confidently lead these conversations with customers instead of reacting to them?

Eric Stevens: The first thing we recommend at Pax8 is to start with our Managed Intelligence Provider Playbook. It’s the result of extensive research designed to help partners understand and navigate their journey.

We also have dedicated teams focused on identifying friction points along that path, along with training, education and Academy resources to support partners.

From a practical standpoint, we encourage partners to identify shadow AI within their organizations. Take the most effective experiments already happening, secure them and turn them into supported workflows and agents that can scale across the business.

That’s how you turn shadow AI from a liability into a pipeline of high-value, intelligence-led services—without slowing your clients down.

As AI adoption accelerates, partners that can secure and govern these environments will be best positioned to lead. Pax8 is helping MSPs make that shift from reactive support to strategic leadership. To learn more about how Pax8 is helping partners take the lead, visit Pax8.com.