Pax8 Chief People Officer Says AI Isn’t Replacing People. It’s Replacing Bad Process.
The question is no longer whether AI will change how we work. The question is how leaders can ensure it does not compromise our pursuit of ethical organizational practice.
From generative to agentic AI, artificial intelligence continues its rapid integration into the workplace and the channel ecosystem. But how do we use these technologies to improve workplace productivity without compromising our ethics?
Deon MacMillan, chief people officer at Pax8, has a clear and hopeful answer:
“AI is not to replace people,” MacMillan said in a recent interview with CRN. “It’s to replace bad process.”
This perspective is energizing in a business climate where productivity, innovation and scale are moving at breakneck speed. But her optimism is not shared by all.
A now widely cited 2020 article from the National Institutes of Health, “The Impact of Artificial Intelligence on Human Society,” delivers a sharp warning about AI’s potential harms, from job loss and social disconnection to racial bias and widening inequality. The contrast between MacMillan’s practitioner-level hope and the NIH’s policy-level concern reveals the ethical tension shaping today’s AI conversation.
Hope Vs. Harm: A Channel-Specific Ethical Dilemma
MacMillan, whose company supports over 1,800 employees globally, sees AI not as a threat, but as a strategic collaborator in solving human challenges at scale.
“Don’t treat it like a tool. Treat it like a thought partner,” she explained. “It’s talent plus intelligence.”
But the NIH warns that such framing may unintentionally overlook structural risks. According to the article, AI’s unchecked growth could lead to:
- Reduced human connection: “Human closeness will gradually diminish. … Personal gathering will no longer be needed for communication.”
- Job displacement: “Traditional workers will lose their jobs” as automation becomes more dominant.
- Wealth concentration: The “M-shaped” economy, where the rich get richer due to AI-driven capital gains.
- Loss of control and bias: AI may one day “function on its own course… ignoring human commands” and reflect destructive human prejudice.
These risks are not theoretical. The IT channel is built on trust, deep contextual relationships and collaborative partner ecosystems. Over-reliance on algorithmic decision-making could erode the very fabric that sustains the channel.
Framing Bias In AI: The Data We Train It On
MacMillan’s call to replace “bad process” with intelligent systems is compelling, but it must be met with a rigorous standard for input quality.
As the saying goes, bad data in means bad process out. From a human relations and organizational development standpoint, if biased data feeds our AI-reimagined policies and processes, we will get biased algorithms and organizational outputs in return. Perhaps faster is not better if all it does is re-create exclusionary organizations.
This framework reframes the AI conversation: Efficiency is not inherently ethical. Without a conscious audit of the data inputs (performance reviews, hiring trends, promotion rates, customer feedback loops), AI can amplify the very inequities that HR and DEI teams have fought to correct.
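To make that kind of audit concrete, here is a minimal sketch of a representation check, assuming a hypothetical HR extract; the group labels, baseline shares and column name are invented for illustration, not drawn from any real system.

```python
import pandas as pd

# Hypothetical training extract for an AI-reimagined HR process.
# The "group" labels and baseline shares below are assumptions.
records = pd.DataFrame({"group": ["A"] * 60 + ["B"] * 30 + ["C"] * 10})

# Share of each group actually present in the data ...
observed = records["group"].value_counts(normalize=True)

# ... versus the share you would expect from, say, a workforce census.
expected = pd.Series({"A": 0.45, "B": 0.35, "C": 0.20})

# Positive gaps mean over-representation; negative, under-representation.
gap = (observed - expected).sort_values()
print(gap)
```

Even a check this simple forces the questions an ethical review should start with: Who is over-represented in the data, and who is barely there at all?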
In this light, MacMillan’s openness to experimentation is both a strength and a potential vulnerability. Encouraging teams to “ask crazy questions” of their AI assistants, like her personal GPT, “Quill,” may spark innovation. But without critical oversight, those prompts and the data behind them can reflect inherited workplace bias, especially around race, gender or ability.
When asked about Pax8’s approach to AI governance, MacMillan concurred.
“Pax8 is a strong advocate for AI technology,” she said in a written statement. “Rigorous data governance must also be part of the equation.”
Bridging The Gap: Ethical AI In The IT Channel
So how do we reconcile MacMillan’s forward-thinking vision with the NIH’s systemic concerns?
- Human-in-the-loop design. AI should never operate without final human accountability. Whether it’s partner management, sales enablement or employee engagement, the IT channel must maintain human oversight over key decisions.
- Inclusive design from day one. The NIH warns that AI can be “egocentrically oriented to harm certain people or things.” Organizations must proactively test AI systems for disparate impact and include marginalized voices during design, testing and rollout.
- Bias audits and data governance. Following the bias framework above, channel firms must regularly audit their systems and datasets. Don’t just optimize for speed; optimize for justice. Who is missing from your data? Who is over-represented? What assumptions are being reinforced? (A minimal sketch of one such check follows this list.)
- Workforce reskilling and equity. AI implementation should include funding for employee retraining, mobility pathways and wellness checks. Displacement may be inevitable, but disposability is a choice.
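As one illustration of the bias audit described above, the sketch below applies the four-fifths rule from U.S. EEOC guidance to a hypothetical screening log from an AI-assisted hiring tool; the data and column names are assumptions made for the example, not a description of any vendor’s product.

```python
import pandas as pd

# Hypothetical screening log from an AI-assisted hiring tool.
# The "group" and "selected" columns are invented for illustration.
log = pd.DataFrame({
    "group":    ["A"] * 50 + ["B"] * 50,
    "selected": [1] * 30 + [0] * 20 + [1] * 18 + [0] * 32,
})

# Selection rate per group, then each rate relative to the highest.
rates = log.groupby("group")["selected"].mean()
impact_ratio = rates / rates.max()

# EEOC's four-fifths rule of thumb: a ratio below 0.8 suggests
# adverse impact and warrants human review of the system.
flagged = impact_ratio[impact_ratio < 0.8]
print(rates, impact_ratio, sep="\n")
print("Groups flagged for review:", list(flagged.index))
```

The point is not the arithmetic but the workflow: the audit flags a disparity, and a human, not the model, decides what happens next.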
Ethics By Action
Despite the risks, MacMillan’s leadership provides a grounded, pragmatic approach to workplace AI that is rooted in human-centered design and cultural experimentation.
“I’m not embarrassed to tell you I have a daily conversation with my ChatGPT,” she said. “The key is in the prompts.”
Prompts, like data, reflect the human behind the input. What we ask AI to do reveals what we value.
As for Pax8’s approach to ethical uses of AI? According to the CPO, the technology marketplace giant believes that “by centering our approach in ethics, inclusion, and human potential, we can use AI not just to work faster, but also better.”
If AI Is A Mirror, What Do You Want It To Reflect?
The future of AI in the IT channel won’t be determined by the tools themselves, but by how organizations use them to reflect or resist the status quo.
If we feed our systems biased data, they will build biased companies. If we chase speed at the expense of reflection, we risk automating exclusion at scale.
The real opportunity is not just to work faster but to do better.
And that starts not only with asking large language models better questions, but also with centering the ethics of the data we feed them and the collective memory we’re shaping through their use. If we fail to do so, we risk reinforcing exclusion and leaving those already struggling to connect even further marginalized and unseen.