Intel CEO Admits AI Group Has Seen ‘Considerable Change’ With Leader’s Exit: Memo
‘I recognize that these teams have experienced considerable change in recent months. That’s why I’ll be working directly with the leadership teams to refine our AI strategy and ensure consistent execution of our advanced technology road map,’ Intel’s CEO says in a memo.
When Intel CEO Lip-Bu Tan announced the departure of AI leader Sachin Katti to employees Monday, he admitted that the Katti-led AI and advanced technologies teams “have experienced considerable change in recent months,” CRN has learned.
Katti, whom Tan appointed in April as chief technology and AI officer to lead Intel’s AI strategy and road map, said on X the same day that he is taking a job at OpenAI to build out compute infrastructure for the ChatGPT creator’s artificial general intelligence ambitions.
[Related: AMD Sees ‘Very Clear Path’ To Double-Digit Share In Nvidia-Dominated Data Center AI Market]
Katti’s exit came not long after he detailed Intel’s new strategy for challenging Nvidia’s dominance of the AI infrastructure market. The chipmaker has struggled for more than a decade to define and execute a successful accelerated computing strategy, most recently reflected by its failure to meet a modest $500 million revenue goal for its Gaudi chips last year.
In a Monday memo seen by CRN, Tan told staff that he will assume leadership of the AI Group and Intel Advanced Technologies Group that were previously led by Katti, explaining that his decision was motivated by recent changes felt by the teams.
Tan said his oversight of the groups was “effective immediately.”
“I recognize that these teams have experienced considerable change in recent months. That’s why I’ll be working directly with the leadership teams to refine our AI strategy and ensure consistent execution of our advanced technology road map,” he wrote.
Referring to AI as “one of Intel’s most important priorities and most exciting areas of opportunity,” Tan said these opportunities exist “not only in traditional general-purpose computing” — referring to CPUs — “but also in the emerging realms of inference workloads driven by agentic AI and physical AI.”
“To capture these opportunities, we must move decisively, leveraging our scale and ecosystem to establish Intel as the compute platform of choice for the next generation of AI-driven workloads,” Tan wrote in the memo.
Intel did not respond to a request for comment by press time. A company statement to CRN on Monday echoed Tan’s comment about Intel’s commitment to AI.
Other Recent Personnel Changes For Intel’s AI Efforts
While Tan’s memo did not discuss other changes to the AI and advanced technologies teams, the leadership shake-up came days after one of Katti’s direct reports, data center AI executive Saurabh Kulkarni, left Intel for a job at AMD, as CRN reported last Thursday.
Kulkarni, who had been vice president of data center AI product management since July of last year, announced on LinkedIn Monday that AMD hired him as an executive with a similar title that puts him in charge of GPU product management.
Anil Nanduri, vice president of AI go-to-market at Intel, has taken over Kulkarni’s leadership responsibilities for the chipmaker’s AI product management organization, according to Intel.
The departures of Katti and Kulkarni came a few months after Tan appointed two outsiders to lead AI engineering amid broader restructuring pushed by the CEO. Jean-Didier Allegrucci, a former longtime chip designer at Apple, was named vice president of AI system-on-chip engineering. Shailendra Desai, another former Apple chip designer who also worked at Google, was given the role of vice president of AI fabric and networking.
How Tan Keeps Shifting Intel’s Organizational Structure
Tan’s move to take direct control of Intel’s AI and advanced technologies teams marks the second time in as many months that he has used an executive departure to make further organizational changes on top of the restructuring initiatives he pushed earlier in the year.
The chief executive, who joined Intel in March, has sought these changes to cut down on what he referred to in April as “organizational complexity and bureaucratic processes [that] have been slowly suffocating the culture of innovation we need to win.”
When Tan announced last month the departure of Rob Bruckner, an engineering leader he turned into a direct report in April, the CEO said he was merging Bruckner’s team, the Platform Engineering Group, with another division, the Silicon Engineering Group.
Mike Hurley, who had been leading the latter group as another new direct report for Tan, was tapped to lead the combined organization.
“Those two teams already work closely together, and this change creates an opportunity for us to further strengthen our collaboration by bringing the teams together under a single leader,” Tan wrote in an October memo, as CRN reported at the time.
Tan made his first big restructuring move in April by shaking up his executive leadership team to create what he called a “flatter structure.”
On top of making Bruckner, Hurley and another engineering leader direct reports, Tan did the same for the company’s business unit leaders, who had reported to Intel Products CEO Michelle Johnson Holthaus for a short period. Holthaus, a longtime executive who was given the role only last December, left Intel in September.
Among those business unit leaders was Katti, who had led Intel’s Networking and Edge Group since 2023 and was given the additional role of chief technology and AI officer as part of Tan’s move to flatten the company’s management structure.
In offering the new role to Katti, Tan gave the executive responsibility for Intel’s AI systems and GPU product management team by breaking it out of the business unit that had been known as the Data Center and AI Group since 2021. At the same time, Tan directed the business unit to refocus on server CPUs under an older name, the Data Center Group.
In a memo Katti sent to employees back in April, the executive said that he would “lead the strategy, definition and execution for our data center accelerator portfolio as well as product positioning and customer engagements.”
Katti also took on responsibility for other teams spanning systems architecture and engineering, Intel Labs, software ecosystem enablement, developer relations and Intel Cloud Services.
Months later, the leader of the software ecosystem enablement team, Melissa Evers, said in a July LinkedIn post that she was among those affected by Intel’s mass layoffs, which hit 15 percent of the company’s workforce. Hannah Kirby, the leader of the developer relations execution team, also announced her departure from the company that month.
When Tan announced the layoffs in late July, he said that the move allowed Intel to reduce the number of management layers by about 50 percent.
How Katti Defined Intel’s New AI Strategy In September
Last month at the 2025 OCP Global Summit, Intel revealed a 160-GB, energy-efficient data center GPU that is part of a new annual GPU release cadence meant to deliver on the company’s strategy of providing open systems and software architecture for AI.
Katti detailed this strategy at an Intel press event in September, saying that it will involve cost-effective systems with multiple processor components designed to address different aspects of agentic AI workloads—namely the “pre-fill” and “decode” stages.
Noting that Chinese AI startup DeepSeek popularized the concept of separating the pre-fill and decode stages of large language model inference, Katti said compute-optimized GPUs are better suited for the former, while the latter benefits more from GPUs that have the “highest memory bandwidth possible.”
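To make the split concrete, here is a minimal, hypothetical sketch of how a scheduler might route the two inference phases to different accelerator pools, assuming a compute-heavy pool for pre-fill and a bandwidth-heavy pool for decode. The class and function names are illustrative only and do not reflect Intel’s actual software.

```python
# Illustrative sketch (not Intel's software): route the two LLM inference
# phases to different accelerator pools. All names here are hypothetical.
from dataclasses import dataclass

@dataclass
class AcceleratorPool:
    name: str
    compute_tflops: float      # raw compute, favored by the pre-fill phase
    mem_bandwidth_gbps: float  # memory bandwidth, favored by the decode phase

def route_request(prompt_tokens: int, pools: list[AcceleratorPool]) -> dict:
    """Pick a compute-heavy pool for pre-fill and a bandwidth-heavy pool for decode."""
    prefill_pool = max(pools, key=lambda p: p.compute_tflops)
    decode_pool = max(pools, key=lambda p: p.mem_bandwidth_gbps)
    # Pre-fill: the whole prompt is processed in parallel, so it is compute-bound.
    kv_cache = f"kv_cache({prompt_tokens} tokens built on {prefill_pool.name})"
    # Decode: tokens are generated one at a time while repeatedly reading the
    # KV cache, so it is memory-bandwidth-bound and handed to the other pool.
    return {"prefill": prefill_pool.name, "decode": decode_pool.name, "kv": kv_cache}

pools = [
    AcceleratorPool("compute-optimized GPU", compute_tflops=2000, mem_bandwidth_gbps=3000),
    AcceleratorPool("bandwidth-optimized GPU", compute_tflops=800, mem_bandwidth_gbps=8000),
]
print(route_request(prompt_tokens=4096, pools=pools))
```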
A similar approach was touted by Nvidia when it announced in September the upcoming Rubin CPX as a “new class of GPU” that is meant to work alongside the vanilla Rubin GPU in a single system, with the former handling pre-fill and the latter handling decode.
“If we can build such a heterogeneous infrastructure, then we can optimize that performance-per-dollar by making sure that the right part of that agentic workload runs on the right priced hardware with the right performance and delivers that overall system level performance per dollar that customers need,” Katti said in September.
He said this will be made possible by an “open software approach” that supports multiple infrastructure vendors and won’t require developers to “change any of their habits.”
As an example, Katti said Intel has tested systems that run the pre-fill stage of a large language model on an Nvidia GPU and the model’s decode stage on an Intel accelerator chip. This allowed the company to achieve a 70 percent improvement in performance per dollar “compared to the homogenous systems out there today,” he added.
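For readers unfamiliar with the metric, the snippet below shows how such a performance-per-dollar comparison is typically computed. The throughput and cost figures are invented purely for illustration and are not Intel’s measurements.

```python
# Hypothetical illustration of a performance-per-dollar comparison; the
# numbers are made up for the example and are not Intel's data.
def perf_per_dollar(tokens_per_sec: float, system_cost: float) -> float:
    return tokens_per_sec / system_cost

# Homogeneous system: one GPU type handles both pre-fill and decode.
homogeneous = perf_per_dollar(tokens_per_sec=10_000, system_cost=100_000)
# Heterogeneous split: a pricier compute GPU handles pre-fill while a cheaper,
# bandwidth-heavy accelerator handles decode, raising throughput per dollar.
heterogeneous = perf_per_dollar(tokens_per_sec=11_900, system_cost=70_000)
print(f"improvement: {heterogeneous / homogeneous - 1:.0%}")  # ~70% in this made-up case
```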
“That’s the strategy: We will be building scalable heterogeneous systems that deliver that zero-friction experience to agentic AI workloads and can deliver on the best performance-per-dollar for these workloads by leveraging this open heterogeneous architecture,” Katti said.