
What Intel Wants Partners To Know About 10nm, Optane, And Its Xeon Roadmap

Intel U.S. channel chief Jason Kimrey tells CRN why partners should get behind Optane memory, whether the hardware security features of new CPUs will be a selling point and how partners should think about Intel's 10-nanometer chip delays and Xeon server roadmap.


What are the verticals and use cases that you expect to drive Optane adoption?

Certainly, anywhere where you have high throughput, high data requirements. Certainly, financial services, healthcare (some of their artificial intelligence workloads). Those are the two I know where I've seen some very large, early — not just interest but commitments to buy. In cloud, [we've announced deployments with] Google and Tencent.

Are you deploying any particular resources for training? Things that can help them grasp the difference Optane makes?

We are, and there's some great tools — some are on the CRN site — some that we're just getting ready to launch through our [Intel Technology Providers] program, not only to provide the technical information but more of why it matters and how you actually can benefit. Some of it is out there. Some great content and tools are coming.

Intel has three new Xeon CPUs coming out within the next two years: Cascade Lake later this year, Cooper Lake next year and then Ice Lake in 2020. How should partners prioritize which Xeon CPU they invest in?

Every customer is different, and it ultimately depends on what problem they're trying to solve and what application they're running. Every advancement will provide performance gains over prior generations, but … mileage varies depending on the workload. The key message there is that we're not slowing down at all, and I think we're continuing to evolve our roadmap to adapt to the way our customers are running their business and the new workloads they're running — AI, deep learning — all of these things are continuing to push the boundaries of compute, and we're not going to slow down.

