Software-Defined Data Centers: Should You Jump On The Bandwagon?

"Over the next few years, having your own data center will be a more costly and complex solution than going with software-defined data centers, which can be configured as needed without all the required technical skills," said Baldwin, CIO and chief strategy officer at Nth Generation Computing, a San Diego-based solution provider.

IT agility is the primary benefit of the software-defined movement, he said. "In a matter of minutes, not hours, you would be able to reconfigure a data center and resources and get your people and applications up and running," he said.

The software-defined data center -- a term coined by VMware -- is part of a growing wave of software-defined technology. Today, the software-defined label is applied to most everything, including networks, storage and security. While some see the term as merely a marketing tool, others like Baldwin see the promise in having software increasingly perform functionality traditionally provided by hardware.

Trying to gauge the size of the overall potential software-defined market is difficult, as it is still quite new. However, prospects for the market look strong based on certain components. Research firm IDC, for instance, estimates the software-defined networking market will be worth about $3.7 billion by 2016, up from $360 million in 2013. And while IDC has not yet publicly estimated the software-defined storage market, it did say in April that software-defined storage will grow faster than any other segment of the file-based or object-based storage market. VMware would like nothing better than to see every data center function reduced to code running on low-cost hardware which, it just so happens, the company does not manufacture.
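
For context, those two IDC networking figures imply roughly a tenfold jump in three years. A quick back-of-the-envelope check of the implied growth rate is shown below as a short Python snippet; it uses only the numbers cited above and is an illustration, not an IDC calculation.

```python
# Back-of-the-envelope check of the growth implied by the IDC SDN figures cited above.
start_value = 0.36   # $360 million in 2013, expressed in billions
end_value = 3.7      # roughly $3.7 billion estimated for 2016, in billions
years = 3            # 2013 -> 2016

# Compound annual growth rate: (end / start) ** (1 / years) - 1
cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.0%}")   # prints roughly 118% per year
```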

Meanwhile, a host of small storage, networking and other software developers are jumping on the bandwagon by painting some of their virtualization or other wares with the software-defined moniker.

How the software-defined landscape develops over time is the big question. For now and into the foreseeable future, solution providers and their customers will be balancing the potential benefits of software-defined anything or everything against the familiar -- and currently measurable -- performance and SLAs provided by traditional hardware-based technologies.

THE VIEW FROM THE CHANNEL

Software-defined suggests that certain parts of a data center, such as networking or storage, can be made more agile by moving their functionality to software and adding automation. Or it suggests that partial functionality of a networking or storage infrastructure can be defined by software, while the rest is managed on traditional proprietary hardware.

Either way, software-defined is already here to some degree. Networking software and hardware vendors, for instance, are adopting open-source protocols such as OpenFlow and joining open-source communities such as the Linux Foundation's OpenDaylight Project, which is aimed at developing a common platform that provides APIs applications can use to work with multiple software-defined networking protocols, including OpenFlow and vendor-specific interfaces.
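
To make the OpenFlow/OpenDaylight idea concrete: an SDN controller exposes a northbound API, and applications push forwarding rules to it instead of configuring each switch by hand. The sketch below shows that pattern in Python; the controller URL, node name and payload schema are placeholders invented for the example, not a documented OpenDaylight endpoint.

```python
# Illustrative sketch: pushing a simple forwarding rule to an SDN controller's
# northbound REST API. The URL, node name and payload schema are placeholders,
# not a documented OpenDaylight or OpenFlow interface.
import requests

CONTROLLER = "http://controller.example.com:8181"   # hypothetical controller address
NODE = "openflow:1"                                  # hypothetical switch identifier

flow_rule = {
    "flow-name": "web-traffic-to-port-2",
    "priority": 100,
    "match": {"ip-destination": "10.0.0.10/32", "tcp-destination-port": 80},
    "action": {"output-port": 2},
}

resp = requests.put(
    f"{CONTROLLER}/api/flows/{NODE}/web-traffic-to-port-2",   # placeholder path
    json=flow_rule,
    auth=("admin", "admin"),                                  # placeholder credentials
    timeout=10,
)
resp.raise_for_status()
print("Flow rule accepted:", resp.status_code)
```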

While one solution provider brushed off software-defined as "marketecture," other solution providers and their customers are already seeing the value of software-defined up to and including the whole software-defined data center.

eGroup already has started preparing for the software-defined-everything trend, said Rich Young, marketing and corporate communications manager for the Mount Pleasant, S.C.-based solution provider.

"Everything is now being built around the software, not the box," Young said. "The box is becoming a commodity. We've been focused on this for some time and are looking to expand our software engineering team."

Whether it's called software-defined data centers or software-defined networking or software-defined whatever is not a big issue for eGroup, Young said.

"A lot of manufacturers are pushing it, so it resonates," he said. "If a term is used by a big vendor to direct the conversation, we'll use it in our customer conversations. Look at Cisco's 'Tomorrow Starts Here' push for software-defined. If Cisco's using it, I'm willing to go with that as an entry point. They have a bigger reach than a $20 million, soon-to-be a $27 million, VAR."

While VMware is given credit for pushing the concept of software-defined in part because it has no legacy hardware business to lose, solution providers said not to count traditional data center hardware vendors out of the leadership race.

Cisco is proof that VMware is not the runaway leader in software-defined, said Mark Teter, CTO of Advanced Systems Group, a Denver-based solution provider.

"Software-defined data centers is the goal," Teter said. "We are looking at design once, build once, use anywhere architectures. And Cisco's UCS in combination with its Cloupia software is a great platform. Stand up a UCS server, then use Cloupia to manage the server, storage and networking through one pane of glass."

It's hard to predict whether VMware will be the winner in software-defined, said David Stone, vice president of business development at Solutions-II, a Littleton, Colo.-based solution provider.

"VMware was a big part of the movement toward virtualization, but now they're fighting in the software-defined business," Stone said. "Hardware vendors will continue to do well. The market will prove that it's not just hype."

The software-defined data center is based on the ability to integrate as much of a data center's infrastructure as possible into a single solution, said Dave Cantu, COO of Redapt, a Redmond, Wash.-based solution provider and cloud service provider.

"It's that big dream of a single pane of glass," Cantu said. "But it's tough to do. VMware thinks it can do it because it owns everything they sell. Open-source companies have their vision of software-defined data centers, but they have to rely on the community to integrate their products with that vision."

VMWARE: ALL OF YOUR DATA CENTER BELONG TO US

For VMware, a software-defined data center is not only possible, it's a necessary step in the development of the cloud.

Mike Adams, group product marketing manager for cloud infrastructure at Palo Alto, Calif.-based VMware, said the software-defined data center stems from abstracting all of a data center's networking, security, storage and availability resources, pooling those resources, and automating them so they can be made available as a service on top of the company's vSphere virtualization platform for building cloud infrastructures.
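
Adams' description reduces to three steps: abstract the physical resources, pool them, and automate allocation so capacity can be handed out as a service. The following is a minimal conceptual sketch of that pattern; the class and method names are invented for illustration and are not part of any VMware API.

```python
# Conceptual illustration of the abstract/pool/automate pattern described above.
# All names here are invented for the example; this is not a VMware API.
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """A pool of abstracted capacity (CPU cores, GB of RAM, GB of storage)."""
    cpu: int
    ram_gb: int
    storage_gb: int
    allocations: dict = field(default_factory=dict)

    def provision(self, name: str, cpu: int, ram_gb: int, storage_gb: int) -> bool:
        """Automated allocation: carve a slice out of the pool if capacity allows."""
        if cpu <= self.cpu and ram_gb <= self.ram_gb and storage_gb <= self.storage_gb:
            self.cpu -= cpu
            self.ram_gb -= ram_gb
            self.storage_gb -= storage_gb
            self.allocations[name] = (cpu, ram_gb, storage_gb)
            return True
        return False

# Pool the abstracted capacity of a few hypothetical hosts, then provision from it.
pool = ResourcePool(cpu=64, ram_gb=512, storage_gb=4000)
print(pool.provision("web-tier", cpu=8, ram_gb=32, storage_gb=200))     # True
print(pool.provision("analytics", cpu=128, ram_gb=64, storage_gb=500))  # False: over capacity
```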

The one resource Adams did not include in his definition is compute, as VMware considers virtualized servers part of the vSphere platform.

As envisioned by VMware, software-defined data centers, in which the data center resources are virtualized to work on standard x86-based servers, change the hardware game, Adams said.

"We believe the control mechanism is in the software layer," he said. "We want to work with hardware partners, but we have a different view from them. If you ask a lot of hardware vendors where is their differentiation, they'll say it's in the hardware layer."

Raghu Raghuram, VMware's executive vice president of cloud infrastructure and management, said at the VMware 2013 Strategic Forum in March that, while traditional physical data center infrastructures work for traditional applications such as Oracle, SAP and Microsoft Dynamics, or those written using Java, enterprises are increasingly turning to new applications that don't fit the traditional IT infrastructure, including Hadoop, or those written in languages and frameworks such as Python or VMware's Spring.

Furthermore, while tying the traditional client/server model to hardware makes it hard to scale and automate, Web 2.0 companies have learned to take advantage of industry-standard hardware, any IP network, and scale-out storage and flash technology. "This gives them an amazing ability to automate data centers, gives them an amazing ability to scale," he said.

The idea of a software-defined data center carries this further to work with both traditional and new applications, Raghuram said. "Because this works independent of the underlying hardware, it works for any application and for any hardware," he said.

However, VMware is not looking to replace all hardware with commodity boxes, but instead is placing the control mechanism in software, Adams said. "We allow a lot of choice," he said. "If you like the control mechanism on this piece of hardware, but want other software for the rest of the infrastructure, you have the flexibility. You can be as open as you want. We have a lot of open APIs to integrate with the hardware."

VMware has at times gone out of its way to downplay the idea that the elimination of differentiated hardware is a goal of the software-defined data center.

VMware President and COO Carl Eschenbach in February told the company's channel partners that while the priority of building software-defined data center architectures as a foundation for cloud computing is clear, it should not mean that VMware or its channel partners will ignore the importance of hardware.

"I have one thing to say: If anyone can find a way to run software on software, let me know," Eschenbach said. "I say that tongue-in-cheek. [But] software is driving hardware innovation."

SOFTWARE-DEFINED DATA CENTERS: FOR REAL OR PIXIE DUST?

VMware archrival Citrix Systems agrees with VMware that the data center will be increasingly software-defined over time.

However, said Peder Ulander, vice president of marketing for cloud platforms at Citrix, Fort Lauderdale, Fla., the concept of a software-defined data center isn't really all that new.

"The industry has been rallying around this for a while," Ulander said. " 'Software-defined data center' is a great term. VMware did a great job with it, and not just with virtualization. Virtualization is at the core, but it's not the biggest part. This gets VMware away from that virtualization pigeonhole."

Ulander said he would argue that companies have been building software-defined data centers for some time, starting with virtualization, then adding orchestration technologies such as Puppet, Opscode's Chef and CFEngine, and building self-service catalogs. The end result is delivering resources on demand, leveraging Citrix's NetScaler cloud services technology and its CloudStack open-source cloud platform.

"That is a software-defined data center," he said. "Users can click on resources and get them. There's no human involved -- it's all done automatically."

Software-defined creates risk for hardware vendors, Ulander said.

"In the software-defined data center, people are looking at open source," he said. "For storage, they're not looking at EMC or NetApp. They're looking at Inktank, Basho or SwiftStack -- three startups with the ability to abstract and manage storage in software."

While Ulander sees the software-defined data center as already here, Randy Bias, co-founder and CTO of San Francisco-based OpenStack cloud infrastructure developer Cloudscaling, called what's happening in the data center "software-defined hype." Instead, Bias said, the use of the term "software-defined" is the same as cloud-washing, or the liberal use of the word "cloud" to give a company cachet as a cloud vendor.

It is easy to have a software-defined data center if one is a Yahoo or a Google with completely homogeneous IT systems, something the average business user doesn't have, he said.

"Data center vendors are trying to sell pixie dust," he said. "The industry has data center fatigue. They've been told converged infrastructure can solve their woes. Now they're being told [software-defined data center] will solve their woes."

Bias called the software-defined data center a bunch of nonsense. "It's being propagated by IT vendors trying to sell their existing product lines instead of solving real cloud problems," he said.

Those problems do not get solved by adding software-defined data center APIs to a VCE Vblock or other converged infrastructure offering, Bias said. Doing so might provide some of the functionality of a cloud, but at a significantly higher cost and with none of a cloud's flexibility.

"Ask yourself what drives business decisions, and it boils down to cost," he said. "Security and compliance are important. But they're not essential to the business. It's a risk, but not essential. Most companies care about flexibility and performance."

HARDWARE'S ROLE IN THE SOFTWARE-DEFINED DATA CENTER

For hardware vendors, software-defined is part of the evolution of the data center, but not an endgame in which all data center functionality runs in software on top of generic server hardware.

Part of that functionality is being gradually abstracted from the hardware layer to the software layer, said Jimmy Pike, vice president, senior fellow and chief architect for Dell's Data Center Solutions Group.

"Whatever you call it is OK," Pike said. "If you want to call it Pink Bunny, it's OK. It's an aggregate of compute, storage and networking resources. We call it the Active System because it changes characteristics based on the attributes."

However, Pike said, focusing on software-defined this and software-defined that could be a disservice to customers who are really looking for solutions.

"If you are deploying a solution, and there are too many abstractions, there can be issues," he said. "In the hyper-scale space, ubiquity can mean nonoptimized. People might say one size fits all, but it usually doesn't. You can't afford to be nonoptimized."

For instance, a transaction system is optimized in a very different manner from a Facebook or a Web 2.0 service, and that requires a solutionwide approach, Pike said. "You have to build each one with the concept of the workload you run on it," he said.

Gary Thome, vice president of strategy at Hewlett-Packard's Servers Group, said the software-defined data center promise of abstracting data center functionality from hardware to a software layer and then coupling that abstraction back to the hardware is an important part of improving the management of the data center infrastructure.

"But it's not a question of blindly configuring the hardware for the app. It's also picking the right hardware to run the app," Thome said.

HP's new Moonshot servers take advantage of the idea of moving some server functionality into a software control layer in its cartridge design, where each server cartridge can be configured for specific applications, Thome said. And HP offers nearly 30 networking switches with OpenFlow capability along with a software-defined networking control layer in its Virtual Application Networks.

However, that does not make the hardware itself irrelevant.

"It's the opposite," Thome said. "Once we have programmable control, we have the means to define our switches through software-defined technology. This gives us the ability to expose our unique capabilities."

It's the same with storage, Thome said. "Our storage software is landing on our hardware, where it can expose its unique capabilities," he said. "Our 3Par storage has a lot of intellectual property built in."

While software vendors may see software-defined as an alternative to converged infrastructure solutions, Thome said the two actually work together. "The fact that we bring servers, storage, networking, power and cooling together doesn't impact [the software-defined data center]," he said. "We expose them to software-defined. We see software-defined as an attribute of converged infrastructures."

Software-defined environments provide cloud flexibility with the added benefits of optimizing for certain attributes that cannot be done with a cloud, said Jim Doran, distinguished engineer and chief architect for IBM's Research Compute Cloud.

Workloads in large public clouds are likely to be statically assigned to servers in a somewhat haphazard way, possibly with some compliance checking, Doran said.

"Software-defined maps the resources according to workload requirements, including compliance and high-availability policies, and provides dynamic reconfiguration of workloads according to the environment," he said.

While IBM abstracts many of its hardware capabilities to make them available as part of an application blueprint, this is not a software-defined data center, Doran said.

"We don't take an approach that looks at the entire data center," he said. "We think this will just become embedded technology in the delivery and deployment of applications."

One can adopt the concept of a software-defined data center and still have the problems associated with how to handle the difference between development and production environments, how to handle the mix of legacy and new applications, and how to deliver SLAs, Doran said. "These would still exist because you are not marrying application deployment to the architecture," he said.

Instead, software-defined environments actually allow vendors like IBM to showcase their hardware capabilities, Doran said.

"We can surface our [hardware] capabilities to the workload and expose them to the application," he said. "This is the opposite of going to commodity hardware."

COMPETITIVE MARKET

Nth Generation's Baldwin said a few vendors, notably VMware and HP, will lead the move toward software-defining as much of the data center as possible, and as such will likely get first-mover advantage.

But instead of cutting off new software-defined opportunities for competitors and their partners, those first movers will actually seed the growth of this new data center paradigm.

"Customers don't like vendor lock-ins," he said. "And VMware has done a good job of retaining relationships with a variety of partners."

PUBLISHED MAY 13, 2013