Saving time and money in the notoriously slow drug-discovery process is a big incentive for life-sciences companies to adopt cutting-edge technologies. Your neighborhood pharmacy may be able to fill a prescription within an hour, but getting that drug to market can take anywhere from seven to 10 years and cost pharmaceutical companies millions of dollars. What's more, with the roughly 35,000 genes of the human genome now mapped, scientists are generating massive amounts of data that must be integrated and correlated before biotech and pharmaceutical companies can efficiently convert the research into new drugs and treatment therapies.
Streamlining that process is a tall order, prompting life-sciences companies to turn to integrators and VARs to help them install, configure and manage economical computing power and tools to compare and correlate the data that traditionally has been stored in separate databases. Those projects include clustering tens of thousands of Intel-based Linux servers instead of a supercomputer or two, database integration and the implementation of data-mining tools, storage-area networks and CRM applications.
"The real market winners in this are integrators who take the discovery process and CRM, for instance, and tie it all together," says Amith Viswanathan, health-industry analyst for Boston-based Frost & Sullivan.
GeneFormatics, for instance, teamed up with IBM Global Services (IGS) to install and configure its 160-node Linux cluster.
"We always look at the build-vs.-buy question," says Steven Lyman, senior technical manager of engineering for GeneFormatics, a San Diego-based biotech company. "As our needs spike, we will look at other potential outsourcing options."
Traditional pharmaceutical companies including Pfizer and Schering-Plough, meanwhile, are expanding CRM to help them forge closer ties to consumers with the help of integrators such as BearingPoint. "Pharmaceutical companies want to focus on making and selling drugs, and they view integrators as the vehicle to create and implement technology solutions for their business needs," says Marty Magazzolo, director of commercial business development for First Consulting Group, Long Beach, Calif.
Linux Under the Microscope
Life-sciences companies in the United States are projected to spend roughly $22 billion this year on software, hardware and services, according to estimates by Frost & Sullivan. Of note, biotech and pharmaceutical companies buy specific technologies for specific projects, rather than for big forklift or infrastructure change-outs. "They tend to buy in piecemeal, a software piece here and there," Viswanathan says.
Many new and expanding biotech companies are finding they can get mainframe-equivalent computing power from an Intel-based Linux PC cluster at one-tenth of the cost. Even the big boys, such as Amgen and Incyte Genomics, are running clusters of Linux servers, many with dual processors, to crunch algorithms, run simulations and analyze genetic sequences, jobs traditionally relegated to mainframes and supercomputers. The processing load gets spread across the multiple servers, so, for instance, scientists can more quickly compare a section of protein data against multiple databases.
"Good Linux clusters have the capability of reaching the power of supercomputers," says Mike Svinte, vice president of worldwide business development in IBM's life-sciences group, especially, he says, if they're configured in computing grids that let you break up the processing among multiple machines.
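The divide-and-conquer idea behind those clusters and grids can be sketched in miniature. The snippet below is a toy illustration, not any vendor's software: it splits a protein-sequence database into chunks and scores each chunk on a separate worker, the way a cluster divides one comparison job among nodes. The sequences, names and scoring rule are all invented for the example; real pipelines use dedicated sequence-comparison tools.

```python
from concurrent.futures import ThreadPoolExecutor

def similarity(query, target):
    """Toy score: number of positions where the residues match."""
    return sum(1 for q, t in zip(query, target) if q == t)

def score_chunk(query, chunk):
    """Work done by one 'node': score its slice of the database."""
    return [(name, similarity(query, seq)) for name, seq in chunk]

def parallel_search(query, database, workers=4):
    """Split the database into chunks, score them concurrently,
    then merge and rank the hits."""
    items = list(database.items())
    size = max(1, len(items) // workers)
    chunks = [items[i:i + size] for i in range(0, len(items), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = pool.map(lambda c: score_chunk(query, c), chunks)
    hits = [hit for part in parts for hit in part]
    return sorted(hits, key=lambda hit: hit[1], reverse=True)

db = {"prot_a": "MKTAYIAK", "prot_b": "MKTAYQAK", "prot_c": "GGGGGGGG"}
print(parallel_search("MKTAYIAK", db))
```

On a real cluster each chunk would go to a different machine rather than a thread, but the pattern, partition the data, score the pieces independently, merge the ranked results, is the same.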
Structural Bioinformatics (SBI), a San Diego-based biotech company that licenses its 3D models of proteins to other biotech and pharmaceutical companies, moved its massive simulation and modeling applications from a $4 million SGI-based Cray supercomputer to 128 clustered Linux servers for one-quarter of that price.
The CPU costs are lower on Linux, too: about $1 of computing time per protein, versus about $10 per protein on the supercomputer, says Dr. Kal Ramnarayan, vice president and chief scientific officer for SBI, which turned to IBM Global Services to install and configure the cluster. "We didn't want to be maintaining and playing around with the computers," he says. "We have to concentrate on our main business, getting [protein] information."
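Those figures make for a simple back-of-envelope comparison. The sketch below just plugs in the numbers quoted here, a $4 million supercomputer at roughly $10 of CPU time per protein versus a cluster at one-quarter the hardware cost and roughly $1 per protein; the 100,000-protein workload is an assumed round number for illustration only.

```python
def total_cost(hardware, per_protein, n_proteins):
    """Hardware outlay plus per-protein compute cost for one platform."""
    return hardware + per_protein * n_proteins

N = 100_000  # assumed workload, for illustration only

supercomputer = total_cost(4_000_000, 10, N)  # $4M box, ~$10/protein
linux_cluster = total_cost(1_000_000, 1, N)   # one-quarter the price, ~$1/protein

print(supercomputer, linux_cluster)  # 5000000 1100000
```

At that assumed volume the cluster comes out well under a quarter of the supercomputer's total cost, and the gap widens as the workload grows, since the per-protein rate differs by a factor of 10.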
In fact, the catch with Linux clustering is the maintenance and management, says Eric Williams, executive vice president of Arrow Electronics' SupportNet division, which partners with IBM and other vendors in life-sciences projects. Having an integrator support and maintain those clusters may be cheaper in the long run, especially if it means preventing downtime for a new drug discovery, he says.
Data Mining And Integration
Data mining is also an especially powerful tool for life-sciences firms in the R&D phase. For example, the combination of Linux-cluster compute power and data mining can help a company select early on which drug compound to scrap, or even find a new use for one that had been retired, says Simon Holt, a management consultant with EDS. Bristol-Myers Squibb, with the help of Accenture, has developed its own tool for selecting compounds with the most promise. Bristol-Myers' modeling and analysis tool, which it has been running for about a year and a half, comprises commercial middleware, database-integration tools and some basic data-mining features. "It's not pure data-mining in the sense that it doesn't have automated tools running against collected data and spitting out answers," says Shawn Ramer, vice president of informatics for Bristol-Myers, Princeton, N.J. But it does let scientists go in and "mine" the data for answers to their research questions.
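The kind of compound triage Holt describes, deciding early which candidates to scrap, can be illustrated with a minimal scoring pass over assay data. Nothing below reflects Bristol-Myers' actual tool; the compounds, fields, weights and cutoff are all invented for the example.

```python
# Hypothetical assay records; every name and value is invented.
compounds = [
    {"id": "cmp-001", "potency": 0.9, "toxicity": 0.2, "solubility": 0.8},
    {"id": "cmp-002", "potency": 0.4, "toxicity": 0.7, "solubility": 0.5},
    {"id": "cmp-003", "potency": 0.7, "toxicity": 0.1, "solubility": 0.9},
]

def promise(c):
    """Composite score: reward potency and solubility, penalize toxicity.
    The weights are arbitrary stand-ins for a real model."""
    return 0.5 * c["potency"] + 0.3 * c["solubility"] - 0.4 * c["toxicity"]

def triage(compounds, cutoff=0.4):
    """Split candidates into 'advance' and 'scrap' piles by score."""
    advance = [c["id"] for c in compounds if promise(c) >= cutoff]
    scrap = [c["id"] for c in compounds if promise(c) < cutoff]
    return advance, scrap

print(triage(compounds))
```

A production system would score far more attributes against collected experimental data, but the payoff Ramer describes is the same: scientists query the pooled data and get a ranked answer instead of combing separate databases by hand.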
Researchers at Cytogen subsidiary AxCell are also reaping the benefits of data mining. Before they even set foot in the lab, the scientists do a dry run using the biotechnology firm's data-mining tool to determine how certain proteins interact, pooling and correlating protein data generated by AxCell's own scientists as well as public-domain research.
"Data mining lets us learn something from known data, which can save half the time in the lab," says Lubing Lian, bioinformatics manager for AxCell, Newtown, Pa. "And it lets our scientists send more accurate data to the wet lab."
In addition, database-integration initiatives have become widespread among life-sciences firms. That work goes hand in hand with data mining, enabled by products such as Information Integrator, IBM's database middleware, which uses DB2 as the central repository for data from multiple databases inside or outside an organization. AxCell, for example, had IBM install and configure Information Integrator and the Intelligent Miner tool on its NT and AIX servers, Lian says.
Aside from IBM's database-integration tools, there are biotech-specific integration tools such as Lion Bioscience's Discovery Center, which Schering-Plough uses to integrate its separate Oracle databases in the United States and Germany.
"Each side was working locally, and they couldn't share information," says Juergen Swienty-Busch, director of solutions marketing for Lion Bioscience. Now, researchers' notes on a protein sequence, for instance, get stored in the database so other researchers can view them.
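Middleware-style federation of the sort Discovery Center and Information Integrator perform can be sketched in a few lines. The example below is only a stand-in, using SQLite in place of the regional Oracle databases, with invented table, protein and note names: it runs the same query against each site and merges the answers into one result set, which is the essence of letting the two sides share information.

```python
import sqlite3

def make_site(rows):
    """Create one site's local database (stand-in for a regional Oracle DB)."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE notes (protein TEXT, note TEXT)")
    conn.executemany("INSERT INTO notes VALUES (?, ?)", rows)
    return conn

def federated_query(sites, protein):
    """Middleware-style federation: run the same query against every site
    and merge the results into one answer set, tagged by origin."""
    merged = []
    for name, conn in sites.items():
        cur = conn.execute("SELECT note FROM notes WHERE protein = ?", (protein,))
        merged.extend((name, protein, note) for (note,) in cur)
    return merged

us = make_site([("KRAS", "binding site modeled")])
de = make_site([("KRAS", "sequence variant observed")])
print(federated_query({"us": us, "germany": de}, "KRAS"))
```

Real federation middleware adds query translation, caching and security on top, but the core move is the same: one query fans out to every member database and the answers come back as a single view.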
The next big thing in the biotech market is pharmacogenomics, the study of how a person's genetic makeup interacts with and responds to a drug, and all of these projects will go a long way toward preparing integrators, vendors and life-sciences firms alike for it. Pharmacogenomics will rely heavily on technologies such as Linux clustering, database integration, data mining, storage-area networks and collaboration to target drugs to people with a specific genetic makeup. That means pumping out 60 tailor-made drugs per year instead of three general ones, EDS' Holt says.
"The timing couldn't be better for using these technologies to alleviate the economic pressures on biotech and pharmaceutical companies," Holt says.
Kelly Jackson Higgins (firstname.lastname@example.org) is a freelancer in Stanardsville, Va.