The Paradox Of Apple Security: Does Secrecy Make You Safer?

It's safe to say that no one in the crowd expected De Atley would be the messenger for Apple's "Come To Jesus" moment with a security research community it has long shunned. But when it becomes clear that De Atley isn't going to take questions, real frustration settles over the room, and many attendees make for the exits.

As De Atley, manager of the platform security team at Apple, gathers his belongings, some Black Hat attendees rush the stage, hoping for a chance to talk with him and other members of Apple's security team, including Window Snyder, senior security and privacy product manager, who is sitting in the front row. But in a blur of motion, the Apple contingent escapes through a side door, mumbling something about having another meeting to get to.

And just like that, Apple's first-ever appearance at Black Hat is in the history books. Later, a rumor circulates that Apple had 14 members of its security team on hand at Black Hat. That's an impressive number, yet no one who witnessed De Atley's talk feels like they have learned anything new. Most important, Apple has done nothing to change the perception that it does not feel comfortable engaging with the security research community.

When it comes to security, Apple is a paradox: It works with industry groups like the Forum of Incident Response and Security Teams (FIRST) and FreeBSD Security Team, but sometimes takes months to fix vulnerabilities. Apple uses security as a marketing vehicle for its products, but also silently adds security features and technologies. And as evidenced at Black Hat, Apple hires top security talent but does not permit its people to engage in the sort of dialogue that is crucial in this tight-knit IT industry sector.

Apple, which is on track to become the first company to achieve a market capitalization of $1 trillion, is becoming a bigger target. Yes, we've been hearing for years about the impending Mac malware apocalypse, which has yet to materialize. But the emergence of the first Mac botnet is raising questions about how much longer Apple can continue keeping security under the same shroud of secrecy as the rest of its operations.

"Apple always prefers to do everything itself, with as little outside assistance as possible. Indeed, this is central to Apple's whole philosophy -- doing everything -- OS, software, hardware, security -- uniquely," Kaspersky Lab CEO Eugene Kaspersky told CRN recently. "But when it comes to bugs, Apple would be doing itself a huge favor in communicating more closely with people who are willing to help it in finding vulnerabilities."

Hackers love a challenge, and Apple's smug insistence that Macs are more secure than Windows PCs is like waving a red cape in front of a bull. Apple changed that message back in June, but experts believe Apple is putting itself in a dangerous position by not fostering closer ties with the security researchers that find and report vulnerabilities in its products. Or, at least, fostering the impression that it is open to working with the security community.

"Apple's problem with security is partly rooted in its culture of secrecy, in which even the most benign future products -- even a simple software or security update -- are not publicly discussed until they are released. This can give the impression that Apple is ignoring initial reports of security vulnerabilities," said Dave Schroeder, a senior systems engineer at the University of Wisconsin-Madison's Division of Information Technology, and a noted Apple security expert.

No company wields secrecy as a competitive advantage more effectively than Apple. Yet according to one former employee, Apple's secretive approach to security is a barrier to progress, with ominous implications for the future.

"The whole secrecy thing was the most frustrating thing about being there. It made it difficult to get our jobs done," said the source, who requested anonymity in order to avoid repercussions. "Apple is superb engineeringwise, and in many ways I would say they are at the forefront of security development. The biggest problem with the way Apple does things is that they don't talk."

Apple does not have a regularly scheduled patch update, nor does it have a product security incident response team. Without these functions in place, Apple's response time to security vulnerabilities has been slow. This was aptly demonstrated by the Flashback Trojan in April, which used an unpatched vulnerability in the OS X version of Java to infect some 700,000 Macs and form the first large-scale OS X botnet. The Flashback malware was a drive-by download that infected Macs with Java installed when the user visited a malicious website.

Oracle issued a patch for the Java vulnerability for Windows on Feb. 14, but Apple did not get around to patching its version for OS X until seven weeks later. This wasn't the first time Apple had dropped the ball on security updates: In 2009, Brian Krebs of the Washington Post reported that Apple was taking an average of six months longer than Sun Microsystems to issue patches for the same Java flaws.

With Flashback, the seven-week window of opportunity gave malware writers plenty of time to build a botnet and highlighted Apple's slow reaction time, Kaspersky told CRN. "What really needs to change is how quickly Apple responds to issues communicated by security researchers," he said. "That response time would be best shortened by more openness generally, and better-quality interaction with security researchers in particular."

Joshua Wright, an independent information security analyst and senior instructor with the SANS Institute, has been disclosing vulnerabilities to technology vendors for the past 15 years. He believes Apple does a great job implementing security in OS X and iOS, but when it comes to communicating with researchers that report flaws in its products, Apple still trails vendors such as Microsoft and Cisco.

"Whenever I disclose a vulnerability to Apple, there is no response," Wright told CRN. "In Apple's view, the research community is not to be trusted and is not a valuable asset to work with."

Apple, on its official product security website, makes the following claim: "When a potential security threat arises, Apple responds quickly by providing software updates and security enhancements you can download automatically and install with a click."

Clearly, that did not happen with the Java vulnerability that enabled Flashback.

The Flashback botnet was not just a wake-up call for Apple; it also served as a guide for malware authors, according to Charlie Miller, principal research consultant at Accuvant Labs, a division of security solution provider Accuvant, and a well-known expert on Apple security issues.

"This will actually make things happen faster on the malware community side, and [malware authors] are going to be investing more time. This was certainly the first drive-by exploit with any broad impact," Miller said in an interview.

Miller has experienced retaliation from Apple firsthand. Last November, Apple kicked him out of its developer program for at least a year after he uploaded a proof-of-concept app to the App Store to highlight a security flaw in Apple's process for vetting iOS apps. Miller's app, called Instastock, was set up to deliver real-time stock quotes, but he added a back door that allowed the app to communicate with a server he controlled to show how remote attackers could run unsigned code on iPhones and iPads.

While Miller clearly violated Apple's terms, some security researchers viewed Apple's response as overly heavy-handed. "That was just a petulant move by Apple, and one that is quite indicative of how Apple treats the security research community," the SANS Institute's Wright told CRN.

Response time is only part of Apple's security problem. Some researchers, fed up with being ignored, don't even bother submitting vulnerability reports to the company, instead releasing them publicly or to bug bounty programs. Richard Bejtlich, chief security officer at services consultancy Mandiant, Alexandria, Va., sees this as a potentially damaging outcome of Apple treating researchers as adversaries.

"When you're a giant company with many products, you have to have a program built around interfacing with researchers. This is all free product research that people can provide. Even if you are paying a bounty, it is valuable information," Bejtlich told CRN.

Organizations such as HP TippingPoint's Zero Day Initiative and Verisign iDefense pay security researchers for the rights to vulnerability discoveries so that they can notify the affected vendors and bolster the security of their own products. However, the amounts these organizations pay for flaws pale in comparison to what researchers can fetch for a high-profile bug on the open market.

Apple isn't doing itself any favors by being uncooperative with the hacker community, said Gunter Ollman, vice president of research at Damballa, an Atlanta-based network security vendor, and former chief security strategist with IBM ISS and director of the ISS X-Force research team.

"There are a number of vulnerabilities in iOS and OS X that have not been disclosed to Apple, but may have instead been sold to interested parties. The going rate, while not as high as a Windows zero day, is still high. Apple has suffered from that quite a bit," he said.

In February, an independent security researcher known as The Grugq brokered the transfer of an iOS vulnerability between a developer and a U.S. government contractor for $250,000, according to a report from Forbes. Zero day exploits for OS X are worth between $20,000 and $50,000 on the open market, Safari zero days trade in the $60,000 to $150,000 range, and iOS zero days are selling for between $100,000 and $250,000, according to Forbes' report.

"That's a problem for Apple. When you have a situation where researchers are not willing to share vulnerabilities in Apple products, yet there is huge demand for that, this shows the bad guys are going to be the winners," Bejtlich said.

Though questions persist about how far it will go, Apple in recent months has shown that it is at least willing to change its approach to security. Apple has already started porting some of the iOS security model, which features several layers of security, over to OS X. One example is Gatekeeper, which extends the App Store model of verifying that apps are legitimate and secure to OS X 10.8 Mountain Lion.

Apple could get away with claiming its users were more secure when Mac market share was tiny and malware authors were not targeting the Mac. But Flashback showed that large-scale malware infections are indeed possible on the Mac. This could explain why Apple's longstanding claims that Macs are more secure than Windows PCs, so brilliantly laid out in its "Get A Mac" television commercials, vanished from Apple's website in June, as first reported by CRN.

Where Apple previously proclaimed that "a Mac isn’t susceptible to the thousands of viruses plaguing Windows-based computers," it now asserts that "built-in defenses in OS X keep you safe from unknowingly downloading malicious software on your Mac." Apple's oft-voiced position that OS X "doesn't get PC viruses" now reads, "It's built to be safe."

Also in June, Apple issued its Java for OS X update simultaneously with Oracle's and added automatic security updates to OS X 10.8 Mountain Lion, moves widely hailed as Apple taking a page from the Microsoft playbook. So while Apple disappointed Black Hat attendees, it is working concertedly behind the scenes to improve the security of its products.

"Apple has come to the same realization Microsoft came to about a decade ago, which is that security is not just a public relations or product marketing issue, but rather, is a foundational product feature which needs to be taken seriously," said University of Wisconsin-Madison's Schroeder.

Apple has quietly been assembling a security team with deep industry experience. In 2010, Apple hired Snyder, former chief security officer at Mozilla who was previously a security engineer at Microsoft and CTO at Matasano Security. In January 2011, Apple brought in David Rice, a former analyst with the National Security Agency, as its director of global security.

"Apple has added some very visible people [to] their security team. When you start hiring that caliber of people, [the] expectation [is] that they will interact and communicate with the rest of the community," said Damballa's Ollman.

But what does that mean if this talent is hidden under a basket? Mandiant's Bejtlich said while it's shrewd of Apple to bring in top security people, their lack of participation at industry events is a missed opportunity.

"There are people I know who've gone [to work at Apple] that you never hear from again. They do not appear on the conference circuit, and they don't talk," Bejtlich said. "And that is exactly what you don't want to do these days if you're in the security research field."

PUBLISHED SEPT. 24, 2012