Security researchers provide a valuable service to IT vendors. By poking at software and documenting what happens, they're often the first to identify vulnerabilities. Many do this free of charge, asking only that their findings be publicly credited by the vendor. While vendor-researcher interactions don't always go smoothly, they are part of a symbiotic relationship that results in more secure products.
Apple's track record with the security community is markedly different. For years, Apple ignored researchers when they reported vulnerabilities. When Apple did start to respond, it treated researchers with suspicion and mistrust and, in some cases, made legal threats against them. Apple viewed security response as just another PR function, discrediting researchers and downplaying their reports.
Apple has come a long way since then and has established cordial relations with some researchers who focus on finding flaws in its products. Yet the Cupertino, Calif.-based company still exhibits an obvious mistrust of the research community, as recent incidents of retaliation have shown.
In April, when Russian antimalware vendor Dr. Web notified Apple of its discovery of a massive Mac botnet encompassing some 600,000 machines, Apple contacted Reggi.ru, Dr. Web's DNS registrar in Russia, asking it to shut down one of Dr. Web's domains. Apple apparently believed the domain -- a so-called sinkhole server, which researchers leave connected to a botnet to gauge its size and other characteristics -- was one of the botnet's command-and-control servers. Was Apple shooting the messenger here, or was this a simple case of mistaken identity? Apple did not respond to a request for comment.
Igor Zdobnov, chief virus analyst for Dr. Web, took the high road when asked for his assessment of the incident. "We do not have explicit evidence that Apple knew that domain was registered by Dr. Web," Zdobnov told CRN in May.
It wouldn't have been difficult for Apple to figure out that Dr. Web was a legitimate source for security vulnerability reports, as a discussion about the Flashback botnet was taking place on Apple's own support forums at the time it tried to get Dr. Web's sinkhole shut down.
"This domain name which Apple wanted to block was mentioned in Apple's forum by our virus analyst, so they probably knew it indirectly," Zdobnov acknowledged to CRN in May.
While the Flashback botnet did not end up causing widespread chaos, it could serve as a blueprint for future miscreants to follow. And according to researchers, therein lies the danger of Apple's security code of silence. The long-forecast deluge of Mac malware hasn't arrived yet. But when botnets start showing up, a coordinated security response becomes critical.
For Apple, which is setting its sights on the enterprise market, one major malware outbreak would probably be enough for businesses to put off purchasing decisions, as it would shatter the notion that Macs are more secure than Windows PCs. At the very least, Apple is not taking advantage of what many vendors have found to be an extremely valuable resource.
"There is a lot of value to going outside your organization when it comes to security," said Chris Wysopal, CTO of Burlington, Mass.-based Veracode and a noted security researcher. "Microsoft hires multiple security teams to bang on Windows in the research community. You want to have that give-and-take, that independent view."
Charlie Miller, a renowned Apple security expert who was hired by Twitter earlier this month, was booted from Apple's iOS developer program last November for violating its terms of service. Miller uploaded a fake stock ticker app called Instastock to the App Store to highlight a flaw in Apple's code signing process, and he's now banned until at least this November.
The summer of 2006 provided one of the most telling illustrations of Apple's approach to security damage control. In a video presentation at Black Hat that year, security researchers David Maynor and Jon "Johnny Cache" Ellch showed how an attacker could exploit a vulnerability in a wireless driver to remotely hack into a MacBook.
Although Maynor and Ellch used a third-party wireless card and driver in their demo, they said the MacBook's native Airport wireless driver was also vulnerable and that they had reported the issue to Apple.
That's when things started getting weird. Apple and Atheros, maker of the MacBook's wireless chipset, released statements claiming Maynor and Ellch had not provided them with any evidence that the MacBook's native driver was vulnerable. But on Sept. 21, 2006, Apple released a critical patch for three Airport vulnerabilities that it said could be used for remote code execution attacks. Apple did not credit Maynor and Ellch, claiming it had identified the problems during an internal audit.
In an interview this June, Maynor told CRN he no longer works with Apple because of the way the company treated him and Ellch during the MacBook wireless driver saga. "Apple has always cared more about their appearance than they do about getting things right, and they fought us tooth and nail on the wireless driver issue," Maynor told CRN. "Now, when we find vulnerabilities in Apple software, we keep them."
In 2007, noted hackers L.M.H. and Kevin Finisterre took things a step further with their "Month Of Apple Bugs," a 30-day campaign aimed at highlighting the prevalence of vulnerabilities in OS X and Apple applications. "At the moment, we don't trust Apple on these matters due to the track of incidents and unpleasant situations surrounding their policy on product vulnerability handling," according to an explanation in the "Month Of Apple Bugs" FAQ.
Finisterre, now a senior research consultant at Accuvant Labs, Denver, began tinkering with Apple hardware in 2006. Upon digging into the idiosyncrasies of the Mac platform, Finisterre began finding vulnerabilities almost immediately, but he quickly realized that Apple was not used to being contacted by researchers and reacted with its customary silence to bug reports.
"They were very, very touchy about it," Finisterre said of his early interactions with Apple. "With some of the first few bugs I found, Apple would not even respond -- I would just get an automatic email response, and then the communication would go completely dead. They would not say anything about it."
Finisterre continued reporting vulnerability findings to Apple, and the company continued to ignore him. Eventually, though, Apple did respond, making its position clear.
"Someone finally came back to me and said, 'You are a hacker, and we don't respond to hackers. Internally, we are treating these communications as semi-malicious, and we may need to talk to our legal department,'" Finisterre told CRN.
Finisterre notes that he and L.M.H. gave Apple advance notice so that Mac users would not be subjected to potential danger.
"Some folks in the Apple security team were fully aware of what we were planning to do. As soon as our information went live, we made sure Apple had the information it needed to get the fixes out. And things started to change around that time," Finisterre said.
Eventually, Apple began to realize that it could benefit from Finisterre providing the full documented source code for each vulnerability. But Apple still refused to communicate any sort of time line for when it would release fixes for vulnerabilities, on the grounds that doing so could compromise the secrecy it holds so dear.
"Apple didn't want people to know about the next version of OS X, and they felt that by telling researchers about when patches would come out, it would give people clues about when new hardware would be released," Finisterre said.
By that time, Finisterre had developed personal relationships with Apple's security engineers, to whom he would send the details of his vulnerability findings over instant messaging. That nonstandard reporting mechanism became very useful to Apple over time when it came to getting issues taken care of, Finisterre said.
"One of the quickest vulnerabilities Apple fixed took seven days from me contacting their security engineers over IM to them pushing out a fix. It all happened completely outside the normal communication channels," he said. "That was huge in my mind, and a far cry from the days when they would not talk to us at all."
Over time, Apple grew so enamored of Finisterre's contributions that it offered him a job on its security team. Finisterre declined, but Apple continued to grant him special privileges. He now spends his time hacking SCADA and hardware systems, but credits Apple with making strides during the course of its relationship with him.
"They figured out that it made sense for them to continue giving me access to their software," he said. "I have a card that hangs on my wall from Apple's security team that says, 'Kevin, thanks for your tremendous contributions to Mac OS X', which is signed by three people on the security team."
Finisterre's story is highly unusual, however. More often than not, researchers who have tried to show Apple up in public haven't been so well-received.
Apple, it seems, is at a crossroads with the security research community. If Apple has a positive response, as it ultimately did with Finisterre, more researchers could be convinced to play ball. But if Apple continues to retaliate against researchers' transgressions, both actual and imagined, the chilly relationship between the two parties could turn into a full-scale cold war.
"Apple doesn't like surprises. If you are going to drop a zero day on them, or a media-oriented thing, you will get a negative response," said Rich Mogull, founder and CEO of Securosis, a Phoenix-based security consultancy. "Microsoft would not be happy in that scenario either, but they will talk to you. If you ruin your relationship with Apple, there is an institutional memory, and they will treat you differently."
PUBLISHED SEPT. 25, 2012