January 26, 2005 4:00 AM PST

Flaw finders go their own way

To many software makers and security consultants, flaw finder David Aitel is irresponsible.

The 20-something founder of vulnerability assessment company Immunity hunts down security problems in widely used software products. But unlike an increasing number of researchers, he does not share his findings with the makers of the programs he examines.

Last week, Immunity published an advisory highlighting four security holes in Apple Computer's Mac OS X--vulnerabilities that the security company had known about for seven months but had kept to itself and its customers instead of disclosing the problem to Apple.


What's new:
Despite pressure from Microsoft and other companies about the dissemination of security alerts, independent researchers are sticking to their own approach to flaw disclosure.

Bottom line:
The debate about when and how to inform people about security risks is causing fractures in the industry.


"I don't believe that anyone has an obligation to do quality control for another company," Aitel said. "If you find out some information, we believe you should be able to use that information as you wish."

Despite efforts from Microsoft and other companies to direct how and when security alerts are sent out, independent researchers like Aitel are sticking to their own vision of flaw disclosure.

In their view, software companies have become too comfortable in dealing with vulnerabilities--a situation that has resulted in longer delays between the discovery of security holes and the release of patches.

At the heart of the issue is the software industry push for "responsible" disclosure, which calls on researchers to delay the announcement of security holes so that manufacturers have time to patch them. That way, people who use flawed products are protected from attack, the argument goes. But the approach also has benefits for software makers, a security expert pointed out.

"As long as the public doesn't know the flaws are there, why spend the money to fix them quickly?" said Bruce Schneier, chief technology officer at Counterpane Internet Security, a network monitoring company. "Only full disclosure keeps the vendors honest."

The debate over how open the discussion of flaws should be is not a new one. The locksmith community has been talking over the issue for more than a century and a half, and it still has failed to find consensus.

Matt Blaze, a computer science professor at the University of Pennsylvania, has seen firsthand the ire that the issue can raise. Blaze has studied how security threats in the logical world compare to problems with physical locks in the real world. His papers have revealed weaknesses in locks that some professional locksmiths would have liked to keep secret.

"We, as professionals in the security field, are outraged and concerned with the damage that the spread of this sensitive information will cause to security and to our profession," a person claiming to be a retired locksmith wrote in a bulletin board posting about Blaze's work.

That reaction is nothing new, Blaze found. Locksmiths have always been close-mouthed about the weaknesses of locks and, as far back as the mid-19th century, an inventor of mechanical locks found it necessary to defend himself when he published details of such flaws.

"Rogues knew a good deal about lock picking long before locksmiths discussed it among themselves, as they have lately done," Alfred C. Hobbs wrote in a book published in 1853, according to Blaze's site. The author also wrote:

"If a lock, let it have been made in whatever country or by whatever maker, is not so inviolable as it has hitherto been deemed to be, surely it is to the interest of honest persons to know this fact, because the dishonest are tolerably certain to apply the knowledge practically; and the spread of the knowledge is necessary to give fair play to those who might suffer by ignorance."

In the past, many hackers and security researchers outed glitches without much thought of the impact on Internet users. Microsoft, among others, changed this. As part of its 3-year-old "Trustworthy Computing" initiative to tame security problems in its software, the company began an outreach program to support the work of the security community. At the same time, it started chastising those researchers who, it believed, released details of flaws too early.

Balance of power?
The result is a tradeoff between security researchers and software businesses that is supposed to benefit product users.

Apple, for example, keeps the work of its security team wrapped in secrecy and issues patches approximately every month. Microsoft has moved to a strict second-Tuesday-of-each-month patch-release schedule, unless a flaw arises that poses a critical threat to customers' systems. Database maker Oracle has settled on a quarterly schedule.

"We think it is in the best interest of our customers," said Kevin Kean, director of Microsoft's security response center. "A large portion of the research community agrees with us and works with us in a responsible way."

But some security researchers believe the tradeoff benefits companies too much, as it allows them to adjust their patching processes at their convenience, without fixes disrupting the progress of software development. That adds up to a lax attitude toward security, some experts believe.

For example, eEye Digital Security abides by Microsoft's responsible disclosure guidelines, but it posts on a special page of its Web site the length of time since it reported each vulnerability to the software giant. The top-rated flaw listed there was first reported to Microsoft almost six months ago.

The detente also makes manufacturers look good in terms of the lag between the public warning of a flaw and the release of a patch. For example, a year-old study by Forrester Research gave a nod to Microsoft



Join the conversation!
Add your comment
Good for them!
Of course the "flaw finders" have the right to test and report on software products. That's no different from the independent testing and reporting done on other types of products, such as by Consumer Reports. Microsoft and Apple, in particular, enjoy very cozy relations with an army of media sycophants. The survival of many a web site and magazine depends on not alienating those companies, which are able to manipulate news about themselves adroitly. Naturally Gates and Jobs take great offense at being treated as mere mortals who produce products with flaws.

Speaking of flaws, I find one in this article:

"Last week, Immunity published an advisory highlighting four security holes in Apple Computer's Mac OS X--vulnerabilities that the company had known about for seven months but had kept to itself and its customers."

Apple didn't share the information with its customers in general, but possibly with a very few customers, although the writer of this article offers no evidence of such. Apple certainly didn't send this customer a message about the flaws. If it had shared the vulnerabilities with its customers, there would have been no reason for the information to be made public by Immunity. Millions of Apple customers would hardly have conspired to keep the flaws a secret.
Posted by nicmart (1829 comments)
Immunity is the 'company'
I believe the article is referring to the "company" Immunity sharing the news of Apple flaws with its own (Immunity's) customers. Apple just found out about the flaw report, several months after Immunity had begun sharing the news with its own customers.
Posted by (6 comments)
totally irresponsible
Immunity knew of the vulnerabilities for seven months and told no one but its paying customers. Then Immunity went public with the vulnerabilities. Apple did not find out about the vulnerabilities until Immunity went public with the information.

This is totally irresponsible of Immunity. Period.

I support a company's right to use its information for its own good. It was (and is) fine for Immunity to find flaws and then tell only its own paying customers for a finite period of time. This is how companies make money and keep existing.

However, going directly from keeping the information to itself and its paying customers to going public with it is irresponsible. The crackers and Apple found out the information at the same time. A dedicated cracker team MIGHT have found a way to crack many, many Macs before Apple got out a fix. This had the potential of endangering the data of many people and businesses. (Apple just yesterday issued a security patch, so it may have responded first.)

The responsible way to handle it is simple:
The finder uses the information for his own benefit for a finite period of time (say 60-90 days).
The finder tells the software developer a finite period before going public (say another 60-90 days).
The finder goes public with the flaw if the software developer has not already done so.
When the flaw goes public (either by the software developer or the finder), the finder is specifically mentioned as having found the flaw first.

This way everyone wins. The flaw finder gets benefit from his/her efforts. The software developer gets a head start on the crackers. The public is guaranteed to find out about the flaw, so the software developer has absolutely no ability to just "sit on" the flaw and do nothing.
Posted by shadowself (202 comments)
I don't particularly care that...
...companies try to "manage" everything. Everything should not be in their control.

Security flaws should be disclosed so people are aware and can act accordingly. The flaws shouldn't be there to begin with, but companies short-shrift the quality control process to push *crap* out the door. It's about chasing dollars, not a quality product.

They need the pressure applied.
Posted by ordaj (338 comments)
Dollars and Resources
Most people blindly expect software companies to plug every hole possible. There is a trade-off between the possibility and severity of attack and the cost of the resources it takes to correct the issues.

One thing the article fails to disclose is, "How critical is the security hole?" As for Apple, it may have known about the problems for a while, but if the hole is small, with little or no risk of exploitation, why fix it immediately when you can work on a broad solution that fixes many problems at once?

While no OS is completely without problems, Apple does enjoy a great deal of freedom from the majority of the problems that affect Microsoft. This gives Apple more room to withhold updates so that it can focus resources on a more critical area.

Microsoft, on the other hand, continues to fight a losing battle on many fronts, especially with average users, who have little knowledge of how to stop adware/spyware/viruses. It does not enjoy any room to maneuver resources when a hole is discovered, as it is expected to fix all of the problems...now. This has to do with the large, non-technical user population as well as the past and current problems that continue to plague the Windows environment. Plus, Microsoft has much more in the way of resources to throw at the problem, and it should.

Maybe I should send them a bill for the countless hours I have spent fixing my associates' computers due to a lack of oversight in creating a more robust system out of the box.
Posted by jypeterson (181 comments)
Reminds me of Ralph Nader's organization..
I could understand a grass-roots organization that finds and discloses flaws in "all software platforms" being helpful to consumers. But having a private company sit on flaws until the time is right to hurt a company financially is just as irresponsible as the software flaws it professes are harmful.
Many folks are stockholders, and bad news, even if it's non-credible or insignificant, can hurt everyone. Extortion comes to mind.
Heck, we can't even get government agencies to cooperate and fix holes. You think Microsoft will jump every time a pipsqueak yells "the sky is falling" so he can make a buck?
Non-profit, supportive folks aren't out for financial gain; they want problems fixed.
But this Immunity company is spinning "known but under control" flaws. And why Apple? Because it's gaining in the market? Because it's profitable and its shares quadrupled in value over the last year? I smell a tick...
Posted by Below Meigh (249 comments)
flaws worth it economic-wise
Consider the trade-off between delivering a perfect product and a product with some flaws:

To deliver a flawless product, you need lots of development time and money, which typically means higher costs for the end user and delays that may leave the technology outdated by the time it is delivered.

Some consumers are content with purchasing flawed software, provided that the economics are correct and the flaws will be fixed when they are found. That is, the compromise is acceptable if the cost savings and the instant gratification of immediate availability are worth it to the consumer. In modern times, as speed and availability become the critical cornerstones of outbidding competitors, some consumers find that satisfactory.
Posted by nrlz (98 comments)
The "company"
You may well be right about the writer referring to Immunity
rather than Apple. I stand corrected.
Posted by nicmart (1829 comments)
It seems to me that Apple has more to lose if it suffers a widely
publicized security attack that is successful, so it should be
more attuned to vulnerabilities. A large airline might survive a
fatal crash, but a small one would almost surely be done in by
such a calamity. It is very hard to predict the impact of a security
hole. An enterprising cretin might find a way to turn what seems
small into a very large problem, indeed.
Posted by nicmart (1829 comments)
Give them a week...
and then let it go. I don't see a problem with giving a company a heads up a week before you tell the rest of the world.
Posted by System Tyrant (1453 comments)
A balance needs to be found
While the public has a right to know about a vulnerability, it is definitely in customers' best interests for a fix to exist before there is a working exploit.

IMHO, companies should be given a 30-day lead time to diagnose, fix and, most importantly, test the fix before the vulnerability is publicly disclosed.

If the vulnerability is disclosed publicly first, you have a race between the bad guys trying to exploit it and the vendor trying to fix it, with the inevitable result being more successful exploits as well as buggy patches released without proper testing.

Which leads to a related issue: the MS once-a-month patch schedule was created primarily to reduce the load on IT departments so they weren't staying late installing patches six times a month. With immediate public disclosure it would be even worse, as every patch would need to be loaded immediately as it was released, and some patches would need to be loaded and then the patches to those patches.

While Apple shouldn't have sat on a vulnerability for 7 months, the answer isn't letting vendors know at the same time you are letting the virus/worm writers know.

P.S. I didn't feel like doing the research, but IIRC the industry's "responsible disclosure" program calls for something like this, where there is advance notice of a vulnerability but the finder is free to disclose publicly after a period of time whether or not the vendor has released a patch.
Posted by raitchison (103 comments)
Apple didn't sit still
Perhaps this is an example of bad grammar, but Apple didn't sit still for 7 months; the security team that found the vulnerabilities (in BSD, not OS X directly) didn't tell Apple for 7 months! The writer's English seems to have confused the issue.
Posted by (20 comments)
Balance for Security
Perhaps the best solution, particularly as it relates to general security, would be to require the entity that finds a security bug to report it to a government security department. That entity, acting as a mediator, would then work with the software maker to set a release time for the bug information. This release time might be much shorter for other major software makers that might also be affected by the problem, as well as the solution.


Gregory D. MELLOTT
Posted by gdmellott (28 comments)
Paladin Press as an example
Paladin specializes in "cookbooks" on crime, from changing identity to murder. It has been successfully sued and prosecuted for some of this material. If someone figures out how to easily thwart the electronic locks in a hotel, should they publish the method to the world? The press has already decided this for themselves, to such an extent that they are becoming professional voyeurs instead of responsible news providers. An answer will evolve as the actual consequences become more obvious.
Posted by cgrcgr (2 comments)
You make a good point.
In the book, they are showing you how to exploit flaws found in software or hardware; that is bound to open you up to litigation. However, if a security company releases to the public that a piece of software has a flaw that can be exploited to gain control of or access to your PC, it's not exactly the same thing. On the other hand, if that same company publishes source code and/or enough information for someone to exploit the flaw, then it should be held liable for the damage. All of this assumes there are laws in place to prosecute for the offense.
Posted by System Tyrant (1453 comments)
30 Days
I've always believed that the "30 days" rule is enough... They have a month from the time of the report to fix it... After that, the flaw goes public... Anything less than that and you'll snag an unstable patch, which will create more problems than it solves.
Posted by nzamparello (60 comments)
Most hackers are script kiddies using other people's research to gain access to machines. Releasing security flaws without giving the vendor ample time to respond is just, IMHO, irresponsible. Few hackers would be able to find the flaws themselves, but plenty can take advantage of the flaws these researchers come up with.

How long, do you think, before the company is sued out of existence because it released information on a security hole that's used to exploit a company's network? Or before it erroneously releases a report on a security hole that is non-existent and has to deal legally with the vendor?

And how exactly do these companies make money, anyway?
Posted by (402 comments)
The software companies are irresponsible
Most of the flaws found would never have been addressed if they hadn't been made public. Heck, companies like MS like to deny that flaws exist even after they are made public.

If not for people finding flaws and making them public, crap like Windows would be even more insecure. Now that is a scary thought.
Posted by (40 comments)
bug in xyz software
There's a bug in xyz software. I know it's there, I know it can be exploited. What, are you going to sue me for saying that? Ridiculous.
Posted by tim__w (14 comments)
Compensation for discovery.
As for compensation for discovering a security bug: perhaps they could use previous events to determine a reasonable formula for figuring the impact a bug might have if not addressed, then offer a small percentage of that as a reward. If it is too little to drive the necessary research, then it is too small. Companies and many private parties are already paying for it in the price of anti-virus programs and the like. That cost might drop if such a system worked.


Gregory D. MELLOTT
Posted by gdmellott (28 comments)
