Me: You mentioned that you didn't expect to discover this particular vulnerability, the DNS vulnerability. What goes through your mind when you hit upon something that you think might be a vulnerability?
Dan Kaminsky: If you look at a lot of my research, I'm generally looking for interesting capabilities that are within a system. So really, what goes through my mind when I find some new interesting capability is, unfortunately, the reality of things: I can do X. Is X bad? And, to be concrete, does X cause a traversal of a security boundary?
These days we have some remarkably complicated security policies floating around the Web and e-mail and various other things. There are a lot of things we want Web browsers to do, and there are a lot of things we absolutely need Web browsers not to do. And it's the same whether it's Web browsers, Office clients, or DNS servers: everything has capabilities, everything has more capabilities than it was originally designed to have. Those capabilities are problematic.
Me: There's no hard-and-fast rule for what to do once you discover a vulnerability. I know some people have discussed the model where a researcher reports directly to the vendor and then sits on it for 90 days or so. Where do you fall on public disclosure of vulnerabilities, or contacting vendors, or all of the above?
Kaminsky: I've spent my entire career in corporate environments. I didn't even start out as a security engineer. I...really started out doing graphics. Then I got a job in networking. I got into security because I was offered a really boring job and, you know, it's easy to break code. But I've always had the freedom to develop new and interesting software. And the reason, or the big source of that freedom, is the complete lack of liability in computer software. This is not a bad thing. This is actually an important thing, critical to the ability to innovate. Really.
And this is going to sound a little strange, because it ultimately goes back to the purpose of vulnerability research. There are two kinds of human creativity: there is the kind that kills people and there is the kind that doesn't. Pretty much anything physically manufactured can kill you. Humans make chemicals, and there's lots of stuff that can have chemicals in it. I mean, you know, you look up: the ceiling can fall on you, the paint on the ceiling can have fumes, the keyboard you're typing on can have fumes, the phone you're talking on can too. Pretty much anything physical has product liability associated with it: if anything goes wrong, then you're going to have to pay.
And then there is the Hollywood movie. It can be terrible, it can be awful, it can be atrocious, it can be the worst thing ever put to celluloid, and you're not going to die, so there is no liability for a terrible movie. Software, believe it or not, is actually in the second category. Yes, software can kill people, but it's insanely rare. More people are killed by crashing windows than by Windows crashing. So we don't have liability associated with software, and as such it can move far, far faster, because the cost of failure is much lower.
But that does not mean the cost of failing is zero. We have this interesting third category where you're not going to die, but you might go broke. You might get everything stolen. You might get everything lost. And with this third category, if we're not going to have liability, we need some other mechanism. I'm not going to name names, but there are a lot of products out there; I don't know what decade they could ship in if product liability were in place, but it wouldn't be this one, and it probably wouldn't be the next one.
We need some way to at least differentiate good software from bad, and that, interestingly, is what independent vulnerability research allows. It allows the market to monitor the quality of software and to figure out what's safe and what isn't. That creates positive pressure toward writing safer code. And that is how, right now, in the absence of a liability model, we can still get more secure code. It's not cheap, but it's actually happening.
Me: So you have this body of independent researchers looking at code, doing a service. They find something, and there are two or three different ways they can go with it. Some people sell it to a company that says, "We'll act as your intermediary and we'll contact the vendor for you." Others approach the vendor directly and say, "I'm going to work with you on it." And still others immediately post it to the Internet and say, "Look, I found this."
Kaminsky: Well, there is also a fourth, where you sell it to someone who is not going to provide it to the vendor. And I do not expect that market to ever be a legitimized market. But focusing on the three you just named: there's no problem selling an exploit to someone who is going to provide it to the vendor for free. I don't necessarily understand why companies are in the position of paying for vulnerabilities that they then provide to the vendor for free, but I'm not going to complain about it.
I just spent the last six months doing a lot of work that is not particularly technical--it was three days to find the hole and six months to get everything into the patch. That's a lot of work. But if companies like the Zero Day Initiative want to be in that position, I think they're great; they're often a force for good in our industry. And I have no problem with somebody going ahead and doing that work with each vendor directly.
So it's a group thing as well, and at the end of the day, bug finding is communicating. Anyone who's ever worked in software knows it's not enough to file a bug and be done with it. The tester and the programmer often need to work through the code together to actually figure out what the problem is. They've all got the same goal: ship good stuff, launch good stuff.
I'm not a big fan of just dropping things out onto the open Internet. I mean, what we're looking for is more secure code. We're looking for people to be able to protect themselves. Yes, when you drop it on the Internet, everyone knows, including the vendor, and that is way better than some alternative models.
But I prefer there to be some kind of responsible-disclosure timeline. Now, it can't be forever, because if it's forever, nothing ever gets patched and the market never gets the information it needs to know what's safe and what isn't. But people need some time. I mean, we're engineers here, and engineering takes time. It takes effort, it takes testing, it takes work to get something out the door.
Me: So walk me through it. I know I wasn't going to ask directly about the DNS thing, but you discovered it, say, early in the year, and then you had a meeting with the vendors, and then you arrived at a date when you were going to announce it; you had a timetable. Is that about right?
Kaminsky: That's about right. I mean, this is an unusual situation in that you don't usually have one bug show up in every single implementation. And in this case it was not every single implementation: djbdns is not vulnerable, Bert Hubert's PowerDNS is not vulnerable. A lot of the newer DNS servers have always been following this guidance. But the stuff that everyone was running--BIND 8, BIND 9, Microsoft DNS, Cisco IOS--these things were vulnerable. So what happened was, and I don't want to take all the credit for this...Paul Vixie is a machine. I mean, that guy has been doing DNS since I was in grade school.
So I went to Vixie, because I had been doing DNS research for a long time, and said, "We've got a problem. We've got to figure out what to do here." And we eventually decided that we would have to hold a summit, and that it needed to be done in person.
Microsoft was very gracious; they agreed to host it, and we had 16 people from all the vendors, people who have been doing DNS for years, fly in. We sat down and did three things. First, we looked at the problem: did we actually understand it? Second, we tried to figure out which fix would best protect the customer. There are many fixes possible, and once the bug is out, there are many more fixes possible.
And we ended up choosing a fix that would survive reverse engineering as long as possible--not forever, but as long as possible. And finally, we all agreed that the nature of this bug was such that we needed to do a simultaneous patch. We needed to all do it at the same time...
Me: So how long...?
Kaminsky: ...and that's what we got.
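The coordinated patch the vendors shipped in July 2008 was UDP source-port randomization, which multiplies the amount an off-path attacker must guess. The interview doesn't spell out the numbers, so the figures below are an illustrative back-of-the-envelope sketch, not quotes from Kaminsky:

```python
# Rough entropy math for blind DNS response spoofing.
# Bit counts are illustrative assumptions: the DNS transaction ID is a
# fixed 16-bit field, and randomizing the UDP source port can add up to
# roughly 16 more bits of entropy.

TXID_BITS = 16
PORT_BITS = 16

def guess_space(bits: int) -> int:
    """Number of equally likely values a blind attacker must guess per query."""
    return 2 ** bits

before = guess_space(TXID_BITS)               # TXID alone
after = guess_space(TXID_BITS + PORT_BITS)    # TXID plus randomized port

print(before, after)  # 65536 4294967296
```

With only the transaction ID, a forged response has about a 1-in-65,536 chance of matching per attempt; adding a randomized source port pushes the space to roughly 4 billion, buying time rather than fixing the protocol outright, which matches the "survive reverse engineering as long as possible" framing above.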
Me: How long between the discovery and when you actually had these people in a room together discussing it?
Kaminsky: It was--between contacting Vixie and us actually going ahead and getting this together--no more than a few weeks. We spent some time going back and forth over e-mail, and, you know, e-mail is great, e-mail is wonderful, but sometimes you've just got to get bodies in the room.
Me: So this was on March 31, 2008? And at some point on that day or thereafter you decided that you would have a simultaneous release on a particular date?
Me: So going back to the whole disclosure thing: what if there was an independent body that certified that software met certain standards, sort of a good housekeeping seal of approval? Would something like that work in today's market?
Kaminsky: I think people underestimate the degree to which new vulnerabilities, and new classes of vulnerabilities, are found. A Good Housekeeping seal...I mean, there are entire classes of low-grade bugs. I'd love to see something that said, "This software was debugged." I mean, seriously, that would be fantastic. People shouldn't think it's a replacement for independent security research, but yes, at the end of the day we should have better ways for the market to differentiate good code from bad.
Me: With regard to DNS again: what have you learned that you would pass on to other security researchers about the process of bringing multiple vendors together? Anything you would do differently?
Kaminsky: So this is an unusual lesson, but I'm going to bring it up anyway, because we only just managed to deal with it at the last moment. Unless it's really, really, really, really important, and you're willing to stake your entire reputation on it, do not do a huge big press thing with no technical details. And if it is so important that you must do a big press thing with no technical details to get people to patch: no, it's not enough to have the entire vendor community behind you. No, it's not enough to have the entire DNS community behind you. You've got to go ahead and get some of the other security engineers in the field, people who are known, and they've got to know everything, and they've got to go out with you.
I went out with pretty much only the DNS people and only the vendors, and didn't get other hackers in on it. You've got to realize, people in the security community need to be very, very skeptical. They have to be, because it's so easy to make stuff up. It's so easy. You can just say anything and the press will believe you. I mean, that's the honest truth. So there is a lot of skepticism, and really, unless it's the most important thing you've ever done, go out with technical details. And if you can't, go out with other hackers. Even if you're the guy who does DNS, don't go out alone. That's probably the biggest thing I learned, above and beyond the standard stuff: you know, talk to the vendors, synchronize times, blah, blah, blah.
Me: I think you spent the last 48 hours working it out with your fellow security researchers?
Kaminsky: Oh! Yeah, yes. There was a wildly entertaining phone call with Thomas Ptacek.
Me: Yes, so...
Kaminsky: I'm not even joking, it was an absolute blast. And having to keep this secret for the entire year, actually being able to speak it aloud is kind of cool.
Me: You realize that that one session at Black Hat is going to be full of people waiting to hear what you have to say.
Kaminsky: Don't remind me. I got my stuff. I got my toys.
Me: Well, I've seen you speak before. I know you will do really well. I appreciate the time you took to speak with me today and...
Kaminsky: No worries, just happy to help.
Me: All right Dan, take care.