Once again, it seems misguided reporters have appropriated a technical term and are misusing it in ways that confuse the field. "Hacker" was not the first term they ruined, but it is still the one that irks me the most. The primary definition of "Hacker" is, of course, "a person who creates and modifies computer software and computer hardware, including computer programming, administration, and security-related items," according to Wikipedia.
Now it appears that reporters unwilling to actually understand the terminology they use are in the process of destroying the term "zero-day." We have been reading over the past few days about a "zero-day" vulnerability in Symantec Anti-virus, which Marc Maiffret, probably to protect the world in his own trademark way, made public. Unfortunately (or maybe fortunately), this is not a zero-day, unless zero-day has somehow been redefined to mean "new."
Zero-day, as it pertains to vulnerabilities, means a vulnerability that was exploited before anyone, other than the criminal using it, knew about it. This definition is perfectly in line with the definition of zero-day as something for which information is not publicly available. By definition, the fact that Marc was nice enough to alert the world to Symantec's flaw means that it is not a zero-day, unless Marc went and exploited it before he advised the world of the flaw, and we have no indication that he did that.
It may sound like a rant, and of course it is, but it is really important that we keep these terms straight. A zero-day vulnerability is a security professional's worst nightmare. By diluting the term to refer to any vulnerability for which a patch is not available, we dilute the language of our field and lose a very important definition that we need to be able to discuss without ambiguity. It is unfortunate when reporters write about something without bothering to understand the terms of the field they report on. They give a bad name to the dedicated reporters who take care and work hard to do a public service in understanding and documenting a field that is important to illuminate. Inaccurate use of important terminology muddies the waters for those of us who are charged with actually taking the field forward.
We do need a term for a vulnerability, like the current Symantec one, which has been publicly announced but for which a patch is not yet available. I have in the past used "0.5-day" to describe such an issue, but that term does not yet seem to stick.
Didn't Marc have to "exploit" it to confirm code execution? Sounds like zero-day to me.
Marc did not HAVE to exploit anything, but he of course did demonstrate a working exploit. However, the fact that he announced the vulnerability and his private exploit is what makes this not a zero-day. Had he simply released the exploit into the wild or gone and exploited some innocent user instead of merely showing off, then it would have been a real zero-day.
To clarify the first part a bit, demonstrating that a vulnerability is exploitable is unnecessary, or at least should be. A buffer overflow is always at least a stability bug, and is typically exploitable for privilege escalation. In addition, it is virtually always much easier to fix than to exploit. Arguing about exploitability is a waste of time that could be better spent fixing the problem (which often takes a matter of minutes) and testing the fix (which takes a lot longer).
Another - possibly even more egregious - example is the slew of media articles this week about a purported new "zero-day" SMB vulnerability in Windows 2000. In fact, the vulnerability it exploits was patched by MS05-011 well over a year ago. See the MSRC statement here: http://blogs.technet.com/msrc/archive/2006/05/25/430278.aspx
> a term for a vulnerability, ...
> which has been publicly announced,
> but for which a patch is not yet available
How about "no-patch vulnerability"?
I thought about "unpatched" but that overlaps with "not yet patched in our installation".
In theory "no-patch" overlaps with "not-yet-announced" vulnerabilities (including "known-in-secret" and "undiscovered") but we've managed without terms for those so far.....
How do you pronounce "0.5-day"?
"zero dot five day" or "half day"?
Jesper writes: "it is really important that we keep these terms straight."
I'd instead argue that it's really important that we don't confuse fuzzy slang and buzzwords with crisply defined jargon, and that if precision is required we stick to using existing dictionary terms (eg "Unknown", "Unannounced", "Unpatched", "Patched") rather than relying on every English-speaking culture and every security subculture throughout the world to have the same definition of the slang word.
Like "Hacker", "0-day" is slang, rather than jargon, and like all slang, its definition varies between subcultures.
However, "zero day exploit" got its name from the action of creating an exploit for security vulnerabilities the day the security bulletin goes out (from the warez scene where a 0-day is a product cracked and rereleased on the day of its formal release).
So, 0-day implies a public release, at least by derivation. This is supported by common usage.
Admittedly it's sometimes also used to describe pre-announce exploits, but I'd need proof of Wikipedia's claim that it "usually" means this since Google (and my own experience) appear to contradict this.
Dewi, I think you are right. However, every field has its own terminology. What may be slang to a sub-culture is important terminology to a field. To ensure that we can communicate accurately in a field, we really need to maintain the integrity of the language of that field. The medical field has its jargon, or terminology if you will. IT has a different one, and infosec has additional terms. If we lose the precise definitions of those terms, we lose the ability to talk accurately about what is happening in the field. Several years ago Blackwell Publishing started publishing the Encyclopaedia of Management to document the language of the management disciplines, including Management Information Systems. The terminology of IT belonged in there, but unfortunately, the encyclopaedia was incomplete. It would be a good idea to develop one that is more complete for infosec.
Didier, I didn't say the term was good! :-) I usually call it "zero point five day" but that usually requires explanation. If anyone has a better term, I'd welcome it.
Once again, it seems misguided reporters have appropriated a technical term and are misusing it in ways...
I hear you on this one, Jesper - sadly, when lexicographers look at new words to document (note for the n00bs - dictionaries document, and don't define, usage), they look to "professional journalists" as prime sources.
So, when you look at a dictionary in the future, "zero-day" will mean the same as "exploitable", and "virus" will mean "unwanted code".
Search for "Word zero-day virus" and you'll see this already happening.
As for "Hacker", I had to define "Hacking" for Colin the other day, and chose to define it as "making a system do something outside the range of what its designer intended it to do". I'm still unwilling to cede the concept of "good hack" to the illiterate.
"The medical field has its jargon, or terminology if you will. IT has a different one, and infosec has additional terms."
I tried to differentiate between jargon and slang, but perhaps that was too woolly. Please consider anywhere I wrote "jargon" to read "techspeak", per http://www.catb.org/jargon/html/distinctions.html
When someone uses "hard drive" where they mean "base unit", they are clearly and blatantly wrong, though this is a common mistake. This is because "hard drive" is techspeak.
"0-day" is simply not in this category. Not in the IT, infosec, hacking, cracking, or calendar-making cultures or fields. Nowhere. It is slang, and should not be used where clarity is desired, any more than management-speak should be used.
I've seen it used (mostly by infosec professionals) to describe a vulnerability that:
1) is known only to blackhats.
2) is known to the company responsible, but not publicly announced.
3) has been publicly announced today.
4) was announced at some point, but not yet patched.
5) has been patched today, but the patch was not widely taken up.
6) has had an exploit released for it today.
7) had an exploit released the day it was announced.
8) had an exploit released the day it was patched.
9) is old, but unknown to the organisation being targeted.
A) something else.
I'd be interested in how many people reading this blog (who, by and large, I suspect to be infosec professionals) would agree that only item 1 above is valid. I'd rather expect only item 9 not to get any votes.
I'd also be interested to see what you'd get, if you asked for a show of hands at an infosec talk, and told them they could raise a hand to as many as they felt defined what it meant to them.
The results, I suspect, will differ between subcultures (like slang) rather than between fields (like techspeak). In Redmond WA, there may be a general agreement with your view that item 1 is the only possible meaning. In London UK, I don't believe you'd find such a consensus.
security professionals who insist that terms have specific and unchangeable definitions should go be grammar-nazi teachers.
"zero-day" or "0-day" came from the warez world and was brought into cracker and information security culture to explain the newness of exploits or security-causal bugs.
"0-day" in warez meant that you released either:
a) the usable software, 'point-click-run'
b) the usable software with appropriate key/serial or generator
c) a crack for unusable software, which didn't necessarily need to come with the software itself
warez software that is released and requires a serial/key/crack is not actually technically warez, nor is it "0-day".
hence, "0day" in security-causal bugs means:
a) the identification of a bug that has existed in software, undiscovered by anyone including its original programmer(s). usually this is followed by a knowledge release to the general public, or to a smaller list of specific people
b) a POC or "proof-of-concept" that demonstrates how the exploitation of said "a) identification" [above] works. this does not mean anything malicious, but vendors have a way of saying "a) we identified bad shit" and then not releasing any proof that such "bad shit" actually exists, or how it works. a POC is often poorly referred to as "broken code", when it should be referred to as "purposefully broken code, meant to prove a point and show those technical enough to understand how the security-causal bug works, without allowing script kiddies to take down the internet". most security professionals and system administrators (well.. the ones that know what they are doing and deserve the title) welcome POC, so that they can defend their systems/networks
c) an exploit. this precludes anything but working code. this is a script kiddie's dream and a security professional's nightmare
note that "0day" may have different levels of interpretation. for example, an external group may release "0day" information/POC/exploit to a vendor regarding their product. this is certainly "0day".
but then, the vendor may produce their own "0day" information/POC/exploit on their own, with or without the help of the aforementioned "external group".
also, the vendor may release "0day" information/POC/exploit to a specific group or groups, before going completely public with the "0day".
finally (and this is the real deal), the information/POC/exploit goes public. this could be years after the "external group" "0day", but it's still called "0day" simply because it's "0day" to everyone else.
i hope that's confusing enough for you.
Jesper, I agree with your point on precise terminology and shake your hand. I can see how Dewi helps keep an open-minded perspective between different groups, or whatever one wants to call them. I will give everyone a good example of precise terminology.
A person at home cannot get on the internet. The home user calls their ISP and tells them the internet doesn't appear to be working. The home user tells the ISP their router doesn't appear to be getting the IP address from the cable modem. After 20 minutes of troubleshooting, the ISP tech finds out it's not a router at all: the home user has plugged in a 4-port USB hub. Now, you tell me how important precise terminology is.