It’s a challenge to protect an organization’s valuable assets against malware. And as malware increases in magnitude and sophistication, it seems that IT security is a journey that really doesn’t have a final destination. Do you find yourself looking over your shoulder and wondering about new threat vectors, where the next attack is coming from, and whether your current safeguards are sufficient?
We really want to know: what is your biggest malware challenge today? We aren't asking about specific forms of malware; assume there are plenty of bad things you need to protect your organization from. We're interested in how you do so and in the challenges you face along the way. Do you use a defense-in-depth strategy to protect clients, servers, and the network edge? If so, what has been the most challenging part of implementing it? What keeps you awake at night worrying about whether your strategy is sufficient to keep the fox out of the chicken coop?
The Solution Accelerators for Security and Compliance team is starting a new project with a broad focus: Malware Defense-In-Depth. We’re looking for your input to help us determine where to focus our attention. Tell us what would help you the most in developing, implementing, and maintaining a malware defense-in-depth strategy. Tell us what tools you need. Tell us what guidance would be most beneficial. We are very interested in your ideas on this topic. Malware isn’t going to go away, so how can we best help you to defend against it? If you have an opinion on this topic, please post your comments.
I'd like my antivirus software to tell me the difference between a virus that was blocked before infection (say, when your web browser downloads a script that exploits an old vuln you're already patched against) and a successful infection that was discovered only a month later, when a new antivirus update was applied. The actions you need to take afterwards are very different.
The aim is to attain survivability not only against malware, but also against hardware failures, natural disasters, and the like.
1) Separate data from write traffic
File corruption happens during disk writes; the fewer the writes, the less the risk exposure.
Obviously, data has to be written to disk - but it helps if that disk space is not also shared with incessant temp, Temporary Internet Files (TIF), paging, and similar traffic.
So step 1 is to get data off C: into a separate file system that is nearby, for performance reasons.
Step 2 is to auto-backup this data to a seldom-used file system that is far away, trading performance for survivability, since writes and even head overfly will be rarer. I keep a FIFO of 5 .ZIPs of the data set, updated daily.
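The daily FIFO-of-5 ZIP backup described above can be sketched in a few lines of Python. The retention count of 5 and the daily dated archive come from the comment; the directory layout and file naming are assumptions for illustration.

```python
import zipfile
from datetime import date
from pathlib import Path

def snapshot(data_dir: Path, backup_dir: Path, keep: int = 5) -> Path:
    """Zip the whole data set into a dated archive, then prune to a FIFO of `keep`."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    archive = backup_dir / f"data-{date.today():%Y%m%d}.zip"
    with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(data_dir.rglob("*")):
            if path.is_file():
                zf.write(path, path.relative_to(data_dir))
    # FIFO pruning: the date-stamped names sort chronologically,
    # so everything before the last `keep` archives is the oldest.
    for old in sorted(backup_dir.glob("data-*.zip"))[:-keep]:
        old.unlink()
    return archive
```

Pointing `backup_dir` at the distant, seldom-used file system gives the survivability trade-off described above: the backup volume is only touched once a day.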
2) Separate data from risky material
I define "core data" as that which is unique to the user, small enough to easily manage, and that cannot act as code.
Incoming material (email and IM attachments, Bluetooth or camera transfers, peer file sharing, saved downloads) is NOT data and should be considered high-risk for malware. It has no place in the core data set.
Code, even known clean, is not safe to include within the core data set either, because it can be infected by generic intrafile viruses. Limited user rights are irrelevant here, as even the most limited rights allow user data to be edited, and thus trashed or infected.
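The "core data" test above (small enough to manage, cannot act as code) can be sketched as a simple file filter. The extension list and size threshold below are illustrative assumptions, not from the comment.

```python
from pathlib import Path

# File types that can act as code and so don't belong in the core data set.
# Illustrative, not exhaustive.
CODE_EXTS = {".exe", ".dll", ".scr", ".com", ".bat", ".cmd",
             ".js", ".vbs", ".hta", ".pif"}

def is_core_data_candidate(path: Path, max_bytes: int = 10 * 1024 * 1024) -> bool:
    """True if a file plausibly belongs in the core data set:
    not a type that can act as code, and small enough to manage easily."""
    if path.suffix.lower() in CODE_EXTS:
        return False
    return path.stat().st_size <= max_bytes
```

A sweep with this predicate over a Documents tree would flag material that, by the definition above, should live elsewhere.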
3) Treat hi-risk material with fire-tongs
Within the subtree that holds incoming material, handling should be as safe as possible. Alas, Windows currently has no concept of this as a set of shell folder behaviors, so there's not much one can do other than point a large tier of on-demand av scanners at this subtree, run that overnight, and apply a "wait until tomorrow" policy.
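The "wait until tomorrow" policy can be sketched as an aging check over the incoming subtree: a file is only considered releasable once it has sat through at least one overnight scan with fresh signatures. The 24-hour holding period is an assumption standing in for "tomorrow".

```python
import time
from pathlib import Path
from typing import List, Optional

QUARANTINE_HOURS = 24  # "wait until tomorrow": assumed holding period

def releasable(incoming: Path, now: Optional[float] = None) -> List[Path]:
    """Files in the incoming subtree that have aged past the holding period,
    and so have been covered by at least one overnight on-demand scan."""
    now = time.time() if now is None else now
    cutoff = now - QUARANTINE_HOURS * 3600
    return [p for p in sorted(incoming.rglob("*"))
            if p.is_file() and p.stat().st_mtime <= cutoff]
```

Anything not in the returned list stays hands-off until the next overnight pass.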
The biggest challenge is that MS currently does not share the awareness and goals behind the methods I use, so the platform's design does little to support applying them.
There's hardly any awareness of the need to separate data from incoming material; the first good sign has been Vista's Downloads shell folder, distinct from Documents. We still have attachments hidden in mail stores, advice to save downloaded .EXEs within Documents to avoid System Restore effects, inappropriate default download locations for IE, etc.
This is exacerbated by 3rd-party vendors, who tend to dump everything in the user's Documents space - e.g. 500 MB+ of The Sims 2 game data, which can render the data set too large to back up effectively.
The awareness of code exploitability does not permeate code or platform design, which still tends to be "one big lump, everything on by default". Bundled subsystems can be disabled but not excised, file types are allowed to sprawl, and unsolicited groping of content is becoming more common.
Bad defaults, inability to control the new user account template, inability to apply settings across user accounts, hardwired locations, closed shell folder architecture, and the unrelocatable bulking up of C: by Windows updates etc. are all obstacles to effective system management.
rootkits and keyloggers