As a general rule we trust software to varying degrees, based on its source, the reputation of its vendor, its purpose and our own personal sense of paranoia.

Some types of software we place more trust in than others because of what it is designed to do – one example is the ubiquitous “benchmark tool” which, let’s face it, can only report how well a system runs that particular benchmark; it cannot be used as a general indicator of how “über” a system is.

There is no replacement for actually measuring performance whilst putting a system through its paces in a real-world scenario, and repeating those tests to get an average rather than just the worst case (first launch, nothing cached) or the best case (everything preloaded, disk cache populated).
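To illustrate the repeat-and-average idea, here is a minimal sketch in Python (the `time_command` helper is my own illustrative name, not a real tool) – the first run usually captures the cold-cache worst case, later runs the warmed-up best case:

```python
import statistics
import subprocess
import time

def time_command(cmd, runs=5):
    """Run `cmd` several times and collect wall-clock durations.

    The first run typically reflects the worst case (cold caches);
    later runs approach the best case, so report all three views
    rather than a single misleading number.
    """
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)
        durations.append(time.perf_counter() - start)
    return {
        "worst": max(durations),
        "best": min(durations),
        "average": statistics.mean(durations),
    }
```

For example, `time_command(["myapp", "--batch-job"], runs=10)` would give a far more honest picture than any single run – though it still only measures that one workload, which is the whole point of the paragraph above.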

I am not saying “don’t use benchmarking tools”, but do take into account that they are very one-dimensional and rarely emulate real user activity – they will never be able to answer the question “how many users can I get on my Remote Desktop Services farm with X cores and Y gigs of RAM?” (mainly because no one can – no two users work the same way, so it cannot be measured with any accuracy).

 

Another type of software we place great faith in is the security tool – be it anti-virus, a firewall, anti-malware or what-have-you.

We regularly run system scans and have our resident agents checking the requests made by the user or a remote machine, and get a warm fuzzy feeling when everything comes back “green” and the rules/database/engine is reported as “up to date”.

But such tools can only be written with some kind of “black list” of known dodgy signatures or packet patterns, which makes them forever reactive and never proactive. Sure, we can use heuristics to try to spot something that looks like malicious software or traffic, but this tends to work only for variants of existing, known signatures.
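A stripped-down sketch of how such a blacklist works makes the limitation obvious. This is nothing like a real engine (which matches millions of vendor-maintained signatures, not one hash); the hash below is simply the SHA-256 of an empty file, standing in for a real signature:

```python
import hashlib

# Stand-in "blacklist": a real product ships a constantly updated
# database of signatures; this single entry is the SHA-256 of an
# empty file, used here purely for illustration.
KNOWN_BAD_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_known_malware(path):
    """Reactive check: flags a file only if its hash is already listed.

    A brand-new sample, or even a trivially modified variant, hashes
    to a value that is not on the list and sails straight through --
    hence "forever reactive and never proactive".
    """
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_BAD_SHA256
```

Change one byte of a flagged file and it no longer matches – which is exactly why vendors bolt heuristics on top, with the trade-offs described below.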

Set the system too paranoid, or use a more complex set of heuristic rules, and you can end up blocking or deleting harmless (or even essential) programs – plus they can have a much bigger impact on system resources, as operations are effectively blocked until they have been OK’d… the more that goes into each check, the longer the wait, and the slower the entire system can appear to be.

 

Then you have the case where a system has been infected by malware, and it’s only picked up when the anti-virus gets updated, is replaced by another vendor’s product, or is installed for the first time… after much rebooting, running and re-running of security tools, manual registry editing and file deletion, everything looks to be okay and the tools all give the all clear.

Now, given that you trusted a product (or several) to prevent this from occurring, and now these same programs (or at least types of program) are reporting that everything is clear… how much do you trust them?

Also, bear in mind what they are actually saying to you – “the system is clear of all malicious software that I know of, as far as I was able to determine”.

Again, due to the reactive nature of these tools, at some point there is a window of opportunity in which malicious code exists for which the signature is not yet known – and if the malicious code is subtle and quiet then it could be, for example, gathering data (financial details, passwords, whatever) to send at periodic intervals to a drop-off point… the point being it could go undetected for a long time.

So, to hammer this point home, the security software giving the all clear is only “correct” insofar as its knowledge of existing malicious software goes – all you know is that you were infected, and that the tool you rely on is now telling you that you no longer have that particular infection.

 

As a real-world example, a few years ago relatively few people had heard of rootkits, and they went undetected by every single anti-virus product on the market – they were very difficult to spot and often gave away their presence only because coding errors made systems unstable and memory dumps were created.

Back then, if virus X infected a system and dropped rootkit Y, and then the signature for X was added, the system would be “cleaned” as far as the anti-virus product was aware… and yet the rootkit would still happily be able to inspect or modify any activity on the system (including hiding files, processes and registry keys from view).

Any malware that has had a chance to execute leaves your system in an unknown state, even if you delete it – for example:
- copies of the malware may exist under different names, even using system executable names to appear legitimate at first glance
- the firewall process could have been stopped & disabled
- a firewall rule may have been added to allow anything in, while leaving the firewall running (less obvious)
- hidden network shares with full access may have been created so the machine can be accessed remotely
- registry changes could have been made to change how & when programs or services are started, or their permissions
- a payload might not be dropped until a fixed date, after X days or reboots
(the list goes on and on, this was just a quick “off the top of my head” collection)
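One partial hedge against this “unknown state” problem is an integrity baseline taken before any infection and stored off the machine. Here is a tripwire-style sketch (the function names are my own, purely illustrative) – and note that a rootkit hiding files or lying about their contents could still subvert even this:

```python
import hashlib
import os

def snapshot(directory):
    """Record a SHA-256 baseline of every file under `directory`.

    To be of any use the result must be saved somewhere the machine
    itself cannot tamper with (read-only media, another host).
    """
    baseline = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                baseline[path] = hashlib.sha256(f.read()).hexdigest()
    return baseline

def changed_files(baseline, directory):
    """Compare the directory against an earlier baseline.

    Returns (added, removed, modified) path sets -- dropped copies,
    deleted tools and tampered binaries respectively.
    """
    current = snapshot(directory)
    added = set(current) - set(baseline)
    removed = set(baseline) - set(current)
    modified = {p for p in set(current) & set(baseline)
                if current[p] != baseline[p]}
    return added, removed, modified
```

This catches renamed copies and tampered executables from the list above, but only relative to a trustworthy baseline – which is exactly why, post-infection, the format-and-reinstall advice below still stands.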

I am most definitely not saying “don’t use anti-virus products”, but if you do get infected and the virus was not caught on its attempt to write itself to disk, the only sensible course of action is:
- back up any non-executable data you deem important (documents, media files, etc.)
- ensure you have serial keys/activation codes for any software installed
- ensure you know where to download drivers for all your hardware
- format the partition, reinstall the OS and all your software, restore the backed-up data

The only person that can know for sure if their system is clean after running and then removing malware is the person that wrote it.
The rest of us can only format & reinstall to have that degree of certainty.

 

As prevention is better than cure – never use an administrator account for day-to-day use.
This mitigates the impact of running malware that has no signature to detect – the less privilege you have, the less impact malware can have, since “privilege elevation” exploits are rare and most malware relies on the user having permission to make system changes.
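A process can at least ask the OS what rights it is running with – malware it launches inherits exactly those rights, no more. A POSIX-only sketch (on Windows you would instead query whether the process token is elevated):

```python
import os

def running_with_admin_rights():
    """POSIX-only sketch: an effective UID of 0 means root.

    Any program you run -- including malware -- operates with
    exactly these rights, which is why day-to-day use under a
    limited account shrinks the damage an unknown sample can do.
    """
    return hasattr(os, "geteuid") and os.geteuid() == 0
```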

I would also recommend against the commonly-heard advice of using ‘takeown’ to change the system Access Control Lists (ACLs), and also against disabling User Account Control (UAC), as these are some of the best mitigations against malicious software.
If you feel you have to do either of these, then there is a high probability you are doing something wrong, or out of habit.

Changing system ACLs leaves the OS in an unsupported state, and there is no tool to restore them to their “out of the box” state, other than a restore or reinstall.