In security circles we tend to view trust as a binary property - either we trust someone or we don't. Our application of security controls tends to mirror this. Surely we should reflect that there are degrees of trust. I trust a very small number of intimate friends with some aspects of my personal well-being - they do the same in a reciprocal manner. I extend lesser degrees of trust to those with whom I have a lesser relationship.
I may trust those I work with to keep information confidential, but I'm sure that if many people were given sufficient incentive their trustworthiness would be degraded.
Effective security is rarely binary - it's subjective.
Classifying the value and sensitivity of information enables appropriate levels of security control to be applied. How many organisations classify information according to these principles? I've worked with some very large companies who've tried to classify data, though rarely (outside the military) has it been successful.
Trust is a binary property largely because it's easier to see it that way. Consider your web site's SSL certificate. Internet Explorer trusts it if it's signed by a trusted root, and trusts it implicitly for everything. There's no "I trust these people to play games, and these other people to handle money information"; it's all "I trust this web site to do secure web-site things". X.509 certificates do allow you to trust a certificate holder for one or several of a set of purposes. While that's not quite the "trusted up to $100,000" that might be appropriate for purchasers, or the "trusted to sign code, as long as it doesn't do any network activity" that might be appropriate in other environments, it's not an entirely binary relationship either. The more you fuzz this binary nature, the harder the concepts become for people whose primary job is not to ponder the imponderables of security.
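To make the contrast concrete, here's a minimal Python sketch of purpose-scoped trust in the spirit of X.509 key usage, versus the IE-style "trust for everything" model. The trust store, the domain names, and the purpose strings are all hypothetical, purely for illustration - this is not a real PKI API.

```python
# Hypothetical trust store mapping certificate holders to the purposes
# we trust them for; "*" stands for the all-or-nothing "trust for
# everything" that browsers traditionally grant a root-signed cert.
TRUST_STORE = {
    "games.example": {"play-games"},
    "bank.example": {"handle-money"},
    "root-signed.example": {"*"},
}

def is_trusted(holder: str, purpose: str) -> bool:
    """Grant trust only if the holder is approved for this purpose."""
    purposes = TRUST_STORE.get(holder, set())
    return "*" in purposes or purpose in purposes
```

Under this model, `is_trusted("games.example", "handle-money")` fails even though the certificate itself is perfectly valid - the trust relationship is scoped, not binary.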
Alun> Agreed on all counts. It's easier to "pigeon-hole" than to contemplate. I'm not trying to make life more complicated - merely to challenge perceptions, as false assumptions and over-generalisations can be worse than investing the time/effort/money needed for further insight.
I know someone whose tag line is "the worst type of security is a false sense of it".
I thoroughly agree that categorising information appropriately is a good way of getting better protection. Not only does it improve the security of sensitive data, it also reduces the cost of processing/protecting less sensitive data.
However, the Achilles' heel is how you apply the classifications. There's a big problem in the military/government community with both over- and under-classification. Over-classification tends to mean people will fail to apply relevant protection because they've seen stupid stuff classified (my favourite example is a set of log tables I have that are marked "Restricted"). Likewise, under-classification leads to potential (and actual!) breaches. There are also terminology barriers to overcome: "Top Secret" written on an advertising document means a very different thing to "Top Secret" on a disk of Government source code (and "Restricted data" has very different meanings on different sides of the Atlantic!).
It would be interesting to see something like PGP's sliding scale of trust applied to web authentication, in some sort of user-friendly GUI fashion. Netscape have gone some way in this with their whitelist/blacklist setup for websites (I guess MS could do the same thing by remotely updating the site lists for IE's zones?).
As I understand it, this is the point of CAS (Code Access Security) in the .NET framework - you can say something like "I trust this screensaver to draw things on the screen, but it's not allowed to talk to the internet or modify any files on my hard drive".
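That screensaver scenario can be sketched as a permission-set check. This is only a conceptual analogy in Python - the class and permission names are made up and bear no relation to the actual .NET CAS API - but it captures the idea of code running with a granted set of permissions and privileged actions demanding specific ones.

```python
class SecurityError(Exception):
    """Raised when code attempts an action it was not granted."""

class Sandbox:
    def __init__(self, granted: set):
        # The permissions this piece of code was granted at load time.
        self.granted = granted

    def demand(self, permission: str) -> None:
        """Raise unless the running code holds the named permission."""
        if permission not in self.granted:
            raise SecurityError(f"permission denied: {permission}")

# A screensaver trusted only to draw on the screen:
screensaver = Sandbox({"ui.draw"})
screensaver.demand("ui.draw")  # allowed

try:
    screensaver.demand("net.connect")  # never granted
except SecurityError:
    print("screensaver blocked from the network")
```

The point is the same as the comment makes: trust here is a set of capabilities, not a yes/no verdict on the whole program.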