
Sam Antar, a convicted white-collar criminal, says clearly and explicitly that trusting is stupid:
President Ronald Reagan said:
“Trust, but verify.”
As a convicted felon, I say:
“Don’t trust, just verify.”
“Verify, verify, verify.”
As a criminal, I considered people’s humanity as a weakness to be exploited.
The inclination to trust first and verify later gave me the upper hand.
The criminal always has the initiative.
While you initially trust us, we work on ways to solidify your trust before you verify.
Hopefully, you will never verify.
However, if you do verify, we will have corroded your skepticism to a large degree.
A word of advice from this convicted felon to the capital markets, securities analysts, journalists, the accounting profession, investors, and others:
The word “trust” is a professional hazard you can leave at home before you go to work.
A criminal says “Don’t trust.” Yet computer security experts talk about a “trust” model. When are we going to move beyond trust to verification? A Google search finds endless examples of sites reassuring users that they are “trustworthy”. It should be no surprise, then, that computer users are accustomed to just entering their password or clicking OK when a security dialog comes up. Users are asked to always trust, without any understanding. What does it mean when a certificate cannot be authenticated?
Furthermore, we now have “trusted” applications getting computer owners into trouble.
For example:
- Andre Vrignaud is such a victim.
Comcast cut off broadband access to Andre Vrignaud. A month earlier, Vrignaud said he had a “polite but irritated” conversation with Comcast’s Customer Security Department about how much data he was using. He told them he had no idea how he used so much and wondered if his roommates may have hit the limit because they watched Netflix HD streaming movies and listened to Pandora’s internet-streamed music radio.
Why can’t Vrignaud easily limit his bandwidth usage on his end?
Once again, a Google search reveals how important it is to be able to control and manage bandwidth at the application level.
- How about the case of Matthew Bandy? He is an innocent victim, like many others, framed by a poor computer security model.
Until recently [story dated Tuesday, January 16, 2007], the 16-year-old Arizona boy faced life imprisonment for possessing child pornography; each of the nine images on his computer carried a possible 10-year sentence.
The caution: Your computer could be storing and distributing child pornography without your knowledge. It could be what is called “a zombie.” A virus, worm or “bot” may have almost invisibly infected your operating system, perhaps when you opened an email attachment or clicked on the “wrong” (not necessarily adult) website.
The “infection” allows another person to remotely access your hard drive. Often, the third party tries to capture financial information such as bank account numbers. Often, he stores data on the hard drive and uses your computer to distribute spam, including pornography.

Benjamin Edelman, a computer security expert, indicates how quickly a computer can become infected: “I recently tested a Windows Media video file…On a fresh test computer, I pressed Yes once to allow the installation. My computer quickly became contaminated…All told, the infection added 58 folders, 786 files, and an incredible 11,915 registry entries to my test computer. Not one of these programs had showed me any license agreement, nor had I consented to their installation on my computer.”
The Bandys’ two-year nightmare might be winding down, but the family has been financially ruined by over $250,000 in legal costs.
Instead of trust, follow Sam’s advice: “Don’t trust; verify, verify, verify.” No application should be given blanket “trust,” only conditional trust. An application should not even be allowed to ask for blanket trust.
Instead, the application must ask for each permission and indicate why it is asking for that permission:
- write to a specific directory
- send data to an internet site
- receive data from an internet site
- log all data sent or received

Any data the application wants to send or receive needs explicit permission from the user.
The user must be able to selectively deny or condition a granted permission at any time (not just when an application is starting):
- Granted for 10 minutes
- Data sent/received is logged
- Data transmission rate is no more than 1 MB/sec
- Data transmitted is no more than 10 MB/month
- Data stored for only 10 days
- Data stored is no more than 10 MB
- CPU usage is capped at a percentage.
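The conditions above can be sketched as a grant object that is checked on every use, not just at startup. This is an illustrative toy (the `Grant` class is invented for this sketch), showing a time limit, a byte quota, and mid-run revocation:

```python
# Hypothetical sketch of a conditional grant: time-limited, quota-limited,
# and revocable at any time. Class and method names are illustrative.
import time

class Grant:
    def __init__(self, duration_s: float, byte_quota: int):
        self.expires = time.monotonic() + duration_s  # e.g. granted for 10 minutes
        self.quota = byte_quota                       # e.g. no more than 10 MB
        self.used = 0
        self.revoked = False                          # the user can revoke mid-run

    def revoke(self):
        self.revoked = True

    def send(self, data: bytes) -> bool:
        """Permit the transfer only while every condition still holds."""
        if self.revoked or time.monotonic() > self.expires:
            return False
        if self.used + len(data) > self.quota:
            return False
        self.used += len(data)
        return True

g = Grant(duration_s=600, byte_quota=10)
print(g.send(b"12345"))    # True: within quota
print(g.send(b"1234567"))  # False: would exceed the 10-byte quota
g.revoke()
print(g.send(b"1"))        # False: the user revoked the grant mid-run
```

The key design point is that every condition is re-evaluated on each operation, so a revocation or an expired time window takes effect immediately rather than at the next restart.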
It is up to the application to behave well when a permission is conditioned or denied. And if it doesn’t like the permissions, then it shouldn’t run.
The application is a guest and needs to respect the rules as a guest.
Trust. is. stupid.
Allow me to be a security expert pretending to be an ID proponent.
All these scientist types are talking about the “theory of evolution.” And they’re forcing it down our children’s throats, as if it were truth. But it’s just a theory! It says so in the name. Well, we have our own theory; why can’t that be taught just as well?
See the problem with that argument? It’s actually a fairly subtle problem, specifically that “theory” means something very different when a scientist says it vs. when a layperson says it. The entire argument is based on a misinterpretation of that one word.
Your argument has the same problem. When crypto experts discuss “trust”, they mean something completely different from the layperson’s definition of trust, and far closer to your goal. The entire trust model of PKI is the following: if you trust a certificate authority to verify identities, then a certificate signed by that authority tells you that whoever holds the matching private key is the party named in the certificate. Nothing more.
If you don’t see how that’s different from saying “A valid certificate implies that the site that uses it is trustworthy”, read it again. Heck, notice that every time that paragraph used the word “trust”, it applied to one party performing one particular action.
Now suppose that Verisign hands you a certificate saying “The one who holds this certificate owns sworddance.com”. Does this mean that I should trust the information that I obtain on sworddance.com to be correct? No. Heck, unless I trust Verisign to verify identities, I have no reason to even believe sworddance.com is yours.
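That narrow semantics can be made concrete with a toy model. Here an HMAC stands in for the CA’s real public-key signature (a deliberate simplification); the names `issue_cert`/`verify_cert` are invented for the sketch. What verification proves is the binding between a name and a key, nothing about the site’s content:

```python
# Toy model of what a CA signature attests. An HMAC stands in for the CA's
# real public-key signature; the point is the semantics, not the crypto.
import hashlib
import hmac

CA_KEY = b"verisign-private-key"  # known only to the (hypothetical) CA

def issue_cert(domain: str, site_pubkey: bytes) -> bytes:
    """The CA signs the binding (domain, key) -- nothing about the site's content."""
    return hmac.new(CA_KEY, domain.encode() + site_pubkey, hashlib.sha256).digest()

def verify_cert(domain: str, site_pubkey: bytes, sig: bytes) -> bool:
    """Verification answers exactly one question: does this key belong to this name?"""
    return hmac.compare_digest(issue_cert(domain, site_pubkey), sig)

key = b"sworddance-public-key"
cert = issue_cert("sworddance.com", key)

print(verify_cert("sworddance.com", key, cert))    # True: the binding checks out
print(verify_cert("evil.example.com", key, cert))  # False: the cert doesn't transfer
# Neither result says the *information* on sworddance.com is correct.
```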
That said, you do have a good point. The threat model (which has nothing to do with “trust”, by the way) for applications is hopelessly outdated. It came from the early ’60s, when the only way for a program to appear on a computer was for your computer manufacturer to send out the Field Circus to put it there, or for you to put it there yourself. Then came the ARPANET, which begat the Internet. Now, we need a very different threat model, but AFAIK, we haven’t figured out what that threat model is.
Once we figure that out, capabilities (the mechanism that you’re suggesting) may in fact be the right solution. But you can’t even start thinking about solutions until you know what the problem is.
On a tangential note, Linux actually does implement capabilities. Not to the full extent that you are looking for, but enough of it that you can implement the rest in userspace. Interestingly, they aren’t actually used that much, because they are far too complicated to use. The UI is too unwieldy, and when I sit down in front of my computer, my goal is not usually to figure out how to give some process least privilege. My goal is to get something done, and if the security system gets in the way, well, security be damned. (Footnote: I actually spent two days trying to figure out how to delegate the authority to administer the network to a QEMU process I was running, without giving it blanket filesystem access or anything. Two days. In the end, I could not get the damned thing to work, so the QEMU process had to run as root.)
Also, note that, even if all applications use conditional trust (aka capabilities), there’s one application that, by definition, has to be given blanket trust: your OS (or rather, whatever bit of your OS handles the security model). Put differently, you have to have permissions in order to delegate them.
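That delegation rule is the essence of the capability model and is easy to sketch. The `Capability` class below is illustrative, not any real OS API: a holder can hand out only an attenuated subset of what it already holds, and the root of the chain (the OS’s security kernel) necessarily starts with everything.

```python
# Sketch of capability delegation: you can only hand out (an attenuated copy
# of) a permission you already hold. Names are illustrative.
class Capability:
    def __init__(self, actions: frozenset):
        self.actions = actions

    def allows(self, action: str) -> bool:
        return action in self.actions

    def delegate(self, actions: set) -> "Capability":
        """Delegation can only narrow: you cannot grant what you do not hold."""
        if not actions <= self.actions:
            raise PermissionError("cannot delegate a permission you lack")
        return Capability(frozenset(actions))

# The OS's security kernel starts with blanket trust, by definition...
root = Capability(frozenset({"read", "write", "network"}))
# ...and hands each application an attenuated subset.
app = root.delegate({"read", "network"})

print(app.allows("network"))  # True
print(app.allows("write"))    # False
try:
    app.delegate({"write"})   # the app cannot mint permissions it lacks
except PermissionError:
    print("denied")
```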
So, no. Trust is not the problem. Fighting against trust will get you nowhere, because we’ve already learned that blanket trust is bad (except where it’s absolutely necessary). Instead, you should be fighting against outdated threat models, and the fact that all of our security UI is unusable.