Nate • June 2, 2016 9:32 PM
Exasperated Programmer: "Yes... it's difficult... but it shouldn't have to be... it's POSSIBLE to design systems that aren't so difficult... why do engineers refuse to do so?"
Ergo Sum: "May I gently remind you that systems are nothing more than a collection of programs that are written by, well, programmers. While the engineers try to overcome the vulnerabilities of the software platform, it is not always possible."
@Ergo: that's the surface answer, and I suppose it's correct as far as it goes. But it doesn't answer the original question.
@Exasperated: YES. YOU ARE ASKING THE RIGHT QUESTION! PLEASE KEEP ASKING THIS!
I am exactly as frustrated as you are with just how badly flawed our system architectures are. And with how difficult it is to make programmers and system engineers and language designers understand that they need to fix it.
If the answer to "why do our systems keep getting hacked?" is "well, programmers make mistakes, you can't stop that", then that's THE WRONG ANSWER! It is simply incorrect. No two ways about it. We CAN stop these mistakes from breaking our systems! We simply choose not to, because it requires redesigning our entire systems - hardware, languages and operating systems - from scratch.
The correct question is "why do programmer mistakes lead to fatal security compromises? Why isn't the system structured in such a way that these mistakes - which we KNOW will ALWAYS happen because programmers are human - CAN'T violate the predetermined system invariants established by a proof-checked, algebraically correct lower layer? Our software is based on maths - some of it 50 to 100 years old - why have we not actually deployed the maths we have?"
And the answer is some mix of: "Because we choose not to believe that our architecture is so damaged. Because we're lazy. Because we don't want to face the cost of security breaches. Because it's someone else's problem. Because the OS-to-hardware layer is not owned by us and we're not allowed to touch it. Because we don't have the time or money, and Market Forces (tm) want us to ship dangerous junk, fast, and make it our customer's problem. Which it will inevitably become."
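To make that concrete, here is a minimal sketch of the kind of invariant I mean, in Rust (Rust is just one convenient illustration here, not the only answer; any memory-safe language makes the same point). The off-by-one bug below is exactly the kind of mistake programmers will ALWAYS make. In C it silently corrupts adjacent memory; with a bounds-checking lower layer it becomes a contained, deterministic failure:

```rust
// The bug: `<=` instead of `<`, so the loop runs one step past the end.
fn copy_all(src: &[u8], dst: &mut [u8]) {
    let mut i = 0;
    while i <= src.len() {
        // In C, the final iteration would scribble past the buffer and
        // corrupt whatever happens to live next to it. Here the lower
        // layer enforces the invariant: the access panics instead.
        dst[i] = src[i]; // panics with index-out-of-bounds when i == src.len()
        i += 1;
    }
}

fn main() {
    let src = [1u8, 2, 3];
    let mut dst = [0u8; 3];
    copy_all(&src, &mut dst); // the mistake aborts the program, cleanly
}
```

The bug still exists and still needs fixing - but it can no longer escalate into an attacker writing arbitrary memory. The invariant holds no matter what the programmer does.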
We're in roughly the position of the 18th-19th century steam engine industry, with boilers exploding every other day and killing bystanders, but we haven't yet grasped that it's our responsibility to make boilers that DON'T explode.
We CAN design high-assurance systems that don't explode - or at least don't explode in the extremely dumb ways our current software does.
We could START by, for example, applying the lessons of "functional programming" - every component is a pure function with no side effects - to the operating system itself. Remember how, in first-year programming class, you learned that "global namespaces are bad" and you should use local variables? And then over in introduction to operating systems, what do we do? We put all the data on the computer into one giant global namespace, called the filesystem.
How long have we known that local namespaces are good? Since the 1970s, I think? And yet we haven't absorbed even that one lesson.
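Here is a minimal sketch of what that "local namespace" idea looks like at a component boundary, again in Rust (the path and function names below are made up for illustration; real capability systems - seL4, Capsicum, CloudABI - enforce this at the OS layer instead of by convention):

```rust
use std::fs::File;
use std::io::{self, Read};

// Ambient authority, filesystem-as-global-namespace style: this function
// can open ANY path on the machine. A bug or a malicious dependency buried
// inside it can read your SSH keys as easily as its own config file.
#[allow(dead_code)]
fn read_config_ambient() -> io::Result<String> {
    let mut s = String::new();
    File::open("/etc/myapp.conf")?.read_to_string(&mut s)?;
    Ok(s)
}

// Capability style: the component's "namespace" is exactly what it is
// handed. The caller decides which one file it may touch, and there is
// no global filesystem for it to reach into.
fn read_config_capability(mut conf: File) -> io::Result<String> {
    let mut s = String::new();
    conf.read_to_string(&mut s)?;
    Ok(s)
}

fn main() -> io::Result<()> {
    // Only the outermost layer holds the authority; it delegates one handle.
    let conf = File::open("/etc/myapp.conf")?;
    println!("{}", read_config_capability(conf)?);
    Ok(())
}
```

Within a single process this is only a convention - nothing actually stops read_config_capability from calling File::open itself. The whole point of a capability-based OS is to make the convention unbreakable: a component physically cannot name a resource it was never given.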
And that's why we will continue to fail at security, because for decades now the OS design people haven't been applying what the CS people know about programming. And the hardware people haven't caught up with the OS people. And the Internet of Things people think the entire Internet is a private, secure LAN.
"Hey, let's make a universal serial bus where, when you plug in some random piece of plastic you found in the car park, it can install a root kernel driver and write to the entire system RAM, bypassing all security."
"Sure, sounds fine to me."
"Great, now let's put this in a car and plug its brake system into the Internet. What's the worst that could happen?"
from Schneier on Security http://ift.tt/25CBLw5