Michael Howard is infectious. He’s a great educator, an energetic speaker, and after nearly 20 years he is as passionate about his computer security specialty, secure code, as he was in the beginning. It’s hard to be around him for more than a few minutes without wanting to help make the world more secure, one line of code at a time. He first gained worldwide notice for coauthoring Writing Secure Code with David LeBlanc and for being a significant part of the reason Microsoft is so dedicated to writing more secure code. Howard, originally from New Zealand but now living in Austin, TX, has co-written several books on writing more secure code and is a frequent blogger.
Michael Howard – I asked Howard how he got into computer security. He replied, “I was working on very early versions of Windows NT for Microsoft. I was doing fairly low-level stuff like access control, cryptography, and custom GINAs (graphical identification and authentication modules, which used to be how you logged into and were authenticated by Microsoft Windows and other authentication providers). This really led me to start thinking more about security-as-a-feature. Around 2000 it became clear that security features do not make a product secure; rather, you also have to focus on secure features, which is a different discipline.”
Michael Howard – I asked him how the SDL got started at Microsoft. He said, “Over time, various security-related practices learned by the .NET Framework, Windows, Office, and SQL Server teams, and others evolved into the Security Development Lifecycle (SDL). SDL helped popularize the secure code and secure design movement and is now the leading force in how many companies better secure their software.”
Michael Howard – I wondered whether SDL was a small improvement over something else he had read, or something he built from the ground up without any prior reference. He replied, “Everyone builds on the work of others, but most of SDL came from doing and learning. What works stays, and what does not work or is utterly non-pragmatic is tossed. Sometimes I wonder whether some of the academic models were ever tried in a production environment at all, one with deadlines, performance requirements, time-to-market, economic concerns, backward compatibility requirements, and so on.
“At the time, there was a huge school of thought which believed that if you could just increase the overall quality of the code, you would directly increase the security of the code as well. But I have yet to see any empirical evidence of that. You can write functional SQL code that passes all functionality tests, yet it could be riddled with SQL injection vulnerabilities. If you’ve never been trained in what a SQL injection vulnerability is, all you’ll see is perfect code—it does what it’s supposed to do. A secure system does only what it’s supposed to do and no more—it’s the ‘extra functionality’ that comes with the SQL injection weakness that makes it insecure.”
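Howard’s point about “perfect” but injectable code can be made concrete with a small sketch. The function names and the in-memory SQLite table below are illustrative, not from the interview; the contrast between string-built and parameterized queries is the standard one:

```python
import sqlite3

def find_user_unsafe(conn, name):
    # Passes every functional test for ordinary names, yet the input
    # is pasted directly into the query text -- classic SQL injection.
    query = "SELECT id, name FROM users WHERE name = '%s'" % name
    return conn.execute(query).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver treats the input strictly as
    # data, so the 'extra functionality' of injection disappears.
    query = "SELECT id, name FROM users WHERE name = ?"
    return conn.execute(query, (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"  # classic injection payload
print(len(find_user_unsafe(conn, payload)))  # 2 -- every row leaks
print(len(find_user_safe(conn, payload)))    # 0 -- payload is just a string
```

Both functions “do what they’re supposed to do” for well-behaved input; only the malicious input exposes the difference.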
Michael Howard – I asked him what his role was in Microsoft adopting SDL practices. He shared, “It was the synergy of a lot of different things that I and others were involved with. It started in late 2001, when the .NET team held a ‘security stand down’ event to look at the current security issues and potential risks. We learned a lot from it and added many new defenses. I remember that we had some t-shirts printed up for the event with the dates on them, and then a major snowstorm hit and the event was delayed … so it was ironic that in the quest for more secure code we had the wrong date on all our t-shirts. But what came out of that event were lessons that eventually fed into the SDL. David’s and my book came out and got many people to think more about code security. In 2001, Microsoft was under attack by lots of malware and hackers; the Code Red and Nimda worms hit hard. So Bill Gates asked about the nature of software vulnerabilities and why we still had them. I got picked to be part of the team that met with Bill Gates. I handed him an early copy of our Writing Secure Code book, and from the meeting Bill eventually wrote his famous ‘Trustworthy Computing’ memo ( https://www.microsoft.com/mscorp/execmail/2002/07-18twc.mspx ). Bill mentioned our book in the memo, which made sales skyrocket! I ended up working for the newly created Trustworthy Computing division at Microsoft. This started a process of additional security stand downs (for Windows, SQL Server, and many other Microsoft products). The SDL grew out of all of this, and has been continually improved and made more efficient. It continues to be updated annually.”
Michael Howard – I asked if it was true that he and Microsoft have released more information and tools regarding secure coding than any other single entity. He said, “Unequivocally and emphatically yes! But more important, these are the tools and techniques that we use in our production environment, on millions of lines of production code every day. This is not an academic exercise. It’s what one of the largest companies in the world does. And we share nearly all of it.”
Michael Howard – I asked why, if the world’s programmers are getting better trained on computer security issues, the world isn’t seeing fewer publicly announced vulnerabilities. He said, “Well, for sure there’s more software with more lines of code. But the real problem is that programmers still aren’t getting trained in secure coding and have no understanding of basic security threats. Academia is still way behind in most cases. I was reviewing a university’s computer security class curriculum the other day, and nearly 50% of the class focused on low-level network threats. There wasn’t any training on cloud security or secure coding. Our colleges are still turning out programmers who don’t know much about computer security or secure coding, which is a travesty when you consider these grads will create critical systems hooked up to the Internet. I still find very basic bugs in other people’s code. When I demo a memory corruption issue or a SQL injection vulnerability—very basic stuff, very common—it’s as if I’ve done something magical or special. It is so hard to find an incoming programmer who actually understands basic computer security that I’ll get excited if a candidate at least cares about it. If a programmer’s eyes open wide when I’m discussing computer security issues, I’m pretty happy. If they are at least interested, we can teach the rest. You’d be amazed how many don’t care, and a big reason for that is it still isn’t taught. Or the wrong things are being taught, like focusing on network security minutiae.
“Schools will teach a student how the RSA algorithm works in detail, but not spend time teaching why it should be used, what problems it solves, and what solutions it’s good for. Knowing how to use something correctly to solve real-world security problems is much more important than knowing how it works. Anyone can memorize a protocol, but we need people who know about risks and think about solutions. Some teachers and colleges are doing it right, like Matt Bishop at the University of California, Davis, but it’s the heroic efforts of Matt and others that make it possible. He and the other professors like him are the real heroes.”
Michael Howard – I asked, since most colleges aren’t adequately preparing our coders in this area, what an individual coder can do. He said, “Always learn. I put an hour every day on my calendar that says ‘Learn,’ and I read/code/experiment for an hour with something I don’t know about—every day. I’ve been doing that my whole career. Second, if you aren’t getting formal computer security training, make your own. Go to the CVE ( http://cve.mitre.org/cve/ ), read about some recent bugs, really read about them in detail. Then write code that has the vulnerability and figure out what it would take to have prevented that vulnerability, at both the technical level and the process level. How did the vulnerability happen and get put in the code in the first place? And then use those lessons to keep those same types of bugs out of your own code.”
Michael Howard – I asked what most companies could do to write more secure code besides following all the current SDL advice and tools that are readily available. He replied, “Make coders understand the real threats, not just the theoretical stuff. And build the security process into the development pipeline so bad and insecure code can’t even get in. We call these Quality Gates at Microsoft. A good (non-security) example is someone who writes code that assumes all IP addresses have four octets. That code will never work in a pure IPv6 environment. It can’t even be submitted to our code check-in processes, because a tool that runs automatically finds the issue and the check-in is rejected. That’s what we mean by a Quality Gate. But for security, repeat that for SQL injection, memory-safety threats, and everything else you don’t want getting into the code.
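The four-octet assumption Howard describes might look like the following in practice. This sketch is my illustration, not Microsoft’s actual gate tooling; it contrasts a brittle hand-rolled parser with Python’s standard-library `ipaddress` module, which accepts IPv4 and IPv6 alike:

```python
import ipaddress

def parse_ip_brittle(addr):
    # Assumes every address is four dot-separated octets -- exactly
    # the kind of code a check-in Quality Gate should reject, since
    # it can never work in a pure IPv6 environment.
    octets = addr.split(".")
    if len(octets) != 4:
        raise ValueError("expected four octets: %r" % addr)
    return [int(o) for o in octets]

def parse_ip_robust(addr):
    # The standard library handles both address families.
    return ipaddress.ip_address(addr)

print(parse_ip_brittle("192.168.0.1"))   # [192, 168, 0, 1]
print(parse_ip_robust("2001:db8::1"))    # IPv6Address('2001:db8::1')
# parse_ip_brittle("2001:db8::1") raises ValueError
```

A gate in the check-in pipeline would flag the brittle pattern automatically, so the fix happens before the code ever lands.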
“If I had to pick a few basic security-related practices, they would be:
- “Developers need to learn never to trust input data and to validate it for correctness, preferably through the use of a well-tested and reviewed library. If you expect data that is only 20 bytes long, then restrict it to 20 bytes. If you expect a number, then check that it’s a number, and so on.
- “Designers/architects/program managers need to learn threat modeling and verify that the correct defenses are in place for the system.
- “Finally, testers need to prove the developers wrong by building or procuring tools that build malicious and/or malformed data. The goal is to defeat the developers’ checks, if they have any!
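Howard’s first and third practices can be sketched together: constrain input to exactly what you expect, then attack those checks with malformed data. The field names and limits below are hypothetical examples, not anything from the interview:

```python
def validate_username(value):
    # Expect at most 20 bytes of alphanumeric text -- reject all else.
    if not isinstance(value, str):
        raise ValueError("username must be a string")
    if len(value.encode("utf-8")) > 20:
        raise ValueError("username longer than 20 bytes")
    if not value.isalnum():
        raise ValueError("username must be alphanumeric")
    return value

def validate_age(value):
    # Expect a number: verify it really is one, within a sane range.
    try:
        age = int(value)
    except (TypeError, ValueError):
        raise ValueError("age must be an integer")
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

# The tester's job: throw malicious/malformed data at the checks
# and try to get anything past them.
malformed = ["x' OR '1'='1", "a" * 100, "", None]
for bad in malformed:
    try:
        validate_username(bad)
        print("BYPASSED:", bad)   # a finding -- the check failed
    except ValueError:
        pass                      # rejected, as it should be
```

In a real system the validation would live in a shared, well-reviewed library, as Howard recommends, rather than being re-implemented per field.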
“There is more to software security than what I said, but these are, in my opinion, the fundamental security-related skills all software engineers must have.”