When I was a kid, I had nightmares every week. I still remember some of them vividly, particularly the ones where ghosts were involved. Not the typical ghosts from the movies, but ones that could not be seen, only heard and felt. Why would I be so frightened and still remember them “vividly” today? Because during those nightmares, I had the illusion of having control but still was not able to run away from those “entities”. I felt hopeless even hours after waking up.
I started working with Web browsers for an online floating ad company a long time ago and my job was to make the floating ads cross-browser compatible (making sure that they worked on Netscape and IE/Win/Mac with and without Flash). During that time, Opera was just music and Firefox was not even in the wildest nightmares of anybody. To be fair, Opera existed but was only used to click on the red door at Fravia+. And by the way, Opera was not freeware during that time.
It was a very hot day in summer and I was trying to fix the position of the Artificial Intelligence Flash movie over the sites where the ad was shown. I don’t know if the music of that floating ad was the same as the movie's music, but it was scary for me. You know, the terror-music type, suspense, whatever. It was simply “not nice” for me. It was the type of music I would not like to hear alone at home, in the dark.
The designers did not finish the final animation until 10:00 PM that day, so I left the office with homework: I had to finish the positioning, make it cross-browser compatible, package the files, and deliver it.
Back at home, while testing, the music of the animation really bothered me. Besides being very scary (it was 2:00 AM and I was home alone), I had to listen to the same music over and over again! Every reload meant hearing that music again. So I simply muted my speakers and continued working for two more hours until everything was correctly positioned, the click-through was correct, and the animation worked flawlessly on all browsers. So I relaxed, moved away from my PC, and lit up my classic relaxing cigarette. No more work for today.
Half an hour later I went back to my PC to start Winamp, but when I looked at the screen I saw the floating ad still loaded in my browser from my last test, so I simply clicked my home button, going effectively to about:blank and leaving the page clean. I loaded Winamp to hear a few tunes before going to sleep, and when I unmuted the speakers by dragging the volume slider up, the scary music from Artificial Intelligence banged out of my speakers! I was expecting to hear “Twistin' the Night Away” (Innerspace version), but the floating-ad music had continued playing!
What was happening? The floating ad was not in the browser anymore! How was this possible? Yeah, the browser was open, but there were no tabs (tabs were only available on Opera) so the only loaded page was about:blank. I started killing every task until I was left with nothing more than the browser. The music was coming from my browser which had nothing more than a blank page loaded! How was that possible?
And the music was scary for me. It reminded me of my childhood nightmares: nothing to see, but you could hear it and feel it. My nightmare had become true. The browser (which was part of my life) had a real ghost inside!
Should I kill the task? Would that stop the music? Feeling a bit anxious, I simply clicked the End Task button and, sure enough, Task Manager destroyed my nightmare. Mark Russinovich preaches about Process Explorer, but if I have to pray, I will always do it to Task Manager. I felt so good when the music stopped.
Anyway, it was that scary music that started my interest in “tricky code”. If the music could continue playing even after leaving the Web page, was it possible to run a script the same way?
Probably because of that story, my presentation at the 7th BlueHat started with the phrase “Do you believe in ghosts?” It was a presentation that showed how to run scripts in the browser after navigating away from a page. In other words, keep controlling part of the browser behavior from behind the scenes no matter where the user went. It was like “having a ghost in the browser”.
Not only did I believe in ghosts, I had evidence that they existed -- at least inside browsers -- and I wanted to show it to the folks at Microsoft. The nice thing is that the ghost-busters at Microsoft patched the code, destroying my nightmares, hopefully forever.
Last night I woke up in the middle of the night. I was thirsty. I grabbed the glass of water that I always have on my nightstand, and the second I placed it on my lips I remembered what I was dreaming about. I was part of a discussion panel, talking about what it means to be a hacker. We were exploring the value of hackers to the ecosystem, hacker skills, motivations, and incentives. Trying to understand the difference between a hacker and a criminal. Andrew Cushman and Damian Hasse from Microsoft were there! And so were colleagues of the ecosystem like Ivan Arce, Luiz Eduardo, Nico Waisman, Rodrigo Rubira Branco and Felix 'FX' Lindner!
But when did I fly to Redmond? It was strange to be at the BlueHat conference without remembering the flight (I hate to fly). Buenos Aires is far away from Redmond; how could I forget? When did I plan this? The instant I thought that, I read the small letters at the bottom of the BlueHat flag, which said, “BlueHat Buenos Aires”. But it wasn’t a dream after all. The BlueHat organizers were in fact organizing a BlueHat in Buenos Aires for the local researcher, enterprise, and government communities across Latin America, and I was actually going to be part of it, right in my (near) backyard in March!
With my glass of water in hand and having slept enough, I grabbed my notebook and saw that I had received a new e-mail from Celene and Dana. The body of the message said, “Will you participate in BlueHat Buenos Aires?” I could not believe my eyes. Even today I feel flabbergasted when I think about it. A nightmare vanished and a dream came true.
This BlueHat security conference in Argentina is, for me, the most important one. It has something for everyone. We will hear from the creator of OWASP, the minds behind PHNeutral, ysts, and Hacker2Hacker, iDefense, CORE, and the ghost busters of the MSRC. I never quite believed the saying "you learn something new every day," but on this day you certainly will: you will leave the BlueHat conference with new ideas, homework to do, and the motivation to get started.
Hope to see you there.
Celene here from the MSRC Ecosystem Strategy Team. BlueHat v9: Through The Looking Glass ended just over a month ago and the success of the con lives on due to the outstanding training and networking between Microsoft employees, external speakers, and guests. I'm happy to say that the speaker video interviews and selected recorded presentations are now live on the BlueHat TechNet Page. As promised, we have posted talks from every track block. The samples available are from the e-crime, cloud, mobile and fuzzing content blocks. Check out the MSRC Ecosystem Strategy Team Blog for more stats from BlueHat v9!
Mark your calendars! The next BlueHat is October 14-15, 2010. See you all there.
-Celene Temkin, BlueHat Project Manager
*Postings are provided "AS IS" with no warranties, and confer no rights.*
I recently attended BlueHat for the second time and spoke about the SMS vulnerabilities Collin Mulliner and I discovered and exploited this summer. BlueHat is an interesting speaking venue because the audience consists entirely of Microsoft employees. Some people might think security researchers speaking at Microsoft is like speaking before the enemy, but that is not the case (an actual example of that would have been when I talked about exploit sales at CERT a few years ago). The people I spoke with at Microsoft seemed genuinely interested in listening to what I had to say, learning how I look for bugs, and generally, how the adversary thinks. I think this is a good sign they take security pretty seriously, at least on some level. Hopefully, they got some value in listening to how I attack applications.
From my perspective, BlueHat is always very rewarding. I get a chance to speak with the folks at Microsoft who are in charge of product security. This year, I sat down with a large group responsible for the security of Windows Mobile. It’s always fascinating to hear what they are planning to do, what they were thinking when they made various decisions, what tools they have at their disposal, etc. Of course, just as I don't tell them all my secrets, I'm sure they keep a few of their own; still, I got the feeling that they were willing to tell me more about how they work than the last time I was out there, which is another positive sign.
There is the old Sun Tzu quote that goes 'know thy enemy'. It’s not clear that this is entirely appropriate here, but BlueHat does provide a way for Microsoft employees to sit down and talk with top security researchers and I think both groups benefit from it by gaining insight into how the other group thinks. Now if only I could get them to stop automatically rebooting my computer and corrupting my IDA Pro databases....
-Charlie Miller, Independent Security Evaluators
Billy Rios here. I’m giving a talk this week along with Nate McFeters entitled, “Sharing the Cloud with Your Enemy.” It’s a fun, realistic talk on security in the cloud. Why cloud computing?
Cloud computing, software as a service, infrastructure as a service, platform as a service… with so many different terms and so much hype, this cloud computing stuff can be confusing, and understanding security in the cloud can be even more confusing! Nate and I will break down some of the most relevant security challenges we see for the cloud “Barney” style, so that even my nine-month-old daughter (or your average everyday CSO) can understand them. How are we going to do this, you may ask? Well, up until this point, we’ve seen a lot of theoretical scenarios related to cloud security.
In our presentation, we’ll cover some important cloud security concepts and back them up with real-life vulnerabilities we’ve discovered. These vulnerabilities are neat, but more importantly, they highlight some hard-hitting, real-life issues that anyone adopting a cloud computing platform needs to weigh. We’ll cover some questions that every business should be asking their cloud provider, and we’ll also use some of the vulnerabilities we’ve discovered to highlight areas cloud providers can improve on (there are plenty). The content we’ve put together is appropriate for all audiences, but it is especially geared towards cloud providers and those wishing to implement cloud solutions for their business.
Come in from the Seattle rain, grab a cup of coffee, and join us for an entertaining, yet stimulating talk on cloud security. The cloud providers we’ve chosen to highlight are some of the biggest in the industry, the vulnerabilities are real, and the presenters are some of the sexiest on the planet… what more could you ask for?
This year at BlackHat USA in Las Vegas, we presented on the topic of attacking Short Message Service (SMS). Our presentation focused on the different ways in which SMS can be used to compromise mobile security. We’re excited to give an updated version of our talk at the upcoming BlueHat v9 conference later this month, and thought the BlueHat blog readers who will not be able to attend might enjoy an overview of some key material from the presentation.
Why attack SMS – When we first started looking at SMS, two things immediately leapt out at us that made it an interesting attack surface. The first was that there is far more functionality delivered via SMS than the simple text messages everyone is familiar with. For example, SMS can be used to reach other rich attack surfaces such as graphics libraries and video codecs, two areas that have contained extensive vulnerabilities in the past. The second item that makes SMS interesting to analyze is that it is always turned on (and ready to be attacked). SMS messages are delivered to mobile phones via the paging channel that the network uses to notify the phone of important information, such as an incoming call. Therefore, it is extremely difficult to tell a mobile phone not to receive an incoming SMS, as the phone always needs to listen on this interface. Additionally, the network is built to make a best effort to deliver an SMS to a recipient, which makes attacking even easier. If the target is offline or out of range, it does not matter to the attacker, as the network will typically store the attack message until the target comes online and then deliver it.
Attacks – In our presentation, we break down the attacks we discuss into three categories: Implementation, Configuration, and Architecture.
The first category of attacks we discuss is implementation flaws in the messaging software on mobile phones. We started with the assumption that any crash we triggered would likely be localized to the messaging application. We were surprised to find that crashes commonly occurred at a much lower layer that would knock the phone's radio interface offline. This would then prevent the phone from placing or receiving calls and SMS traffic, sometimes even across multiple reboots of the device.
The second category of attacks we discuss is a case study of a configuration flaw that affected a number of mobile devices. Those of us working in application security are used to one vendor having direct responsibility for a product. In the mobile world, things operate differently. Instead of each application being the responsibility of a single vendor, there are three main players: the carrier, the hardware OEM who makes the device, and the operating system vendor. When a vulnerability is found in a given piece of software, the responsible vendor ships a patch for that vulnerability. As has been shown with multiple real-world devices, one of the parties can make a change to the configuration of the device that results in the final product shipping with an insecure configuration.
The final category of attacks we discuss relates to the security architecture of SMS. As we mentioned before, there is a lot of administrative functionality on mobile phones that makes use of SMS. A straightforward example of this functionality is voicemail notifications - a carrier can notify a subscriber that they have a voicemail message waiting by sending a specially crafted SMS to their mobile phone. Most phones respond to this message by executing an administrative action, such as popping up a notification to the user. Obviously, an administrative message type such as this should only be generated and sent by the carrier’s equipment. During the course of our research, we found that there are a number of administrative SMS message types that we were able to send as a peer device on the carrier network. Some of these message types can have significant security implications to the mobile phone, unlike a simple voicemail notification.
Conclusion - SMS and mobile devices in general offer an intriguing area for future security research, especially as mobile devices store increasingly sensitive information. We are looking forward to spending time at BlueHat doing a much deeper dive into the topics we have begun to introduce in this blog post.
- Zane Lackey (iSEC Partners), Luis Miras (Independent Security Researcher)
Hello world! Remember Mad Libs? How about Scrabble, when you'd try making up words that sound legit just to be de-bluffed by your friend. Playing these games provides endless hours of fun with words and letters. In software and the Internet, words, letters, and text are everything. Whether you're up in the cloud, down in the code, or consuming the content—written language is the information that’s central to it all.
Unicode provides a set of standards for representing most of the world's languages and scripts within a single framework. It’s pretty awesome really—the ability to capture the world’s scripts past, present, and future. Where else would you find a character set that encodes everything from ASCII (Latin) to the symbols of the ancient Phaistos Disc, such as the PLUMED HEAD sign (U+101D0)?
Unicode has come to be the de facto system for representing and encoding characters across any computing platform. It's central to most modern operating systems, programming languages, and applications. But, similar to a networking protocol stack, most software developers don't want to wrangle with the details. It should be good enough to know that your strings are handled as Unicode, so you can build your software without sorting out the complex details of charset transcoding, normalization, etc.
Still, there are attacks and countermeasures that should be known. In my BlueHat presentation I intend to cover two broad categories: one around visual perception attacks, and the other around character transformations. In the cloud, URLs rule. Okay, URI has superseded URL and, with Unicode, we should be talking about IRI (Internationalized Resource Identifier). But anyway, with the growth of Internationalized Domain Names (IDNs), IRIs have just as much a place as URIs do. What I'm really concerned with are the domain names, the IDNs. We saw visual spoofing attacks as early as 2002, and again with Eric Johanson's PayPal spoof in 2005. Times have changed since then, and the browser vendors and registrars have gotten smarter about IDN.
However, the attack vectors continue to emerge. I plan to demo some of these and describe the current landscape of IDN, especially as it relates to the IDN revisions that are soon to be standardized. These revisions, dubbed IDNA 2008, bring important changes, both good and dangerous. On the one hand, we've moved from an exclusion-based to an inclusion-based model for allowed characters. On the other hand, we'll have edge cases where a single domain name could resolve to two different IP addresses under the new and old IDN standards. Can your cloud-based services be spoofed?
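The homograph idea behind those spoofs can be sketched in a few lines of Python; the strings here are purely illustrative:

```python
import unicodedata

# Illustrative homograph pair: the second string substitutes
# CYRILLIC SMALL LETTER A (U+0430) for LATIN SMALL LETTER A,
# yet the two render identically in most fonts.
latin = "paypal"
mixed = "p\u0430ypal"

print(latin == mixed)  # False: different code point sequences

# Normalization does not (and must not) unify confusable characters
# from different scripts, so defenses need confusable detection or
# script-mixing restrictions, not just normalization:
print(unicodedata.normalize("NFC", mixed) == latin)  # False
```

Both comparisons fail even though a human reader sees the same six letters, which is exactly what makes the attack work.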
Moving along, we'll take a closer look at how character transformations can be used to exploit software. Some characters really do have split personalities, much like Dr. Jekyll and Mr. Hyde, and they affect you whether your product parses text and wants to prevent buffer overflows, or it's a Web app looking to defend against XSS attacks. Through subtle manipulations, attackers could send you strings that expand by a factor of up to 18x when normalized. In attempts to evade XSS filters, an attacker could inject characters such as U+0130 LATIN CAPITAL LETTER I WITH DOT ABOVE, which, when lower-cased, changes to U+0069 LATIN SMALL LETTER I.
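Both behaviors are easy to reproduce. A minimal Python sketch (the 18x figure corresponds to compatibility ligatures such as U+FDFA; note that Python's full Unicode case mapping lower-cases U+0130 to an ASCII 'i' plus a combining dot):

```python
import unicodedata

# One compatibility character can balloon under normalization:
# NFKC expands U+FDFA (ARABIC LIGATURE SALLALLAHOU ALAYHE WASALLAM)
# into an 18-character phrase, so length checks done before
# normalization can be defeated.
ligature = "\ufdfa"
expanded = unicodedata.normalize("NFKC", ligature)
print(len(ligature), len(expanded))  # 1 18

# Lower-casing U+0130 yields an ASCII 'i' (followed, in full Unicode
# case mapping, by U+0307 COMBINING DOT ABOVE), so a filter that
# inspects input before a later case conversion can miss the match.
lowered = "\u0130".lower()
print(lowered[0] == "i", len(lowered))  # True 2
```

The general lesson: validate and filter *after* normalization and case conversion, never before.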
In other situations, processing of special Unicode characters such as the BOM might also open up exploits. Because many assigned characters have special meaning and properties, their usage outside of their intended scope may require closer attention.
I’m happy to be going over these issues with you and the BlueHat crowd at my talk, Character Transformations: Finding Hidden Vulnerabilities, aimed at developers and testers. I want developers to see some of the issues, and I want testers to see some new inputs and test cases.
Co-Founder, Casaba Security
A single Web page may be a composite of the efforts of many different development teams, each utilizing different technologies. If you are responsible for the overall security of a site, then you need to have a clear picture of how content will interact in order to understand the risks. Without a clear mapping of permissions granted to each piece of content, an attacker might be able to find subtle paths through your defenses.
Combining research makes it easier to communicate common risks with deploying RIA technologies. The attacks in the above examples could also occur if the content were based on Silverlight and granted the EnableHTMLAccess permission. As the webmaster responsible for the overall site, you may not be an expert on each RIA technology. However, if you understand the common risks shared across RIA technologies, then you will know to ask whether the SWF or Silverlight content has access to your HTML’s DOM during your security review. Understanding the common risks will allow you to draft security requirements that can be flexible enough to address different RIA technologies.
During the presentation we will be providing guidance on how to secure your site against these and other RIA attacks. It is our goal to communicate some of the important commonalities and differences between RIA platforms to enable developers to understand the breadth of RIA's capabilities. Architectures that mix content from diverse sources will need to build holistic views of their content. Data flow diagrams detailing where cross-domain communication occurs can help identify where unintended paths into sensitive areas may exist. By understanding the capabilities of RIA technologies and by tracking the flow of those permissions, developers will be able to accurately manage their risks and provide users with a rich Web experience.
Senior Security Researcher
Adobe Systems, Inc.
[Editor's note: Check out Bryan Sullivan's post on the SDL blog titled "Cross-Domain Security" discussing the existing SDL requirements around cross-domain access security and the implications of Peleus' research on these requirements - coming soon.]
There have been many disruptive innovations in the history of modern computing, each of them in some way impacting how we create, interact with, deliver, and consume information. The platforms and mechanisms used to process, transport, and store our information likewise endure change, some in subtle ways and others profoundly.
Cloud computing is one such disruption whose impact is rippling across the many dimensions of our computing experience. Cloud, in its various forms and guises, represents the potential cauterization of wounds which run deep in IT; self-inflicted injuries of inflexibility, inefficiency, cost inequity, and poor responsiveness.
But cost savings, lessening the environmental footprint, and increased agility aren’t the only things cited as benefits. Some argue that cloud computing offers the potential for not only equalling what we have for security today, but bettering it. It’s an interesting argument, really, and one that deserves some attention.
Addressing it requires a shift in perspective relative to the status quo.
We’ve been at this game for nearly forty years. With each new (r)evolutionary period of technological advancement and the resultant punctuated equilibrium that follows, we’ve done relatively little to solve the security problems that plague us, including entire classes of problems we’ve known about, known how to fix, but have been unable or unwilling to fix for many reasons.
With each pendulum swing, we attempt to pay the tax for the sins of our past with technology of the future that never seems to arrive.
Here’s where the notion of doing better comes into play.
Cloud computing is an operational model that describes how combinations of technology can be utilized to better deliver service; it’s a platform shuffle that is enabling a fierce and contentious debate on the issues surrounding how we secure our information and instantiate trust in an increasingly open and assumed-hostile operating environment which is in many cases directly shared with others, including our adversaries.
Cloud computing is the natural progression of the reperimeterization, consumerization, and increasing mobility of IT we’ve witnessed over the last ten years. Cloud computing is a forcing function that is causing us to shine a light on the things we do and to defend not only how we do them, but who does them, and why.
To set a little context and simplify discussion, if we break down cloud computing into a visual model that depicts bite-sized chunks, it looks like this:
At the foundation of this model is the infrastructure layer that represents the traditional computer, network and storage hardware, operating systems, and virtualization platforms familiar to us all.
Cresting the model is the infostructure layer that represents the programmatic components such as applications and service objects that produce, operate on, or interact with the content, information, and metadata.
Sitting in between infrastructure and infostructure is the metastructure layer. This layer represents the underlying set of protocols and functions such as DNS, BGP, and IP address management, which “glue” together and enable the applications and content at the infostructure layer to in turn be delivered by the infrastructure.
We’ve made incremental security progress at the infrastructure and infostructure layers, but the technology underpinnings at the metastructure layer have been weighed, measured, and found lacking. The protocols that provide the glue for our fragile Internet are showing their age; BGP, DNS, and SSL are good examples.
Ultimately the most serious cloud computing concern is presented by way of the “stacked turtles” analogy: layer upon layer of complex interdependencies predicated upon fragile trust models framed upon nothing more than politeness and with complexities and issues abstracted away with additional layers of indirection. This is "cloudifornication."
The dynamism, agility, and elasticity of cloud computing are, in all their glory, still predicated upon protocols and functions that were never intended to deal with these essential characteristics of cloud.
Without re-engineering these models and implementing secure protocols and the infrastructure needed to support them, we run the risk of cloud computing simply obfuscating the fragility of the supporting layers until the stack of turtles topples as something catastrophic occurs.
There are many challenges associated with the unique derivative security issues surrounding cloud computing, but we have the ability to remedy them should we so desire.
Cloud computing is a canary in the coal mine and it’s chirping wildly. It’s time to solve the problems, not the symptoms.
I look forward to diving deeper into these details with the folks at BlueHat next month in my session titled Cloudifornication: Indiscriminate Information Intercourse Involving Internet Infrastructure.
There are only a few days left before Black Hat USA, and we, like most other speakers, are in the midst of the last-minute push to have all the materials finalized in time for our presentation. Our presentation this year, "The Language of Trust," features a lot of material related to attacking software interoperability layers, and focuses on Web browsers as case studies. Some of the vulnerabilities we will be disclosing affect Microsoft software and have resulted in Microsoft releasing an out-of-band update on July 28th. The updates included in this release are the result of a lengthy and largely successful collaboration between us and Microsoft, particularly individuals from MSRC including Steve Adegbite, David Midturi, and Dustin Childs. Microsoft has had the unenviable task of dealing with the issues surrounding the fixes for these problems, and they have worked diligently to do so in a timely manner. We decided to put together a blog post discussing the problems that needed to be contended with to get this update out in time, and plug our upcoming Black Hat presentation.
The update addresses some issues we uncovered in the Microsoft Active Template Library (ATL). Released in 1997, the ATL is actually distributed as source code with Visual Studio and is aimed at simplifying various programming tasks for developers. It provides, among other things, helper functionality that is utilized by most ActiveX components, which is where the vulnerabilities we are disclosing reside. Anyone who has utilized the relevant ATL code in their ActiveX controls for the past twelve years may have inadvertently incorporated these vulnerabilities into their own products. Microsoft has been getting a considerable amount of criticism for the amount of time it took to patch the Video Control vulnerability; however, the issue is much larger than it first appears and this fact, along with why detection is so difficult, will be discussed further in our presentation.
There are a few unique problems that needed to be dealt with for the ATL bugs. The first problem is efficient enumeration of vulnerable applications. When you contrast issues within the ATL against issues within application code, a number of differences become apparent. Generally, problems within application code are localized to a single source file, and require the recompilation of a single program. However, with issues in the ATL, any application that includes code from the ATL may be vulnerable. Furthermore, successful detection of vulnerable ATL code usage is a complex and error-prone process, and is difficult to achieve with standard static analysis tools. The reasons why detection is difficult will become clearer after our presentation, when we discuss the details of the bug.
The second issue that needed to be addressed was that of vendor coordination. As previously stated, other vendors that use ATL code in their ActiveX controls are potentially vulnerable. As such, Microsoft charged themselves with the arduous task of tracking down as many potentially vulnerable vendors as possible, and coordinating with each of them. Coordination involved explaining a bit about the potential problems, how to determine if a given control is vulnerable, and mitigation steps that can be taken to fix identified problem controls. This is a process that obviously takes time and effort, and Microsoft has been working around the clock with a number of vendors trying to minimize the risk to end users.
So, the mitigation work is done, the update is out, and the presentation is going ahead on schedule! We would like to use the rest of this blog post to shamelessly promote the presentation, which goes far beyond bypassing kill bits, and to give a little insight into some of the issues we will be discussing. Primarily, our presentation intends to address three issues:
1. Interoperability layers in software do a lot of complicated work behind the scenes, and provide a vast and largely unexplored attack surface.
2. Throughout the course of our research, we discovered that unique bug classes exist due to the specialized tasks that marshalling code must perform. We intend to unveil these bug classes during the presentation. We will show how various data structures and APIs utilized for marshalling in the two dominant browser architectures lend themselves to misuse, creating the potential for subtle vulnerabilities that attackers may target. We will give practical examples for code constructs we have identified as vulnerable.
3. When two disparate components are given direct communication channels to each other, trust is implicitly extended between the two components. This trust relationship can be useful to attackers wishing to bypass various security features present in one component, by abusing features of another.
We are hoping this information will be useful for developers and security professionals alike, and look forward to seeing you all there! Our Black Hat presentation is slated for Wednesday, July 29th, at 3:15pm PDT, in the Augustus Ballroom 5-6.
-Ryan Smith, Mark Dowd, David Dewey
Hi, this is Scott Stender from iSEC Partners. I recently had the privilege of speaking at Microsoft's BlueHat event in Brussels on the topic of securing legacy systems.
With all of the recent coverage on the need to secure our networked systems -- national, corporate, and individual alike -- I felt that the BlueHat event was a good time to shine the spotlight on those little-loved, perhaps little-known systems that keep our plugged-in society working. Those are the legacy systems, the giants on whose shoulders we stand in order to build the rich computing environment we enjoy today.
I had hoped to discuss, perhaps defend, the following points with the attendees:
· Legacy systems will always be with us. After all, we create more of them with every completed software project.
· The attacks leveraged against our systems are always changing and growing more sophisticated. Those of us on the defensive side will need to be equally sophisticated and tireless in our response.
· We software engineers need to develop and improve the means to secure our existing systems, just as we already do when developing for new systems.
· Those who maintain the budget for software systems not only need to plan for the effort required to build secure systems, but also to plan for the effort required to secure and maintain these systems throughout their lifetime.
However, as is often the case in these gatherings, I was surprised by the diversity of opinion in the room.
What I thought were going to be the most challenging statements did not stir the attendees. Most notably, it seemed to have been accepted that we will need to evolve the security of our existing systems rather than "start from scratch" for the majority of our systems. The benefits of starting anew are often far exceeded by the drawbacks. For instance, there is potentially a large amount of acquired wisdom in a system (learned through hard years of bug fixes and real-world operation) that could be lost when starting anew.
Instead, the attendees challenged me with the following topics:
· How do we show progress and demonstrate value for the resources spent on securing our legacy systems? After all, it is hard to make the case that we need to spend money on something that was deemed "completed" years before.
· How do we manage tightly-regulated systems, where certifications limit the changes that can be made? Attackers move faster than certifying agencies, and that opens a window for attackers.
I am afraid that easy answers to these questions are elusive, and those found are unlikely to hold in the general case. That is what makes venues like BlueHat important: by discussing our experiences with peers in the industry, we come closer to understanding the potential solutions to our hard questions and the scenarios in which these potential solutions could be applied.
It is my hope that I made a good case for the need to secure our systems at their core, and that perhaps a few attendees were moved by this software engineer's view of how to address our quickly shifting attack landscape. I left BlueHat with a greater appreciation for the experience of those who work in different industries than I do, under different regulatory pressures, and with varying levels of support for security initiatives. By continually improving our software, and combining that with technology that helps us improve security immediately, we may be able to address the challenge of securing our legacy.
- Scott Stender, iSEC Partners