IANS Blog

  • Beaver: Taking Responsibility for Vendor Product Security

    by Daniel Maloof | Mar 20, 2017

    Monday, March 20, 2017 By Kevin Beaver, IANS Faculty



    When I perform independent information-security assessments, I often assess the security of product vendors' systems and software. Some of these products are related to IT and security, while others fall into the Internet-of-Things (IoT) category. Sometimes it's the vendor looking for an independent review and other times it’s the end user.

    Regardless, the overall goals are always the same: to determine how resilient the system or software is to security attacks, how it meets commonly expected security practices, and how it processes, stores and otherwise handles sensitive information.

    When I'm seeking out security flaws in these products – be it web browser plug-ins, mobile app environments or network-connected hardware devices – the outcomes are predictable. If the system has an IP address, a URL or a mere software surface that can be interacted with, anything is fair game for attack and exploit. All that is typically required is a web browser, command prompt and network analyzer combined with a malicious mindset and knowledge of what to look for. Network and web vulnerability scanners often uncover even more issues. Some common flaws that I have found in these products include:

    • Severely outdated operating system and application patches

    • Users running with administrator or root privileges and no reasonable means of protecting against malware

    • Documented and undocumented backdoor accounts – for some of which the credentials are a simple Google search away

    • Ancillary services and features enabled by default, such as USB ports, as well as FTP and web interfaces that create unnecessary exposures

    • Cross-site scripting and SQL injection flaws

    • Sloppy storage of sensitive user and system configuration information

    • Unencrypted network communications that expose login credentials and connection details of the endpoint systems involved

    • Software installations that reconfigure system settings and create exposures for both endpoints and administrative systems
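Several of the flaws above can be flagged with a quick scripted check before a full assessment. As a hedged illustration (the port-to-service mapping below is a simplified assumption, not a complete audit), here is a sketch that flags cleartext or ancillary services from a port-scan result:

```python
# Flag risky services from a port-scan result (illustrative subset only).
CLEARTEXT_SERVICES = {
    21: "FTP (cleartext credentials)",
    23: "Telnet (cleartext session)",
    80: "HTTP admin interface (unencrypted)",
}

def flag_cleartext_services(open_ports):
    """Return human-readable warnings for risky open ports."""
    return [CLEARTEXT_SERVICES[p] for p in sorted(open_ports)
            if p in CLEARTEXT_SERVICES]

# Example: a device exposing FTP and a plain-HTTP web interface by default
warnings = flag_cleartext_services({21, 80, 443})
print(warnings)  # FTP and HTTP are flagged; 443 (TLS) passes
```

A real assessment would of course go much deeper, but even a crude check like this surfaces the "ancillary services enabled by default" problem quickly.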

    Every manufacturer out there has a gadget or application that can easily end up creating security risks in the enterprise. Be careful. Just because a vendor makes and sells these products, it doesn't mean they have security-savvy software developers or even someone validating that security is in check. Ironically, IT- and security-related products tend to be the worst offenders.

    It’s clear to me that many product vendors are not thinking about how their systems can be attacked and what those exposures can lead to. I also think this underscores the criticality of IoT and product-centric security and how they impact both businesses and consumers alike.

    Doing Our Part

    Is more regulation needed? That's an entirely different discussion, but generally speaking, I don't think so, because we've seen the comical outcomes of regulations such as HIPAA and the CAN-SPAM Act. If anything, regulating product security merely gives businesses and consumers something to fall back on when the products lead to security attacks.

    In the same way we can't rely on the police to keep us safe all the time or doctors to ensure that we always take the proper steps to live long and healthy lives, we all have a responsibility to keep things safe and secure in the systems and software we use. Maybe that means holding these vendors more accountable when security flaws are found. Maybe it means adding your own compensating controls when using their products on your network. Or, perhaps it means letting the market work things out by looking elsewhere and not supporting vendors that have subpar security.

    It's ultimately up to you. At the end of the day, you can't blame poor security and the subsequent incidents and breaches on someone else. Rather than more finger-pointing, regulation and red tape, let's have the discipline to do what's right and take the proper steps to reasonably lock things down – even if it's someone else's product.

    If you suspect a third-party product is exposing your network or creating other unnecessary risks, don't be afraid to test the system yourself if it's in your own environment. Or, ask for (or hire someone to do) an independent assessment. Ask your vendors the tough questions about what they're doing to test and resolve the issues that are uncovered. Hold their feet to the fire. In the worst-case scenario, if an incident or breach does occur, you'll have a paper trail showing that you were taking reasonable steps and doing your own due diligence to keep things in check.

     

    ***

    Kevin Beaver, CISSP is an independent information security consultant, writer, professional speaker, and expert witness with Atlanta, Georgia-based Principle Logic, LLC. Kevin has written/co-written 12 books on information security including the best-selling Hacking For Dummies (currently in its 5th edition).

  • Van Wyk: Building Your Threat Modeling Process

    by Daniel Maloof | Mar 14, 2017

    Tuesday, March 14, 2017 By Ken Van Wyk, IANS Faculty


    It’s not often that a new “cool” thing comes along in information security and we’re able to say we’re already doing it. But that’s the case with threat modeling – well, at least in part. You are doing threat modeling, right? If your answer is “no,” perhaps you just know it by a different name. Maybe you call it a design/architecture review, or something completely different.

    Well, whatever you call it, let’s take a look at threat modeling and why it’s so important for infosec teams.

    Threat modeling is the process of critically reviewing a system’s design for potential security defects – plain and simple. There are a few industry processes that allow us to follow a rigorous methodology, including Microsoft’s STRIDE and DREAD, as well as Synopsys’ (formerly Cigital’s) Architectural Risk Analysis (ARA).

    Or, as fellow IANS Faculty Adam Shostack describes in his excellent book, “Threat Modeling: Designing for Security,” we can follow a more simple and intuitive approach and still get good results.

    But, why bother? Aren’t we already doing static code analysis and rigorous security testing? Shouldn’t those processes find any security mistakes we’ve made in our software? Well, partially. It’s a question of perspective.

    Static code analysis can be effective at finding implementation bugs, such as failing to adequately filter data input, which can result in various cross-site scripting (XSS) and related data injection attacks. Its shortcoming, though, is that it fails to see business-relevant architectural flaws, such as relying on client code (e.g., JavaScript or Objective-C) to make security decisions. If we do our threat modeling well, we can find those sorts of design failures before they cause us real grief later. Design failures, after all, can be among the most difficult and costly problems to fix.
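To make the client-trust flaw concrete, here is a deliberately simplified, hypothetical sketch (the function names and VIP-discount scenario are mine, not from any real codebase). A scanner can miss this entirely, because both versions are syntactically clean code:

```python
# Hypothetical illustration of an architectural flaw a code scanner won't flag:
# trusting a security decision made by client code.
def apply_discount_insecure(price, client_says_vip):
    # FLAW: the "is VIP" decision came from the client and can be forged.
    return price * 0.5 if client_says_vip else price

def apply_discount_secure(price, user_id, vip_database):
    # FIX: re-derive the security-relevant fact on the server side.
    return price * 0.5 if user_id in vip_database else price

vips = {"alice"}
print(apply_discount_insecure(100, True))           # forged flag -> 50.0
print(apply_discount_secure(100, "mallory", vips))  # server check -> 100
```

Only a design-level review asks "where does this decision get made, and who controls that input?" - which is exactly the question threat modeling forces.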

    Threat Modeling in Practice

    So how, exactly, do we do this threat modeling thing and in what ways might we already be doing it?

    Basically, it involves four steps:

    1. Model it
    2. Find threats
    3. Address threats
    4. Validate threats
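The four steps above can be sketched as a simple loop over components. This is an illustrative skeleton (the function names are my own, not a standard API), useful mainly for seeing how the steps feed each other:

```python
# Minimal skeleton of the four-step process described above (names illustrative).
def threat_model(components, find_threats, choose_mitigation, validate):
    findings = []
    for component in components:                    # 1. Model it
        for threat in find_threats(component):      # 2. Find threats
            mitigation = choose_mitigation(threat)  # 3. Address threats
            if validate(threat, mitigation):        # 4. Validate
                findings.append((component, threat, mitigation))
    return findings

# Toy usage with hard-coded heuristics standing in for the real team discussion:
result = threat_model(
    ["login form", "payment API"],
    find_threats=lambda c: ["spoofing"] if "login" in c else ["tampering"],
    choose_mitigation=lambda t: "MFA" if t == "spoofing" else "signing",
    validate=lambda t, m: True,
)
print(result)
```

In practice the "functions" are workshop conversations, not code, but the data flow is the same: every validated finding carries its component, threat and mitigation together.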

    Easy peasy, right? Well, let’s dive into each of those steps a bit and see for ourselves.

    Model It

    This is where we describe our system. One of the best things about threat modeling is that it allows us to take either a big- or small-picture view of our systems, but we must be able to clearly describe our design. This can take the form of a component diagram – either physical or logical – or things like data-flow diagrams. Be sure to clearly describe the component interconnections within your application. These inputs make up what we call the “attack surface” of the application (or the portion of the application we’re presently considering).

    Find Threats

    So, taken by itself, this piece seems a bit like “… and then a miracle occurs,” right? But there’s obviously a lot more to this, and there are a few things that can make this step go a bit more smoothly.

    For starters, zoom in on one component at a time. Also, make sure to critically consider who has access to each component, including those who are authorized and unauthorized. Then, consider what would motivate the bad guys to attack. What is their goal? What are their expected technical capabilities? Can they, for example, write serious malware that can withstand reverse engineering for a considerable amount of time? In other words, make your threat scenarios real.

    It’s also important to include key personnel such as your incident response operations team, your threat intelligence people, your principal design team and the business owners of the application being reviewed. Encourage the team to brainstorm and record everything they come up with.

    This is also where Microsoft’s STRIDE (Spoofing, Tampering, Repudiation, Information Disclosure, Denial-of-Service, Elevation of Privilege) model can be helpful. Use it as a checklist to catalyze conversations about individual scenarios.
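One lightweight way to use STRIDE as that checklist is to pair each category with a prompting question per component. The prompt wording below is my own paraphrase, not Microsoft's official text:

```python
# STRIDE as a brainstorming checklist (prompt wording is a paraphrase).
STRIDE = {
    "Spoofing": "Can someone pretend to be another user or component?",
    "Tampering": "Can data or code be modified in transit or at rest?",
    "Repudiation": "Can an actor deny an action for lack of evidence?",
    "Information Disclosure": "Can data leak to someone who shouldn't see it?",
    "Denial of Service": "Can the component be made unavailable?",
    "Elevation of Privilege": "Can a user gain rights they shouldn't have?",
}

def checklist(component):
    """Generate one discussion prompt per STRIDE category for a component."""
    return [f"{component}: {cat} - {q}" for cat, q in STRIDE.items()]

for line in checklist("password-reset endpoint"):
    print(line)
```

Walking each component through all six prompts keeps the brainstorm from fixating on whatever attack class the room finds most familiar.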

    Address Threats

    This is where you decide what, if anything, you’re going to do about each threat you’ve uncovered. Start by prioritizing them. Microsoft’s DREAD (Damage, Reproducibility, Exploitability, Affected Users, Discoverability) methodology can be helpful here. Use it to prioritize your threat list. Some companies prefer a quantified approach here, but I’ve found that the threat model team can do this step pretty intuitively using simple levels of quantification like low, medium and high priority. Of course, don’t neglect to factor in the costs of each potential remediation.
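The intuitive low/medium/high bucketing I described can be sketched like this. Note that the averaging and the thresholds are illustrative choices of mine, not part of the DREAD methodology itself:

```python
# DREAD scoring sketch: average five 1-10 ratings, then bucket the result.
# The thresholds (>=7 high, >=4 medium) are illustrative, not canonical.
def dread_priority(damage, reproducibility, exploitability,
                   affected_users, discoverability):
    score = (damage + reproducibility + exploitability
             + affected_users + discoverability) / 5
    if score >= 7:
        return "high"
    return "medium" if score >= 4 else "low"

print(dread_priority(9, 8, 7, 9, 6))  # -> high
print(dread_priority(3, 2, 2, 4, 1))  # -> low
```

Whether you use numbers or just gut-feel levels, the point is the same: rank the threat list so remediation effort goes where it matters most.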

    Validate Threats

    Finally, it’s a good idea to prove that the threats and their remediations are based on facts and not just “gut feel.” Make sure each threat scenario is something your attackers can actually achieve, and that each remediation will truly protect your business.

    Knowing Where to Begin

    So, perhaps you never called this threat modeling, but I’ll bet you were already reviewing your business applications at some level prior to rolling them out. Maybe your process is too simplistic, though. Perhaps a more rigorous threat modeling process like the one I’ve outlined here can be helpful.

    Either way, I say give it a try. Threat model a small application first and see how the process works for you. You may need to tweak it a few times before you get it right, and that’s all good. In Adam Shostack’s book, he encourages us in Chapter 1 to do a simple threat model first, before even getting into processes like the one I’ve described above. That’s great too!

    Ultimately, in my experience with threat modeling, the two most important ingredients for success are a talented and willing multi-disciplinary team and a huge white board. Sure, the former might be a bit tougher to come by, but don’t even think about underestimating that white board! 

     

    ***

    Ken Van Wyk is president and principal consultant at KRvW Associates and an internationally recognized information security expert, author and speaker. He’s been an infosec practitioner in commercial, academic, and military organizations and was one of the founders of the Computer Emergency Response Team (CERT) at Carnegie Mellon University.



  • Vault 7: WikiLeaks Dumps Massive Trove of Alleged CIA Hacking Tools

    by Chris Gonsalves | Mar 07, 2017

    Tuesday, March 7, 2017 By Chris Gonsalves, IANS Director of Technology Research


    Information security professionals on Tuesday were scrambling to make sense of a sizable dump of alleged CIA surveillance tools and techniques that appears to detail methods for hacking into networks, computers, smartphones and internet-connected consumer devices such as smart TVs.

    Notorious document leakers WikiLeaks released the trove of nearly 8,000 pages and 1,000 attachments, which includes supposed details of the CIA’s hacking arsenal such as malware, viruses, trojans, weaponized zero-day exploits and malware remote control systems. WikiLeaks officials said the Tuesday document dump, dubbed Vault 7, constitutes “the largest ever publication of confidential documents on the agency” and is just the first part of what it promises will be an ongoing release with “several hundred million lines of code” comprising “the entire hacking capacity of the CIA.”

    While security industry insiders work to verify the authenticity -- and relative importance -- of the documents and the code in the release, the claims raising the most concern include details of the CIA’s ability to work around encryption efforts in popular mobile messaging apps by capturing message data in compromised endpoints prior to encryption. The Vault 7 release indicates robust and focused efforts by the CIA to develop and hoard exploits targeting Apple's iPhone, Google's Android and Microsoft’s Windows operating systems.

    Perhaps most damaging, the release, which includes verifiable CIA code names and organization charts, exposes CIA playbooks showing how the agency conducts operations, evades detection and deploys malware.

    According to a statement from WikiLeaks accompanying the Vault 7 dump, the CIA over the past seven years has been developing an elite hacking division to compete with its main cyber-intelligence rival, the NSA. By last year, the CIA hacking unit, which falls under the Center for Cyber Intelligence, had more than 5,000 registered users and “had produced more than a thousand hacking systems, trojans, viruses and other weaponized malware.”

    The collection of CIA hacking tools and techniques, many of which appear fairly routine on first inspection, “appears to have been circulated among former U.S. government hackers and contractors in an unauthorized manner, one of whom has provided WikiLeaks with portions of the archive,” WikiLeaks officials said. Dates on the released documents range from 2013 to 2016.

    If ultimately proven authentic, the Vault 7 document collection will also be notable for revealing:

    • In addition to its operations in Langley, Va., the CIA also uses the U.S. consulate in Frankfurt, Germany, as a covert base for hackers working in Europe, the Middle East and Africa.

    • The CIA's arsenal includes numerous local and remote zero days obtained from GCHQ, NSA, FBI or purchased from cyber arms contractors such as Baitshop.

    • CIA hackers have developed successful attacks against most well known anti-virus programs and have targeted anti-exploitation defenses such as EMET. 

    “Honestly, this pretty much kills large aspects of the United States offensive capabilities, techniques, and ability to perform operations,” noted security expert and IANS Faculty Dave Kennedy said on Twitter in the wake of the Vault 7 release.

    “While I agree on having debate on what the government should be able to do and oversight in that, this is something different,” Kennedy said. “To me, this seems extremely aggressive for WikiLeaks, more than I’ve ever seen. This really hurts operations abroad for true hostiles.

    "Truth of the matter is if I was an adversary against the United States, this is exactly what I would do…Burn capabilities globally,” added Kennedy.

  • Poulin: Breaking Down RSA and the Past, Present and Future of Information Security

    by Daniel Maloof | Mar 06, 2017

    Monday, March 6, 2017 By Chris Poulin, IANS Faculty


    Whether you look forward to or dread the RSA conference each year, it's practically a must-attend event for everyone who’s anyone in information security.

    I’ll admit I’m a fickle attendee: I make my way across the U.S. about once every five years to check out the happenings, then disappointment sets in for a few years before time erases the memory and I start the cycle all over again. This year was a reset year, and it did not fail to confirm my general cynicism.

    Well, that’s not totally fair. There are definitely a few big reasons to go to the conference, and painting the entire experience with a sarcastic broad brush is unfair. These include, as you might expect:

    • Training and presentations: These can be hit or miss, but there are always some that give good background on a variety of topics, like IPv6 security. Still, as I get older and more impatient, I think I'd rather digest even the well-crafted talks in whitepaper form where I can control the pace.

    • Meetings with clients, prospects, partners and analysts: At this point in my career, these meetings in hotel suites and coffee shops around the Moscone Center offer me the most value. 

    • Networking with peers: This is related to the above, but it's about growing your network. This is another big reason I still attend RSA. 

    These factors alone can make attending RSA worthwhile, but then, of course, there's also the expo floor. I have a lot of thoughts on the expo floor at RSA, but to sum it up, I would say this: There are way too many vendors to digest in five days, so the best you can hope for is to get a general sense of the landscape and have a half-dozen meaningful conversations.

    Ultimately, I came away feeling a bit like Scrooge in “A Christmas Carol," but nonetheless, here are my three major takeaways from RSA Conference 2017:

    • The ghost of security past: Mobile security turned out to not be a thing after all. The days of vendors that only focused on mobile security are gone. The major players have been sucked into the gravity of larger institutional planets. Mobile security is now table stakes and relegated to MDM/MAM functions, with containerization, approved app stores and self-service installation. We’ve hit the wall with mobile security hype, and as I’ve predicted many times in the past, there have been no major mobile security breaches that have compromised enterprise data. Maybe that’s a result of early fear and rapid introduction of security tools; maybe the big one just hasn’t hit yet. In any case, there’s nothing to see here, people, move on.

    • The ghost of security present: Threat intelligence and analytics are still being talked about a ton. Everyone is trying to figure out a meaningful use for all that data we’ve been collecting over the last 20 years. Some of it comes from external sources, others from inside the enterprise. Vendors are sucking in data from thousands of paste sites, underground forums and the deep and dark webs. They’re scraping sites and looking for bad signs of compromise or illicit content with which to rate the badness, and they’re watching internet traffic to profile phishing email and spam. For data from inside the organization, the mantra seems to be, “The SIEM is dead; long live the next-generation SIEM!”

    • The ghost of security future: Hunting and deception are starting to capture the interest of more progressive organizations. The trick to hunting is getting the right information at the right time: a balancing act between storing every bit in case it’s needed later and identifying potential attacks in progress to take a snapshot in time. The former requires insane amounts of storage, while the latter requires triggers that need to be tuned to reduce false positives and eliminate false negatives. We’ve seen this before with tools like NetWitness (now owned, appropriately, by RSA) and…well, SIEMs. Deception is interesting in that it provides an early warning when attackers start rattling doorknobs during internal surveillance and lateral movement. Some can even work with IoT devices, such as industrial control and medical equipment.

    Of course, there are also the usual suspects, who’ve been around for years. Think AV, firewalls, and all that “defense-in-depth” stuff. They’re trying to reinvent themselves in light of new trends, but they’re also still stuck in the past with the old chestnuts. The whole experience felt a little like a traveling medicine show on steroids. For all of you millennials out there, you’ll have to Google that reference. Look up snake oil while you’re at it.

    So, finally, after a week of confusing my fitness wearable by clocking more than 15,000 steps per day (where presumably some machine learning algorithm is figuring out my exact date and time of demise), I’m left with the impression that we’re making progress in information security, despite my cantankerous nature.

    However, the positives are hidden in the white noise of marketing. It’s like finding the actual image in those Magic Eye pictures from the 1990s. What I see when I defocus my eyes and stare at the landscape is that we’ve moved from a purely defensive posture to detection, and now we’re starting to employ offensive tactics. Hunting and deception give us the ability to put attackers on the defense as long as they’re on our own turf. I’ll really be impressed, though, when predictive analytics actually predict a future attack. So, for that, I guess I’ll see you at RSA in another five years.

    ***

    Chris Poulin is Director of IoT Security and Threat Intel for Booz-Allen Hamilton's Strategic Initiatives Group, where he is responsible for building countermeasures for threats to the Internet of Things. He has a particular focus on connected vehicles, as well as researching and analyzing security trends in cybercrime, cyber warfare, corporate espionage, hacktivism, and emerging threats.

  • Shackleford: Learning the Right Lessons from AWS Outage

    by Daniel Maloof | Mar 02, 2017

    Thursday, March 2, 2017 By Dave Shackleford, IANS Faculty


    Tuesday, February 28, 2017, was an interesting day. For some (myself included) it was Mardi Gras, Fat Tuesday, a day of revelry and parades. For many not in New Orleans, it was an incredibly bad day in IT; Amazon Web Services experienced a massive outage in its Simple Storage Service (S3), which apparently runs a staggering portion of the Internet. Many sites were unreachable, content was unavailable, and in a true flash of irony, Amazon’s own availability dashboard was “frozen” because the icons signifying service status were hosted from…you got it, S3.

    Amazon posted periodic updates about status, and the outage seemed to ultimately last only half the day or so, but the damage was done. The security industry started speculating immediately about the root cause - was it malware? Was this the world’s most impactful ransomware hijack? Was it an advanced threat actor? Did Amazon come under fire from a DDoS? No one had any information, and Amazon didn’t offer any, so conspiracy theories flew left and right, and pundits crawled from the woodwork to argue that cloud is evil. Or cloud is great. Or something like that.

    Well, Amazon waited until Thursday to actually release the root cause, putting an end to much of the speculation. In a post titled “Summary of the Amazon S3 Service Disruption in the Northern Virginia (US-EAST-1) Region,” Amazon revealed the issue to have been…wait for it…HUMAN ERROR. An employee (who was authorized to make changes in S3) ran a command with incorrect arguments, accidentally removing a number of S3 “subsystems” that control all the indexing and operations related to S3, effectively taking the service out entirely within the US-EAST-1 region. The team then scrambled to reboot other subsystems, and these restarts took much longer than normal (the service had grown enormously since its last full restart, apparently several years earlier).

    Amazon says it’s learned some lessons from the experience, and it’s added safeguards to ensure something like this never happens again. Of course, this is what we expected the company to say, but there are some much bigger lessons to learn from this outage.

    Widespread Impact Was Preventable

    First, human error happens to EVERYONE. As a major proponent of “security as code” and security automation, I love the idea that we’re just one script away from blissful happiness in the world of IT, but I know better. There will ALWAYS be people involved, and people make mistakes. We try to anticipate them and manage them, but we also need to plan for failure and know how to remediate things when they do happen. This is a given, and I think most security professionals know this.

    The bigger problem, in my opinion, is that no one should have been affected by this at all (or minimally, if anything). Where was the backup? Where was the disaster recovery planning? Where was the redundancy and availability architecture?

    We’re falling into a trap, folks. Too many CIOs and even CISOs and security teams I know are settling into the “cloud is always available” mindset, which is just false. Do cloud providers have better uptime and track records of availability than most end-user organizations? Absolutely! Does that mean we can just throw away all the years of carefully building business continuity and disaster recovery strategies for all of our assets everywhere? No! But that’s exactly what happened here.

    Replicating AWS S3 across regions and availability zones is actually somewhat tedious, but if your business relies on the service, then you should build failover plans. Period. This isn’t a new revelation, either - it’s called sound architecture and engineering, and we should absolutely know how to do this by now.
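As a rough illustration of what that failover plan can look like in code, here is a boto3-style sketch of a cross-region replication rule. The bucket names, rule ID and IAM role ARN are placeholders, and you should verify the current API shape against AWS documentation before relying on it:

```python
# Sketch of cross-region S3 replication config (all names/ARNs are placeholders).
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [{
        "ID": "failover-to-us-west-2",
        "Status": "Enabled",
        "Prefix": "",  # replicate every object in the bucket
        "Destination": {"Bucket": "arn:aws:s3:::example-bucket-replica"},
    }],
}

# With credentials configured, this is where the rule would be applied:
# import boto3
# boto3.client("s3").put_bucket_replication(
#     Bucket="example-bucket",
#     ReplicationConfiguration=replication_config,
# )
print(replication_config["Rules"][0]["Status"])
```

Replication alone isn't a full DR strategy, of course; your application also needs to know how to read from the replica region when the primary goes dark.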

    I feel for any technology and/or security executives who are eating crow right now after advocating a move to the cloud. Those executives should immediately pursue a strategy similar to Netflix’s DR model in AWS, which has massive, distributed redundancy across numerous regions and also creates massive failure scenarios regularly for DR and continuity testing. Netflix wasn’t affected on Tuesday. Read about its tools for testing its environment (the so-called “Simian Army”) here if you haven’t heard of the approach before.

    So, what next? The long answer to this is that we all need to rethink our priorities in the cloud and emphasize scale, availability and REDUNDANCY (despite the costs) if we want to use cloud services. The short answer? Well, be like Netflix.
  • SHA-1 Has Been Broken: Now What?

    by Daniel Maloof | Feb 24, 2017

    Friday, February 24, 2017 By Dave Shackleford, IANS Faculty


    We all knew this day was coming, but that doesn't make it any better. Researchers from Google and the CWI Institute in Amsterdam have announced the first documented SHA-1 collision. Basically, this means that two entirely different files can generate the same hash value due to a flaw in the mathematical algorithm in use.

    But what does this actually mean in practice? Well, in a nutshell, any use of SHA-1 for verifying trusted content goes out the window, essentially. An attacker could substitute a malicious file with the same hash for a trusted file and no one would know, based on the output of a SHA-1 operation.

    This has already happened with the MD5 hash algorithm, and many at the time moved to SHA-1 to replace it. And despite the fact that we’ve known about attacks against it since 2005 (plus NIST officially deprecating it in 2011), SHA-1 is still one of the most commonly used hashing algorithms in the world for verifying website certificates, validating files and content, and performing any number of other operations involving integrity validation. With the SHA-1 algorithm now in serious doubt, an enormous number of sites and services will need to phase out its use to prevent a serious degradation in trust.

    Fortunately, while we now know SHA-1 is broken, it’s not exactly easy to compromise it today. The Google and CWI researchers documented the level of effort needed to produce a viable collision, and it’s pretty significant. In fact, the CWI researchers really needed Google’s help with the project, in part because of its massive computing power. The following numbers (taken from the original blog post announcing the collision) should provide some idea of how intensive this research was:

    • Nine quintillion (9,223,372,036,854,775,808) SHA-1 computations in total
    • 6,500 years of CPU computation to complete the attack’s first phase
    • 110 years of GPU computation to complete the second phase

    Using their custom attack model (dubbed “SHAttered”), the researchers were able to perform the computation in about a year using 110 GPUs and massive parallel processing, meaning this attack is really only viable today for nation-states or companies like Google that have that sort of power available to them.

    Steps to Take

    For organizations still relying on the SHA-1 hash function, it’s important to start taking the necessary steps right away to avoid problems down the line. Of course, start by moving to a better algorithm like SHA-256 or SHA-3 now. Most major platforms support these today and switching should not be an issue for modern systems and applications.
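In most codebases, the switch itself is trivial; the hard part is finding every place SHA-1 is used. A minimal sketch with Python's standard `hashlib` (the file contents here are illustrative):

```python
import hashlib

# Migrating an integrity check from SHA-1 to SHA-256 is a one-line change
# with hashlib; the payload below stands in for real file contents.
data = b"firmware-image-contents"

weak = hashlib.sha1(data).hexdigest()      # broken: collisions now demonstrated
strong = hashlib.sha256(data).hexdigest()  # current recommended minimum

print(len(weak), len(strong))  # 40 64 (hex digest lengths)
```

The digest length difference (40 vs. 64 hex characters) is also a cheap way to grep stored hashes and spot which systems are still producing SHA-1 values.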

    For legacy environments, this could be a bigger problem, but security teams need to at least begin documenting which systems aren’t compatible with stronger hash algorithms and planning for migration down the road if possible (or ideally, replacing those systems altogether).

    Next, start putting the pressure on your vendors and service providers to stop using SHA-1 in their technology, and ask for public validation that this has been done. Google has proactively protected all of its services and users from SHA-1 by adding collision detection techniques and removing SHA-1 from its algorithms in use. Google Chrome removed support for SHA-1 certificates in January (Mozilla’s Firefox had previously announced plans to remove it early this year).

    Finally, the research team behind the SHA-1 collision also released an informational website (https://shattered.io/) that describes the attack in more detail and includes a testing engine for files to see if they are susceptible to a collision attack.
  • Podcast: Larry Walsh on Making Good MSSP Choices and Avoiding Vendor FUD in Pursuit of Better Security

    by Chris Gonsalves | Feb 23, 2017

    Thursday, February 23, 2017 By Chris Gonsalves, IANS Director of Technology Research


    Well-known IT security and services expert Lawrence Walsh, founder and CEO of The 2112 Group, joins me this week to share his deep insights for vetting and working with managed security services providers (MSSPs) in a variety of settings. Larry and I also share a wide-ranging discussion of infosec industry trends, hits and misses from the recent RSA Conference, and the impact of the Trump administration on the tech sector. 

  • IoT at RSA: A New Focus on Old Problems

    by Daniel Maloof | Feb 21, 2017

    Tuesday, February 21, 2017 By Kevin Beaver, IANS Faculty


    Well, another RSA Conference has come and gone. I attended this year’s show and, as expected, saw and heard a lot of the same stuff that we've been hearing over the past several years. The threat landscape is evolving. The cloud is still a big topic, especially if you’re a security vendor rebranding and pushing your product/service to be cloud-friendly. The “legal” and “career” tracks at the conference helped point security professionals in the right direction (and I actually think there’s a lot of real value in this).

    And finally, "artificial intelligence" stood out to me as the new security term for this year. It didn’t quite overshadow the term "cybersecurity" (which unfortunately seems to be ingrained into the vocabularies of all but us veteran security practitioners), but there certainly seems to be a lot of pressure being put on artificial intelligence to solve all of our security problems for the foreseeable future. We'll see how that goes.

    Ultimately, though, one thing that did stand out to me in a positive way is all of the focus being put on IoT security. There’s no doubt IoT is that next wave of systems that we are going to be responsible for locking down, not unlike wireless networks and mobile devices in recent years. The devices are small. Their software can be unfamiliar. Heck, sometimes we don't even know the devices exist or what type of risks they’re creating!

    But here's the thing about IoT: Just like wireless, mobile and even the cloud, IoT threats and vulnerabilities waiting to be exploited are really nothing new. Sure, the threat vectors and attack mediums may be a little different than what we're used to seeing, especially when IoT devices are creating business risks from afar (i.e., employees' home networks and vendor-related systems). But at the end of the day, it’s still about the basic security flaws that exist in IoT (a number of which I heard talked about at RSA), which include:

    • Weak passwords
    • Missing software updates
    • Unencrypted or poorly configured communication protocols
    • Unsecured storage
    • Unmonitored systems
    • Systems that do not fall within the scope of penetration tests and vulnerability assessments
    • Device manufacturers that don't understand security
    • IT shops that can’t find the time to manage IoT security
    • Poorly implemented fixes and improperly managed devices
    • Security policies that are unknown, don't address IoT or, worst of all, are unenforced
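    The first item on that list, weak (often factory-default) passwords, is also the easiest to test for in-house. Here is a minimal sketch of a default-credential sweep; the credential list and the `try_login` callback are hypothetical stand-ins for whatever login mechanism (HTTP, Telnet, SSH) a real device exposes, and any such check should only ever be run against devices you are authorized to assess.

```python
# Hypothetical vendor-default credential pairs to sweep for.
DEFAULT_CREDS = [("admin", "admin"), ("admin", "password"), ("root", "root")]

def find_default_creds(try_login, host):
    """Return the first default credential pair the device accepts, else None.

    try_login(host, user, password) is a caller-supplied function that
    attempts a real login and returns True on success.
    """
    for user, pwd in DEFAULT_CREDS:
        if try_login(host, user, pwd):
            return (user, pwd)
    return None

# Simulated device that never had its factory password changed:
vulnerable = lambda host, u, p: (u, p) == ("admin", "admin")
print(find_default_creds(vulnerable, "192.0.2.10"))  # ('admin', 'admin')
```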

    I could go on and on, but you get my point. The bottom line is that vulnerabilities affecting IoT devices – as well as the fixes necessary to get things under control – are nothing new. In fact, most organizations already have one or more programs, processes or controls in place to manage all of this. It's just a matter of bringing IoT devices into the scope of security oversight and, of course, addressing the basic security flaws present across your network. Unless and until that happens, none of the newfangled, IoT-centric security technologies at RSA and elsewhere will be helpful to you.

    I believe IoT adds a whole new layer of complexity and risk to any given business network. But be careful chasing down new tools, technologies and processes. Everything you need to get IoT under control is right before your eyes.

    The Value of RSA

    Getting back to the show, I know it sort of sounds like I'm trying to talk myself and others out of attending future RSA conferences. That's certainly not my intent. The learning opportunities, networking and camaraderie alone (not to mention great food and drink) make it a worthwhile visit, in my opinion. If you have not attended the RSA conference before, you need to. Put it in your budget for next year. They have a very reasonable, low-cost offering to get you in and see not only the vendors’ presentations, but many of the general sessions as well.

    Whether you lead an entire information security program, serve as an independent information security consultant or are simply interested in learning more about the field, you should check out the RSA Conference. Just don't forget about locking down your ever-growing IoT environment in the meantime so you’ll be in a better position next year when the RSA Conference distracts us with something new and shiny.
  • CrowdStrike, NSS Dust-up Erodes Trust in Product Testing

    by Daniel Maloof | Feb 17, 2017

    Friday, February 17, 2017 By Daniel Maloof, IANS Managing Editor


    With RSA Conference 2017 wrapping up this week, there’s plenty to talk about in the realm of security technology and innovation. But one story that may not be going away anytime soon and could have wide-ranging implications in the security product testing space is the ongoing feud between next-gen endpoint security firm CrowdStrike and NSS Labs, a security product research firm.

    At the heart of the dispute between the two companies are the findings in the most recent NSS research report on Advanced Endpoint Protection (AEP) products, which were released at RSA earlier this week. The report, which rates the effectiveness of endpoint security products from 13 different vendors using a range of criteria, assigned ratings of “Recommended,” “Security Recommended,” “Neutral” or “Caution.” In the report, CrowdStrike’s Falcon Host next-generation endpoint protection product was one of two products that fell under “Caution,” which is the lowest rating.

    The other products NSS tested included:

    Prior to the report’s release this week, CrowdStrike filed a lawsuit last week in federal court in Delaware seeking a temporary restraining order to prevent NSS Labs from releasing the results at the RSA Conference. In the suit, CrowdStrike claimed that NSS’ testing of its Falcon product was incomplete and that NSS had obtained the software illegally after CrowdStrike attempted to halt NSS’ testing over methodology concerns. The push for a temporary restraining order ultimately failed (though the lawsuit is continuing), with the court ruling that the public release of the results would not cause irreparable damage to CrowdStrike.

    Since the court ruling, NSS Labs has, of course, defended its methodology and reaffirmed its mission to “arm the public with fact based and objective information required to get secure and stay secure.” CrowdStrike president and CEO George Kurtz, however, told Dark Reading that the testing methodology was flawed and that the lawsuit was not simply an attempt to block negative press.

    “This is not about trying to silence independent research,” Kurtz explained. “We welcome open, fair, transparent and competent testing. We didn’t necessarily see it here. This isn’t the Consumer Reports of cybersecurity. It’s bad tests, bad data and bad results.”

    In particular, Kurtz noted that during initial testing, NSS Labs was flagging well-known, legitimate software such as Adobe and Skype as malicious. This, Kurtz said, along with other red flags during the initial testing, led CrowdStrike to decline to participate in the public testing. But NSS continued to test, allegedly using access to the software that it had illegally obtained from a reseller.

    IANS Faculty Dave Kennedy, president and CEO of the infosec consulting firm TrustedSec, says that this is an important point, and it even demonstrates why testing such as that from NSS Labs may be inherently flawed.

    “At first glance, it appeared CrowdStrike was attempting to suppress negative findings, but based on the latest details I’ve seen, I actually understand where they’re coming from,” Kennedy said. “Most of the other vendors were allowed to configure and tune their own products to appear more effective than they actually are. CrowdStrike, on the other hand, apparently was not able to configure its own solution, its protection capabilities appear to have been disabled and NSS used third-party [access to facilitate] the testing.”

    “Further, as a pen tester, I find these results highly alarming from an accuracy perspective,” Kennedy added. “Many of the solutions that received extremely high ratings would have to be configured in a way that’s unusable for the enterprise to be anywhere near as effective as the testing showed. To me, this report is more about gaming the system than it is actual capabilities and effectiveness. Based on the 100 percent rating Carbon Black received from NSS Labs, for instance, I would assume the product was simply configured to block everything and only allow ‘known good.’ In other words, it blocked everything except what was needed in order to make it test effectively.”

    Because of these apparent deep flaws in the testing methodology, Kennedy said he recommends security teams dig a little deeper in their research into all of the endpoint vendors evaluated and “not simply rely on the NSS report as a determining factor on endpoint protection.”

    As for the overall state of new, “next-generation” security technology itself, Kennedy emphasized the importance of taking a more measured, realistic view, particularly at conferences like RSA, where seemingly every product is hailed as the “next big thing” in the industry. 

    “Every year at RSA we hear about the latest technology and new math-driven artificial intelligence solutions that are being released,” he said. “To date, we haven’t seen a major improvement in the effectiveness of these products as ‘game changers’ and those that offer a lot of promise are still very much in the infancy stages.”
  • Van Wyk: Market Share the Key to Mac OS vs. Windows Security Debate

    by Daniel Maloof | Feb 13, 2017

    Monday, February 13, 2017 By Ken Van Wyk, IANS Faculty


    I recently learned of some new Mac-based malware when a friend posted an analysis on Facebook. My reaction as a Mac user? “Yawn.” Why? Glad you asked…

    Every few months, there’s an urgent warning about some new Mac malware, but it always seems to fizzle away into nothing, or darn near nothing. Often, as in this most recent case, the malware triggers a user dialog that requires the victim to accept the malware. In this case, the malicious software was written as a Word macro, and Word diligently warned the user about the macro and ran it if, and only if, the user consented.

    Now, don’t get me wrong. Many users are absolutely gullible enough to fall for a dialog box. And I’m definitely not saying that Macs are inherently immune to malware. Both Windows and MacOS have seen malware that can propagate without user intervention.

    But why, then, is MacOS – along with its distant cousin Linux – seemingly less susceptible overall to malware infestation than Windows?

    Understanding the Marketplace

    Ever notice when flocks of geese fly in a “V” formation, one side of the “V” is longer than the other? Why is that? The answer to the joke is that there are fewer geese on the shorter side, of course. So, why are Mac and Linux machines less plagued by malware? Simply put, it’s about market share, and there aren’t as many people creating malware targeting these machines.

    Windows still owns a far bigger market share than MacOS and, certainly, Linux. Generally speaking, you can also purchase a Windows computer for a lot less than a comparable Mac. If you’re deciding to write malware, your cost and barrier to entry are lower on a Windows system, as a general rule. And yes, over the past five to 10 years, Macs have seen their market share slowly increase, but they’re still just not quite there. As ubiquitous as Macs seem to be, their market share is still dwarfed by Windows.

    Many of us in the security world have feared we might start seeing more Mac-specific malware as the market share rose, but that just hasn’t significantly materialized to this point. Perhaps it will change, but with market numbers like the above, I don’t think it will any time soon.

    Now, that doesn’t mean we should smugly sit back and not be concerned either. That would be downright foolish. Mac malware does exist, and targeted attacks do happen. If an attacker chooses to target an enterprise that is predominantly Mac-based, those market share numbers go right out the window.

    So, what can we do? Well, there are a few things:

    • Lock down and manage our security configurations on our Macs as though the malware threat were real.

    • Use the principle of least privilege by not giving every user administrative capabilities.

    • Get endpoint protection for our Mac users in addition to our Windows users.

    • Run software updates frequently. MacOS includes a behind-the-scenes malware detection and prevention tool that is updated daily.

    With a bit of luck, our Mac world may never get as bad as it is for Windows users. Let’s try to keep it that way.


    ***

    Ken Van Wyk is president and principal consultant at KRvW Associates and an internationally recognized information security expert, author and speaker. He’s been an infosec practitioner in commercial, academic, and military organizations and was one of the founders of the Computer Emergency Response Team (CERT) at Carnegie Mellon University.
