Event | Apr/16/2026
The Model Context Protocol (MCP), an open standard defining how AI assistants connect to external data sources and tools, is becoming vital infrastructure for generative and agentic AI applications. As organizations rapidly adopt these capabilities while managing risks, this symposium helps security leaders understand MCP's architecture, why vendors are quickly developing MCP servers, and the security risks of connecting AI systems to enterprise resources. We also explore how AI Security Posture Management (AI-SPM) tools are evolving to address these challenges and provide practical frameworks for managing MCP-enabled AI deployments.
Faculty

Mike is a seasoned engineering leader with over 20 years of experience in cybersecurity, software engineering, and cloud-native architecture. He currently leads Security Engineering at Netlify, having previously built and led the Security & Compliance program at the high-growth startup Gatsby through its acquisition by Netlify.

An active security researcher with multiple CVEs, Mike has presented research at security conferences and developed a cybersecurity curriculum for the University of Pittsburgh covering penetration testing and red team tactics. Earlier in his career, Mike founded three tech companies and spent over a decade advising clients on software and cybersecurity initiatives.

Tools & Guides | Feb/2/2026

Building efficient key and secret management processes and testing the rapid rekeying of your cryptographic estate will become increasingly important as recursive AI is directed at reverse engineering keys and secrets. This report explains how to prepare for recursive AI cryptographic attacks.

Tools & Guides | Feb/2/2026

This checklist is designed to help security teams securely pilot and deploy specific Microsoft 365 (M365) Copilot features, such as Intelligent Search, Chat, Bing, SharePoint, and Teams.

Tools & Guides | Feb/2/2026

Just as hackers are using AI to level up their attacks, security teams can use AI to bolster their own capabilities. Common use cases for security teams include threat detection and response, code optimization, and vulnerability management.

Tools & Guides | Feb/2/2026

This report explores five critical risk areas: over-reliance on AI, AI bias, hiring misrepresentation through AI, combinatorial risks from agentic AI systems, and AI tool sprawl.

Tools & Guides | Feb/2/2026

This document provides a detailed set of prompts to evaluate the security and data privacy guardrails of AI systems.

Blog | Feb/2/2026
How retail security leaders in consumer-facing organizations can adjust their posture, priorities, and operating model for the year ahead.
Event | Apr/9/2026
Whether concerned with the increased risk of data exfiltration via AI or the growing infiltration of imposter North Korean remote workers, organizations are looking for ways to strengthen protection against insider threats. This symposium provides specific, actionable recommendations, whether you’re just standing up a program or looking to mature and modernize one. We’ll share strategies to improve your monitoring across legacy applications and M365, Azure, AWS, and GCP environments, and recommend processes for cross-functional collaboration to identify key applications and data, establish baselines for day-to-day activity, detect anomalies, and respond to risks.
Blog | Jan/29/2026
Learn how incident response tabletop exercises help organizations uncover gaps, improve readiness, and prepare teams for real-life cyber incidents.