Article Samples

  • From Tool Sprawl to True Clarity: Why Continuous Threat Exposure Management is the Next Big Shift

    Publication Date: October 1, 2025

    A deep dive into why the shift from reactive tool-centric security to a continuous, risk-based exposure management approach is becoming the next strategic imperative for organizations.

    Executive Summary

    Over the last decade, most security programs grew the same way: a new threat appears, a shiny tool promises to neutralize it, and another box lands in the rack (or another agent on an endpoint). The result is a sprawling stack with overlapping features, fragmented telemetry, and uncorrelated alerts (more motion than progress). Multiple independent surveys now show the human impact of this sprawl: 76 % of CISOs report being overwhelmed by alert volume from a growing number of tools across an expanding attack surface. That same research ties fragmentation to visibility gaps that make it harder to identify and contain breaches. It’s not just burnout; it’s business risk. When your stack can’t reliably answer “what matters most, right now, to this business?” you can’t make defensible, board-level decisions. Especially now that public companies are required to disclose material cybersecurity incidents within four business days under new SEC rules. This article argues that the answer lies in adopting a continuous threat exposure management (CTEM) model, moving beyond episodic scans and alerts to a repeatable, business-driven cycle of discovery, prioritization, validation and mobilization.

    Key Section Previews

    What is CTEM and Why Is It Getting Spotlighted?

    Continuous Threat Exposure Management (CTEM) is a program model introduced and popularized by Gartner. Instead of funding yet another reactive point solution, CTEM shifts the practice to an ongoing, risk-driven cycle that discovers exposures, prioritizes what’s material, validates that fixes actually reduce risk, and mobilizes the organization. Industry explanations and analyst digs into the model echo those five phases and emphasise that CTEM is not “just another tool”. It is a way to operationalize exposure reduction that maps directly to business value. It’s gaining explicit visibility in analyst research, reinforcing that this is a category shift, not a niche technique. Why is CTEM rising now? Because it squarely addresses the three realities most CISOs face: (1) The attack surface is dynamic and distributed — cloud, SaaS, remote work, shadow IT, third-parties, OT/IoT, and AI systems change faster than traditional point-in-time assessments

    Reactive Tooling vs. Proactive Exposure Management

    Most “threat management” remains episodic: an alert fires, a ticket opens, teams investigate, and the cycle resets. Gartner warns that this incident-centric mindset rarely reduces future exposure. CTEM flips the order of operations by continuously seeking out exposures such as misconfigurations, weak controls, exploitable paths, not just known CVEs, and then ranking them by likely business impact. Concretely, this means: from point-in-time to continuous; from flat lists to risk-based prioritisation; from assumed fixes to validated outcomes. This is why CTEM resonates with leaders. It shifts organizations away from an endless cycle of reacting to alerts and instead establishes a programme that measurably reduces the attack surface and explains that reduction in clear business terms.

    Translating Technical Risk into Business Terms

    Boards don’t need a crash-course in CVE scoring. They need to understand materiality and trajectory. Use CTEM outputs to frame cyber like any other enterprise risk: Exposure → Impact mapping. Tie each high-priority exposure to potential business impact such as revenue at risk from downtime, regulatory penalties from data exposure, safety outcomes in OT, or strategic risk from IP loss. Use actionable risk KPIs: replace “patches applied” with metrics that explain outcomes such as time to validate exposure closure, percentage of critical business processes with validated control coverage, reduction in attack paths to critical assets, and trend in externally exposed critical misconfigurations. Scenario-based narratives help walk through a plausible chain (e.g., vendor credential misuse → cloud privilege escalation → data exfiltration) and show how CTEM reduced that pathway last quarter. Be explicit about residual risk and exceptions, aligning it to enterprise risk appetite.

    Conclusion

    Boards don’t fund “more tools”. They fund reduced, validated exposure that protects revenue, customers, safety, and strategy. CTEM gives CISOs a repeatable way to show that reduction and to do it across IT, OT, third-party, and AI domains with governance that stands up to scrutiny. Regulators will continue to ask for timely, consistent disclosure of material incidents and for evidence that cyber risk management is embedded in strategy and oversight. Programs that adopt CTEM are better positioned to answer both the “what happened?” and the “what did you do about it?” in the language of enterprise risk.

    [View Full Article]

    Turning Security from a Policy into a Practice in Healthcare

    Publication Date: September 26, 2025

    An exploration of how healthcare organizations must shift cybersecurity from penned policy to embedded practice in care delivery environments.

    Executive Summary

    Healthcare workers have always focused on one mission: providing care. Policies have supported that mission by setting standards and guiding safe practices including in cybersecurity. However, the reality has shifted. With attackers now directly targeting hospitals and health systems, staff are forced to balance caring for patients with defending against cyber threats, an unfair but unavoidable burden.
    That’s because today’s threats aren’t limited to stealing data, though that remains a huge risk. Healthcare data is among the most valuable commodities on the dark web, and when it’s stolen the privacy impact is widespread. Healthcare facilities’ reputations are affected. Healthcare services are impacted and slowed.
    Imagine being told that your Protected Health Information/medical record is stolen. All of the details that you expected to remain confidential between you and your practitioner have been exposed. At the same time, attackers are escalating from data theft to disrupting services, shutting down care, and putting patients directly at risk. In healthcare, cybersecurity is not just an IT issue. It’s a patient safety issue, an operational continuity issue, and a trust issue.

    Key Section Previews

    When Cyberattacks Hit Care, Not Just Data

    For years, healthcare security conversations focused on data breaches such as stolen health records sold on the dark web. That risk hasn’t gone away. What has changed is that attackers are no longer stopping at data theft. Increasingly, they are also disrupting operations: locking systems, delaying care, and eroding public trust. Ransomware is the top concern. And it’s no longer opportunistic, Knight explained that it’s professionalized through ransomware-as-a-service (RaaS). Attacks are more targeted, more precise, and far more damaging. The SickKids Hospital attack in 2022 showed how quickly patient care can be disrupted when critical systems are locked.

    Visibility First, Then Vulnerability Management

    “You can’t protect what you don’t know exists.” That was Knight’s way of underlining the first step in defending healthcare environments: asset visibility. Hospitals are full of devices, some modern, some decades old, and many are mission-critical. Without an accurate inventory, vulnerabilities go unseen. Once visibility is in place, the focus turns to risk-based prioritization: patching the right things first, guided by real-world threat intelligence. In IT environments, automation helps. In OT environments, patching has to be managed carefully to avoid disruptions.

    Culture Is as Important as Technology

    Security awareness isn’t about endless training slides or technical jargon. It’s about making security relevant to daily work. Knight described a simple approach: use real-world examples, connect security to what people care about, and build trust. Just as important is to create a no-blame environment. If someone clicks on a phishing link, they should feel safe reporting it. Shaming slows down response. When security becomes part of the culture, everyone contributes. And in healthcare, that collective vigilance makes all the difference.

    Conclusion

    This conversation with information-security expert Trecia Knight reinforced a simple but powerful truth: cybersecurity in healthcare is not only about policy. It’s about practice. Policies set the rules, but practice brings them to life. Together, they build resilience, strengthen culture, and keep patients safe. When both work hand in hand, every person has a role in protecting care. The threats are growing from ransomware, AI-driven attacks, deepfakes, supply-chain risks. But healthcare organisations don’t have to be passive targets. With visibility, collaboration, automation, resilience, and culture, security can move from being an abstract compliance requirement to being part of the fabric of care.

    [View Full Article]

    Third-Party Risk: Understanding Compliance as Liability You Can Measure

    Publication Date: October 21, 2025

    A detailed look at how organizations should shift from traditional vendor-compliance checklists to quantifying third-party cyber risk as measurable business liability.

    Executive Summary

    Supply-chain attacks remain the #1 breach vector in the public imagination for a reason: when you outsource work, you also outsource exposure. Yet many organizations still treat third-party risk like a checkbox exercise: collect a questionnaire, file the score, renew next year. This post reframes third-party risk as measurable liability with financial, operational, and reputational consequences you can (and should) quantify. Below, we unpack what high-profile vendor breaches reveal, why traditional assessment methods fall short, how to adopt continuous vendor risk monitoring and liability quantification, how to translate vendor risk to board-level metrics (dollars, downtime, fines), and which contract and cyber-insurance moves strengthen your position.

    Key Section Previews

    What SolarWinds and MOVEit really revealed

    Two of the most consequential supply chain events of the past few years, SolarWinds and MOVEit, share a through-line: the true blast radius isn’t visible from a vendor’s self-attestation. It shows up later as disclosure obligations, litigation, forensics spend, and regulatory scrutiny.

    Zooming out, macro studies reinforce the vendor theme. Verizon’s 2025 DBIR analysed 22,052 incidents and 12,195 breaches; commentary on the 2025 edition highlights third-party involvement doubling to around 30 % of breaches, with vulnerability exploitation surging; exactly the pattern seen in MOVEit-style events.

    Why Questionnaires and Static Scores Fall Short

    Traditional vendor risk inputs, SIG questionnaires, ISO mappings, SOC reports and external ratings are still useful, but they’re point-in-time and often self-reported. The limitations are well-documented: bias and staleness. Self-assessments depend on respondent honesty and may be outdated by the time you review them. Operational velocity: modern attacks move faster than annual recertifications. Risk posture can change weekly with new vulnerabilities, misconfigurations, or staffing churn.

    Tie Vendor Risk Directly to Board-Level Metrics

    Boards care about dollars, downtime, and disclosure, not checkbox completion rates. Frameworks emphasise quantification and actionable dashboards.
    Here’s a pragmatic mapping:

    • Dollars (Expected Loss / Value at Risk): Use the FAIR model to express the annualized loss expectancy (ALE) for your top ten vendors by critical business service. Summarize with tornado charts (largest contributors to probable loss), and show how specific remediations or contract clauses reduce the distribution’s tail.
    • Downtime (Service-level impact): Tie vendor incidents to RTO/RPO commitments. Track mean time to recover from third-party disruptions and show the cost of missed SLAs (lost revenue, overtime, penalties). 

    Conclusion

    Third-party risk isn’t about collecting stacks of PDFs. It’s about translating vendor exposure into business outcomes you can manage:

    • Financially: What is our expected loss and tail risk from Vendor X? Which control or clause reduces it the most per dollar?
    • Operationally: How does Vendor Y’s outage map to our RTO/RPO and revenue at risk?
    • Legally/Reputationally: If Vendor Z loses EU personal data, what is our realistic fine range and required disclosure cadence?

    If your answers rely on “we have a completed questionnaire on file,” you’re carrying unpriced liability. If your answers include FAIR charts, SBOM coverage stats, continuous-monitoring deltas, and contract levers, you’re managing a measurable liability and that’s where boards, regulators, and insurers are steering the ecosystem.

    [View Full Article]

    AI in the Crosshairs: Understanding and Securing Your Organization’s AI Models

    Publication Date: October 7, 2025

     A strategic exploration of how organizations must secure AI model lifecycles, from data and training to deployment and supply-chain risks as adversaries increasingly target AI systems.

    Executive Summary

    Enterprises are racing to ship AI assistants, code-copilots, and decision-support tools. But the very properties that make modern AI powerful: its probabilistic outputs, reliance on vast data, and dependence on external content, can also open doors to new classes of attacks. Traditional security controls were built for deterministic software with fixed inputs and predictable outputs. AI systems behave differently, and adversaries are learning to exploit the gaps. Frameworks like National Institute of Standards and Technology’s AI Risk Management Framework (AI RMF) and its 2024 Generative AI profile explicitly call out the need to adapt risk practices to AI’s unique failure modes, from data poisoning to model misuse. In parallel, security researchers and governments have catalogued adversarial tactics specifically for AI. MITRE’s ATLAS knowledge base, for example, maps techniques for attacking AI systems (from training-data manipulation to model exfiltration) much like MITRE ATT&CK did for enterprise networks. That’s a strong signal: AI now has its own threat landscape and it’s evolving quickly.

    Key Section Previews

    The New Threats in the AI Stack

    Enterprises are no longer just defending infrastructure and endpoints. They’re defending models, data, and pipelines. Consider prompt injection and indirect prompt injection are realities now. Malicious actors craft content (including hidden text in web pages, emails or documents) that causes an AI system to ignore its original instructions and follow unintended instructions. For example, the UK’s National Cyber Security Centre (NCSC) has warned that prompt-injection can echo the impact of classic input attacks (“the SQL injection of the future”). Model and data poisoning: tainted examples in training or fine-tuning data can cause a model to behave normally in most cases, then misbehave when triggered. For instance, a University of Chicago research project “Nightshade” showed that even a relatively small number of poisoned samples can degrade text-to-image models and bleed into related concepts. AI supply-chain malware: In 2024, researchers discovered back-doored and malicious AI models uploaded to public repositories. Because many frameworks load models via deserialization (e.g., Python pickle), simply loading an untrusted model can lead to remote execution. This is a new kind of software-supply-chain risk: the “dependency” isn’t a library, it’s the model file itself. Existing scanners miss many AI risks because they’re built for classic code paths, not emergent behaviours. AI systems often have non-deterministic outputs, opaque supply chains, and training or inference artifacts that defy standard test oracles.

    Governance, Compliance and the AI Lifecycle

    Regulators and standards bodies now expect AI-specific risk management, not just generic cybersecurity hygiene. For example, the European Union AI Act entered into force on 1 August 2024, with risk-based obligations phasing in over the next few years (for general-purpose AI, high-risk AI, etc.). In the U.S., the United States Securities and Exchange Commission (SEC) is already taking action against firms for misleading “AI-washing” claims, and together with the 2023 cybersecurity rules that require public companies to disclose material cybersecurity incidents on Form 8-K, it means AI incidents that are material could fall under the required disclosures. Emerging standards include ISO/IEC 42001 (Dec 2023) for AI management systems, and NIST’s AI RMF + generative AI profile (July 2024) plus other community profiles. Taking a lifecycle stance means mapping risk from training-data provenance, model-development controls, testing/validation, deployment/inference controls, through to model deprecation and supply-chain traceability. It also means constructing AI Bill of Materials (AI BOM) for transparency.

    Practitioner Playbook: From Visibility to Resilience

    Start with visibility: do you know all your AI assets (models, data sets, vector-stores, inference endpoints, chained agents)? Without this, you can’t secure what you don’t know. Preventing asset-drift, shadow-AI (unauthorized AI agents and services) and uncontrolled third-party models is critical. Then shift to controls: apply least-privilege, network segmentation for AI infrastructure, API monitoring, anomaly detection adapted for non-deterministic models. For adversarial risk you need adversarial testing and red-teaming (not just SAST/DAST). Mid-cycle you need runtime monitoring: log prompts/inputs/outputs, monitor for model-drift, track when someone tries to override controls. Don’t rely on anomaly detection alone—AI’s inherent variability means you can’t assume “good model behaviour” will always look the same. Finally, build resilience: assume the model or its supply-chain may be compromised. Plan for rapid rollback, isolation of compromised endpoints, model-weights integrity checking and incident-response tailored for AI systems. Example: even if you validate control coverage on yesterday’s model, the next model version or fine-tune run may introduce new threats unless you embed controls in the lifecycle.

    Conclusion

    AI isn’t just another capability: it’s now a strategic attack surface. Boards and executive leadership need to treat AI models, pipelines, and data as comparable to critical infrastructure and to demand validated exposure-reduction, not just tool checklists. When you adopt a lifecycle mindset, build governance, visibility, controls and resilience together, you position your organization to innovate confidently and stay ahead of attackers. The question isn’t “Can we do AI?”. Rather it’s “Can we do AI securely, responsibly and with measurable assurance?”

    [View Full Article]

    Canada’s Cyber Advantage: Trust, Sharing, and the Road to Resilience

    Publication Date: July 18, 2025

     A discussion of how Canada can build national cyber resilience through trust-based threat-sharing networks and proactive collaboration across government, industry and all business sizes.

    Executive Summary

    The digital infrastructure that powers our daily lives: from banking to healthcare to local government, is under constant pressure from an increasingly complex and coordinated web of cyber threats. These risks no longer concern only major corporations or national agencies; they affect every organization, big or small. From hospitals and municipalities to small-town retailers and construction firms, all organizations are now potential targets. But what if, instead of merely reacting after a breach, we focused on sharing the right information early enough to prevent one? This was the theme that emerged in my recent conversation with Canadian Cyber Threat Exchange (CCTX) Strategic Advisor Robert (Bob) Gordon. Drawing on decades of experience across government and industry, Gordon supports a trust-based approach to cyber-threat sharing — one that focuses on enabling prevention and resilience across all sectors and business sizes.
    In this article, we explore why a made-in-Canada threat-sharing network is not just desirable, but essential. We look at how such a network must go beyond traditional post-incident reporting to foster a culture of proactively exchanging insights, signals and experiences. Because if the weakest link in one sector falls, the effects can cascade across supply chains, local services and national infrastructure.

    Key Section Previews

    Why Today’s Threats Demand a New Kind of Collaboration

    Bob Gordon begins by outlining the shifting landscape of cyber threats. “Cyber attackers are no longer lone individuals,” Gordon notes. “They’re organized groups, often operating like businesses themselves.”
    These groups, whether state-sponsored or financially motivated, are now leveraging sophisticated tools, including AI, to launch increasingly realistic and targeted attacks. Simultaneously, attackers are leveraging AI and automation to scale their operations. Phishing emails have become more sophisticated, blending seamlessly into the daily flow of workplace communication. “All companies are now vulnerable,” Gordon explains, “It’s not just about protecting secrets; it’s about preserving access to your own data, your operations, your business continuity.”
    And it’s not just the threat-actors who have evolved. Technology has made it easier for attackers to weaponise everyday tools. AI can now generate near-perfect phishing emails. Automation enables rapid scaling of attacks. Social engineering is more convincing than ever. “Gone are the days when a phishing email was riddled with grammatical errors. Now it looks like an internal memo,” Gordon warns.

    Rethinking Threat Sharing: From Reaction to Prevention

    Traditional models of threat-sharing have largely focused on what happens after a breach. While post-incident reporting plays a role, its utility is limited to retrospective analysis. In contrast, a prevention-first approach aims to stop incidents before they start or at least reduce their impact.
    This prevention-focused model includes early-warning signals and threat indicators, exchanging lessons learned from internal implementations, and collaborating on awareness strategies for employees and customers. The goal isn’t just to contain damage. The ultimate goal is to prevent it entirely or to minimize its operational and reputational cost.

    Aligning Objectives Across Diverse Sectors

    While it may seem like healthcare, finance and telecommunications operate in silos, Gordon emphasises that the prevention mindset cuts across all industries. The core cybersecurity goals remain consistent: keep adversaries out, detect breaches early, and recover operations quickly. “Every organisation wants the same three things: keep the attackers out, detect when they get in, and recover quickly,” he says.
    That includes being aware of the broader digital ecosystem. Your supply chain, your customers, even your cloud service-providers — any one of them can be a weak link. Supply-chain and customer relationships introduce additional risk. Even if your organization maintains strong internal controls, a compromised vendor or poorly-informed user can become an entry-point for attackers.

    Conclusion

    A trust-based sharing framework is not a “nice to have.” It’s a strategic advantage. When organizations, sectors and governments share early warnings, signal-intelligence and incident-learning in a safe, trusted environment, they collectively raise the baseline of resilience. Canada’s digital-economy and critical services depend on it. The questions for Canadian leaders aren’t just “How do we respond?” — they must ask “How do we partner, how do we share, and how do we prevent?” The answer starts with trust, transparency and an ecosystem that moves ahead of attackers, not only behind them.

    [View Full Article]

Let’s Collaborate

Go back

Your message has been sent

Warning
Warning
Warning
Warning.