
    What is Red Teaming?

    Sectricity Security TeamOctober 30, 2025

    Red Team · Red Teaming · Cybersecurity · DORA · TIBER-EU · Adversary Simulation

    What is Red Teaming? Cybersecurity's most realistic stress test for your defences

    Red teaming has moved from the niche corner of cybersecurity into the regulatory mainstream. Financial entities under DORA must now run threat-led penetration tests. NIS2 essential entities are being asked by auditors how they validate their detection capability. And boards are starting to ask the uncomfortable question: if a real attacker walked through our front door tomorrow, would we even notice?

    This article explains what red teaming actually is, the six techniques red teamers use in real engagements, how a typical engagement unfolds, three concrete examples of how attacks succeed, and which sectors should treat red teaming as essential rather than optional.

    TL;DR

    Red teaming is a full-scope adversary simulation that tests whether your organisation can detect, contain, and recover from a real attack. It is not a more expensive pentest. It is a different type of assessment that targets people, processes, and technology together. Key points:

    • Red teaming tests detection and response, not vulnerability exposure.
    • It uses any technique a real attacker would: phishing, physical access, OSINT, lateral movement.
    • Engagements typically run 4 to 12 weeks, ending when objectives are met or time runs out.
    • It is appropriate when you have an established security baseline and want to validate it.
    • DORA TLPT and TIBER-EU make red teaming a regulatory requirement for significant financial entities.

    Red teaming in one paragraph

    Red teaming is a goal-driven simulation in which an independent team of ethical hackers acts as a real adversary. The team is given specific objectives such as accessing customer data, disrupting a critical system, or establishing persistent presence on the network. They are given the freedom to use any combination of digital, social, and physical attack techniques. The exercise tests not just the technology stack, but the entire organisation: people, processes, detection capability, incident response, and recovery.

    The simplest analogy: red teaming is to a pentest what a fire drill is to a smoke alarm test. Both have value. They answer different questions.

    The 6 techniques red teamers actually use

    Red teamers do not run automated scanners and call it a day. They use the same toolkit a motivated attacker would, often combining several techniques in a single engagement.

    1. Open-source intelligence gathering (OSINT)

    Before any technical action, red teamers map the target organisation using publicly available information. LinkedIn for personnel and reporting structure, company websites for technology stack hints, job postings revealing internal tools, GitHub for leaked credentials or configuration, and DNS reconnaissance for the external footprint. A single Friday afternoon of OSINT often yields more than three weeks of network scanning.
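The DNS side of that footprint is something defenders can audit themselves. The sketch below is a minimal, stdlib-only illustration of checking which candidate hostnames under a domain you own actually resolve; the candidate labels and domain names are hypothetical examples, not a real wordlist.

```python
import socket

def map_dns_footprint(domain, candidates, resolve=socket.gethostbyname):
    """Resolve candidate subdomains of a domain you own and report which exist.

    `candidates` are illustrative labels, not an exhaustive wordlist; an
    empty string checks the apex domain itself. A custom `resolve` callable
    can be injected for testing.
    """
    footprint = {}
    for label in candidates:
        host = f"{label}.{domain}" if label else domain
        try:
            footprint[host] = resolve(host)  # resolves -> part of the visible footprint
        except socket.gaierror:
            pass  # does not resolve -> nothing exposed under that name
    return footprint

# Usage, against a domain you are authorised to assess:
# map_dns_footprint("yourcompany.example", ["www", "vpn", "mail", "git"])
```

Running this periodically against your own domains shows the same external picture a red team starts from.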

    2. Spear phishing and vishing

    Targeted phishing remains the most reliable way to gain initial access. The red team builds a convincing pretext (often a fake supplier email, a fake IT helpdesk request, or a fake CEO message), tests it on a small group, refines it, and then sends it to the broader target list. Vishing (voice phishing) targets specific roles such as helpdesk agents or finance staff with phone calls that exploit urgency and authority bias.

    3. Physical access attempts

    Tailgating into a building behind an employee, planting a USB drop in the parking lot, posing as a contractor or supplier, dropping a rogue Wi-Fi access point near the office. Physical attacks remain shockingly effective because most security investment goes into the digital perimeter.

    4. Credential theft and lateral movement

    Once inside the network, red teamers rarely need new vulnerabilities. They harvest credentials from memory, exploit weak service accounts, abuse misconfigured Active Directory permissions, and move from a compromised workstation to domain administrator access in hours. Many environments that pass annual penetration tests fall apart under realistic lateral movement.

    5. Custom malware and command-and-control

    To remain undetected, red teamers often build custom payloads or modify existing tools to evade endpoint detection. Command-and-control (C2) frameworks such as Cobalt Strike, Mythic, or custom tooling allow persistent communication with compromised hosts while blending into legitimate network traffic.

    6. Detection evasion

    Throughout the engagement, the red team continuously adapts to avoid triggering alerts. They study your security tools, learn which actions trigger SOC analyst attention, and modify their approach. The goal is not to be loud, but to map exactly which actions are detected and which are not. The output is a detection coverage map you cannot get from any other type of test.
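At its core, that coverage map is a join between the red team's action log and the SOC's alerts. A minimal sketch, using illustrative MITRE ATT&CK technique IDs and made-up action descriptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    technique: str    # MITRE ATT&CK technique ID, e.g. "T1003" (credential dumping)
    description: str  # what the red team actually did

def coverage_map(red_actions, detected_techniques):
    """Classify each red-team action as detected or missed.

    `detected_techniques` is the set of technique IDs the SOC alerted on.
    """
    return {
        a.description: ("detected" if a.technique in detected_techniques else "missed")
        for a in red_actions
    }

# Hypothetical engagement log
actions = [
    Action("T1566", "spear-phishing email to finance staff"),
    Action("T1003", "credential harvesting from memory"),
    Action("T1021", "lateral movement over SMB"),
]
report = coverage_map(actions, detected_techniques={"T1566"})
# here, only the phishing step was detected; both internal steps were missed
```

Real engagements add timestamps, alert latency, and whether a triggered alert was actually investigated, but the shape of the output is the same.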

    How a red team engagement unfolds: from OSINT to exfiltration

    A typical red team engagement runs 8 to 10 weeks. The phases below are sequential but overlap significantly in practice. A skilled red team works on intelligence gathering, initial access, and post-exploitation simultaneously when opportunity arises.

    Week 1 to 2: Intelligence gathering

    OSINT, technical footprint analysis, employee profiling, identification of third-party dependencies, and mapping of likely attack paths. By the end of week 2, the red team has a detailed picture of the target including high-value individuals to phish, technologies in use, and probable detection blind spots.

    Week 3 to 4: Initial access attempts

    Phishing campaigns are launched, vishing calls are placed, and physical access attempts are made if in scope. External-facing systems are probed for misconfigurations or unpatched vulnerabilities. The first foothold is usually achieved within this window, often through a single employee clicking a single link or running a single attachment.

    Week 5 to 6: Internal movement and privilege escalation

    From the foothold, the red team enumerates the internal network, harvests credentials, escalates privileges, and moves toward systems containing the agreed objectives. Active Directory abuse, credential reuse, and misconfigured shares are the most common paths.

    Week 7 to 8: Objective pursuit and exfiltration simulation

    The red team accesses the target objective, demonstrates the impact (without causing real damage), and simulates exfiltration of sensitive data. Persistent access is established to test whether the organisation can detect long-term presence rather than just the initial breach.

    Week 9: Detection comparison and reporting

    After the operational phase ends, the red team compiles a complete attack narrative and compares it against your blue team's logs. Every action is mapped: what was detected, what was missed, what was investigated and dismissed, and what could have been caught but was not.

    Week 10: Debrief and remediation roadmap

    A facilitated session brings together red team, blue team, and senior stakeholders. The debrief is where the most valuable learning happens. Remediation priorities are defined, detection rules are improved, and the gap between assumed security and demonstrated security is closed.

    3 examples of how red team attacks succeed in practice

    These examples are realistic composites based on common patterns we see across engagements. Names and specifics are anonymised, but the techniques and outcomes reflect what actually happens in the field.

    Example 1: The 4-day breach via a parking lot USB

    A logistics company in the Benelux had passed three consecutive annual pentests. The red team skipped the network entirely. They printed five USB drives with the company logo and the label "Q3 Salary Review - Confidential". They dropped them in the staff parking lot on a Monday morning. By Monday afternoon, two employees had plugged in the drives. By Wednesday, the red team had domain administrator access. By Thursday, they had simulated exfiltration of the customer database.

    Detection: zero alerts triggered until the debrief. The endpoint detection tool flagged the initial USB execution, but the alert was buried in a queue of 2,000 daily notifications and auto-dismissed after 72 hours.

    Example 2: The CFO impersonation that bypassed MFA

    A financial services firm had multifactor authentication on every internal system. The red team called the IT helpdesk pretending to be the CFO's executive assistant, claiming the CFO was in an airport with a dead phone and needed urgent access to a board document before a meeting in 30 minutes. The helpdesk agent, trying to be helpful, reset the MFA enrollment to a new device. Within 12 minutes, the red team had access to the CFO's email and SharePoint.

    Detection: the MFA reset was logged but no alert was configured for executive accounts. The pattern of "helpdesk agent overrides MFA enrollment outside business hours" was nowhere on the SOC's alert list. Post-engagement, this exact alert rule was added and now triggers within 90 seconds.
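A rule of that kind can be sketched in a few lines. The event schema, account watchlist, and business-hours window below are illustrative assumptions, not the firm's actual rule:

```python
from datetime import datetime

# Hypothetical watchlist of executive accounts
EXEC_ACCOUNTS = {"cfo@corp.example", "ceo@corp.example"}

def should_alert(event):
    """Flag helpdesk-initiated MFA enrollment resets on executive accounts
    outside business hours (09:00-18:00). Event fields are illustrative."""
    if event["action"] != "mfa_enrollment_reset":
        return False
    if event["target"] not in EXEC_ACCOUNTS:
        return False
    hour = datetime.fromisoformat(event["timestamp"]).hour
    return hour < 9 or hour >= 18

# An event matching the scenario above: out-of-hours reset on the CFO's account
event = {
    "action": "mfa_enrollment_reset",
    "actor": "helpdesk.agent@corp.example",
    "target": "cfo@corp.example",
    "timestamp": "2025-10-30T19:42:00",
}
```

In production this logic would live in the SIEM's rule language rather than Python, but writing it out makes clear how little is needed to cover the gap.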

    Example 3: The supplier portal that opened a backdoor

    A manufacturing company had a hardened internal network and well-configured perimeter. The red team compromised a small supplier instead. The supplier had legitimate access to a vendor portal for submitting invoices. From the supplier's compromised account, the red team uploaded a malicious PDF that exploited a vulnerability in the portal's preview function. Within 6 hours, they had code execution on the portal server. Within 4 days, they had pivoted into the corporate network.

    Detection: nothing flagged the supplier login because it came from the supplier's normal IP range. The PDF exploit was caught by the antivirus on the portal server, but the alert was sent to a shared inbox that nobody monitored on weekends.

    Which sectors should treat red teaming as essential in 2026

    Financial services (DORA TLPT)

    Significant financial entities in the EU are required to conduct Threat-Led Penetration Testing under DORA. TLPT is a specific form of red teaming, conducted under the TIBER-EU framework, by certified providers, with results reported to the competent authority. This is not optional.

    Healthcare and critical infrastructure

    Hospitals, energy providers, water utilities, and transport operators classified as essential entities under NIS2 must demonstrate that their risk management measures are effective. Red teaming is the most credible way to provide that evidence to a regulator or auditor.

    Public administration and defence

    Government departments and defence contractors face state-level threat actors who use the exact techniques red teams simulate. For these organisations, red teaming is not just compliance, it is operational realism. The threat is daily, not theoretical.

    Large enterprise outside regulated sectors

    Any organisation with more than 1,000 employees, mature security operations, and sensitive intellectual property or customer data should treat red teaming as a periodic exercise. The investment is justified once the basic security baseline is in place and the next question becomes: can we actually detect a real attacker?

    5 myths about red teaming

    Myth 1: Red teaming is just an expensive pentest

    A pentest finds vulnerabilities in a defined scope. A red team finds out whether your organisation can detect and respond to a real attack across all attack vectors. Different question, different method, different output. The cost difference reflects the depth, not premium pricing for the same service.

    Myth 2: A red team should always be caught

    Experienced red teams achieve their objectives in the vast majority of engagements. That is not a failure of the exercise. It is the finding. The value comes from understanding how it happened and what needs to change. An organisation that catches the red team early in every engagement either has truly exceptional defences, or the exercise was scoped too narrowly.

    Myth 3: You need a 50-person SOC before red teaming makes sense

    You need a security baseline (regular pentesting, patch management, basic awareness training, logging in place), not a massive SOC. A red team exercise on a small organisation reveals which investments would have the highest impact. It is one of the most efficient ways to plan the next 12 months of security spend.

    Myth 4: Red teaming is too risky for production environments

    A professional red team works under strict rules of engagement, has emergency stop procedures, and reports immediately if they encounter a real incident in progress. The risk of a controlled red team exercise is far lower than the risk of an actual attack you would not detect.

    Myth 5: AI-powered tools will replace red teamers

    Automated tools cover known attack patterns. They cannot improvise, build pretext for a phishing campaign that a specific helpdesk agent will believe, or notice that a side door of the office is left unlocked at 6:30 PM. Red teaming combines technical expertise with creativity and judgement. Tools augment red teamers; they do not replace them.

    Frequently asked questions

    How does red teaming differ from a pentest in one sentence?

    A pentest finds vulnerabilities in defined systems; a red team finds out whether your entire organisation can detect and respond to a real attack.

    How much does a red team engagement cost?

    Realistic pricing for a full-scope red team engagement in the EU starts around €60,000 and ranges to €200,000 or more for TIBER-EU TLPT engagements with intelligence preparation. The main drivers are scope (digital only versus including physical and social engineering), duration, and whether external threat intelligence is included.

    How often should you run a red team exercise?

    For most organisations, every 18 to 24 months provides enough time to remediate findings and see whether improvements actually changed detection capability. Financial entities under DORA TLPT follow the 3-year cycle defined by the framework. Annual red teaming makes sense only for very large organisations with continuous security investment.

    What certifications should red teamers have?

    Look for OSCP and OSCE for technical depth, GIAC GPEN or GXPN for advanced offensive skills, CRTO or CRTL for red team methodology specifically, and CRTSE or equivalent for senior practitioners. For TIBER-EU TLPT engagements, the provider must be on the certified list maintained by the competent authority. Certifications matter, but engagement reports speak louder. Ask for sanitised examples of previous work.

    Can an internal security team run their own red team?

    An internal red team can supplement external testing with continuous adversary simulation. But an internal team that already knows your environment cannot truly test the unknown-unknown blind spots. External red teams bring fresh eyes, no organisational politics, and the regulatory credibility that auditors and boards expect. Most mature organisations combine both.

    Related services and resources

    Sectricity conducts red team assessments across Belgium, the Netherlands, and the UK, combining digital exploitation, social engineering, and physical security testing. For organisations starting with security validation, penetration testing is the right entry point. Want a deeper read on when red teaming becomes appropriate? Our red teaming implementation guide covers exactly that. We also publish a social engineering assessment guide and an explainer on how a hacker really operates from OSINT to pentesting. For DORA and NIS2 compliance, see our compliance pentesting service.