Sequenxa Intelligence Agency

What Red Team Services Actually Test

April 3, 2026

Most organizations think they know what a red team does. They picture hackers in hoodies running exploits against firewalls. That mental model is wrong, and the gap between what people expect and what actually happens during a red team engagement is where most security programs quietly fail.


Red team services don't test whether your systems have vulnerabilities. That's a penetration test. A red team tests whether your organization (the people, the processes, the monitoring, the decision-making under pressure) would notice and respond to a real attack before the damage is done.

The distinction matters more than most security leaders want to admit.


The question a red team is actually answering




A penetration test asks: "Can we get in?" A red team asks: "Would anyone notice?"


Those questions have almost nothing in common. Pen testers work within a defined scope (a web application, a network segment, a cloud environment) and methodically catalog every exploitable weakness they find. The engagement ends with a severity-rated list of vulnerabilities and remediation guidance. Useful work. Necessary work. But it doesn't tell you what happens when someone bypasses all of that and starts moving through your network at 2 AM on a Tuesday.


Red team engagements are objective-based. The team gets a mission: access the finance database, exfiltrate customer records, compromise the CEO's email, reach the domain controller from an external starting position. Then they execute that mission using whatever combination of tactics works, just like a real adversary would.


The MITRE ATT&CK framework provides the taxonomy for this. Fourteen tactical categories spanning reconnaissance through impact, with hundreds of documented techniques pulled from observed real-world intrusions. A red team selects the TTPs that match the client's actual threat profile, not a generic checklist, and builds an operational plan around them.
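To make that concrete, here is a minimal Python sketch of how an operational plan might be assembled from a client's threat profile. The technique IDs below are real ATT&CK identifiers, but the catalog structure, the threat-profile format, and the selection logic are illustrative assumptions, not the workings of any particular red team tool.

```python
# Illustrative sketch: mapping a client's threat profile to candidate
# MITRE ATT&CK techniques. Technique IDs are real ATT&CK identifiers;
# the catalog and selection logic are hypothetical.

TECHNIQUE_CATALOG = {
    "initial-access": {
        "T1566": "Phishing",
        "T1190": "Exploit Public-Facing Application",
    },
    "credential-access": {
        "T1078": "Valid Accounts",
    },
    "lateral-movement": {
        "T1021": "Remote Services",
    },
    "exfiltration": {
        "T1041": "Exfiltration Over C2 Channel",
    },
}

def build_plan(threat_profile):
    """Select only the tactics relevant to this client's threat model."""
    plan = []
    for tactic in threat_profile:
        for tid, name in TECHNIQUE_CATALOG.get(tactic, {}).items():
            plan.append((tactic, tid, name))
    return plan

# A client whose realistic adversaries rely on phishing and stolen credentials:
plan = build_plan(["initial-access", "credential-access", "exfiltration"])
for tactic, tid, name in plan:
    print(f"{tactic:18} {tid}  {name}")
```

The point of a structure like this is traceability: every action the team later takes in the environment maps back to a named technique the client's threat model justified.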


What gets tested that nobody else tests




Here's the part that most security programs avoid confronting. A red team engagement tests five things that no other assessment touches.


Your people. Not with a generic phishing awareness quiz. With a targeted social engineering campaign designed to manipulate specific employees into specific actions. Pretexting calls to the help desk. Spearphishing emails crafted using real OSINT about the target. Physical social engineering: tailgating into secure areas, planting devices, testing badge access controls.


The 2025 Verizon DBIR analyzed over 22,000 security incidents and found that roughly 60% of confirmed breaches involved a human action. Credential abuse accounted for 22% of initial access vectors. Phishing accounted for another 16%. Red teams test exactly these attack paths because they're exactly what real adversaries use.


Your detection capability. The red team moves through your environment while actively trying to avoid triggering alerts. If your SIEM doesn't fire, if your EDR doesn't flag the lateral movement, if your SOC analyst doesn't notice the anomalous authentication at 3 AM, the engagement documents that failure in precise detail. The team maps every action to specific MITRE ATT&CK techniques and records whether each one generated a detection event or went unnoticed.
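The record-keeping described above can be sketched in a few lines. This is a hedged illustration of one way to log, per red-team action, whether the defenders' tooling fired, then compute a coverage rate; the action entries are invented, and real engagements use far richer tracking.

```python
# Hedged sketch: per-action detection logging for a red team engagement.
# The technique IDs are real ATT&CK identifiers; the actions and outcomes
# are invented for illustration.

from collections import namedtuple

Action = namedtuple("Action", ["technique", "description", "detected"])

log = [
    Action("T1566.001", "Spearphishing attachment to finance staff", True),
    Action("T1078", "VPN login with harvested credentials", False),
    Action("T1021.001", "RDP lateral movement to file server", False),
    Action("T1041", "Exfiltration over HTTPS to external host", False),
]

detected = [a for a in log if a.detected]
coverage = len(detected) / len(log)
print(f"Detection coverage: {coverage:.0%}")  # 1 of 4 actions alerted
for a in log:
    status = "ALERTED" if a.detected else "missed"
    print(f"  {a.technique:10} {status:8} {a.description}")
```

Even this toy version makes the engagement's core finding legible at a glance: which techniques generated a detection event and which went unnoticed.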


No vulnerability scanner tests this. No compliance audit tests this. This is the thing that determines whether your next incident becomes a contained event or a front-page breach.


Your incident response procedures. Not the plan that lives in a binder on a shelf. The actual execution when the SOC picks up something suspicious. Does the escalation chain work? Do the right people get notified? Does the response team know how to contain lateral movement while preserving forensic evidence? Does communication between security, legal, and executive leadership function under pressure, or does it collapse into confusion?


Red teams test this by operating long enough and aggressively enough to eventually trigger some level of detection. What happens next is half the engagement's value.


Your assumptions about your own security posture. This is the uncomfortable one. Most organizations carry a set of beliefs about their defenses that have never been pressure-tested. "Our segmentation is solid." "Our cloud IAM policies are locked down." "Our employees know not to click suspicious links." A red team finds out whether those beliefs are accurate or whether they're comfortable fictions.


Your ability to detect chained attacks. Individual vulnerabilities matter, but real adversaries don't exploit one thing and stop. They chain together a phishing email that harvests credentials, a VPN login with those stolen credentials, a privilege escalation on an unpatched workstation, lateral movement to a file server, and exfiltration through an encrypted channel that looks like normal HTTPS traffic. Each step might look benign in isolation. Together they constitute a breach. Red teams test whether your monitoring can correlate these steps into a coherent picture.
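The correlation problem in the paragraph above can be sketched simply: pivot on a shared entity and a time window. This is a deliberately minimal illustration with invented log events; production SIEM correlation rules consider many more signals (hosts, processes, network flows) than a single user field.

```python
# Illustrative sketch: correlating individually benign-looking events into
# a chain by pivoting on a shared entity (here, the same user account)
# within a time window. Event data is invented for the example.

from datetime import datetime, timedelta

events = [
    {"time": datetime(2026, 4, 1, 2, 10), "user": "jdoe", "action": "phish-link-clicked"},
    {"time": datetime(2026, 4, 1, 2, 14), "user": "jdoe", "action": "vpn-login-new-geo"},
    {"time": datetime(2026, 4, 1, 2, 31), "user": "jdoe", "action": "priv-esc-attempt"},
    {"time": datetime(2026, 4, 1, 9, 5), "user": "asmith", "action": "vpn-login-new-geo"},
]

def correlate(events, window=timedelta(hours=1)):
    """Group events by user; flag users with 3+ events inside the window."""
    chains = {}
    for e in sorted(events, key=lambda e: e["time"]):
        chains.setdefault(e["user"], []).append(e)
    suspicious = {}
    for user, evs in chains.items():
        if len(evs) >= 3 and evs[-1]["time"] - evs[0]["time"] <= window:
            suspicious[user] = [e["action"] for e in evs]
    return suspicious

print(correlate(events))  # only jdoe's three events form a chain
```

No single event here would justify an alert on its own; the chain only becomes visible when something joins them. That joining step is exactly what a red team tests.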


How a red team engagement actually works


The engagement follows a progression that mirrors how real threat actors operate.


It starts with reconnaissance. The team gathers intelligence about the target organization through open-source research. Employee names, email formats, technology stacks, physical office locations, vendor relationships, leaked credentials from previous breaches. This phase can run for days or weeks before any active testing begins.


Initial access comes next. The team uses the intelligence gathered to gain a foothold. This could be a spearphishing campaign targeting employees with crafted pretexts. It could be exploitation of an internet-facing application. It could be physical access, walking into a building using a cloned badge or social engineering the front desk. The method depends on what the reconnaissance revealed and what the client's threat model suggests.


Once inside, the team establishes persistence so that access survives even if one entry point gets discovered and closed. Then comes the internal campaign: privilege escalation, lateral movement through the network, credential harvesting, discovery of sensitive data and systems. All while maintaining operational security, using techniques designed to avoid the specific detection tools the organization has deployed.


The engagement ends when the team achieves the defined objectives or when a predetermined time window closes. Some engagements run two to four weeks. More advanced assessments, particularly those aligned with frameworks like TIBER-EU or DORA requirements for financial institutions, can extend to several months.


The social engineering assessment that changes everything


Social engineering testing during a red team engagement is not a phishing simulation.


Phishing simulations send a templated email to all employees and measure who clicks. That's awareness training with metrics. It's useful, but it's not adversarial.


A red team social engineering assessment targets specific individuals with researched pretexts. The team studies LinkedIn profiles, conference attendance, org charts, vendor relationships. They craft scenarios designed to exploit trust relationships and authority structures within the organization. A call to the IT help desk from someone who knows the right internal terminology and can name the target's manager. An email that references a real project the recipient is working on.


The 2025 DBIR noted that the median time for a user to fall for a phishing email is under 60 seconds. That finding tracks with what red teams see in practice. The window between initial exposure and compromise is often too narrow for automated tools to intervene. What matters is whether the organizational controls upstream (email filtering, URL sandboxing, endpoint protection) and the downstream response procedures can contain the damage.


Physical social engineering adds another dimension. Can someone walk into your facility with a convincing pretext and access server rooms, workstations, or network ports? Most organizations that invest heavily in cyber defenses haven't tested their physical controls with the same rigor. Red teams that include physical testing often find that the most secured digital environment in the building is accessible through an unlocked side door.


What 2025 breach data tells us about what red teams should be testing


The breach landscape in 2025 shifted in ways that directly affect what a well-designed red team engagement should include.


Third-party involvement in breaches doubled year-over-year, now accounting for 30% of confirmed breaches in the 2025 Verizon DBIR. That means red team engagements should be testing whether compromised vendor credentials or supply chain weaknesses can be exploited to reach the target organization's sensitive systems. An engagement that only tests the client's perimeter and internal network is missing the attack path that caused nearly a third of real breaches.


Ransomware appeared in 44% of analyzed breaches, up from 32% the prior year. Red teams increasingly simulate the full ransomware kill chain: initial access through phishing or exploited vulnerabilities, internal reconnaissance, privilege escalation to domain admin, and simulated deployment of ransomware payloads (without actually encrypting anything). The goal is to determine at which stage in that chain the organization's defenses would detect and interrupt the attack.
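Finding the interruption point in that chain can be expressed as a tiny function. This is a minimal sketch under stated assumptions: the stage names follow the article's description of the kill chain, and the per-stage detection results are invented for illustration.

```python
# Minimal sketch: given per-stage detection results from a simulated
# ransomware kill chain, find the earliest stage at which defenses would
# interrupt the attack. Stage names and results are illustrative.

KILL_CHAIN = [
    "initial-access",
    "internal-recon",
    "privilege-escalation",
    "ransomware-staging",
]

def interruption_point(results):
    """Return the first stage that produced a detection, or None."""
    for stage in KILL_CHAIN:
        if results.get(stage):
            return stage
    return None

# Hypothetical outcome: nothing fired until the team reached domain admin.
results = {
    "initial-access": False,
    "internal-recon": False,
    "privilege-escalation": True,
    "ransomware-staging": True,
}
print(interruption_point(results))  # privilege-escalation
```

The earlier the interruption point falls in the chain, the less an actual ransomware operator would have achieved before the response began.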


The Marks and Spencer attack in 2025 demonstrated a pattern that caught many security teams off guard: attackers compromised a third-party helpdesk through social engineering, bypassing the technical controls entirely. Organizations entering 2026 need red team engagements that test these actual attack paths, not just the ones that fit neatly into a traditional scope document.


Red teaming vs penetration testing: a practical distinction


This article isn't the place for a deep comparison; we've written a separate analysis of penetration testing services vs red team services that covers the topic in detail. But the short version is relevant here.


A penetration test is a scoped technical assessment. It evaluates whether specific systems have exploitable weaknesses. The testers usually work with the security team's knowledge, sometimes even with whitelisted IPs, to ensure thorough coverage within the engagement window. The output is a vulnerability report.


A red team engagement is an adversary simulation. It evaluates the organization's actual security posture (people, process, and technology together) by emulating the tactics of real threat actors without the security team's knowledge. The output is an operational narrative documenting an attack campaign from start to finish, including what the defenders did and didn't detect.


Both are offensive security engagements. They serve different purposes. Organizations with immature security programs should start with penetration testing and build toward red teaming as their detection and response capabilities mature.


Purple teaming and what comes after


The industry has moved toward purple teaming as the next evolution. Purple teaming combines the red team's offensive techniques with the blue team's defensive capabilities in a collaborative exercise. Instead of the red team operating in secret and delivering a report weeks later, both sides work together in real time. The red team attacks, the blue team responds, everyone pauses to analyze what happened, and detection rules get tuned on the spot.


The EU's Digital Operational Resilience Act (DORA) and the updated TIBER-EU framework are making these exercises mandatory for systemic financial institutions. That regulatory trajectory tells you where the standard of care is heading across all industries.


For organizations with mature security programs, the trajectory looks like this: regular penetration testing to validate technical controls, periodic red team engagements to stress-test detection and response, and purple team exercises to build institutional capability over time. Each tests a different thing. None of them is optional if the organization is operating in a threat environment where targeted attacks are documented.


What a red team engagement doesn't do


Red teaming has limits, and being clear about them matters more than overselling the service.


A red team engagement is a point-in-time assessment. It tests the organization's posture during a specific window. The week after the engagement ends, someone deploys a new application, someone misconfigures a storage bucket, someone introduces a new vendor integration. The red team report doesn't cover any of that.


Red teaming doesn't find every vulnerability. It finds the path of least resistance to the defined objective. There may be critical vulnerabilities in systems the team never touched because they didn't need to; they found an easier route to the target. Organizations still need penetration testing to systematically evaluate technical controls across their full attack surface.


A red team report also doesn't fix anything. It documents what happened, where detection failed, and what the attacker was able to achieve. Fixing those findings requires organizational change, not just patching software. If the organization isn't prepared to invest in that change, the engagement becomes an expensive exercise in documenting what everyone already suspected.


When to engage a red team




The triggers are specific. Your organization has been running penetration tests and addressing findings consistently, but you don't know whether your SOC would detect a sustained, multi-stage attack. You need to test incident response procedures under realistic conditions. Your board or regulators are asking whether the organization can withstand a targeted attack from a capable adversary. You want to understand how an attacker would chain together weaknesses across technical, human, and procedural domains to achieve a specific business-impact objective.

If you haven't done baseline penetration testing yet, start there. A red team against an immature environment ends on day one. The team walks through the front door, achieves the objective immediately, and delivers a report that amounts to "fix the basics first." That's an expensive way to learn something a standard pen test would have told you.


The part nobody wants to hear


Most organizations test their locks. Almost none of them test whether anyone is watching the doors.


Red team services close that gap. They test the complete system: the technology, the people operating it, the procedures they follow, and the decisions they make under pressure. The output isn't a vulnerability list. It's a documented answer to the question that matters most: if a capable adversary targeted this organization today, what would actually happen?


The answer is usually not what the security team assumed.


See how Sequenxa's red team services test your organization's complete security posture, from social engineering and physical access testing through full-scope adversary simulation, or review our broader offensive security capabilities.


Frequently asked questions


What do red team services test?


Red team services test an organization's complete security posture by simulating realistic adversary campaigns. This includes technical defenses, employee susceptibility to social engineering, physical security controls, incident detection and response capabilities, and the organization's ability to identify and contain chained multi-stage attacks.


How is red teaming different from penetration testing?


Penetration testing is a scoped technical assessment that identifies exploitable vulnerabilities in specific systems. Red teaming is an adversary simulation that tests whether the organization's people, processes, and technology can detect and respond to a realistic attack. Pen tests find vulnerabilities. Red teams find detection and response gaps.


What is a social engineering assessment in a red team engagement?


A social engineering assessment within a red team engagement uses targeted phishing campaigns, pretexting phone calls, and physical access attempts to test whether employees and security procedures can resist manipulation by a skilled adversary. Unlike generic phishing simulations, these assessments use researched pretexts tailored to specific individuals and roles.


How long does a red team engagement take?


Most red team engagements run between four and twelve weeks, depending on scope and objectives. More advanced assessments aligned with regulatory frameworks like TIBER-EU or DORA can extend to several months. The timeline includes reconnaissance, active testing, and reporting phases.


When should an organization invest in red team services?


Organizations should invest in red team services after they have established a baseline of security maturity through regular penetration testing and have functional detection and response capabilities in place. Red teaming is an advanced assessment that measures how the full security system performs against a motivated adversary, not a starting point for organizations that haven't addressed basic vulnerabilities.


References

CISA. (2023). Best Practices for MITRE ATT&CK Mapping. Retrieved from https://www.cisa.gov/sites/default/files/2023-01/Best%20Practices%20for%20MITRE%20ATTCK%20Mapping.pdf


Market.us. (2026). Red Team-as-a-Service Market Size Report. Retrieved from https://market.us/report/red-team-as-a-service-market/


MITRE. (n.d.). ATT&CK: Adversary Tactics, Techniques, and Common Knowledge. Retrieved from https://attack.mitre.org/


Verizon. (2025). 2025 Data Breach Investigations Report. Retrieved from https://www.verizon.com/business/resources/reports/dbir/

Written by R.J. Finnegan

R.J. is a special agent with Sequenxa Intelligence Agency. With a deep understanding of behavioral analytics combined with cyber and technical warfare, R.J. brings a unique perspective to the intelligence community.
