Rules of engagement
A set of clear rules of engagement should be established and approved by leadership and the legal department to ensure that tools, techniques, and procedures can be applied to simulate and emulate adversaries effectively. A superior penetration testing team holds itself to the highest possible standard and works with excellence, and that includes business ethics.
Therefore, it's important to establish rules that the team follows. Some examples are as follows:
- Do good! Always operate with due diligence.
- Do not perform denial-of-service testing or deny access to systems intentionally without explicit authorization.
- Consult the no-strike list before compromising assets. (A no-strike list is a set of assets or systems that are off-limits to the pen test team.)
- Operate surgically, rather than carpet bombing targets.
- Handle credentials and other sensitive security artifacts securely and safely during and after the conclusion of operations.
- Exfiltrate dedicated sentinel data records, rather than accessing customer data. (Sentinel data is dummy data created specifically as an objective for the pen test team.)
- Respect the privacy of employees.
- Stand down and pause when asked by business owners or blue team management.
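Rules such as consulting the no-strike list can be enforced in tooling rather than left to memory. The following is a minimal sketch of such a gate; the hostnames and function names are illustrative, not from any real engagement.

```python
# Hypothetical no-strike list check: every tool consults the list
# before engaging an asset. Hostnames below are made up.
NO_STRIKE_LIST = {"payroll.corp.example", "ics-controller.corp.example"}

def is_authorized_target(hostname: str) -> bool:
    """Return True only if the host is not on the no-strike list."""
    return hostname.lower() not in NO_STRIKE_LIST

# Filter a candidate target set down to authorized assets only.
targets = ["web01.corp.example", "payroll.corp.example"]
approved = [t for t in targets if is_authorized_target(t)]
```

Building the check into the tooling pipeline means an operator cannot accidentally engage an off-limits asset mid-operation.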
Another area to highlight in the rules of engagement is that the penetration testers will show their cards when asked by blue team management. How strictly this rule is applied depends a lot on the maturity of the red team program and its members, but it is generally the correct long-term approach. These rules are important in case a real-world adversary is active in the organization and the blue team needs to distinguish between the real adversary and the red team.
Good sources of ideas for rules include the bug bounty programs that various companies offer, as well as the San Remo Handbook on Rules of Engagement (www.iihl.org/sanremo-handbook-rules-engagement). The San Remo Handbook follows a restrictive approach towards authorization, meaning that if something is not explicitly highlighted as authorized, it's a no-go.
As a seasoned red teamer, you should also ask your blue team about their rules of engagement. Blue teams have access to a lot of information (who browses which websites, which processes run on which machines, and so on) and often do not operate under clear rules of engagement themselves.
Finally, rules and procedures should be revisited regularly and adjusted as needed.
Adjusting rules of engagement for operations
The rules of engagement might differ for each operation. At times, you might want to allow certain attack techniques that would normally not be on the list of approved techniques, for instance, when simulating a denial-of-service attack against a specific target.
Another example is that, at times, you might want to perform a mass compromise of assets, rather than operating surgically. For instance, malware such as WannaCry or Slammer performed automated discovery of additional targets and spread that way throughout organizations' networks. A red team might want to safely emulate such malware to demonstrate the blast radius and the impact of vulnerabilities. Putting safeguards and stopgaps in place is, of course, critical for any such operation.
Special considerations should always be given when testing potentially involves a third-party service; additional authorization and/or notification might be necessary.
Geographical and jurisdictional areas of operation
The rules of engagement should also consider the areas of operation to ensure there are no conflicts with local policies and laws the team operates in. Any such rules should be reviewed with legal counsel.
For instance, legal restrictions or local company policies on what tactics and techniques can be used at an organization in Germany might differ from what a penetration test can do in the USA or in China. Employees have rights, including rights to privacy. Always make sure that activities are authorized and covered by seeking legal counsel.
A common counterargument is that a true adversary does not have these limitations. That argument is correct, but a true adversary also goes to jail when they're caught, and an authorized pen tester does not.
Distribution of handout cards
A good practice mentioned in the San Remo Handbook is the creation of handout cards for team members, which also include the rules of engagement, to guide them during an operation. In addition to the practical value, this can also help improve team morale and identity. For instance, consider putting the team logo and name on the card, or develop a coin or, even better, maybe some fun little circuit board.
Real versus simulated versus emulated adversaries
The red teamers among you will notice that a real-world adversary does not have limitations on what adversarial tactics, techniques, and procedures they might follow. A real adversary does not have to adhere to these rules of engagement and other legal or compliance restrictions. For instance, a real adversary might steal the passwords of your employees in Europe and impersonate their identities on the network the same way as those of an employee in the USA or in China. There could be differences in whether and how the offensive security team can emulate these attacks due to differences in privacy regulations, company policies, and laws. Always consult legal counsel before engaging in emulating adversaries.
If for whatever reason your organization prohibits emulating real-world adversary tactics, you need to keep that discussion going because real adversaries are probably doing it right now, and your organization has a big blind spot. The educational value for everyone involved in a red team operation will advance the security IQ of the organization and enable individuals to better protect themselves from these attacks.
For certain tests, we can build simulation environments or deploy test data (sentinel data) to carry out attacks and exfiltration. Sometimes, just playing out an operation on paper can provide valuable and cheap insights too. A phishing campaign doesn't always have to involve actually stealing users' passwords. Just tricking individuals into entering their credentials on a phishing site and providing a warning when they hit the submit button can be of value.
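The warn-on-submit approach can be sketched as a form handler that discards the credential immediately and records only that the user fell for the lure. The function and field names below are made up for illustration, not from any real phishing framework.

```python
# Hypothetical submit handler for a simulated phishing page: the
# password is discarded immediately and never stored or logged;
# only an awareness event and a training warning are produced.
def handle_phish_submit(form: dict) -> dict:
    username = form.get("username", "")
    form.pop("password", None)  # drop the credential on the spot
    event = {"user": username, "event": "phish-submit"}
    warning = ("This was a simulated phishing exercise. "
               "Always verify a page before entering credentials.")
    return {"log": event, "message": warning}
```

The point is that the campaign measures susceptibility and educates the user without the red team ever handling a real credential.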
However, none of these simulations provide the same value and insights as performing a realistic production compromise to demonstrate the actual deficiencies in your security posture and detections.
Production versus non-production systems
This brings us right to the question of whether targets should be production or test environments. Organizations that are less mature are typically hesitant to perform any kind of penetration testing in production environments. This is usually a good indicator that a lot more work is needed on the resilience and maturity of the service.
Simple port scans or fuzzing should not have any impact on the availability of a service, and a mature system can withstand such tests without issues. If fuzzing has a noticeable impact on availability, it's a great find, showing that the engineering team did not consider this kind of work ahead of time.
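For reference, a port scan of the low-impact kind described above can be as simple as attempting TCP connections with a short timeout. This is a minimal sketch, assuming plain TCP connect scanning; host and port choices are up to the operator and the rules of engagement.

```python
# Minimal TCP connect scan: a mature service should tolerate this
# kind of probing without any availability impact.
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of ports accepting TCP connections."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful connection
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

If even this level of probing degrades a service, that observation alone is a worthwhile finding for the engineering team.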
Scoping production environments into security assessments is usually a good idea, especially because of the misconfigurations and differences that exist between production and non-production systems.
It is also common that production and test systems overlap. This could be in the form of production data being used in the test environment or passwords and certificates that are shared across environments. From that point of view, it's generally the right approach to include production and non-production environments during operations.
Avoiding becoming a pawn in political games
Depending on the culture of your organization, the red team might be used as a pawn to achieve the objectives of a few, rather than to support the entire organization and help improve its security culture and awareness. Beware of becoming a pawn.