35

What tools or techniques can companies use to prevent penetration testers from behaving maliciously and exfiltrating data or compromising the system? I cannot imagine that the only protections companies use are contractual ones.

John Kugelman
  • 139
  • 1
  • 8
Pippo Pluto
  • 477
  • 1
  • 4
  • 5
  • 21
    Think of them as advanced computer locksmiths. They are the good guys you are hiring to test your "locks." You're not hiring actual burglars (I hope). – Booga Roo May 04 '20 at 04:22
  • 1
    If you care you can always supervise them, but this will be very expensive. Also, if you do not give them privileged access, you basically still have to worry that the bad guys will get it anyway. You can also have them check other environments, but that might not reveal the weaknesses of the production systems you care about. – eckes May 04 '20 at 05:43
  • 105
    As a homeowner, how can I prevent plumbers from making my water heater leak? – fraxinus May 04 '20 at 07:22
  • 1
    The point in hiring penetration testers is to avoid breaches. When they are successful in breaking your security, others that do not have a contract with you are the worse threat. When your security is good enough, there is no threat at all. For actual advice: Look at the reputation of the penetration testers you're hiring. Who did they work for before? How did they come into the business, and what is known about them? – allo May 04 '20 at 12:01
  • 1
    @fraxinus, you hire two more plumbers, who will check if the first plumber is doing everything correctly. Or are you guys telling me that if the NSA needs to fix a faucet, any plumber will be trusted, just because hey, he's a plumber, he knows what he's doing, we should trust him? The question asked by the OP is not as stupid as it might sound at first. – reed May 04 '20 at 13:58
  • 10
    @reed The original question is essentially, I need to hire someone to test my security. How can I prevent them from penetrating my security? Would the answer be to hire someone else to test my security? And how would we prevent that new group from penetrating my security? Is it turtles all the way down? Or do you trust the people you are hiring for their expertise to properly make use of their expertise. – Michael Richardson May 04 '20 at 15:20
  • 17
    @MichaelRichardson at first it sounded like that to me too, a stupid question. But if you think about it, what the OP might really be asking is: if I'm really concerned about the security of my data, should I blindly trust a pentester, just because that's their job? Then, at least, it makes sense to talk about checking CVs/resumes, requesting a security clearance, getting insurance, etc. (see Anonymous's answer for example). – reed May 04 '20 at 15:55
  • 2
    @reed Since you're trusting everyone else, you might as well trust the pentester too. – David Schwartz May 04 '20 at 19:58
  • 1
    @fraxinus I can't think of anything the plumber will gain by making your water heater leak. – Nacht May 05 '20 at 11:07
  • 2
    @reed What prevented these people from compromising your security before you hired them? All the same preventative factors still continue to apply after you hire them. If they wanted to do some damage, they could have done it just as easily before the contract, so the hiring event does not introduce additional risks that would need additional mitigations. In any case, you're going to be blindly trusting all the pentesting companies which you don't hire. – Peteris May 05 '20 at 11:45
  • @Peteris, I suppose someone you hire is able to gather much more information and have access to your systems more easily, compared to a "stranger". I also imagine some interesting social engineering tricks become possible when you act as a pentester. "Oh sorry, the alarm went off by accident, while I was just testing other stuff (yeah right)". And you forgive them, and don't investigate further. – reed May 05 '20 at 13:06
  • @DavidSchwartz, but who said everyone else should be trusted? I agree that it's kind of stupid to worry about a pentester, and then not worry about anyone else. But the OP didn't say they are trusting everyone else. – reed May 05 '20 at 13:08
  • 10
    @reed quite the opposite, a standard external penetration test is exactly about testing (and demonstrating) what every "stranger" can achieve without any privileged access or information and the client's employees are not supposed to ignore any alarms - literally the only difference in response to a social engineering attempt should be that in the end, the perpetrator does not go to jail because they had permission. If you don't investigate further, that's a process flaw that a competent pentest will exploit and document so that you can fix it afterwards (e.g. fake "get out of jail" letters) – Peteris May 05 '20 at 13:51
  • @Peteris: There is also a minor difference that a paid pentester is usually going after specific pre-agreed targets, whereas a hacker is going after production data. – Mooing Duck May 05 '20 at 21:15
  • @Peteris A pentest works best when the pentesters are given some information about the system. In some cases perhaps even code, as that makes it easier for them to find vulnerabilities.

    The idea is that if a pentester can't get in with all this extra information given to them, then it's unlikely that a "stranger" can get in while only having access to what's public.

    – Cruncher May 06 '20 at 15:26
  • @fraxinus that's a pretty useless comment – northerner May 07 '20 at 00:10

6 Answers

125

You're looking for a technical solution to a legal problem. This won't work.

What you're worried about is primarily a legal problem. Penetration testers operate under a Non-Disclosure Agreement, which is the legal equivalent of "keep your mouth shut about anything you see here". Non-Disclosure Agreements, or NDAs for short, are what prevents a penetration tester from talking about the cool vulnerabilities they found when they tested ACME Corp. last week.

But why would a penetration tester honour an NDA? Because not doing so would basically destroy their career. If the company learns that a penetration tester disclosed internal information, they will sue the pen-tester for damages, which can be upwards of millions.

Furthermore, it'll completely destroy the reputation of the pen-tester, ensuring that nobody will ever hire them again. To a penetration tester, this means that the knowledge they have spent years or decades accumulating is essentially worthless. Even if the idea seems sweet to a morally corrupt pen-tester, the punishment is orders of magnitude worse.

Besides, most pen-testers simply have no interest in compromising a client. Why would they? It's in their best interest to ensure that the client is satisfied, so that the client hires them again and again.


As for why you would not put in technical restrictions, there are several reasons. First, they make a pentester feel like they are being treated like a criminal. A lot of pentesters are proud of the work they do, and treating them like criminals just leaves a sour taste in their mouth. A pentester understands that a company has certain policies, but if a company goes above and beyond, escorting them to the toilet with an armed guard just to make sure they don't look for post-it notes with passwords on their way back, they will feel mistrusted. This can and most likely will lower morale, and may cause a pentester not to give their absolute best.

Second, absurd technical constraints just make things difficult for a pen-tester. For example, if their company-provided domain account gets blocked as soon as they start Wireshark or nmap, it takes time for that account to get reactivated. This prevents a pentester from launching all their tools to find vulnerabilities as effectively as possible, and wastes a lot of their time.

This is bad for both the pentester and the customer, and will likely result in a worse overall experience for both of them.

Malady
  • 109
  • 4
  • 21
    Plus, when you're hiring somebody whose job is already to act maliciously and find ways around things, you'd only be giving them another challenge to solve if they were truly motivated to break the contract. – multithr3at3d May 03 '20 at 18:45
  • 20
    Might be worth adding that you want the pentester to breach. Because that person will tell you how they did it. A real hacker won't be as nice :) – Martijn May 04 '20 at 07:57
  • 6
    This answer is technically wrong (but I won't downvote, I hardly ever do). You are basically saying that a pentester should be trusted because there's a contract, and it's a matter of professionalism and reputation. But who tells you that a pentester is not malicious? How do you know if you are dealing with a real pentester or an impostor? Imagine calling the plumber to fix a faucet, and then Kevin Mitnick shows up telling you he's the plumber. Nice! So Anonymous's answer is more correct IMO. – reed May 04 '20 at 14:04
  • 4
    Plus, adding those technical restrictions not only makes the pentester's job harder, it makes their report less worthwhile. Because, if you escort them with an armed guard to make sure they don't look for post-it passwords (a vulnerability in your system!), the fact that they're around won't be on the report. And unless your team is having that armed guard with every single guest, there's an unknown security hole. You end up testing "these are the vulnerabilities available to a hacker who has to operate under [these technical restrictions]; probably extra vulnerability for others." – Delioth May 04 '20 at 14:16
  • 1
    @reed Sure, that is a valid point, and I agree that Anonymous made some very good points. That's why people shouldn't just stop reading after one answer –  May 04 '20 at 14:29
  • "Even if the idea seems sweet to a morally corrupt pen-tester, the punishment is orders of magnitude worse." That's fine, just don't get caught ;) – Andrew Savinykh May 04 '20 at 23:49
  • 2
    @AndrewSavinykh It's a simple formula: The gain of a malicious activity needs to be larger than the punishment of being caught, times the chance of getting caught. If the punishment is severe enough, then I need to be reeeeaaallly sure that I would not get caught. –  May 05 '20 at 07:26
  • @MechMK1 That being said, people are bad at calculating those probabilities. That's why there is no point in punishing pickpockets with chopping off their hands; it will deter few who were not already deterred by a fine and possibly a night in jail, because someone who thinks they won't get caught thinks that regardless of what punishment there is. – Arthur May 06 '20 at 11:57
  • @Arthur While I agree that people are very bad at calculating chances - the existence of the gambling industry is proof enough for that - the idea of chopping off a hand stems from more than just "severe punishment". For once, it has a symbolic factor, which is removal of the part of the body that committed the crime. Secondly, it brands the thief as a thief among others in society. This should help upstanding citizens to identify criminals and prevent them from accidentally associating with them. If it sounds cruel, well, it was a different time back then. –  May 06 '20 at 12:13
  • @MechMK1 And yet there were pickpockets back then too. Knowing they would get their hands chopped off. Seing people around them with chopped-off hands. Many of them may have been desperate, but it was not as strong a deterrent as one would think if your formula was followed correctly. Having other reasons behind the punishment doesn't change how it factors into your risk-reward calculation. – Arthur May 06 '20 at 12:28
  • @Arthur I'm not saying it prevents crime, nor do I say severe punishments prevent people from committing crimes today. –  May 06 '20 at 12:32
  • @MechMK1 You kind of did though. "It's a simple formula: The gain of a malicious activity needs to be larger than the punishment of being caught, times the chance of getting caught." this implies that crime outside of this formula wouldn't happen. The argument is that the formula is only relevant if we assume perfectly rational individuals – Cruncher May 06 '20 at 15:36
  • 1
    There's one aspect that I think should be added to this answer: in at least the United States, violating the terms of a contract involving something like pentesting may incur not just civil liability, but (severe) criminal liability as well. In addition to the most obvious of the CFAA, other fraud/criminal trade secrets/etc related charges (either state or federal) may be possible (especially if the related damages can be claimed as fairly high).

    As to the gain vs risk: experts in fields like this rise or fall on their reputations, the risk is ALSO any chance of lucrative future work.

    – taswyn May 06 '20 at 23:06
  • If you're capable enough to put in viable defenses against a penetration-tester, you probably don't need to hire a penetration-tester; rather you can do all the penetration-testing yourself. – Jeremy Friesner May 21 '20 at 17:19
  • @taswyn I'm not a legal expert - not even a novice at that - so all I can do is give cookie cutter advice to abide by the applicable laws and regulations. –  May 22 '20 at 08:56
  • @JeremyFriesner Most of the time, it's organizational "defenses". Something like "Yes, we want you to pentest this system. No, we won't allow your client to connect to it." Then you need to spend hours finding the MAC of a client who can, spoof it, etc... It's not impossible, just annoying as all hell, and in the end everyone is worse off. –  May 22 '20 at 08:57
  • @MechMK1 absolutely, and I'm not suggesting that you try to address specific laws, merely that the legal issues MAY (depending on jurisdiction) be such that breaching a civil contract can additionally create criminal liability (this has, for example, been tested with the CFAA in court and successfully prosecuted) depending on the specifics. Pentesting can often involve actions that would be illegal if they weren't contractually agreed to, and the possibility of an actual criminal angle being created if acting outside of contract (& breaching NDA often voids contract) is worth noting imo – taswyn May 24 '20 at 21:40
49

I'm asking whether companies can use some tools or techniques to prevent penetration testers behaving maliciously from exfiltrating data or compromising the system permanently.

You could record all the traffic behind your firewall or stay awake all night long watching Wireshark output, but without technical skills it's going to be hard to make sense of the bits flying in front of you. A data loss prevention system is probably what you have in mind, but it is going to interfere with the pentest, unless this is precisely the equipment that you want to test.
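
If you do want a crude technical tripwire on top, here is a minimal sketch of the traffic-recording idea, assuming the scapy library is available; the allowlist and everything else in it are placeholders of mine, not something this answer prescribes:

    # Hedged sketch: log outbound TCP connection attempts to hosts outside
    # an allowlist during the test window. Assumes scapy is installed and
    # the script runs with capture privileges; ALLOWED is a placeholder.
    from scapy.all import IP, TCP, sniff

    ALLOWED = {"10.0.0.5", "10.0.0.6"}  # hosts the pentest is scoped to

    def log_unexpected(pkt):
        if IP in pkt and TCP in pkt and pkt[IP].dst not in ALLOWED:
            print(f"outbound to unexpected host: {pkt[IP].dst}:{pkt[TCP].dport}")

    # Only SYN packets (new connections); store=False keeps memory flat.
    sniff(filter="tcp[tcpflags] & tcp-syn != 0", prn=log_unexpected, store=False)

Even then, as noted above, interpreting the output still takes the very skills you are trying to buy in.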


The answer is due diligence. Before hiring a company, check out their credentials. Ask questions, ask for sample reports too. Some outfits will do little more than run an automated scan and tick boxes on a template sheet. This is not what we want. What we want is talented pentesters who think outside the box and devise original, manual attacks based on their reconnaissance efforts (which for the most part are automated). A good pentest should be a tailor-made operation and not a cookie cutter exercise.

I wouldn't do business with a company that won't provide sample reports (curated reports, of course).
My biggest worry is not dishonesty but rather lack of competence, which means you pay for a useless deliverable.
So this is my first filter. A box-ticking company is, in my opinion, less ethical because it knowingly provides a service of questionable value. Probably better than nothing, but you want value for money.

I cannot remember a single instance of a pentest company being involved in criminal actions. However, a few have been sued for what would amount to 'malpractice'. Example: Affinity Gaming vs Trustwave.

The contract should be clear as to what is allowed and what isn't. Make sure there are no misunderstandings and that the person hiring you has full authority. What could possibly go wrong: Iowa vs Coalfire

Surely, 'rogue penetration testers' (oxymoron) who want to break into your systems won't ask for your permission to test and then go beyond the scope of the assignment. They will just invite themselves.

I don't know if you are one of those, but some companies/government agencies require a security clearance. That raises the bar a little bit: felons are unlikely to have a clearance. There are exceptions, like Mr Snowden; 0% risk doesn't exist.

If your company is involved in objectionable activity, damaging the environment, or selling weapons to tyrants, then you may legitimately be worried about whistle-blowers. This is a conundrum - you have sensitive information and want to keep it secret, but to protect it you must allow an outsider to have access to it. You should select a provider that has experience working with companies in your field of activity and is comfortable with what you do. Perhaps your trade body or business partners can provide recommendations. Word of mouth.

If you think your company could sustain financial damage/fines in case of data exposure (accidental or otherwise), talk to your insurance company. By the way, the pentest company should have liability insurance too. This is one question to ask.

Breaches are a fact of life. Pretty much any company has been hacked at least once, or will be hacked in the future. This is something to consider. You should have a disaster recovery plan ready, regardless of whether you decide to proceed with the pentest. Which I think is a good idea: if things go wrong, at least you can demonstrate that you undertook reasonable efforts to prevent a breach. A company that is found to be negligent can expect stronger sanctions in terms of regulatory fines, lawsuits, consumer backlash, negative media exposure, shareholder revolt, etc.

Kate
  • 7,937
  • 25
  • 27
  • 2
    Perhaps change the Affinity Gaming vs Trustwave site to a non pay-to-view article? – QuickishFM May 04 '20 at 13:00
  • Strange, I didn't have the paywall yesterday and read the article in full. I will put another link. – Kate May 04 '20 at 13:33
  • 1
    You can't actually require a security clearance if you aren't an organization that has access to the systems that contain evidence of said clearances. You also probably want to consider that requesting consultants with security clearances might induce the vendor to charge you the rate for consultants who need to have clearances, which is going to be considerably higher than the rate for equally competent and ethical consultants who aren't required to have a clearance, including the exact same consultants who simply aren't on a clearance-required project. – Xander May 06 '20 at 02:29
7

If you're developing products, you should have an SDLC pipeline with different environments like DTAP (Development, Testing, Acceptance, Production), where you should have penetration testers testing on the acceptance environment. It's a security best-practice to keep the environments completely separated from production. So your acceptance environment should be a functional copy of your production environment, but it should not contain production credentials, data from your users, connections to production environments, etc.
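
As a hedged illustration of that separation (my sketch, not this answer's setup; the variable names and the production hostname are hypothetical), an acceptance service can load credentials from its own namespace and fail closed if it is ever pointed at production:

    # Minimal sketch: acceptance config comes from ACCEPT_* variables only,
    # and the service refuses to start if it would touch production.
    # PROD_DOMAIN and the variable names are assumptions for illustration.
    import os

    PROD_DOMAIN = "db.prod.example.com"

    def load_acceptance_config() -> dict:
        cfg = {
            "db_host": os.environ["ACCEPT_DB_HOST"],
            "db_user": os.environ["ACCEPT_DB_USER"],
            "db_pass": os.environ["ACCEPT_DB_PASS"],
        }
        # Fail closed: never let the acceptance environment reach prod.
        if PROD_DOMAIN in cfg["db_host"]:
            raise RuntimeError("acceptance must not point at production")
        return cfg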

Creating an acceptance environment for a company is often a challenge on the network and server level. If this is the case, you can add a clause saying they should stop the moment they hit a production system. If penetration testers do manage to find production credentials somewhere, or hack a production server by accident, you just act like a real breach happened - except for calling the police - and change login credentials, monitor the server, etc.

Beurtschipper
  • 833
  • 1
  • 6
  • 10
  • 3
    This is a good approach when viable, but it is worth noting that differences between test and live environments can mean a pen test fails to find something in a test environment that would be possible in a live environment. Most obviously, test environments don't have users, so attacks that rely on user actions or errors may not work. – James_pic May 04 '20 at 14:06
  • It's true that acceptance is not always an exact copy of production, but performing security tests on the acceptance environment is a best-practice and is most common in the field. The idea that test environments don't have users is simply not true. In fact, sometimes it's easier to create test accounts than production accounts. A hip way of testing production environments is with red-team assignments. – Beurtschipper May 04 '20 at 18:35
  • 4
    When I say users, I don't mean user accounts, I mean human beings, who are often the weakest point in a security system. – James_pic May 04 '20 at 19:10
  • To test user behaviour in production you normally perform social engineering and/or phishing tests, apart from normal pentests. The whole point of properly securing systems is that security holds, even if some users mess up. If your main line of defense is having smart users, your design fails to begin with. Modern pentests take this into account and don't bother with users, unless explicitly scoped. – Beurtschipper May 04 '20 at 23:11
  • 1
    @James_pic I've seen pentest scenarios that start with compromised user accounts - because it's a reasonable assumption that no matter what the company does, any reasonable sized company will have someone who falls for a spearphishing campaign, so you don't need to bother your users with testing if spearphishing is a risk because you already know that, but you want to test all the other security and detection measures for an attack through a compromised low-privilege user machine. If lateral movement or escalation needs reasonable user interaction, then that can be simulated by the client. – Peteris May 06 '20 at 00:54
  • 1
    The only answer that answers the question here, instead of handwaving to project self-importance. OP, your question is not stupid, as others have implied. I'm a bit surprised that you didn't choose this one as the correct answer. – Helen May 20 '20 at 07:59
6

The conceptual / contractual side has been covered in other answers, and it's very important.

However, there is also a technical side, especially in terms of "worst case impact" and limiting unnecessary access. Companies should come prepared to the pentest.

Here are some examples:

  • It should be possible to test system X without access to unrelated system Y.

    This may sound obvious, but many companies have a single login system for employees that gives access to all systems -- in other words, the test credentials for X are also valid to access Y, Z, etc.

    Instead, the company should be able to issue "limited access" tokens (a sketch of one way to do this follows this list). When designing the credentials system, this requirement should be added even if it seems useless at first (few employees, everyone normally trusted, ...).

  • Pentests should happen on a realistic environment

    Again, may sound obvious but... is your testing environment holding realistic data / scripts / programs / etc.? Do you have an automatic generator for that? Are all interesting cases covered? If not, is it feasible to snapshot and anonymize the real data?

    Testing on an empty shell is very unrealistic, and while you could ask the pentesters to populate a database for you they won't know all cases that you care about.

    Perhaps surprisingly, your testing system should be more complete than the real one -- you should add all tricky cases that come to mind, even if you haven't really encountered them yet.

  • Communication lines and expectations should be established beforehand

    Who should the pentesters contact in case of trouble? Are you ready to quickly restore the testing (or production!) system and bring it back up?

    Do you expect to be notified of critical issues as soon as discovered? Are you going to patch the system while the test is ongoing? Do you expect persistence to be attempted? Cross-server attacks / "lateral movement"? Social engineering?

    Even trickier: if pentesters come across suspicious data, can they confirm the situation with you? ("that's testing data, continue" vs. "we didn't expect this data to be there / to be accessible, stop immediately")

  • What is in scope? Closely related: what is your threat model?

    While you can ask pentesters to "figure it out", that's probably not what you want -- remember, the results need to be useful to you and represent your security boundaries.

    It's usually best to figure this out together with the pentesters, and be ready to evolve things during the effort if necessary.
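
As a sketch of the "limited access" tokens from the first bullet (purely illustrative; the HMAC scheme and all names are my assumptions, and a real deployment would more likely use an existing standard such as OAuth scopes):

    # Hedged sketch: a short-lived credential scoped to a single system,
    # so test logins issued for X are useless against Y. Illustrative only.
    import base64, hashlib, hmac, json, time

    SECRET = b"server-side-signing-key"  # placeholder

    def issue_scoped_token(user: str, system: str, ttl_s: int = 8 * 3600) -> str:
        claims = {"sub": user, "scope": system, "exp": int(time.time()) + ttl_s}
        body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
        sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        return f"{body}.{sig}"

    def verify(token: str, system: str) -> bool:
        body, sig = token.rsplit(".", 1)
        good = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(good, sig):
            return False  # tampered or forged token
        claims = json.loads(base64.urlsafe_b64decode(body))
        return claims["scope"] == system and claims["exp"] > time.time()

The point is only that scope and expiry are enforced server-side; which mechanism you use matters less.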

Incidentally, I believe most pentesters will happily provide advice on this matter. No one really wants to deal with the fallout of accidental damage or privacy repercussions.

Jacopo
  • 256
  • 1
  • 3
  • 1
    I'm not sure how testing system X without access to system Y matters with penetration testing. If someone is breaching your system, and can access unrelated system Y through system X, that should be something you'd want to know. If anything, accessing unrelated systems is how most breaches occur. – iheanyi May 05 '20 at 15:09
  • 1
    That would be a cross-server attack ("lateral movement"). I was referring mainly to access credentials. For instance, let's say we're pentesting the sales tracking system: pentesters will need some form of access credential. Can we limit that? Or we have to give a company-wide employee account that has access to all systems (email, HR, file shares, ...)? Sometimes companies do "wide" pentests where ~everything is in scope, but most commonly the scope is limited -- think of it this way: pentesters have limited time, where do you want them to spend it? Should you take unnecessary risks? – Jacopo May 05 '20 at 19:48
  • 1
    Why would you give them access credentials? Their job is to figure out how to gain access starting from no credentials. Whether that's a vulnerability in your server, phishing email, some other form of social engineering, etc. If you have access credentials, you're already inside the airtight hatchway. – iheanyi May 06 '20 at 18:12
  • 2
    Well, there are many hatchways :) It's valid to ask for a "recon style" pentest: from zero, find credentials or another way to access (sometimes, find the site itself :D). However, it's also important to see what an authenticated user can do, what are the risks there, etc. For example, employees can legitimately login to the HR system and see their own pages, but generally should not be able to see other people's or edit their own info beyond what the policy allows. – Jacopo May 06 '20 at 20:43
4

The question is, of course, circular: if you could prevent the tester from gaining certain access, then you would definitely want to prevent strangers/attackers from gaining it too. So the measures that keep the pentester out are exactly those that you already have in place and want to test. There should be no difference.

The only possible alternative would be to have the pentesters go after a dedicated test environment duplicated from your prod environment but without live data, which is of course more costly, and you risk that this clone nevertheless differs in details and dilutes the results. A milder variation would be to cut off only the most valuable IP and client-data databases/NAS, etc., or to attach dummy DBs to your prod systems.

Your question is really about the risk treatment of your concerns, which should be covered by proper treatments as defined in Third-Party Risk Management:

  • perform a risk analysis identifying the actual risk potential/value; this should not only cover data leaks, but also the linked reputational damage, possible service disruptions (might they shut down your production or web presence/shop during the test?), etc.
  • identify possible providers (pentesters) and do proper due diligence/research. Your concerns are mostly about trust in the engaged party, so what you need is to build that trust and be sure it is built on facts
  • only use a pentester from your own jurisdiction! If something happens, you need to be able to take them to court at your agreed place of jurisdiction, and that evaporates if the tester has no legal entity there.
  • define the (legal) boundaries of the tests to be executed
  • and most importantly: define the legal consequences in your contract (this is often forgotten - people only agree on what to do or not to do, but do not nail down what happens if this is violated) in terms of proper liability clauses, cancellation rights, payment variations and penalty fees.

As an additional measure against the actual issue of a data leak after the pentest, and to be able to prove a linkage, you could add some honeypot data to your real data just for the duration of the pentest. If there is a leak and it contains the honeypot data, you have better proof that the leak actually came from the pentest.
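
A minimal sketch of that honeypot idea (illustrative only; the record shape and where you seed it are my assumptions):

    # Hedged sketch: seed uniquely identifiable "canary" records before the
    # test, then check any later leak for them. Record format is made up.
    import secrets

    def make_canary_records(n: int = 5) -> list:
        records = []
        for _ in range(n):
            tag = secrets.token_hex(8)  # unique, unguessable marker
            records.append({
                "name": f"Canary {tag[:4].upper()}",
                "email": f"user-{tag}@mail.example.com",
            })
        return records

    def leak_contains_canaries(leak_text: str, records: list) -> bool:
        return any(r["email"] in leak_text for r in records)

Keep a copy of the seeded records (and when they were inserted) somewhere outside the tested systems, or the proof is worthless.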

Kerry
  • 41
  • 1
1

You are asking the wrong question. Penetration testers test security by simulating what attackers will do. If you are more worried about pentesters stealing your data than about malicious actors, I would advise rethinking your security architecture.

In a secure environment, there should be “defence in depth”: multiple security controls that provide redundancy in case one fails. The controls can be technology (like firewalls, IPS, EDR), policy (don’t use your company creds on outside systems), or user training (don’t click on that phishing link). The multiple controls should be designed to catch anyone behaving maliciously. The point of pentesting is to find gaps where there aren’t multiple, or any, controls. Finding these gaps then allows for remediation: implementing additional controls in the form of more technology, procedures, or user awareness training.

Pentesting isn’t supposed to be adversarial. It should be a good collaboration between the pentesters and the admin/security team.

schroeder
  • 129,372
  • 55
  • 299
  • 340
dmaynor
  • 458
  • 2
  • 3
  • Some false premises here. "If you are more worried about pentesters stealing your data than malicious actors" -- no evidence to suggest this. There is a difference between having a weak lock that someone could break and inviting people to try to break locks. "Pentesting isn’t supposed to be adversarial." -- OP is not suggesting being adversarial, but performing normal due diligence. You do not acknowledge that there could be malicious pentesters (when in fact, it's a thing) and assume that there is no risk in hiring a pentester (when there is). – schroeder May 20 '20 at 16:15