It’s often said that the best cops would also make pretty good robbers, and vice versa. The same is true of cybersecurity professionals. If you have the skills and the mindset of a hacker, you can use them on either side of the law with great success.
In fact, some people do, or have at some point in their careers. The lines can be blurry, and the laws are unforgiving, totally unconcerned with what your intentions might have been. When you fire up Nessus or L0phtCrack to analyze a system’s security posture, can you be prosecuted under current cybercrime laws?
The short answer is yes.
Penetration testers today usually have clients sign release forms as thick as a stack of phone books to avoid such problematic interpretations of their work. But there are still cases where even experts who believed they were authorized to check security systems have been arrested and charged with crimes, such as a Georgia man contracted to handle IT services for a county 911 service. A port scan of the network inadvertently touched a server also used by the county but maintained by another consulting firm. He was charged with computer trespassing and acquitted only after an expensive court battle.
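To appreciate how routine the activity at issue was: a port scan of this sort is often nothing more than a TCP connect sweep across an address range. Here is a minimal sketch in Python (the subnet and port list are hypothetical placeholders, not details from the actual case) showing how such a sweep touches every host in a range, including machines the tester never intended to assess:

```python
# Minimal sketch of a routine TCP connect sweep. The subnet and ports
# below are hypothetical examples, not details from the Georgia case.
import socket

SUBNET = "192.0.2"             # hypothetical /24 (the TEST-NET-1 documentation range)
COMMON_PORTS = [22, 80, 443]   # SSH, HTTP, HTTPS

for host in range(1, 255):
    address = f"{SUBNET}.{host}"
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.5)
            # connect_ex returns 0 when the connection attempt succeeds
            if s.connect_ex((address, port)) == 0:
                print(f"{address}:{port} is open")
```

Nothing in the scan itself reveals who owns or maintains each responding host, which is exactly how a sweep like the consultant’s can stray onto a server managed by another firm.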
There has always been a tension between legitimate cybersecurity research and investigation on one side and outright criminal activity on the other. There is a reason that the very word hacking is seen, within the community, as ambiguous at best. Early hackers were simply taking things apart and figuring out how they worked… a spirit that continues in today’s cybersecurity workforce.
The Pendulum Swings On Cybercrime Laws
In the early years of computers, the laws on the books had not yet envisioned the sort of digital communication computers made possible, and had not been written to address the oddity of information that could be copied effortlessly and yet never physically exist. Those hackers who were caught and charged often faced penalties for wire fraud or possession of unauthorized access devices—essentially the same statutes covering possession of lock-picking tools.
In some cases, activity that would be plainly illegal today simply wasn’t punished for want of a relevant statute, or hackers got away with a slap on the wrist for fairly nefarious crimes. Markus Hess, the West German hacker at the center of the famous Cuckoo’s Egg electronic spying case, got off with a 20-month suspended sentence for espionage, since his other hacking activities were not illegal at the time under German law.
After hacking caught the attention of the public, and of legislators, the pendulum swung in the other direction. In 1986, the Computer Fraud and Abuse Act (CFAA) was passed to close many of the holes in existing law regarding computer crimes. It made unauthorized access to protected computer systems a criminal offense, along with intentionally damaging those systems or copying data from them.
The law was not universally welcomed in the security community. What seemed to many people like more or less innocent—or even productive—snooping became criminal activity with serious federal prison time attached. Prosecutors and judges were both ignorant of the technology and frightened by it.
Kevin Mitnick, whose exploits were legendary but largely playful, was held without bail and in solitary confinement in part because prosecutors argued that simply giving him access to a phone line could let him compromise the country’s nuclear arsenal.
Even after release, Mitnick was prohibited from using any computer or cellular phone for three years—conditions that were difficult to comply with in 2000 and probably would be impossible today, now that we’re surrounded by networked dishwashers and digital bus ticket machines.
The CFAA Has A Chilling Effect on Cybersecurity Research
Even more threatening was the fact that the CFAA created a civil cause of action in addition to criminal liability. Companies could sue security researchers for alleged violations even when there was insufficient evidence to support criminal charges. Given the industry’s far greater resources, the practical effect was to hand major software manufacturers the power to gag any security analyst reporting vulnerabilities, even one acting in good faith. Sony, for instance, sued a group of hackers for reverse engineering its PlayStation 3 console, despite no criminal activity having occurred.
But even ethical hackers found that the most effective way to improve security for the larger community was to expose security flaws publicly rather than approach the responsible vendor directly. Many companies would simply sit on information uncovered by researchers, counting on holes remaining obscure rather than spending money to patch them. White hat hackers, led by the L0pht and the Cult of the Dead Cow, began releasing that information publicly to create pressure to fix the vulnerabilities.
Manufacturers weren’t slow to take advantage of the CFAA to quash those releases when possible rather than actually work with researchers to close the holes. In some cases, vendors responded with legal threats, as when Cisco moved in 2005 to block a researcher who planned to disclose vulnerabilities in its router software.
The CFAA Is Sometimes Used Outside Its Original Intent
The CFAA might have been a step forward, but it failed to foresee the difficulty of establishing accurate valuations for electronic information, or of holding users accountable for actions that might seem perfectly reasonable but violate obscure terms of service that were never expressly agreed to. Pulte Homes, a homebuilder, sued a union under the act for having members send in complaints about its actions—the resulting volume of calls and email crashed the builder’s network.
Pulte lost the case, but the courts have not settled on that interpretation, as the criminal case against hacker Aaron Swartz demonstrated in 2011. Swartz, using a legitimate and authorized account for the JSTOR academic journal library, downloaded what JSTOR considered an excessive amount of data. He was charged under the CFAA but committed suicide before the case went to trial.
An amendment to prevent such overreach, called Aaron’s Law, has been languishing in Congress since 2013.
The Cybersecurity Community Relies on Informal Cooperation To Avoid Prosecutions
Today, there is a sort of détente between technology companies, law enforcement, and security researchers. Most researchers prefer to contact vendors directly when they uncover flaws, with the understanding that a certain amount of time will be allowed to fix the flaw before the findings are published. In general, everyone is flexible—researchers will assist in finding fixes or delay disclosure if more time is necessary, and technology companies will make their best effort to patch issues uncovered by third parties.
But from time to time the informal truce still fails.
The Ghosts of Past Explorations Rise To Haunt Some White Hat Hackers
The statute of limitations is another factor cybersecurity professionals should worry about. The path to a career in the field almost inevitably involves some youthful exploration and experimentation with extralegal techniques. Those foolish, but common, indiscretions don’t always stay in the past.
In August 2017, for example, UK security researcher Marcus Hutchins was stopped and arrested at McCarran International Airport in Las Vegas after attending the popular Defcon hacker convention. Famous for uncovering and flipping the kill switch that stopped the global WannaCry ransomware outbreak, Hutchins was indicted for his alleged involvement in creating a banking Trojan known as Kronos.
Details are sketchy, but evidence indicates that Hutchins was at least researching a variant of Kronos as far back as 2014. It’s unclear whether the indictment stems from activities that other cybersecurity professionals would view as strictly research, or whether the government has other evidence of nefarious intentions. The community has rallied behind Hutchins, but the case is a clear indication that a substantial gray area remains in security research, one that can be interpreted as either legal or illegal depending on the whims of law enforcement.
This doesn’t give independent security researchers or ethical hackers a warm, fuzzy feeling about their work, particularly coming as it does on the tail of revelations of extensive NSA surveillance of American civilians. It’s also a concern to many of them that Hutchins may be charged over activities he performed when he was a 15-year-old experimenting with malware authoring.
Although by most accounts Hutchins had been acting as a legitimate researcher as an adult, it’s possible that his old code was being used for illegitimate purposes. Since many ethical hackers got their start with less ethical efforts in their teen years, the idea that old code snippets or exploits surfacing years later could lead to prosecution may have a dampening effect on their willingness to participate openly in the community.
No one can tell what the future holds for independent cybersecurity researchers. The role remains too important for government or industry to quash entirely, yet both are likely to push back against unwelcome findings they deem threatening. While penetration testers can continue to get releases signed, completely independent researchers will have to live by their wits and judgment as they straddle the line between legality and ethics in cybersecurity.