It’s time for regular Americans to think differently about cybersecurity

One of the bigger shocks of the past several years is the apparent hackability of the United States government and its adjacent bodies: the Office of Personnel Management, the NSA, the Department of Veterans Affairs, the Democratic National Committee. And those are just the cyber intrusions we know about.

From the standpoint of everyday life, it’s unnerving to watch big bureaucracies (and big corporations) get blindsided by shadowy bad guys. The people in charge of these institutions know how important cyber and information security are, and they have the budgets to keep up with changing trends in digital intrusiveness. If private companies and public-sector leaders can’t manage the new threats, how can ordinary Americans?

Barack Obama told us not to put it in an email if we don’t want it in the news; Donald Trump says we ought to use human couriers. The subtext of their comments is that 21st-century technology is unsecurable, and therefore we shouldn’t use it. That probably rings true to people who don’t understand how all this hacker stuff works.

The good news is that it’s not overly difficult to secure your personal digital presence. Neophyte readers can start with security researcher Martin Shelton’s “Securing Your Digital Life Like a Normal Person.” If you’d like to take additional steps, The Intercept’s “Surveillance Self-Defense for Journalists” offers techniques that even non-journalists can benefit from.
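To make one of those small steps concrete, here is a minimal sketch, in Python, of a habit in the same spirit as those guides: checking whether a password has already shown up in known breach dumps. It uses the public Pwned Passwords range API, which only ever sees the first five characters of the password’s SHA-1 hash; the sample password is, of course, just a placeholder.

```python
# Minimal sketch: check whether a password appears in known breach corpora
# via the Pwned Passwords range API (k-anonymity: only a 5-character hash
# prefix ever leaves your machine). Requires: pip install requests
import hashlib
import requests

def times_pwned(password: str) -> int:
    """Return how many times this password appears in the breach corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Placeholder password for illustration only; never hard-code real ones.
    print(times_pwned("correct horse battery staple"))
```

If the count comes back above zero, that password belongs in the bin, not in your accounts.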

There are simple steps we can take to begin protecting ourselves online

But even if you do these things for yourself, the failings of big companies and big bureaucracies will always leave you slightly vulnerable. So let’s talk about how cybersecurity works at the social level: When is a hack a hack? When is it a crisis? What does due diligence mean? What sort of failures are whose fault, and how much do they matter?

Think of the sensational “hack” of the Office of Personnel Management (OPM) in 2015. While many Americans came away from the episode with a vision of malevolent Chinese key-tappers defeating U.S. security measures and thieving away vast troves of data, the reality was quite different.

OPM had handed third-party contractors, including some in China, access so comprehensive that House Oversight Chairman Jason Chaffetz (R-Utah) called the agency’s security posture “akin to leaving all your doors and windows unlocked and hoping nobody would walk in and take the information.”

Although data encryption isn’t much help when you’re that negligent, OPM was also found to have left about two-thirds of its data unencrypted, meaning anyone who obtained it could simply read it. If you leave your car unlocked with the keys in the ignition and wake up to find it gone in the morning, does it really help to think of what happened as breaking and entering? Unwelcome people can lift huge amounts of online information without doing much hacking. And if cybersecurity is weak enough, they don’t have to hack at all to create a crisis.
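To see why unencrypted data at rest is such a gift to whoever walks off with it, here is a minimal sketch, assuming Python and the widely used cryptography package, of encrypting a file so that a stolen copy is just ciphertext without the key. The filenames are hypothetical, and nothing here describes what OPM actually ran; it only illustrates the principle.

```python
# Minimal sketch: encrypting a file at rest with the "cryptography" package.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

# Generate a key and keep it apart from the data it protects (a key
# management service in practice; an in-memory variable here for brevity).
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt the sensitive file. Anyone who copies "personnel_records.enc"
# without the key gets ciphertext, not readable records.
with open("personnel_records.csv", "rb") as f:
    ciphertext = fernet.encrypt(f.read())

with open("personnel_records.enc", "wb") as f:
    f.write(ciphertext)

# Only a holder of the key can reverse the operation.
plaintext = fernet.decrypt(ciphertext)
```

Of course, encryption only helps if the keys are guarded as carefully as the data itself; that is what “not much help when you’re that negligent” means in practice.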

But cybersecurity isn’t all about preventing hacks. It’s much more accurate to say it’s about knowing where a company or agency is vulnerable, and securing those weak spots with as little harm to your systems and their performance as possible. This is true even at a personal level: You want security where you don’t have it, but you also want to be able to use digital systems relatively easily and efficiently.

Businesses with good information security understand that most of the insecurity they’ll deal with comes from user confusion or carelessness, not from state-sponsored attackers or rogue hoodie-clad criminals. And even when the “real bad guys” are targeting an organization, defending against them requires not just sophisticated software, but smart humans. The most advanced security tech won’t help if Jenny accidentally gives Joe a thumb drive he’s not authorized to access, and Joe then takes that drive home and plugs it into his poorly secured personal computer.

Poulos says cybersecurity requires sophisticated software and smart humans

Like personal hygiene, strong cybersecurity is best built and maintained as a habit. At a high level, an organization should constantly test and challenge its own information firewalls, to make sure they work and to quickly identify when they don’t.
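What “constantly test and challenge” can look like in practice is a small, scheduled check rather than a one-off audit. The sketch below, with a hypothetical host name and an invented list of ports that are supposed to be closed to the outside, simply verifies that those ports refuse connections and raises an alert when one doesn’t. Real organizations use dedicated scanners and alerting pipelines; the point here is the habit of asking the question automatically and often.

```python
# Minimal sketch: a recurring check that ports meant to be closed actually are.
# "internal.example.com" and the port list are hypothetical placeholders.
import socket

HOST = "internal.example.com"
SHOULD_BE_CLOSED = [23, 3389, 5900]  # e.g., telnet, RDP, VNC

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_exposure() -> list[int]:
    """Return the ports that are unexpectedly reachable."""
    return [p for p in SHOULD_BE_CLOSED if port_is_open(HOST, p)]

if __name__ == "__main__":
    exposed = check_exposure()
    if exposed:
        print(f"ALERT: unexpectedly open ports on {HOST}: {exposed}")
    else:
        print(f"OK: no unexpected exposure on {HOST}")
```

Run on a schedule, a check like this turns “we assume the firewall works” into “we confirmed it this morning.”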

And as for individuals? It’s time we take steps to protect ourselves and update the way we think about cybersecurity. Figuratively speaking, there is no castle wall thick enough, no moat deep enough, and no drawbridge narrow enough to guarantee our safety against intrusion. Someone will always find a way in, and whoever gets past that firewall — especially by cheating or trickery rather than by frontal assault — gets access to everything.

Rather than a big castle, think of good information security as concentric rings of trust, verification, and ritualized communications practices. When a person or a piece of information shows up within a ring where they don’t belong, good infosec springs into action, chasing down the identity and intention behind the anomaly and working to limit whatever damage arises or has already arisen.
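One way to picture the rings model in code: every person and every piece of information belongs to a ring, access is allowed only when the requester’s ring is at least as trusted as the resource’s, and anything else is treated not just as a denial but as an anomaly to chase down. The ring numbering, names, and policy below are invented purely for illustration; real systems express the same idea through identity providers and access-control policies.

```python
# Minimal sketch of the "concentric rings" idea: every subject and resource
# belongs to a ring (0 = innermost/most trusted); access is allowed only when
# the subject's ring is at least as trusted as the resource's, and anything
# else is treated as an anomaly to investigate. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class Subject:
    name: str
    ring: int  # clearance: lower number = more trusted

@dataclass
class Resource:
    name: str
    ring: int  # sensitivity: lower number = more sensitive

def investigate_anomaly(subject: Subject, resource: Resource) -> None:
    """Stand-in for the real work: identify who, why, and what was touched."""
    print(f"ANOMALY: {subject.name} (ring {subject.ring}) "
          f"reached for {resource.name} (ring {resource.ring})")

def request_access(subject: Subject, resource: Resource) -> bool:
    """Allow access only if the subject is cleared for the resource's ring."""
    if subject.ring <= resource.ring:
        return True
    # The interesting part isn't the denial; it's the follow-up.
    investigate_anomaly(subject, resource)
    return False

# Example: an outer-ring contractor reaching for an inner-ring record is flagged.
contractor = Subject("outside contractor", ring=3)
records = Resource("personnel records", ring=1)
request_access(contractor, records)
```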

Although the details will always vary with the kinds and balance of threats specific to each realm, this conceptual model works across personal, professional, and governmental security. The more we all come to share a good grasp of it, the more we can tense up when we ought to — and relax when we don’t need to.

James Poulos is a Contributing Editor at National Affairs and the author of The Art of Being Free, out now from St. Martin’s Press.
