Cybersecurity

Abstract: This article examines the transitory nature of cyberweapons. Shedding light on this highly understudied facet is important both for grasping how cyberspace affects international security and for supporting policymakers’ efforts to make well-informed decisions regarding the deployment of cyberweapons. First, laying out the life cycle of a cyberweapon, I argue that these offensive capabilities differ both in ‘degree’ and in ‘kind’ from other weapons with regard to their temporary ability to cause harm or damage. Second, I develop six propositions which indicate that not only technical features inherent to the different types of cyber capabilities – that is, the type of exploited vulnerability, access and payload – but also offender and defender characteristics explain differences in transitoriness between cyberweapons. Finally, drawing out the implications, I reveal that the transitory nature of cyberweapons benefits great powers, changes the incentive structure for offensive cyber cooperation and induces a different funding structure for (military) cyber programs compared with conventional weapon programs. I also note that the time-dependent dynamic underlying cyberweapons potentially explains the limited deployment of cyberweapons compared to espionage capabilities.

Journal article by Max Smeets, Journal of Strategic Studies, No. 1-2.

Abstract: Across the world, states are establishing military cyber commands or similar units to develop offensive cyber capabilities. One of the key dilemmas faced by these states is whether (and how) to integrate their intelligence and military capabilities to develop a meaningful offensive cyber capacity. This topic, however, has received little theoretical treatment. The purpose of this paper is therefore to address the following question: What are the benefits and risks of organizational integration of offensive cyber capabilities (OIOCC)? I argue that organizational integration may lead to three benefits: enhanced interaction efficiency of intelligence and military activities, better (and more diverse) knowledge transfer, and reduced mission overlap. Yet there are also several negative effects attached to OIOCC. It may lead to ‘cyber mission creep’ and an intensification of the cyber security dilemma, and it could also result in arsenal cost ineffectiveness in the long run. Although the benefits of OIOCC are seen to outweigh the risks, failing to grasp the negative effects may lead to unnecessary cycles of provocation, with potentially disastrous consequences.

Journal article by Max Smeets, Defence Studies, No. 4.

Abstract: Could offensive cyber operations provide strategic value? If so, how and under what conditions? While a growing number of states are said to be interested in developing offensive cyber capabilities, there is a sense that state leaders and policymakers still do not have a strong conception of their strategic advantages and limitations. This article finds that offensive cyber operations could provide significant strategic value to state actors. The availability of offensive cyber capabilities expands the options available to state leaders across a wide range of situations. Distinguishing between counterforce cyber capabilities and countervalue cyber capabilities, the article shows that offensive cyber capabilities can be both an important force multiplier for conventional capabilities and an independent asset. They can be used effectively with few casualties and achieve a form of psychological ascendancy. Yet the promise of offensive cyber capabilities’ strategic value comes with a set of conditions. These conditions are by no means always easy to fulfill—and at times lead to difficult strategic trade-offs.

Journal article by Max Smeets, Strategic Studies Quarterly, No. 3.


Abstract: The Perfect Weapon is the startling inside story of how the rise of cyberweapons in all their forms—from attacks on electric grids to attacks on electoral systems—has transformed geopolitics like nothing since the invention of the airplane and the atomic bomb. Cheap to acquire, easy to deny, usable for everything from crippling infrastructure to sowing discord and doubt, cyber is now the weapon of choice for American presidents, North Korean dictators, Iranian mullahs, and Kremlin officials. The United States struck early with the most sophisticated cyber attack in history, Operation Olympic Games, which used malicious code to blow up Iran’s nuclear centrifuges, and it has gone on to use cyberweapons against North Korean missiles and the Islamic State. Soon, the cyber floodgates opened. But as the global cyber conflict took off, America turned out to be remarkably unprepared. Its own weapons were stolen from the American arsenal by a group called Shadow Brokers and were quickly turned against the United States and its allies. Even while the United States built up a powerful new Cyber Command, it had no doctrine for how to use it. Deterrence failed. When under attack—by Russia, China, or even Iran and North Korea—the government was often paralyzed, unable to use cyberweapons because America’s voting system, its electrical system, and even routers in citizens’ homes had been infiltrated by foreign hackers. American citizens became the collateral damage in a war they barely understood, one that was being fought in foreign computer networks and along undersea cables.

Speaker Bio: David Sanger is national security correspondent for the New York Times and bestselling author of The Inheritance and Confront and Conceal. He has been a member of three teams that won the Pulitzer Prize, including in 2017 for international reporting. A regular contributor to CNN, he also teaches national security policy at Harvard’s Kennedy School of Government.

 

David Sanger, Chief Washington Correspondent, The New York Times
News Type: Commentary

Alex Stamos, William J. Perry Fellow, wrote the following essay for Lawfare:

In the swirl of news this week, it would be easy to miss recent announcements from two of America's largest and most influential technology companies that have implications for our democracy as a whole. First, on Tuesday morning, Microsoft revealed that it had detected continued attempts at spear-phishing by APT 28/Fancy Bear, the hacking group tied to Russia’s Main Intelligence Directorate (known as the GRU). Later that day, my friends and former colleagues at Facebook unveiled details on more than 600 accounts that were being used by Russian and Iranian groups to distort the information environment worldwide.

The revelations are evidence that Russia has not been deterred and that Iran is following in its footsteps. This underlines a sobering reality: America’s adversaries believe that it is still both safe and effective to attack U.S. democracy using American technologies and the freedoms we cherish.

And why wouldn’t they believe that? In some ways, the United States has broadcast to the world that it doesn’t take these issues seriously and that any perpetrators of information warfare against the West will get, at most, a slap on the wrist. While this failure has left the U.S. unprepared to protect the 2018 elections, there is still a chance to defend American democracy in 2020.

From 2014 until very recently, I worked on security and safety at Yahoo and then at Facebook, both companies on the front line of Russia’s information and cyber-warfare campaign. From that vantage point, the facts are indisputable: There was a multiyear effort by a coalition of Russian agents to harm the likely presidency of Hillary Rodham Clinton and sow deep division in America’s political discourse. The uniformed officers of the GRU and the jeans-wearing millennial trolls of the private Internet Research Agency turned American technology, media and this country’s culture of discourse back against the United States. Stymied by a lack of shared understanding of what happened, the government’s sclerotic response has left the United States profoundly vulnerable to future attacks. As a security leader in my former role at Facebook, my personal responsibility for the failures of 2016 continues to weigh on me, and I hope that I can help elucidate and amplify some hard-learned lessons so that the same mistakes will not be made again and again.

The fundamental flaws in the collective American reaction date to summer 2016, when much of the information being reported today was in the hands of the executive branch. Well before Americans went to the polls, U.S. law enforcement was in possession of forensics from the hacks against the Democratic National Committee; important metadata from the GRU’s spear-phishing of John Podesta and other high-profile individuals; and proactive reports from technology companies. Following an acrimonious debate inside the White House, as reported by the New York Times’s David Sanger, President Obama rejected several retaliatory measures in response to Russian interference—and U.S. intelligence agencies did not emerge with a full-throated description of Russia’s meddling until after the election.

If the weak response of the Obama White House indicated to America’s adversaries that the U.S. government would not respond forcefully, then the subsequent actions of House Republicans and President Trump have signaled that our adversaries can expect powerful elected officials to help a hostile foreign power cover up attacks against their domestic opposition. The bizarre behavior of the chairman of the House Permanent Select Committee on Intelligence, Rep. Devin Nunes, has destroyed that body’s ability to come to any credible consensus, and the relative comity of the Senate Select Committee on Intelligence has not yet produced the detailed analysis and recommendations our country needs. Although by now Americans are likely inured to chronic gridlock in Congress, they should be alarmed and unmoored that their elected representatives have passed no legislation to address the fundamental issues exposed in 2016.

Republican efforts to downplay Russia’s role constitute a dangerous gamble: It is highly unlikely that future election meddling will continue to have such an unbalanced and positive impact for the GOP. The Russians are currently the United States’ most visible information-warfare adversaries, but they are not alone. Their proven playbook is now “in the wild” for anyone to use. Recent history has shown that once a large, powerful nation-state actor demonstrates the effectiveness of a technique, many other groups rush to build cheaper, often more nimble versions of the same capability.

The GRU attacks relied upon well-known social engineering and network intrusion techniques. Likewise, the Internet Research Agency’s trolling campaign required only basic proficiency in English, knowledge of the U.S. political scene available to any consumer of partisan blogs, and the tenacity to exploit the social media platforms’ complicated content policies and natural desire to not censor political speech. After Facebook’s announcement on Tuesday, it is clear that Iran has also followed this playbook. There are many other U.S. adversaries with well-developed cyber-warfare capabilities, such as China or North Korea, that could decide to push candidates and positions amenable to them—including those supported by Democrats and opposed by Republicans. There are also domestic groups that could utilize the same techniques, as many kinds of manipulation might not be illegal if deployed by Americans, and friendly countries might not sit idly by as their adversaries work to choose an amenable U.S. government.

In short, if the United States continues down this path, it risks allowing its elections to become the World Cup of information warfare, in which U.S. adversaries and allies battle to impose their various interests on the American electorate.

Enemies aiming to discredit American-style democracy, rather than promote a specific candidate, will not have to wait for election dynamics like those of 2016, when two historically unpopular nominees fought over a precariously balanced electoral map. Direct attacks against the U.S. election system itself—as opposed to influence operations aimed at voters—were clearly a consideration of U.S. adversaries: There are multiple reports of the widely diffuse U.S. election infrastructure being mapped out and experimentally exploited by Russian groups in 2016. While swinging a national vote in a system run by thousands of local authorities would be highly difficult, an adversary wouldn’t need to definitively change votes to be successful in election meddling. Eliminating individuals from voting rolls, tampering with unofficial vote tallies or visibly modifying election web sites could introduce uncertainty and chaos without affecting the final vote. The combination of offensive cyber techniques with a disinformation campaign would enable a hostile nation or group to create an aura of confusion and illegitimacy around an election that could lead to half of the American populace forever considering that election to be stolen.

While it is much too late to effectively rehabilitate election security for the 2018 midterms, there are four straightforward steps the United States can take to prepare for potential attacks in 2020.

First, Congress needs to set legal standards that address online disinformation. Social media platforms, including my former employer, made serious mistakes in 2016. Tech companies were still using a definition of cyber-warfare focused on traditional hacking techniques—such as spear-phishing or the spreading of malware—and were not prepared to detect and mitigate the propaganda campaigns that were subsequently found and stopped.

Since 2016, many companies have changed their products to deal with misinformation, updated policies to catch inauthentic behavior and created new types of transparency around political ads. Yet it is important to note that companies have undertaken this work voluntarily and could reverse it in the future. And there is a significant gap between the actions of the most criticized companies and those that have flown under the radar: Unlike Facebook and Google, the rest of the massive online advertising industry has kept changes to a minimum.

The Honest Ads Act, introduced by Democratic Sen. Amy Klobuchar and supported by 30 bipartisan co-sponsors, is a good start to setting a legal baseline; however, it must be amended to provide for technical standardization of advertising archives and to set guidelines for the use of massive voter databases by campaigns and political parties. Since the Obama 2012 campaign demonstrated the power of online ad targeting, parties, campaigns and super PACs have finely honed their targeting techniques and regularly run ads specifically designed to influence dozens or hundreds of voters with customized messaging. Americans need to collectively decide how finely political influence campaigns should be allowed to divvy up the electorate, even when those campaigns are domestically run and otherwise completely legal. Congress could also encourage more cooperation between the tech platforms by expanding the protections it granted to share cybersecurity threats to include misinformation actors, as well as by giving legal encouragement to companies to engage academics in joint research projects.

Second, the United States must carefully reassess who in government is responsible for cybersecurity defense. The U.S. has two hyper-competent intelligence and military security organizations in the National Security Agency and U.S. Cyber Command, but both are most broadly focused on offensive operations and face legal restrictions on domestic U.S. operations. The Department of Homeland Security has consolidated a great deal of the defensive responsibilities across multiple sectors, but its cyber capabilities focus on critical infrastructure such as the power grid.  This leaves the FBI as the de facto agency coordinating cyber defense in the United States. While the bureau has many skilled agents and technologists, it is at its core a law enforcement entity that focuses on investigating crimes after they occur, diligently building a case and, eventually, bringing the perpetrators to justice. Prevention certainly has become a bigger focus for the FBI—especially in the terrorism context since 9/11—but the special counsel’s recent indictments for two-year-old Russia actions demonstrate that the general timeline of FBI action does not comport well with preventing attacks in the first place.

The United States should consider following its closest allies in creating an independent, defense-only cybersecurity agency with no intelligence, military or law enforcement responsibility. In the run-up to the most recent French and German elections, the respective cybersecurity agencies of these countries had access to intelligence on likely adversaries, the legal authority to coordinate election protection and the technical chops to work directly with technology platforms. These organizations were independent enough to work directly with the relevant political campaigns, and their uncompromised mandates made them effective partners for multinational tech companies.

Third, each of the 50 states must build capabilities on election protection. While the Constitution gives Congress the ability to regulate elections, traditionally states have jealously guarded this area and eyed federal aid with great suspicion. For states’ autonomy to thrive, it is critical for every state to follow the lead of Colorado and a handful of others in building competent statewide election security teams that set strong standards for verifiable voting, perform security testing of local systems, and provide a rapid-reaction function in case of an attempted attack. The federal government could support the growth of these statewide functions with funding, intelligence and training, and by finding ways to harness the capabilities of private IT workers.

In the long run, it will be impossible to completely prevent any interference in elections. Any system as complicated as one supporting the franchise of more than 200 million registered voters will have serious vulnerabilities. Individual candidates and campaign workers will succumb to professional attacks. And open societies are inherently vulnerable to external influence. This is particularly true in the United States, where the government doesn’t license the official press, empower officials to declare certain topics verboten, jail journalists for reporting on leaked documents, arrest bloggers for questioning the government, or require state IDs to create online accounts. In 2016, the most effective Russian propaganda was that which was carried in the pages of the New York Times and the Washington Post and repeated 24/7 on the cable news channels. The GRU successfully leveraged stolen information to entice the media to cover the anti-Clinton stories it preferred, and there is no way to prevent or limit that kind of influence while also respecting the rights of a free press.

The fourth step necessary is one that can be driven only by the demands of the American citizenry: Americans must demand that future attacks be rapidly investigated, that the relevant facts be disclosed publicly well before an election, and that the mighty financial and cyber weapons available to the president be utilized immediately to punish those responsible. This might seem like a far stretch under President Trump, but recent efforts by members of his administration to prepare for the midterms demonstrate that public pressure could encourage a meaningful response despite the current occupant of the Oval Office.

The attacks against U.S. political discourse aim to undermine citizens’ confidence, create chaos and jeopardize the legitimacy of the American government. With the right political will and cooperation, the United States can demonstrate that 2016 was an aberration and that the U.S. political sphere will not become the venue of choice for the latest innovations in global information warfare. The world—including America's enemies—is watching. 

 

Q&A by John Villasenor.


Introducing Cyberspectives, a new podcast analyzing the cyber issues of today with host John Villasenor.

In the inaugural episode, guest Andrew Grotto provides analysis on a broad range of cyber issues, including questions regarding areas of cyber most in need of national level attention, aspects of cyber that are underappreciated, emerging opportunities in the commercial cybersecurity sector, and how the academic community can best contribute to the cyber policy dialog.

About the guest:

Andrew Grotto is a William J. Perry International Security Fellow at the Center for International Security and Cooperation and a Research Fellow at the Hoover Institution, both at Stanford University. Before coming to Stanford, Grotto was the Senior Director for Cybersecurity Policy at the White House in both the Obama and Trump Administrations. Prior to that, he was Senior Advisor for Technology Policy to Commerce Secretary Penny Pritzker.

KEY EXCERPTS FROM THE ANDY GROTTO INTERVIEW

(the text below has been condensed and edited for clarity)  


John Villasenor:  
Is there anything less obvious that you'd say about aspects of cyber that you think are particularly deserving of national level attention—other than the obvious such as protecting critical infrastructure?

Andy Grotto:  
To me, one issue that really jumps out, based on my experience, is that there are a lot of open questions around the appropriate allocation of responsibility between the government and the private sector for defending against cybersecurity threats. I'll use an analogy from the physical world: we would never expect in a million years the operator of a power plant to defend the plant against a North Korean ballistic missile. That mission is squarely the government's job.

And, the cyber analog to that, though, is a little tricky because if North Korea conducted a highly sophisticated cyberattack against a plant, we might say, “Okay, yeah, maybe it's unreasonable for the plant to be able to defend against that kind of sophisticated attack.” But, what if it was just a criminal group, a domestic criminal operator who happened to come up with a sophisticated attack? Does it matter that the identity of the perpetrator was a nation state vs. some ambitious vandal?

And then, on the opposite end of the spectrum, if North Korea were to send in a lone agent to break into the power plant and sabotage it, and the sabotage caused catastrophic power outages and damages to the economy and loss of life, obviously, that's still a national security matter for the government to devote resources to both preventing and remedying, but we would also have a lot of questions about whether or not the power plant operator did its job. We would want to know, "Okay, so did you have perimeter security? Did you lock the front door? Why was your security vulnerable to such a single point of failure?"

So there's a blended responsibility. And I don't think that line is clear in the cyber context, because a nation state adversary could use a relatively low-end, even unsophisticated technique to conduct an attack with national security implications, partly owing to the fact that it was a nation state that did it. In that case, it's a national security issue.

John Villasenor:  
So you're saying that with this sort of allocation, it's easy to come up with the extreme ends of the spectrum, but most of what we actually encounter in terms of cyber challenges is going to be somewhere in the less clear middle ground, where allocation of responsibility is hard, and I think that's a terrific point. The other thing I wanted to briefly reflect on: you made a really important comment. You stated, correctly of course, that there's a lot of energy spent responding to crises, cyber crises of one form or another. My question in response to that observation is, is that also a risk? It's a risk in any domain, but is it a particular risk in this domain that our energies understandably get directed toward solving crises, but in doing that, we fail to take a step back, look at the big picture, and take some of the steps that could make some of these crises not happen in the first place?

Andy Grotto:  
Yeah, it is a challenge, and if I could pick a point of optimism here: part of the reason crises consumed so much bandwidth during my time in government, I hope, is that these crises often presented matters of first impression for decision makers, especially at a time when the broader cyber mission space was evolving within and across different agencies of the government. It meant that getting decisions made on cyber questions just took a lot more time, energy and resources than they might take in other domains.

So, my point of optimism is that as the government develops some muscle memory around how to deal with policy challenges in the cyber context, those decision costs will start to come down. They may still be high relative to other domains, but they hopefully won't be quite as high as I thought they were, at least, during my time.

John Villasenor:   
Are there any areas of cyber that you think are particularly underappreciated, in other words, that aren't getting the attention they deserve in light of their potential importance?

Andy Grotto:  
I mentioned the allocation of responsibility question for critical infrastructure. That's one. I'll offer two additional ones. The first is a lack of really reliable data around the cost of cyber incidents. There are various studies out there on what a data breach costs. What we're seeing more and more is scholars and statisticians pulling some pretty divergent conclusions from this data, which says something about the data.

So, I think that's an area where I would like to see a lot more scholarly attention and focus by industry and government, because I think if we can generate better data about the cost of cyber incidents, it will help enterprises across the country manage their risk more effectively, and then potentially even create a more vibrant insurance market.

And then the other area that needs more attention is what I call third-country issues in offensive cyber operations. In a cyber context, the identity between the physical location of the adversary and the physical location of the target isn't in place. An adversary may be in country A operating malicious cyber infrastructure in country B, so an operation against that adversary may actually have to take place in country B, which may or may not have anything to do with whatever conflict the US government, or pick-your-government, has with country A. So there's a third country in the mix, which creates, I think, some challenging policy and legal questions.

John Villasenor:           
And I would assume that's not the exception. That's likely, more often than not, going to be the case, right? If you're an attacker, the last thing you want to do is make it obvious where the attack's coming from, so I would assume one of the first things you're going to do is try to launch it from somewhere that at least helps mask your identity, right?

Andy Grotto:                
Right, and one of the unfortunate twists here is that our adversaries are also very familiar with US surveillance law and constitutional protections here domestically. So what adversaries will do is purposefully compromise infrastructure in the United States and use it as part of their attack infrastructure, because they know that, as a practical matter, it's harder for the US government to operate domestically against a national security threat like that than it would be if that same infrastructure were in a third country: domestically, we would need probable cause and would have to satisfy legal requirements that just aren't the same when operating overseas.

John Villasenor:           
Let me ask another question, and this is one where anybody who's a venture capitalist, or at a startup company, should be particularly interested in your answer: Obviously, there's an enormous commercial sector devoted to cyber solutions of all shapes and sizes. The question is, while that's a large sector, it's less clear that it's covering all the bases. Are there any obvious gaps in the types of solutions you see reflected in today's commercial offerings? If you were going to leave the academic/policy world and start a cybersecurity company, is there a particular sector of cybersecurity that you think is ripe for better solutions commercially?

Andy Grotto:  
I think any technology that can do what a human does in cybersecurity more efficiently and more effectively has huge potential, because time and time again, the critical shortage in the enterprise, whether it's the federal government or private companies, is human capital: the need for people to do IT. Solutions that can automate these tasks, I think, have huge potential in the future. And I think IoT cybersecurity is a massive opportunity: how to build effective security into products, but also how to retrofit products that have bad security with more effective security. I think that's a huge market.

John Villasenor:   
For people in the academic community, on the cyber policy side, again, putting the obvious aside: is there anything that you see as a particularly ripe avenue for people in the academic policy world to contribute to help move the dialog forward on cyber issues?

Andy Grotto:  
Yeah, so on the sort of policy side specifically, I would say, one area is data on cost of incidents, on the behavior of enterprises in the face of uncertainty around cyber risk. I think there's a huge need and opportunity for doctoral students looking for dissertations to delve into some of these empirical questions about measurement and whatnot.

I would love to see more psychologists in the cybersecurity business. If you look at studies of how adversaries break into enterprises and organizations, they're, for the most part, exploiting human weaknesses, through spear-phishing and things like that. Getting a better handle on how to make people, whether they're IT professionals or just users of IT, either less vulnerable or more effective at fending off attacks: I think there's a huge need there, and maybe some fascinating questions of psychology.

And then, I think, there's a need for management scientists and organizational scientists to start to unpack how businesses and governments, both within sectors and across sectors, can collaborate on common challenges, and to better characterize what we can learn from history about the ability of like-minded or similarly situated institutions to tackle a complex challenge, because managing cybersecurity risk is ultimately a management challenge for the enterprise.

 

News Type: Blogs

On May 23, Stanford students enrolled in Technology and Security (MS&E 193/293) met with General James M. Holmes. General Holmes gave a talk, "Applying Technology--the Military Perspective," and engaged students in a Q&A session afterwards. The interdisciplinary course explores the relationship between technology, war, and national security policy from early history to the modern day, focusing on current U.S. national security challenges and the role technology plays in shaping our understanding of and response to these challenges.

 

General James M. Holmes



Hero Image
img 4445
General James M. Holmes
Margaret Williams
All News button
1
Authors
Amy Zegart
News Type
Q&As
Date
Paragraphs

In a world complicated by terrorism, cyber threats and political instability, the private sector has to prepare for the unexpected. Amy Zegart, CISAC co-director, the Hoover Institution’s Davies Family Senior Fellow, and co-author (along with Condoleezza Rice) of Political Risk: How Businesses And Organizations Can Anticipate Global Insecurity, explains lessons learned in keeping cargo planes moving, hotel guests protected – and possibly coffee customers better served.  

Hero Image
Radio mic
All News button
1
News Type
Commentary
Date
Paragraphs

 Herbert Lin and Max Smeets wrote the following essay for Lawfare:

United States Cyber Command recently released a new “command vision” entitled “Achieve and Maintain Cyberspace Superiority.” The document seeks to provide: “a roadmap for USCYBERCOM to achieve and maintain superiority in cyberspace as we direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and foreign partners.”

Taken as a whole, the document emphasizes continual and persistent engagement against malicious cyberspace actors. One could summarize the new U.S. vision using Muhammad Ali’s famous phrase: “Float like a butterfly, sting like a bee.” Cyber Command aims to move swiftly to dodge opponents’ blows while simultaneously creating and recognizing openings to strike.

Cyber Command’s new vision is noteworthy in many ways. Richard Harknett’s March Lawfare post provides more context on “what it entails and how it matters.”

The emergence of this new vision—coinciding with a new administration—recognizes that previous strategies for confronting adversaries in cyberspace have been less than successful:

[A]dversaries direct continuous operations and activities against our allies and us in campaigns short of open warfare to achieve competitive advantage and impair US interests. ... Our adversaries have exploited the velocity and volume of data and events in cyberspace to make the domain more hostile. They have raised the stakes for our nation and allies. In order to improve security and stability, we need a new approach.

Another key realization is that activities in cyberspace that do not rise to the level of armed conflict (as traditionally understood in international law) may nevertheless have strategically significant effects:

The spread of technology and communications has enabled new means of influence and coercion. Adversaries continuously operate against us below the threshold of armed conflict. In this “new normal,” our adversaries are extending their influence without resorting to physical aggression. They provoke and intimidate our citizens and enterprises without fear of legal or military consequences. They understand the constraints under which the United States chooses to operate in cyberspace, including our traditionally high threshold for response to adversary activity. They use this insight to exploit our dependencies and vulnerabilities in cyberspace and use our systems, processes, and values against us to weaken our democratic institutions and gain economic, diplomatic, and military advantages.

Although the document never says so explicitly, it clearly contemplates Cyber Command conducting many cyber activities below the threshold of armed conflict as well.

At the same time, the vision is silent on a number of important points—after all, it is a short, high-level document. In this piece, we have highlighted some of these gaps to identify critical stumbling blocks and necessary areas of research. We categorized our comments below following the basic building blocks of any good strategy: ends, ways and means.

Ends

First, Cyber Command’s objective to “gain strategic advantage” seems obviously desirable. Yet, the vision doesn’t address what that actually means and how much it will cost. Based on Harknett and Fischerkeller’s article, strategic advantage can be interpreted as changing the distribution of power in favor of the United States. (This is in line with the observation made at the start of Harknett’s Lawfare piece: The cyber activity of adversaries that takes place below the threshold of war is slowly degrading U.S. power toward rising challengers—both state and non-state actors.)

But Cyber Command needs to be clear about the consequences of seeking this objective: A United States that is more powerful in cyberspace does not necessarily mean that it is more secure. The best-case scenario following the vision is that the United States achieves the end it desires and dramatically improves the (general or cyber) distribution of power—that is, it achieves superiority through persistence.

Yet, it remains unclear what will be sacrificed in pursuit of this optimal outcome. Some argued at Cyber Command’s first symposium that strategic persistence may first worsen the situation before improving it. This presumes that goals will converge in the future; superiority in cyberspace will in the long run also lead to a more stable environment, less conflict, norms of acceptable behavior, and so on. If this win-win situation is really the intended outcome, Cyber Command needs to provide the basis for its logic in coming to this conclusion—potentially through describing scenarios and variables that lead to future change. Also helpful would be an explanation of the timeframe in which we can expect these changes.

After all, one could equally argue that a strategy of superiority through persistence comes with a set of ill-understood escalation risks about which the vision is silent (Jason Healey has made a similar point). Indeed, it is noteworthy that neither “escalate” nor “escalation” appears in the document. Fears of escalation have accounted for much of the lack of forceful response to malicious cyber activities in the past, and it can be argued that such fears have carried too much weight with policymakers, but ignoring escalation risks entirely does not seem sensible either.

Furthermore, high-end conflict is still an issue. True, the major security issue in cyberspace today is the possibility of death by a thousand cuts, and failure to respond to that issue will over time have strongly negative consequences. But this should not blind us to the fact that serious, high-profile cyber conflict remains possible, perhaps in conjunction with kinetic conflict as well. One consequence of the post-9/11 security environment has been that in emphasizing the global war on terror, the U.S. military allowed its capabilities for engaging with near-peer adversaries to atrophy. We are on a course to rebuild those capabilities today, but we should not make a similar mistake by neglecting high-end cyber threats that may have significant consequences.

Ways

The way Cyber Command aims to accomplish its goals, as noted above, is to seize the initiative, retain momentum and disrupt adversaries’ freedom of action.

Given the low signal-to-noise ratio of policy discussions about cyber deterrence over the past several years, it is reasonable and understandable that the vision tries to shift the focus of cyber strategy toward an approach that is more closely matched to the realities of today. But in being silent about deterrence, it goes too far and implies that concepts of cyber deterrence have no relevance at all to U.S. cyber policy. At the very least, some form of deterrence is still needed to address low-probability cyber threats of high consequence.

The vision acknowledges the importance of increasing the resilience of U.S. cyber assets in order to sustain strategic advantage. But the only words in the document about doing so say that Cyber Command will share “intelligence and operational leads with partners in law enforcement, homeland security (at the federal and state levels), and the Intelligence Community.” Greater U.S. cyber asset resilience will enhance our ability to bring the cyber fight to adversaries by reducing their benefits from escalating in response. And yet, the coupling between cyber defense and offense goes unmentioned.

The vision correctly notes that “cyberspace threats ... transcend geographic boundaries and are usually trans-regional in nature.” It also notes “our scrupulous regard for civil liberties and privacy.” But U.S. guarantees of civil liberties and privacy are grounded in U.S. citizenship or presence on U.S. soil. If cyber adversaries transcend geographic boundaries, how will Cyber Command engage foreign adversaries who operate on U.S. soil? The vision document is silent on this point.

Means

Of the strategy’s three dimensions, Cyber Command’s new vision is least explicit about the means required to enable and execute strategic persistence.

However, a better understanding of the available means is essential if we want to know how far the U.S. will go on the offense under this new strategy. In theory, a strategy of persistence could be the most defensive strategy out there. Think about how Muhammad Ali famously dodged punches from his opponents: the other guy in the ring desperately punches, but Ali has the upper hand and wears him out; he mentally dominates his opponent. A strategy of persistence could also be the most aggressive one: Muhammad Ali would also punch his opponents repeatedly, leaving them no opportunity to go on the offense and sometimes knocking them out.

While the command vision remains silent on available means, others seem to be moving in this direction and offering some examples. In a recent Foreign Affairs article, Michael Sulmeyer argues that the U.S. should ‘hack the hacker’: “It is time to target capabilities, not calculations. […] Such a campaign would aim to make every aspect of hacking much harder: because hackers often reuse computers, accounts, and infrastructure, targeting these would sabotage their capabilities or render them otherwise useless.” Such activities would indeed increase the friction that adversaries encounter while conducting hostile cyber activities against the United States—but whether that approach will result in persistent strategic advantage remains to be seen.

Also, Muhammad Ali boxed differently against different opponents—especially if he was up against taller boxers. Analogously, there might not be a one-size-fits-all solution when it comes to strategic persistence in the cyber domain. The means used to gain superiority against ISIS aren’t the same as those that are effective against China. Future research will have to list them and parse out the value of different approaches.

What Muhammad Ali was most famous for—and what remained constant throughout all of his matches—was his amazing speed. The new vision shows that Cyber Command is well aware of the importance of speed. Operational speed and agility (each mentioned four times in the vision and central to the vision’s fourth imperative) will manifest differently against different opponents; moreover, significant government reorganization will be required to increase operational speed and agility. We should, however, take care that these concepts do not become meaningless buzzwords: An article on the meaning of an agile cyber command would be a welcome contribution to the field.

Prioritizing

Muhammad Ali boxed 61 matches as a professional. He would not have won 56 of those fights if he had fought all of his opponents at the same time. Cyber Command is operating in a space in which it has to seize the initiative against a large and ever-growing number of actors. In seeking to engage on so many levels against so many actors, prioritization (as discussed in the strategy) will become a top issue when implementing this new vision.

What’s not in the strategy is as important as what is. Having said that, a short 12-page document cannot be expected to address all important issues. So the gaps described above should be taken as a sampling of issues that will need to be addressed as the vision is implemented.


Hero Image
cyber news
All News button
1
Authors
News Type
News
Date
Paragraphs

In this video, Cybersecurity Postdoctoral Fellow Jesse Sowell and Dr Irina Brass of University College London present a joint research project that looks into the options available to create more effective, responsive and dynamic security standards for the Internet of Things (IoT).

Read a summary about the project here.

Hero Image
Screenshot - Jesse Sowell video interview
All News button
1
Subscribe to Cybersecurity