∩ Security and Algorithms, extended dance version

An extended version of a previous post, based on my presentation at the ISA ISSS-ISAC Joint Annual Conference 2013 in Washington, D.C., last week.

I first wondered what the implications of algorithmic interaction might be while attending the “Governing Algorithms” conference at NYU this past spring.  The conference was a very interesting mix of presentations from many different fields, including computer science, the digital humanities, finance, and so on.  In particular, Paul Dourish’s presentation offered the idea of “ecosystems of algorithms” for consideration.  How would we map such an ecosystem? Algorithms are usually studied either individually (e.g., the algorithm that determines whether or not you trade a particular stock) or vertically, in combination with the programmer, data, software, hardware, network, and final purpose to which it is put.  What would it mean to study these algos as they interact with each other and with data?

The premise of this working paper is that security studies can learn a great deal about cybersecurity by watching what happens in the financial sector.  The whole-hearted embrace of algorithmic trading has precipitated several situations in which the security of either data, information, or systems has been compromised.  

First, some definitions: security as it’s defined by different fields, and then algorithms.  In “traditional” security and “human” security, security is defined along spectra in answer to four questions: security for whom? security from what? how severe is the threat? how fast is the threat?  For example, traditional bombs-n-bullets security answers those questions this way: 1. for the territorial integrity of the sovereign state, 2. from invasion or attack, 3. severe enough to kill a lot of the state’s citizens, 4. (usually) arriving very suddenly.  Human security, on the other hand, answers them thusly: 1. for the human being, 2. from physical harm to bodily integrity, such as rape, 3. possibly impacting smaller numbers of people within a state, or larger numbers of people in a region, 4. possibly low-grade but persistent, such as poverty.

This is very different from the field of financial investment and risk management, where the term “security” refers to a financial instrument that represents either an ownership or creditor position in relation to the issuing entity – in other words, a stock or bond.  What the traditional and human security fields call “security” is captured in finance by the term “risk”: a quantitative measure of the probability that an investment’s return will be lost or fall short of expectations.  The macro-level term for security in finance, however, is “stability.”  Stability is too often confused with stasis, or unchangingness; a more accurate reading would carry the connotations of homeostasis – volatility within a well-defined range.

Information technology combines the categories of security studies (both traditional and human) with the clarity of the finance definition.  It defines a security threat as “a person or event that has the potential for impacting a valuable resource in a negative manner,” a security vulnerability as the “quality of a resource or its environment that allows a threat to be realized,” and a security incident as unauthorized access or activity on a system, denial of service, or non-trivial probing over an extended period of time, including damage caused by a virus or other malicious software.  Risk assessment is conducted much as it is in finance, in order to identify vulnerabilities and opportunities for mitigation.

Algorithms are step-by-step problem-solving procedures, especially established, recursive computational procedures for solving a problem in a finite number of steps.  Algorithms are used in all aspects of life, whether or not the process is automated.  For example, figuring out whether or not you should eat something involves the following two-step process: 1. taste it; 2. if it tastes bad, spit it out.
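Just to belabor the point (a toy sketch of my own, not anyone’s production code), here is that two-step procedure written out in Python – finite, step-by-step, unambiguous:

    def should_eat(morsel, tastes_bad):
        """Decide whether to eat something, in two steps."""
        # Step 1: taste it.
        is_bad = tastes_bad(morsel)
        # Step 2: if it tastes bad, spit it out.
        return "spit it out" if is_bad else "swallow"

    # A hypothetical taste test: anything labeled "mystery" tastes bad.
    print(should_eat("mystery berry", tastes_bad=lambda m: "mystery" in m))
    # -> spit it out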

We can define an algorithm as a procedure which is “precise, unambiguous, mechanical, efficient, [and] correct,” with two components: a logic component specifying the relevant knowledge and a control component specifying the problem-solving strategy. “The manner in which the logic component is used to solve problems constitutes the control component” and can be made more or less efficient.

The classic formulation is “Algorithm = Logic + Control.”  Andrew Goffey in Software Studies reminds us that the formula captures both its abstract nature as a set of instructions and its existence as an implemented entity embodied in a programming language for a particular machine architecture, with real effects on end users.  Therefore, even though an algorithm can be modeled using mathematical notation, it is real in a way that an equation is not: “algorithms bear a crucial, if problematic, relationship to material reality.”
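To make the Logic + Control split concrete, here is a minimal sketch (the search example is my own, not from any of the texts above): the logic component – “find a target value in a sorted list” – stays fixed, while two different control components solve the same problem with very different efficiency.

    def linear_search(items, target):
        """Control strategy 1: check every element in order. O(n)."""
        for i, value in enumerate(items):
            if value == target:
                return i
        return None

    def binary_search(items, target):
        """Control strategy 2: exploit the sort order to halve the
        search space at each step. O(log n)."""
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return None

    # Same logic, same answer; only the control differs.
    data = [2, 3, 5, 7, 11, 13, 17]
    assert linear_search(data, 11) == binary_search(data, 11) == 4

Same logic, different control: that is the sense in which the control component can be made more or less efficient.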

There are at least two problems with their use.  First, algorithms ossify social relations at the moment they are incorporated into the algorithm’s equations and processes – which does not reflect the dynamic nature of reality.  Second, as Zeynep Tufekci points out, big data pattern recognition requires using algos to pull out recognizable patterns, and that only works if you know the pattern you’re looking for – by definition, it won’t be the rare event.  Furthermore, algorithms are only as good as their assumptions!  To sift through that much data, the algorithms will rely on the same shortcuts that the humans who write them do: stereotypes.
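A trivial sketch of my own (the keyword list is invented) shows why that matters: a filter built from yesterday’s patterns sails right past tomorrow’s rare event.

    # Patterns we already know to look for -- an invented baseline.
    KNOWN_BAD_PATTERNS = ["explosion", "breach", "attack"]

    def flag_suspicious(message):
        """Flag a message only if it matches a pattern we already know."""
        return any(pattern in message.lower() for pattern in KNOWN_BAD_PATTERNS)

    print(flag_suspicious("Explosion reported downtown"))         # True: a known pattern
    print(flag_suspicious("Dam telemetry drifting out of spec"))  # False: the rare,
                                                                  # never-seen event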

Which leads us to questions for the future.  What happens when “cyber” and physical reality interact? Unless your systems are air-gapped (with a backup power source!), they will be interacting with each other. Therefore it’s not a question of which is “more” dangerous, because they act together.  What are the security implications of the growing use of algorithms in automating all these fields? What are the implications for military communications, including command and control, as well as infrastructure and finance?  Who has ultimate responsibility for these algorithms? Industry-specific situational awareness? Finance does NOT provide a great example of self-policing harmful systemic behavior or structure.

And finally, how will governing algorithms behave if/when they interact?  An algorithm that runs on a really huge dynamic data set will not only find new (previously unknowable) patterns, but it may also produce data itself — on which other algorithms will run. It is difficult to map the possible networks of interaction even theoretically; to do so for networks of algorithms may be an “unknowable unknown.” Does it make sense to map algorithmic interaction as a two-mode network, in which we have the algorithms in one group, and they interact only with objects from another group?  Or does it make more sense to map the interactions, and see what groupings emerge?  The former might be more useful for understanding the theory, but the latter might be more useful for taking action.  It would also be useful to closely examine the way biologists model epistasis (gene–gene interaction).
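As a first pass at the two-mode idea, here is a rough sketch (assuming the networkx library; the node names are hypothetical): algorithms in one mode, data objects in the other, with edges only between the two modes.  Projecting onto the algorithm mode then reveals which algorithms interact indirectly through shared data.

    import networkx as nx
    from networkx.algorithms import bipartite

    G = nx.Graph()
    algos = ["news_reader", "hft_trader", "circuit_breaker"]
    data_objects = ["ap_twitter_feed", "order_book", "exchange_state"]
    G.add_nodes_from(algos, bipartite=0)
    G.add_nodes_from(data_objects, bipartite=1)

    # Edges run only between modes: an algorithm and the data it reads or writes.
    G.add_edges_from([
        ("news_reader", "ap_twitter_feed"),
        ("news_reader", "order_book"),      # its output becomes trading input
        ("hft_trader", "order_book"),
        ("circuit_breaker", "order_book"),
        ("circuit_breaker", "exchange_state"),
    ])

    # The one-mode projection: algorithms linked by the data they share.
    algo_net = bipartite.projected_graph(G, algos)
    print(sorted(algo_net.edges()))
    # All three algorithms end up connected through the shared order book.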

This is no longer a theoretical question.  DoD algorithms may be interacting even more in the future: the plan is to consolidate some 15,000 disparate networks into a “joint information environment,” or JIE, in order to create a more secure system architecture that will not be as vulnerable to leakers.  Such centralization would also allow interaction among the competing and incompatible algorithms “baked in” to the existing networks.

And that is without even considering the coming “internet of things,” which, at least in the view of then-CIA Director David Petraeus, would be a heaven of total surveillance.  It is also not clear that human involvement in these interactions would be a mitigating factor – or even possible, given the timescales.

An example of algorithmic interaction is the AP Twitter Hack.  In April of 2013, the Syrian Electronic Army hacked the Associated Press’s Twitter account, sending a tweet saying the White House had been hit by two explosions and that President Obama was injured.  Because many traders rely on machine-reading the news, the stock market crashed briefly before the AP could correct the record.  The “AP Twitter Hack,” as it became known, is the most important example because it demonstrates the INTERACTION of at least three different kinds of algorithms: the one(s) that the ETF(s) rely on to buy and sell stocks, the one that “reads” the AP Twitter feed, and the ones that govern whether or not to shut down trading on an exchange.  (Possibly also the one that was used to crack the AP Twitter feed.)  These algorithms are processed much faster than humans can react, and can interact with unforeseen consequences.  Financial markets are growing used to this sort of thing, perhaps because the consequences there are (relatively) easily rectified: trading is shut down, trades are unwound, etc.  What happens if the algorithms in question are the ones that control weapons systems?  Or critical infrastructure?  In the “internet of things,” all of these systems can interact, and an introduced deviation can have severe consequences.
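To see how few moving parts such a cascade needs, here is a toy simulation (entirely my own invention, with made-up thresholds) of that chain: a news-reading algorithm feeds a trading algorithm, whose selling trips a circuit breaker – three algorithms interacting with no human in the loop.

    ALARM_WORDS = {"explosion", "explosions", "injured"}

    def news_reader(tweet):
        """Algorithm 1: flag a tweet containing alarming keywords."""
        return bool(ALARM_WORDS & set(tweet.lower().split()))

    def trading_algo(panic, price):
        """Algorithm 2: dump holdings on a panic signal, moving the price."""
        return price * 0.90 if panic else price

    def circuit_breaker(old_price, new_price):
        """Algorithm 3: halt trading on a >5% single-step drop."""
        return (old_price - new_price) / old_price > 0.05

    tweet = "Breaking: Two Explosions in the White House and Barack Obama is injured"
    price = 100.0
    new_price = trading_algo(news_reader(tweet), price)
    print("Trading halted:", circuit_breaker(price, new_price))  # Trading halted: True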

With all this in mind, here are a few preliminary policy prescriptions.  We need a culture of rule of law.  Some call for a “centralized cyber policy.”  However, this is a fool’s errand for two reasons.  First, the technology changes too swiftly to even formulate (let alone enforce) a policy for an entity of any size.  Forget the entire federal government: it would be impossible to enforce at just the NSA, with all its concomitant contractors.  Second, it’s not a policy that’s needed so much as a value system that promotes the rule of law.

And we have to learn to expect “normal accidents,” as Charles Perrow warned almost 30 years ago.  Algorithms are possibly the most tightly coupled technology of all, because their processing time is not on a human scale, making their interactions seamless from our point of view.  Resilience of components should be fostered, because ensuring the robustness of the entire network may not always be possible.

October 14th, 2013 4:54pm


∩ Security and Sanctions

Many analysts have become disenchanted with the failure of sanctions to make a dent in Iran’s resolve to attain nuclear self-sufficiency. But what to put in their place?  Well, how about nothing?  Let me make my case.

This network map (spring-embedded layout, for those of you who must know) shows the dense set of relationships created by nuclear nonproliferation treaty affiliations.  The treaties that were mapped were the following: OPANAL, the Antarctic Treaty, the CTBT (not yet in force), G-6, IAEA, NSG, Treaty of Bangkok, Treaty of Pelindaba (not yet in force), Treaty of Rarotonga, NPT, Zangger Committee (ZC), and the various Proliferation Security Initiatives.

Nuclear Treaty Affiliations: a "hairball" network visualization with nuclear powers in a different color from non-nuclear states.

This hairball shows some interesting things: mainly, that the nuclear powers are not necessarily well embedded in the network.  Why does this matter? Two reasons:

  1. Treaties are arduous things to negotiate, and they create binding legal commitments – the kind that are worth going to war over.  Signing on to a treaty means acknowledging that you are making that kind of commitment.
  2. Treaties create a tremendous amount of enforcement structure: they have secretariats staffed with experts (some more than others).  They give other countries and international agencies a legitimate right to look all up in your business, and the people doing the looking will know what they’re looking at.

Why does this apply to Iran?  The reason you can’t see Iran in this image is that I didn’t label it, and the reason I didn’t label it is that Iran sits so deep within the network that a label would not be legible in this layout (you can see it in a 3-D image, but I don’t know how to make .gifs yet).  Iran is a member of the NPT and the IAEA, the affiliations with the strongest and most stringent enforcement.  And because it bears repeating: the NPT both allows signatories to develop nuclear energy for peaceful purposes and requires that nuclear powers disarm (how’s that coming along, huh?).
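For the curious, here is a miniature version of the affiliation mapping itself (my own toy reconstruction, with only a handful of treaties and states, again assuming the networkx library): states and treaties form a two-mode network, and a state’s embeddedness can be read off as its number of treaty ties.

    import networkx as nx

    # A tiny, illustrative subset of the memberships mapped above.
    memberships = {
        "NPT":  ["Iran", "USA", "Japan"],
        "IAEA": ["Iran", "USA", "Japan"],
        "CTBT": ["Iran", "Japan"],  # signed, not yet in force
        "NSG":  ["USA", "Japan"],
    }

    G = nx.Graph()
    for treaty, states in memberships.items():
        G.add_node(treaty, bipartite=0)
        for state in states:
            G.add_node(state, bipartite=1)
            G.add_edge(treaty, state)

    # Degree in the two-mode graph = how many treaty commitments a state holds.
    for state in ["Iran", "USA", "Japan"]:
        print(state, G.degree(state))
    # Iran 3, USA 3, Japan 4 -- in this toy slice, Iran is as embedded as the USA.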

Other people have detailed the reasons why Iran might want to pursue nuclear weapons: dangerous neighborhood, fungible source of technical expertise, Shi’ite bomb, yadda yadda yadda.  

The point this map makes is that this is unlikely to mean an actual bomb.  Iran signed up to these treaties knowing full well what they meant, and it hasn’t backed out – which it could have done if it wanted to, as the DPRK did in 2003.  Others are coming around to the idea that Iran wants the capability, but not the actual thing – something many other countries have, including close allies of the US like Japan.

Screenshot of tweet from Ian Bremmer: “When discussing military options in Iran, Obama always talks about preventing nuclear weapons development, not breakout capacity.” (1:07pm, 30 Sep 13, Twitter for iPhone)

This would not be a great situation, but it would not be as destabilizing to the region as the continuing enmity and sense of ill-usage generated by the sanctions regime.  

There’s a real opportunity here. Cordially, in the nicest possible way, and to both negotiating teams: Don’t blow it.

September 30th, 2013 5:02pm


∩ Security and Algorithms

Just attended the “Governing Algorithms” conference at NYU, and my mind is buzzing with ideas.  I may add a recap of the speakers to this post later, but right now, I just want to get an idea out that was suggested by Paul Dourish’s presentation, in which he suggested we think about “ecosystems of algorithms.”

How would we map such an ecosystem? Algorithms are usually studied either individually (e.g., the algo that determines whether or not you trade a particular stock) or vertically, in combination with the programmer, data, software, hardware, network, and final purpose to which it is put.  What would it mean to study these algos as they interact with each other and with data?

For example, the AP Twitter Hack wreaked havoc on the stock market because of interacting algos: the algo that erroneously authenticated the Twitter account, the algos that monitored the AP feed for alarming keywords, and the algos that run the high-frequency trades.  (And not for nothing, but the more I learn about HFT, the more I think Frank Herbert was prescient when he wrote “The Tactful Saboteur.”)

An algo that runs on a really huge dynamic data set will not only find new (previously unknowable) patterns, but it may also produce data itself – on which other algos will run.  Methodologically, should we try to map these as more-or-less horizontal two-mode networks?  And what are the theoretical implications of this (especially for security)?

UPDATE: and what happens when there is an “internet of things”?

May 22nd, 2013 5:20pm (tags: networks, ecosystems, algorithms)


∩ Security and the Neocons

On the anniversary of the Iraq War’s beginning, read “A Letter to Paul Wolfowitz: Occasioned by the tenth anniversary of the Iraq war,” by Andrew J. Bacevich, available in its entirety here.  Or, if you’re lazy, just read the part that made me angriest as an analyst of global politics:

Wohlstetter’s perspective (which became yours) emphasized five distinct propositions. Call them the Wohlstetter Precepts.

First, liberal internationalism, with its optimistic expectation that the world will embrace a set of common norms to achieve peace, is an illusion. Of course virtually every president since Franklin Roosevelt has paid lip service to that illusion, and doing so during the Cold War may even have served a certain purpose. But to indulge it further constitutes sheer folly.

Second, the system that replaces liberal internationalism must address the ever-present (and growing) danger posed by catastrophic surprise. Remember Pearl Harbor. Now imagine something orders of magnitude worse — for instance, a nuclear attack from out of the blue.

Third, the key to averting or at least minimizing surprise is to act preventively. If shrewdly conceived and skillfully executed, action holds some possibility of safety, whereas inaction reduces that possibility to near zero. Eliminate the threat before it materializes. In statecraft, that defines the standard of excellence.

Fourth, the ultimate in preventive action is dominion. The best insurance against unpleasant surprises is to achieve unquestioned supremacy.

Lastly, by transforming the very nature of war, information technology — an arena in which the United States has historically enjoyed a clear edge — brings outright supremacy within reach. Of all the products of Albert Wohlstetter’s fertile brain, this one impressed you most. The potential implications were dazzling. According to Mao, political power grows out of the barrel of a gun. Wohlstetter went further. Given the right sort of gun — preferably one that fires very fast and very accurately — so, too, does world order.

Just off the top of my head (did I mention my head exploded, and therefore I no longer actually HAVE the top of my head?), let’s take these one by one.

  1. The jury’s still out on liberal internationalism.  Yes, traditional power politics still operates when push comes to shove.  But the truth is that the VAST majority of international interactions are cooperative, not coercive.
  2. Catastrophic surprise has been an option since 1945.  Pity that thinking about it is still stuck behind the Maginot Line.  Human systems are complex systems and do not behave in linear fashion.  They have tremendous numbers of variables, positive and negative feedback loops, and interaction effects.  They are thus terrifically difficult to study, and anyone who says otherwise is also going to try to sell you the Brooklyn Bridge.
  3. You cannot avert or prevent catastrophic surprise (by definition, surprises are surprising, yes?) but you can work on mitigation and recovery.  “Eliminating threats before they materialize” is paradoxically a really good way to guarantee they materialize.  Again, COMPLEX SYSTEMS.
  4. Unquestioned supremacy makes you a really terrific target, and forces others to be really creative.  You’re actually a lot safer if others are not actively looking for ways to hurt you.  I bet the unintended effect of Stuxnet will be to make Iran a world-class player in IT – they’ve already hit where they think we’ll hurt most.
  5. Information technology is a field-leveler, not a wall you can hide behind.  (See point #4.)

For those who don’t know, Bacevich has a great deal of skin in this game: he’s a former officer in the Army, currently a professor at BU, and his son was an officer who died in combat in Iraq.

As an aside, asinine “thinkers” like Wolfowitz are why I’ll never be allowed in the sacred halls of policy-making unless I’m elected to office. 

March 20th, 2013 9:27pm


∩ Security and Network Analysis (or, There’s No Excuse for Sloppy Thinking)

The original story in The Guardian, by Ryan Gallagher, reports that multinational security firm Raytheon has developed a scrape-and-dump program called RIOT (for Rapid Information Overlay Technology), which gathers huge amounts of information about people from social media and uses it to predict their movements.  There are all sorts of problems with this.

In a separate piece in The Guardian, James Ball points out that even the most innocuous information can be damaging in the wrong hands:

It’s easy to believe those with nothing to hide have nothing to fear – and most of us are essentially decent people, with frankly boring social network profiles. But, of course, to (say) a petty official with a grudge, almost anything is enough: a skive from work, using the wrong bins, anything. Everyone’s got something someone could use against them, even if only for a series of annoyances.

and it’s all too easy to forget that it can be taken out of context, a point also made by Jay Stanley at the ACLU:

When we post something online, it’s all too natural to feel as though our audience is just our friends—even when we know intellectually that it’s really the whole world. Various institutions are gleefully exploiting that gap between our felt and actual audiences (a gap that is all too often worsened by online companies that don’t make it clear enough to their users who the full audience for their information is).

Furthermore, Ball reminds us that one’s online privacy depends a great deal on other people’s technological ability and awareness:

It’s also tempting to believe that with good privacy settings and tech savvy, we can protect ourselves. Other people might be caught, but we’re far too self-aware for that. But stop and think. Do you trust every friend you have to lock their privacy settings down? Your mum? Your grandad? Do they know to strip location data from photos? Not to tag you in public posts? Our privacy relies on the weakest point of each of our networks – and that won’t hold.

But for me the heart of the matter is the misuse of social network analysis. Gallagher writes: “Using Riot it is possible to gain an entire snapshot of a person’s life – their friends, the places they visit charted on a map – in little more than a few clicks of a button.”

This vastly overstates the case: you cannot get a snapshot of the person’s life, only their social media trail.  The software also creates a “network” from these scraped connections, with every link treated as equally meaningful.  This creates two related problems: 1) it’s sold as a complete package, and its end users believe the hype; 2) it is used to create profiles of “suspects” who have no real relationship to the original subject of investigation.  Forbes writer Michael Peck hit that nail on the head:

There is no mention of violence in the video. Yet it’s worth noting that software that assembles a profile of someone’s movements would also be useful for government agencies who arrange for appointments between suspected terrorists and drone-launched Hellfire missiles.    

Context matters in network analysis.  I follow the National Intelligence Council (@ODNI_NIC) and @BronxZoosCobra as well as @OccupyWallSt on Twitter.  I see no evidence that this program is able to differentiate among my relationships to any of these entities at all, let alone better than a human analyst.  Am I a closet Slytherin, perhaps, plotting to take over the revolution (and thence the world)?  Then how does the fact that I also follow @BettyMWhite figure in?  An example I often use to demonstrate how meaningless “closeness” can be in a network is this: my thesis adviser could introduce me to the Secretary General of the U.N., who could introduce me to the President of the U.S.  So I’m three links away from the president.  What does that mean for my input on policy? Absolutely nothing.  I’m “close” (whatever that means), but it has no meaning, because I don’t have any impact at all.
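If you want to see just how hollow raw “closeness” is, here is a three-hop demonstration (my own toy graph, again assuming networkx; the nodes are the example above):

    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("me", "thesis_adviser"),
        ("thesis_adviser", "un_secretary_general"),
        ("un_secretary_general", "us_president"),
        ("me", "@BronxZoosCobra"),  # I also follow a joke account
    ])

    # Three hops from the president -- and zero policy influence.
    print(nx.shortest_path_length(G, "me", "us_president"))     # 3
    # Path length alone can't tell my thesis adviser from the zoo cobra.
    print(nx.shortest_path_length(G, "me", "@BronxZoosCobra"))  # 1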

People are not one-dimensional, and incredible amounts of data in one dimension do not (and cannot) predict behavior or thoughts in other dimensions. The enormous quantities of data that are becoming available need more theoretical underpinning, more thought and judgement applied, and more empirical hypothesis testing.  Otherwise, just gathering data and dumping it in a blender will find even more spurious correlations than ever.  Given how many people are already “collateral damage” because they were in the wrong place at the wrong time, it behooves us to be more careful about positing meaningful relationships, not less.

In the meantime, as Peck notes, programs that scrape-and-dump can be countered by two simple tactics: either stay off social media altogether, or spoof it.  Spoofing it could be a lot more fun – after all, on the Internet, nobody knows you’re a dog.

February 19th, 2013 6:07pm


∩ Security and Management

Just a few quick thoughts prompted by the almost-panicked response to the fuel shortage caused by Hurricane Sandy (see these stories from Reuters).

Just-In-Time inventory management can be a great thing for both businesses and customers:  it can save money by streamlining the manufacturing or delivery process so that you don’t have to pay oodles of overhead to store unused or unwanted components or products.  In a perfect world, with all other things being equal, everything is delivered just as the next step in the chain needs it, or just as the customer orders it.  That’s the up side.

The downside is that the world is not perfect, and all other things never stay equal.  If supply is disrupted, you may have only hours’ worth of inventory to deal with in an emergency, and after that, badness ensues.  Quoting Jim Lawton, head of supply management solutions at consultant Dun & Bradstreet and a former procurement chief for Hewlett-Packard, in “The Downside of Just-in-Time Inventory”:

Only about 10 percent of companies have detailed plans to deal with supply disruptions, says Lawton, who calls logistics the fastest-growing piece of Dun & Bradstreet’s business.

As Charles Atkinson has noted, there are several risks that have to be planned for:

  1. Which firms are dependent upon particular suppliers, and what is their character? A supplier that knows you have no buffer has you over a barrel.  UPDATE: In Sandy, franchisees of Big Oil were SOL, left to deal with the disaster on their own. In that story, an operator is quoted as saying “Mobil helps no one, that’s why they are the richest company in the world.”
  2. What are the internal conditions at your supplier? For example, is their workforce going to strike?  UPDATE: And to take another example from Sandy, local and regional gas retailers like Hess Corp. not only had internal disaster response plans (and generators!) in place, but also helped out competitors and did a great job of informing the public.

Satellite image of Hurricane Sandy over the east coast of the US.

Let me add that you also have to take note of external conditions like weather (ha!), political shenanigans, and social unrest if you want to be resilient.  If, on the other hand, you want to fold like a house of cards, by all means, carry on.  UPDATE: And don’t forget that real incompetence faces the threat of a government takeover if you can’t seem to pull it out – see Nassau County Executive Ed Mangano’s request for the U.S. to take over LIPA (the Long Island Power Authority).
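A back-of-the-envelope sketch (all numbers invented) makes the arithmetic plain: with hours of stock instead of weeks, even a short disruption empties the shelves.

    def hours_until_empty(buffer_units, demand_per_hour, resupply_per_hour):
        """How long a buffer lasts once resupply drops below demand."""
        net_drain = demand_per_hour - resupply_per_hour
        return float("inf") if net_drain <= 0 else buffer_units / net_drain

    # Normal operations: resupply keeps pace, the buffer never drains.
    print(hours_until_empty(48, demand_per_hour=10, resupply_per_hour=10))  # inf

    # A storm halts deliveries: a 48-unit buffer is gone in under five hours.
    print(hours_until_empty(48, demand_per_hour=10, resupply_per_hour=0))   # 4.8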

The connection to security ought to be clear: any time people decide that they are but two meals from barbarism (or a tank of gas away from the end of civilization), the institutions of governance are in danger of being overrun.  And anyone who’s ever read “Extraordinary Popular Delusions and the Madness of Crowds” knows that can happen quicker than you think, on the basis of very little in the way of facts.

November 3rd, 2012 6:38pm


∩ Security and Popular Culture: Buffy the Vampire Slayer

For this inaugural post, I’ve chosen to contemplate a subject dear to my heart: Buffy the Vampire Slayer.  I love Buffy for many reasons, not least of which are the wit of the scripts and the chemistry of the actors, but also because of the very premise of the show: the fluffy little blonde who’s usually the first to get hacked to death/eaten/buried alive/whatever-horrible-fate-befalls-the-characters in most horror movies is in fact the Chosen One – the protector.

This is why Buffy is so beloved: she’s a powerful female character in a pop-culture world that is too often devoid of such examples.

Overcoming societal obstacles and breaking gender barriers is not a power fantasy for me. In fact, a lot of the time, it’s part and parcel of my day-to-day reality. My power fantasy takes place in a world where those issues are gone, where I can be a champion without any red tape… Give me a smart, brave woman who already has the respect of the world she’s trying to save, and I will throw my wallet at you.

(from “What Women Want (In Female Video Game Protagonists)”)

Rule 1: Every Slayer needs a Scooby Gang

Why? Because Scooby Gangs are force multipliers.  Scooby Gangs do research.  They hit the library/internet so the Slayer has some clue what she’s up against. Scooby Gangs also provide critical backup.  Even a Slayer can’t be everywhere and do everything at once.  It helps to have some people who can take care of the minor stuff, allowing the Slayer to focus on the big bad.  Furthermore, the Slayer’s Scooby Gang must have at least one person who can hold down a job and fix the broken stuff.  Having a grown-up, responsible adult who can take care of the administrative overhead and logistics (and who can pay for it) may not be glamorous, but it’s really important.  Finally, the Scooby Gang to Slayer ratio should be about 5 to 1.  It may seem like too much tail, too little dog, but Slayers without good-sized, resilient support systems are very short-lived.

What does this mean for security, and especially for defense/military policy? I’m hoping it’s obvious: it takes a lot of support to keep an army in the field, and skimping on any one aspect means you’re not really serious about winning.

Rule 2: Everything is a potential weapon.  It just depends on how you use it.

Remember, anything is a weapon if you can swing it hard enough. 

Allow Buffy to demonstrate.

You can spend billions on weapons systems, but innovative use of a boxcutter can still break through.   The solution is not to spend more (or to outlaw boxcutters) but to learn to be innovative yourself – resilience = robustness.

Unfortunately, this also means that anyONE can be a weapon: little sisters, ex-boyfriends, etc. If it hurts, it hurts, and it doesn’t have to be material.  This is where the constructivist turn in International Relations theory rears its head: anything that affects an actor’s perceptions of their interests and identity can then affect their behavior.

Rule 3: Be prepared to pay the cost.

Being the Chosen One is a great responsibility.  There’s a lot of danger, there’s a lot of expense, there’s a lot of loneliness, and hardly anyone ever says thank you.  Sound like the United States complaining about its role as global police officer?  Tough.  Superpowers have interests that need to be protected, and those interests are worldwide.  Which equals dangerous, expensive, lonely, and no gratitude from people who just wish both the problem and its solution would go away.

Rule 4: Go in properly armed. 

Even if it’s only with your keen fashion sense.  

October 8th, 2012 2:00pm


Welcome to ∩ Security: Site Under Construction

An etching of a skull and crossbones with a pink hairbow on the brow.

I’ve chosen to create a girly skull and crossbones from Wikimedia Commons public domain images to represent this blog because I want to emphasize that this is not going to be the usual international security blog.  Bombs-n-bullets is a part of security, but not the whole of it – issues of gender, ecology, technology, development, and finance all come into play.  Security considered on a global basis is a complex, dynamic system, and if all you look at is weapons and conflict you won’t see most threats coming.

September 6th, 2012 8:15pm
