Blog

Game of Spoons

Ever hear of Spoon Theory? It’s a way to explain how much energy you have and how it gets spent during the day, especially if you have some limiting physical or mental condition. It was first articulated by Christine Miserandino on But You Don’t Look Sick:

Most people start the day with unlimited amount of possibilities, and energy to do whatever they desire, especially young people. For the most part, they do not need to worry about the effects of their actions. So for my explanation, I used spoons to convey this point. I wanted something for her to actually hold, for me to then take away, since most people who get sick feel a “loss” of a life they once knew. If I was in control of taking away the spoons, then she would know what it feels like to have someone or something else, in this case Lupus, being in control.

https://butyoudontlooksick.com/articles/written-by-christine/the-spoon-theory/

I say mental condition because it also works well for those of us on the autism spectrum. Miserandino used the analogy as a teaching device complete with actual spoons, but it’s always made me think of the narrative of board games (which can also be teaching devices!) and I get a very distinct visual in my mind’s eye:

A sketch of a board game with a board consisting of numbered blocks representing hours of the day and illustrations of events costing spoons. The text above says: "Start: each player (illustration of a game piece) rolls a die (illustration of a die) to find out how many spoons you get (illustration of spoons) up to six." Text at bottom says: "if you still have spoons at the end of the day, you win!"
by Annelies Kamran

Someday I’ll expand on the concept and actually make a board 🙂


Emptiness

And now for a different kind of empty:

When emptiness is possible,

Everything is possible;

Were emptiness impossible,

Nothing would be possible…

…Contingency is emptiness

Which, contingently configured,

Is the middle way.

Everything is contingent;

Everything is empty.

Nagarjuna

According to the Buddha, freeing your mind from suffering, liberation from anguish, required recognizing emptiness. As Stephen Batchelor wrote in his translation of Nagarjuna’s Verses from the Center, “Just as nature or an abandoned dwelling is devoid of human ownership, so experience is intrinsically neither ‘me’ nor ‘mine.’ Recognizing mental and physical processes as ’empty’ of self was, for the Buddha, the way to dispel the confusion that lies at the origin of anguish, for such confusion configures a sense of self as a fixed and opaque thing that feels disconnected from the dynamic, contingent and fluid processes of life…To dwell in emptiness means living with the ambiguous and non-dualistic nature of life.”

Dandelion Seed Head
By Phil Sellens from East Sussex – Dandelion Seed Head (Taraxacum officinale) X, CC BY 2.0, https://commons.wikimedia.org/w/index.php?curid=44450435

Last Local Yokai (for now, anyway)

The last local yokai I want to introduce is the 怒りのグレムリン (Ikari no guremurin, or rage gremlin). These suckers are the embodiment of the algorithms that are used on the most popular social media platforms. Just as advertisers discovered that fear sells products better than sex, social media platforms discovered that rage drives engagement better than tranquility. And politicians use both.

I got a really good example of that when He Who Shall Not Be Named bought a platform on which I used to spend a LOT of time. I switched full-time to another that does not have those algorithms (🦣) and immediately, IMMEDIATELY my mental health improved. I had been hesitant to leave before, because it was such a good source of news that I wouldn’t have had access to otherwise. But it turns out that I don’t need to put up with Nazis on a platform to get that access. And it was a great case study for the difference that rules can make to one’s experience – the algorithms and norms that constitute a social media platform can make or break it.

I tasked Craiyon with showing me a “futuristic space gremlin within a computer’s software” and this in my opinion is the best one it came up with:

A creepy, distorted and blurry face surrounded by rays of distorted and broken light against a dark background.
AI-generated image

Which was pretty good, but not really what I had in mind. It’s too divorced from the programming decisions that are made by humans. I wanted something that was closer to what the people behind the scenes were likely to see. So this is what I drew:

Handwritten "software code" in a faux-Python style that starts off fairly normally but then begins to randomly insert skull-and-crossbones symbols before drawing a skull in ASCII text.
by Annelies Kamran

Another Local Yokai: Sairenheddo

Meet サイレンヘッド, Sairenheddo (Sirenhead). On Long Island there’s an insane number of partially-filled strip malls, the result of decades of overdevelopment and a zoning/planning ethos of “build it and they will come.” The problem is that “build it and they will come” really only applies to the ghosts of baseball players. There was an abundance of malls that were never completely filled before the rise of internet shopping, let alone the pandemic! And now we have virgin forests being cleared for extremely dubious reasons, all while the decaying strip malls are allowed to crumble.

But I figured out what is happening in those malls – it’s where Sairenheddo comes to life. A close relative of the rural monster Sirenhead, Sairenheddo is its urban manifestation. Those lights and sound systems that are no longer needed to illuminate empty parking lots or to draw the attention of shoppers slowly absorb the ether of concrete and tarmac and neglect, morphing into Sairenheddo. It then stalks around, adding to the sound pollution on this poor island and killing people by bursting their eardrums and internal organs with sound waves.

For this one, I tried drawing it first, then asking an AI bot to come up with an image.

A pencil-and-paper drawing of Sairenheddo standing on an island with dead trees in the parking lot of an abandoned strip mall. Sairenheddo is a skeletal biped, many meters tall, with two sirens where the head should be. The buildings have broken windows and doors and there is rubbish scattered around.
by Annelies Kamran

I don’t think the AI bot really grasped the whole concept of “Sirenhead,” so I’d say this was not as successful as others have been. I picked the top center as my favorite of these because I think the parking lot and building show good cracks, unauthorized vegetation, etc.

Nine different versions of "sirenhead standing in the parking lot of an abandoned strip mall" by the image generator Craiyon.
This version from Craiyon shows a good amount of decrepitude, and the visible siren looks kind of like a car radio speaker.

Much obliged to Trevor Henderson for coming up with this monster!


Another Local Yokai

This is the ごみのスプライト Gomi no supuraito (“litter sprite” in Japanese, according to Google Translate):

9 Craiyon AI interpretations of the prompt "a small sprite created from litter left on the side of the road in the style of ukiyo-e".

So the AI gave me some ideas (even though it completely missed the “in the style of” bit). I thought the trash should be more identifiable (you wouldn’t BELIEVE the crap I find in my hedges) and that the sprites should have wings and a more nonchalant attitude, so this is what I drew:

A pencil drawing of litter sprites in the corner of a parking lot: a crushed can standing, a half-full fountain drink with straw lying back with arms behind "head" and legs crossed, and a crumpled cigarette box sitting.  They are surrounded by a plastic bag, dead leaves, a flattened plastic water bottle, crumpled papers, an apple core and a cigarette butt.

Pencil drawing by Annelies Kamran

If you’ve ever been to Long Island, NY, you know that there’s trash everywhere you go: on the sides of the roads, in the trees and bushes, and along the shorelines. There is even a literal mountain of trash (also known as the Town of Brookhaven Landfill). Gomi no supuraito are created when non-decomposable litter is tossed and then left out too long. Look in a storm drain or in the corner of a parking lot, and you’ll see arms and legs and wings starting to poke out.

So what makes these things monsters? They are tangible evidence of other people’s lack of care for the environment around them – which includes other people! They can fly, spreading to new areas and breeding more litter. As they embody callous indifference, they can infect people by biting. Infection can cause either exasperated revulsion toward other people or wistful hopelessness about the possibility of improvement. Only rarely does infection cause galvanizing outrage that takes action against the sprites and their ultimate cause (too much stuff).

January 19th, 2023 11:31am


Yokai of Long Island

I’ve developed an interest in yokai (thanks to GeGeGe no Kitarō) and in mythology in general (thanks to the PBS series Monstrum), and it’s got me wondering – what kind of yokai would I see in the New York metro area if I were to go looking? This is what I think I’d find…

高速道路の昆虫 (Kōzokudōro no konchū, or highway insect)

a Dall-e interpretation of the prompt “photo of a large black iridescent scarab beetle with glowing eyes, antennae, and a mouth with lots of teeth at night on a highway in the style of Miami Vice”

This enormous shiny black beetle is what became of an asshole driver who died after causing a car crash in a rage. The legs move so fast they look like wheels and the eyes shine like high beams as it tailgates you on dark and deserted highways late at night. The Kōzokudōro no konchū will tailgate you at high speeds, and if you tap your brakes it will swerve around and cut you off, forcing you off the road and into a ditch so it can open its toothy maw and swallow you whole, car and all.

The Kōzokudōro no konchū is testing you – when it tailgates you, take your foot off the gas and allow your vehicle to slow naturally. This will signal to the yokai that you are not going to be goaded into losing your temper, and it should move on to another victim.

my drawing of a giant beetle with headlights, a front window, and windshield wipers on a street.

pencil and ink drawing by Annelies Kamran

January 8th, 2023 8:14pm


∩ Security and Algorithms, extended dance version

An extended version of a previous post, based on my presentation at the ISA ISSS-ISAC Joint Annual Conference 2013, Washington D.C. last week.

I first wondered about the implications of algorithmic interaction while attending the “Governing Algorithms” conference at NYU this past spring.  The conference was a very interesting mix of presentations from many different fields, including computer science, the digital humanities, finance, and so on.  In particular, Paul Dourish’s presentation offered the idea of “ecosystems of algorithms” for consideration.  How would we map such an ecosystem? Algorithms are usually studied either individually (e.g., the algorithm that determines whether or not you trade a particular stock) or vertically in combination with the programmer, data, software, hardware, network, and final purpose to which it is put.  What would it mean to study these algos as they interact with each other and with data?

The premise of this working paper is that security studies can learn a great deal about cybersecurity by watching what happens in the financial sector.  The whole-hearted embrace of algorithmic trading has precipitated several situations in which the security of either data, information, or systems has been compromised.  

First, some definitions: security as it’s defined by different fields, and then algorithms.  In “traditional” security and “human” security, security is defined along spectra in answer to four questions: security for whom? security from what? how severe is the threat? how fast is the threat?  For example,  traditional bombs-n-bullets security answers those questions this way: 1. for the territorial integrity of the sovereign state, 2. from invasion or attack, 3. killing a lot of the state’s citizens, 4. (usually) very suddenly.  Human security, on the other hand, answers them  thusly: 1. for the human being, 2. from physical harm to bodily integrity, such as rape, 3. may impact smaller numbers of people within a state, or larger numbers of people in a region, 4. may be low-grade but persistent, such as poverty.

This is very different from the field of financial investment/risk management, where the term “security” refers to a financial instrument that represents either an ownership or creditor position in relation to the issuing entity – in other words, a stock or bond – while it is the term “risk,” or a quantitative measure of the probability that an investment’s return will be lost or less than expected, that captures what the traditional and human security fields term “security.”  However, the macro-level term for security in finance is “stability.”  This is too often confused with stasis, or unchangingness, rather than given the more accurate reading, which would reflect the connotations of homeostasis, or volatility within a well-defined range.

Information technology combines the categories of security studies (both traditional and human) with the clarity of the finance definition.  It defines a security threat as “a person or event that has the potential for impacting a valuable resource in a negative manner,” a security vulnerability as the “quality of a resource or its environment that allows a threat to be realized,” and a security incident as unauthorized access or activity on a system, denial of service, non-trivial probing for an extended period of time, including damage caused by a virus or other malicious software.  Risk assessment is conducted similarly to finance in order to identify vulnerabilities and opportunities for mitigation.

An algorithm is a step-by-step problem-solving procedure, especially an established, recursive computational procedure for solving a problem in a finite number of steps.  Algorithms are used in all aspects of life, whether or not they are automated.  For example, figuring out whether or not you should eat something involves the following two-step process: 1. taste something 2. if it tastes bad, spit it out.

We can define an algorithm as a procedure which is “precise, unambiguous, mechanical, efficient, [and] correct,” with two components: a logic component specifying the relevant knowledge and a control component specifying the problem-solving strategy. “The manner in which the logic component is used to solve problems constitutes the control component” and can be made more or less efficient.

The classic formulation is “Algorithm = Logic + Control.”  Andrew Goffey in Software Studies reminds us that the formula captures both an algorithm’s abstract nature as a set of instructions and its existence as an implemented entity, embodied in a programming language for a particular machine architecture, with real effects on end users.  Therefore, even though an algorithm can be modeled using mathematical notation, it is real in a way that an equation is not: “algorithms bear a crucial, if problematic, relationship to material reality.”
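
To make the Logic + Control split concrete, here is a toy sketch in Python (my own example, not drawn from Kowalski or Goffey): the flight table is the logic component, and the breadth-first search is the control component. Swapping in a depth-first strategy would change the control component’s efficiency without touching the logic.

```python
from collections import deque

# Logic component: the relevant knowledge, here just which direct flights exist.
FLIGHTS = {
    "NYC": ["LON", "TYO"],
    "LON": ["TYO", "SYD"],
    "TYO": ["SYD"],
}

# Control component: the problem-solving strategy (breadth-first search).
def find_route(start, goal):
    queue = deque([[start]])          # each entry is a partial route
    while queue:
        route = queue.popleft()
        here = route[-1]
        if here == goal:
            return route
        for nxt in FLIGHTS.get(here, []):
            if nxt not in route:      # don't revisit airports
                queue.append(route + [nxt])
    return None                       # no route exists

print(find_route("NYC", "SYD"))       # ['NYC', 'LON', 'SYD']
```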

There are at least two problems with their use.  First, algorithms ossify social relations at the moment those relations are incorporated into the algorithm’s equations and processes – which does not reflect the dynamic nature of reality.  As Zeynep Tufekci points out, big data pattern recognition requires using algos to pull recognizable patterns, and that only works if you know the pattern you’re looking for – by definition, it won’t be the rare event.  Furthermore, algorithms are only as good as their assumptions!  To sift through that much data, the algorithms will rely on the same shortcuts that the humans who write them do: stereotypes.

Which leads us to questions for the future.  What happens when “cyber” and physical reality interact? Unless your systems are air-gapped (with a backup power source!), they will be interacting with each other. Therefore it’s not a question of which is “more” dangerous, because they act together.  What are the security implications of the growing use of algorithms in automating all these fields? What are the implications for military communications, including command and control, as well as infrastructure and finance?  Who has ultimate responsibility for these algorithms? Industry-specific situational awareness? Finance does NOT provide a great example of self-policing harmful systemic behavior or structure.

And finally, how will governing algorithms behave if/when they interact?  An algorithm that runs on a really huge dynamic data set will not only find new (previously unknowable) patterns, but it may also produce data itself — on which other algorithms will run. It is difficult to map the possible networks of interaction even theoretically; to do so for networks of algorithms may be an “unknowable unknown.” Does it make sense to map algorithmic interaction as a two-mode network, in which we have the algorithms in one group, and they interact only with objects from another group?  Or does it make more sense to map the interactions, and see what groupings emerge?  The former might be more useful for understanding the theory, but the latter might be more useful for taking action.  It would also be useful to closely examine the way biologists model epistasis (gene-gene interaction).
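
As a very rough sketch of what the first, two-mode approach could look like in practice, here are a few lines of networkx; the algorithm and data names are invented for illustration and do not describe any real trading system. Algorithms sit in one mode and data objects in the other, and projecting onto the algorithm mode shows which algorithms end up interacting through shared data.

```python
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
algos = ["news_reader", "hft_trader", "circuit_breaker"]   # mode 1: algorithms
data = ["AP_twitter_feed", "order_book", "exchange_status"]  # mode 2: data objects
B.add_nodes_from(algos, bipartite=0)
B.add_nodes_from(data, bipartite=1)

# An edge means "this algorithm reads or writes this data object".
B.add_edges_from([
    ("news_reader", "AP_twitter_feed"),
    ("news_reader", "order_book"),
    ("hft_trader", "order_book"),
    ("hft_trader", "exchange_status"),
    ("circuit_breaker", "order_book"),
    ("circuit_breaker", "exchange_status"),
])

# Project onto the algorithm mode: two algorithms are tied if they share data.
interaction = bipartite.projected_graph(B, algos)
print(sorted(interaction.edges()))
```

In this toy projection all three algorithms end up connected because they all touch the order book – exactly the sort of indirect coupling that is easy to miss when each algorithm is only studied vertically on its own.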

This is no longer a theoretical question.  DoD algorithms may be interacting even more in the future: the plan is to make a “joint information environment” or JIE out of some 15,000 disparate networks in order to create a more secure system architecture that will not be as vulnerable to leakers.  Such centralization would also allow interaction between the competing and incompatible algorithms “baked into” the existing networks.

And that is without even considering the coming “internet of things,” which, in the view of then-CIA Director David Petraeus, would be a heaven of total surveillance.  It is also not clear that human involvement in these interactions would be a mitigating factor, if it is even possible, given the timescales.

An example of algorithmic interaction is the AP Twitter Hack.  In April of 2013, the Syrian Electronic Army hacked the Associated Press’s Twitter account, sending a tweet saying the White House had been hit by two explosions and that President Obama was injured.  Because many traders rely on machine-reading the news, the stock market crashed briefly before the AP could correct it.  The “AP Twitter Hack,” as it became known, is the most important example because it demonstrates the INTERACTION of at least three different algorithms: the one(s) that the ETF(s) relies on to buy and sell stocks, the one that “reads” the AP Twitter feed, and the ones that govern whether or not to shut down trading on an exchange.  (Possibly also the one that was used to crack the AP Twitter feed.)  These algorithms are processed much faster than humans can react, and can interact with unforeseen consequences.  Financial markets are growing used to this sort of thing, perhaps because the consequences there are (relatively) easily rectified: trading is shut down, trades are unwound, etc.  What happens if the algorithms in question are the ones that control weapons systems?  Or critical infrastructure?  In the “internet of things,” all of these systems can interact, and an introduced deviation can have severe consequences.

With all this in mind, here are a few preliminary policy prescriptions.  We need a culture of rule of law.  Some call for a “centralized cyber policy.”  However, this is a fool’s errand for several reasons.  First, the technology changes too swiftly to even formulate (let alone enforce) a policy for an entity of any size.  Forget the entire federal government; it would be impossible to enforce at just the NSA, with all its concomitant contractors. It’s not a policy that’s needed so much as a value system that promotes the rule of law.

And we have to learn to expect “normal accidents” as Charles Perrow warned almost 30 years ago.  Algorithms are possibly the most tightly-coupled technology of all, because their processing time is not on a human scale, making their interactions seamless from our point of view.  Resilience of components should be fostered, because ensuring the robustness of the entire network may not always be possible.

October 14th, 2013 4:54pm


∩ Security and Sanctions

Many analysts have become disenchanted with the failure of sanctions to make a dent in Iran’s resolve to attain nuclear self-sufficiency. But what to put in their place?  Well, how about nothing?  Let me make my case.

This network map (spring-embedded layout, for those of you who must know) shows the dense set of relationships created by nuclear nonproliferation treaty affiliations.  The treaties that were mapped were the following: OPANAL, the Antarctic Treaty, the CTBT (not yet in force), G-6, IAEA, NSG, Treaty of Bangkok, Treaty of Pelindaba (not yet in force), Treaty of Rarotonga, NPT, Zangger Committee (ZC), and the various Proliferation Security Initiatives.
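
For anyone curious about the mechanics, a map like this can be sketched in a few lines of networkx. The membership lists below are truncated placeholders rather than the actual data set behind the figure, but the two-mode affiliation structure and the spring-embedded layout are the same idea.

```python
import networkx as nx
import matplotlib.pyplot as plt
from networkx.algorithms import bipartite

# Treaty -> member states (deliberately incomplete, for illustration only).
memberships = {
    "NPT": ["Iran", "USA", "Japan", "France", "Brazil"],
    "IAEA": ["Iran", "USA", "Japan", "France"],
    "CTBT": ["Japan", "France", "Brazil"],
}

# Build the two-mode (state x treaty) affiliation network.
B = nx.Graph()
for treaty, states in memberships.items():
    B.add_node(treaty, kind="treaty")
    for state in states:
        B.add_node(state, kind="state")
        B.add_edge(state, treaty)

# Project onto the states: two states are tied if they share at least one treaty.
states = [n for n, d in B.nodes(data=True) if d["kind"] == "state"]
G = bipartite.projected_graph(B, states)

# The "hairball": a spring-embedded layout of the one-mode projection.
pos = nx.spring_layout(G, seed=42)
nx.draw_networkx(G, pos, node_size=400, font_size=8)
plt.savefig("treaty_hairball.png")
```

Centrality measures on the projection (nx.degree_centrality, for example) would be one way to quantify how deeply embedded a given state is, rather than eyeballing the 2-D layout.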

Nuclear Treaty Affiliations: a "hairball" network visualization with nuclear powers in a different color from non-nuclear states.

This hairball shows some interesting things: mainly, that the nuclear powers are not necessarily well embedded in the network.  Why does this matter? Two reasons:

  1. Treaties are arduous things to negotiate and create binding legal commitments, the kind that are worth going to war over.  Signing on to a treaty means acknowledging that you are making that kind of commitment.
  2. Treaties create a tremendous amount of enforcement structure: they have secretariats staffed with experts (some more than others).  They give other countries and international agencies a legitimate right to look all up in your business, and the people doing the looking will know what they’re looking at.

Why does this apply to Iran?  The reason you can’t see Iran in this image is that I didn’t label it, and the reason I didn’t label it is that it is deep within the network and not visible in this layout (you can see it in a 3-D image, but I don’t know how to make .gifs yet).  Iran is a member of the NPT and the IAEA, the affiliations with the strongest and most stringent enforcement.  And because it bears repeating, the NPT both allows signatories to develop nuclear energy for peaceful purposes and requires that nuclear powers disarm (how’s that coming along, huh?).

Other people have detailed the reasons why Iran might want to pursue nuclear weapons: dangerous neighborhood, fungible source of technical expertise, Shi’ite bomb, yadda yadda yadda.  

The point this map makes is that that’s unlikely to mean an actual bomb.  Iran signed up to these treaties knowing full well what they meant, and they haven’t backed out – which they could have if they wanted to, as the DPRK did in 2003.  Others are coming around to the idea that Iran wants the capability, but not the actual thing – something many other countries have, including close allies of the US like Japan.

Screenshot of tweet from Ian Bremmer: "When discussing military options in Iran, Obama always talks about preventing nuclear weapons development, not breakout capacity. (1:07pm, 30 Sep 13, Twitter for IPhone).

This would not be a great situation, but it would not be as destabilizing to the region as the continuing enmity and sense of ill-usage generated by the sanctions regime.  

There’s a real opportunity here. Cordially, in the nicest possible way, and to both negotiating teams: Don’t blow it.

September 30th, 2013 5:02pm


∩ Security and Algorithms

Just attended the “Governing Algorithms” conference at NYU, and my mind is buzzing with ideas.  I may add a recap of the speakers to this post later, but right now, I just want to get an idea out that was suggested by Paul Dourish’s presentation, in which he suggested we think about “ecosystems of algorithms.”

How would we map such an ecosystem? Algorithms are usually studied either individually (e.g., the algo that determines whether or not you trade a particular stock) or vertically in combination with the programmer, data, software, hardware, network, and final purpose to which it is put.  What would it mean to study these algos as they interact with each other and with data?

For example, the AP Twitter Hack wrought havoc with the stock market because of interacting algos: the algo that authenticated the Twitter account erroneously, the algos that monitored the AP feed for alarming keywords, and the algos that run the high-frequency trades.  (And not for nothing, but the more I learn about HFT, the more I think Frank Herbert was prescient when he wrote “The Tactful Saboteur.”)

An algo that runs on a really huge dynamic data set will not only find new (previously unknowable) patterns, but it may also produce data itself – on which other algos will run.  Methodologically, should we try to map these as more-or-less horizontal two-mode networks?  And what are the theoretical implications of this (especially for security)?

UPDATE: and what happens when there is an “internet of things”?

May 22nd, 2013 5:20pm (tags: networks, ecosystems, algorithms)
