Why AI?

i'm audrey
7 min read · Jul 21, 2021

--

Seriously, convince me.

This is not a particularly explosive blog post (it’s about a new testing framework for AIs based on infant learning patterns for inference and prediction), but it fails to answer a really, really basic question. And that question is: why?

Yes, it’s cool that infants can infer motivations and predict outcomes for novel events. I find it less cool that we’re now teaching machines to be more human-like in their ability to infer and predict.

The IBM/DARPA blog post linked above says “…we want to help create models that understand psychology, and marry that approach to models that understand physics the way people do.” (Again, why?) If you’ve ever watched the show Better Off Ted, this sounds like the mock commercials at the start of every episode for Veridian Dynamics, the fictional corporation the characters work for, whose pitch roughly amounts to “we do everything for everyone to make everything better*” (*subtext: to make the company money).

But what do we achieve from psychologically mature, physics-comprehending machine learning models? Why would we want to “help AI mature from infanthood to the toddler stage and beyond”?

This isn’t blind experimentation. Basic science exists to make fundamental discoveries that can change the way we see and exist in the world, but it’s primarily directionless. We can’t classify AI research as basic science because it has direction: the objective is, clearly and explicitly, to recreate the human cognitive — and physical — capacity. Why?

“DARPA sees the absence of common sense as the most significant barrier between today’s narrowly focused AI applications and the more general, human-like AI systems hoped for in the future.” Why are we hoping for “human-like AI systems”? DARPA’s interest is obvious: replacing the human soldier is a clear competitive advantage, not to mention a humanitarian one.

But what about the rest of the world? What do we gain from AI that can think, and I mean really think, like us?

The obvious answer du jour is “replace (rote/dangerous) labor”. It’s obvious to me that this is a net negative for humans in the system we live in. Even if we’re sending robots to do the dangerous jobs, and even if we implement Bill Gates’ proposed robot tax, we still end up on a path towards obsolescence. Robot tax funds UBI, and displaced workers do what?

The first AI/robots will replace labor that doesn’t require complex decision making. Scratch that: the first AI will replace labor that is a “necessary evil”: cost centers that don’t directly generate revenue, like call centers. (Having all been stuck in infinite IVR loops at some point, we can surely agree that a good call center agent is worth their weight in gold, and that the job absolutely requires both sympathy and complex decision making.) Early AI will also replace the rote and the dangerous (and already is): factory assembly, mining.

We tend not to care about this labor or these workers, and see them as expendable. In our neoliberal capitalist meritocracy, we see these workers as sub-par, not good enough, as people who should be doing something better or more worthy. (We pay, and treat, them accordingly.) Thus the narrative is that robots-as-salvation will replace these workers to release them for more complex work. But which work? For how long will that work be safe from increasingly decision-capable AI? And has anyone asked these workers if they want to be “released”? It is dangerous to obsolete jobs without replacement, even if the jobs themselves are dangerous.

Eventually, as AI “mature[s] from infanthood to the toddler stage and beyond,” it comes for labor that requires more and more complex decision-making. AI lawyers, AI cops, AI product management, AI doctors, AI landscapers? What if your manager at work was an AI? These are all deeply human and deeply context-dependent labors. And those of us who do these labors, what becomes of us when AI comes for us? First they came for the call center agents, and I was not a call center agent…

In the pursuit of releasing us from our labor — regardless of whom that labor enriches — we also risk being released from purpose. Recognizing that not everyone finds their purpose in their work, here I mean labor in a broader sense: labor for money and labor for love and labor for the sake of labor. Because what starts with call centers and moves to lawyers also moves to care-taking and moves to governance and moves to art. Just because we can does not mean we should.

AI is already being asked to create “art” — but can we consider an arrangement, however aesthetically pleasing, “art” if it doesn’t originate in the depth and complexity of the human experience? Does that matter? What happens to human artists when AI can generate an equally-visually-pleasing jpg, and its cost is measured in compute rather than dollars? From the perspective of the consumer, we might not care whether AI-generated “art” is really art. But from the perspective of the artist, this matters a great deal. And we, as fellow humans, should fundamentally care. Art is the ultimate expression of our perspective on our experiences: AI by definition cannot experience, cannot have perspective, and cannot express.

It can be easy to dismiss concerns about the ascension of AI as a simple fear of replacement, fear of obsolescence. These are legitimate fears that don’t deserve to be minimized, but it’s also more than that. Creating imitation intelligence just for the sake of doing so is so open-ended that by default it has to ignore the ethical and social and humanitarian questions that a more directed process would inevitably raise. (Consider AI soldiers: how do we make sure they’re not killing civilians? How do we make sure retaliation is proportional? We know the rules for soldiers, so we know how to ask (some of) the right questions. As opposed to AI in general: how do we make sure it doesn’t ruin everything? If I don’t even know what the requirements are for “everything”, how can I measure this? Where do we even start asking the right questions?) Even if an AI is capable of practicing law or policing streets or creating an expressive landscape (how would we even determine whether it was capable of doing so?), what happens next? What is the next generation of AI, and how does it relate to its human environment?

Can it learn from its mistakes as a lawyer or a cop or a landscaper? How will it know it’s made a mistake without someone telling it? Are its target outcomes and objectives the same as ours? Even if we say they are, at first, what happens when it develops its own objectives, or misinterprets them? How will we know? What happens when the requirements for law, or policing, or landscaping, shift? Is that still a human judgment, or have we now also abdicated our visions for a better future to a non-human machine?

When we progress as a society, we do so with our values and objectives in mind. We discover (or create) new problems, and develop new solutions. The climate is warming: our policy, art, industry, and science shift to accommodate the development of new goals, solutions, and processes. The orientation of our understanding of what it means to be human and to exist in the world shifts accordingly. Over time, our values and objectives also shift. Will AI make these same decisions? Will it rationalize in the same way? How will it decide what problems are important, what problems need to be solved, and how and when to solve them? Will it prioritize the right trade-offs? Will it shift its own objectives? Will those objectives be aligned with ours?

Can it, or will it, engage in rational debate? Can it experience emotion — can it be swayed by, or sway with, pathos? How would an AI design a house, never having lived in one? How would an AI design a supply chain, never having experienced a glut or a shortage, or a ship getting stuck in the Suez Canal, or a global pandemic? Humans are clearly superior at this kind of thinking, and the risk of replacement is not just that we are stuck wage-less in a capitalist society, but that we ultimately end up with worse solutions to our problems than we would have if we were the ones coming up with them.

Would it be able to question underlying assumptions to come up with truly innovative solutions? We know AI can’t draw parallels. I was reading an article the other day, which I can’t find now, about the “one of these things is not like the others” game from Sesame Street: young children excel at it because they can recognize all kinds of ways in which things are different and the same, while machine learning models can’t, without explicit training, even recognize rotated or rescaled versions of the same image. (There’s a great story from my own childhood about a round of that game where an airplane, a chicken, a duck, and a pig were on the screen: the “answer” was airplane, since it wasn’t an animal, but my sister’s answer was pig, because it didn’t have wings. Take that, AI. Ambiguity is very, very hard.)

Even if we could teach computers to question assumptions and draw parallels across disciplines and experiences, would we want to? The fundamental motivation for human decision making is to improve the human condition. Is that the motivating force for a complex-decision-capable AI? Is its heart in the right place?
