by Lila Levinson
In 1958, the New York Times published an article about the “Perceptron” – a computer that taught itself, in a live demonstration for reporters, to distinguish between left and right. Computers at the time took information fed to them on punch cards, followed an explicit set of instructions about what computations to perform on the input, and output the results of those computations. The Perceptron, by contrast, accumulated information from multiple inputs – in this demonstration, cards with a single punch in the left or right margin – and picked up on patterns. Without any explicit instructions from human users, it began to give one output when the punch was on the left and a different one when the punch was on the right – a computer that could learn on its own.
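The learning rule at the heart of the Perceptron is simple enough to sketch in a few lines. The Python snippet below is an illustrative reconstruction, not Rosenblatt’s actual hardware: the card encodings, learning rate, and target values are all assumptions chosen to mirror the left/right demonstration described above.

```python
# A minimal sketch of the perceptron learning rule, assuming a made-up
# encoding of the punch cards: [left_margin_punched, right_margin_punched].
cards = [([1, 0], -1),   # punch on the left  -> target output -1
         ([0, 1], +1)]   # punch on the right -> target output +1

weights = [0.0, 0.0]
bias = 0.0
lr = 0.1  # learning rate (illustrative choice)

def predict(x):
    """Weighted sum of the inputs, thresholded to -1 or +1."""
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s >= 0 else -1

# Repeatedly show the cards; nudge the weights only when the output is wrong.
for _ in range(10):
    for x, target in cards:
        error = target - predict(x)
        if error:
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error

print(predict([1, 0]), predict([0, 1]))  # -> -1 1: it has learned the rule
```

No human ever tells the program which side is which; the pattern falls out of repeated exposure and correction, which is the behavior the reporters watched in 1958.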
The Perceptron was one of the first computers to use neural networks, which apply principles from the neuroscience of human learning to implement artificial intelligence (AI) and solve complex computing problems. The Times predicted that “later Perceptrons will be able to recognize people and call out their names and instantly translate speech in one language to speech or writing in another language.” Six decades later, neural networks have indeed been used to build facial recognition and live translation systems – but that hardly scratches the surface of their role in our everyday lives. Blaise Agüera y Arcas, a Vice President and Fellow at Google Research and a CNC affiliate, says, “a huge amount of computing is becoming neural. Already the majority of operations happening in people’s phones, their devices, are neural. There are more neural operations than classical ones.”
Agüera y Arcas is on the cutting edge of this AI expansion, leading a team that integrates neural networks into the devices we rely on daily. He describes this work as “using neural computing to build things that go out into the world and that augment people in almost a sort of prosthetic way.” Early observers of the Perceptron could not have anticipated these new roles for AI, nor could they have predicted the ethical questions these new technologies provoke. “The privacy and civil liberties implications are very different,” Agüera y Arcas says, when we think of AIs as part of our private lives – “an extension of you” – rather than as part of a centralized service that we choose to use. These ethical questions are central to his work, and he has spoken about AI ethics at past events held by the CNC and other members of the International Network for Bio-Inspired Computing, including a recent panel at the University of British Columbia.
Anxieties about the potential dangers of AI long preceded the development of actual advanced AI (see Isaac Asimov’s 1950 collection I, Robot), but the ethical issues we face today are more subtle than many of the themes that have emerged over the years. One issue that has recently gained public attention is the way humans’ own biases can be encoded into AI algorithms, such as facial recognition software that fails to recognize Black people. In 2016, Joy Buolamwini launched the Algorithmic Justice League to draw attention to these inequities. But, Agüera y Arcas says, this is only the tip of the iceberg when it comes to building ethical AIs. Something like facial recognition software, he explains, has a very clear success metric for equity – “that if you take some socially grounded group like Black people or trans people…that [the algorithm is not] systematically doing worse than the majority” on those groups. Such outcomes can largely be achieved by training AIs on diverse datasets. For facial recognition software, this would mean making sure that the AI gets plenty of examples of faces with darker skin tones to identify. “I don’t want to minimize those challenges,” Agüera y Arcas says, “…[but] there’s nothing conceptually that hard about that.”
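That success metric can be made concrete: compute an algorithm’s accuracy separately for each socially grounded group and flag any group that fares systematically worse than the rest. The sketch below is a minimal illustration of that check; the predictions, labels, and group tags are invented toy data, not output from any real system.

```python
# A hedged sketch of a per-group equity check: no group should do
# systematically worse than the others. All data here is invented.
from collections import defaultdict

def accuracy_by_group(predictions, labels, groups):
    """Return accuracy computed separately for each group tag."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] += 1
        hits[group] += (pred == label)
    return {g: hits[g] / totals[g] for g in totals}

# Toy recognition results: 1 = correctly recognized, 0 = misrecognized.
preds  = [1, 1, 1, 0, 1, 0, 0, 1]
labels = [1, 1, 1, 1, 1, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

per_group = accuracy_by_group(preds, labels, groups)
gap = max(per_group.values()) - min(per_group.values())
print(per_group, gap)  # {'A': 0.75, 'B': 0.5} with a 0.25 accuracy gap
```

A disparity like the 0.25 gap above is the kind of signal that prompts the fix Agüera y Arcas mentions: rebalancing the training data so the under-served group is well represented.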
The category of ethics questions he is concerned with has more to do with responsibly integrating AIs into human cultural life. An example that has recently drawn heavy media attention is the AI that Facebook uses to curate what users see (or don’t see) on their newsfeeds. Congressional testimony from Frances Haugen, a former Facebook employee turned whistleblower, revealed that Facebook has knowingly used algorithms that can create dangerous social echo chambers. “You can’t think about the system in isolation,” Agüera y Arcas argues, “you need to think about the whole sociotechnical environment. There are all kinds of potentials for…people’s behavior and what they’re exposed to [to] result in something that exacerbates, say, polarization or misinformation and then that, in turn, is a feedback signal back into the system that amplifies it further.”
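The feedback loop he describes can be caricatured in a few lines of simulation. Every number below is an illustrative assumption – the engagement boost, the feedback strength, the starting preference – but the sketch shows how a ranker that optimizes engagement and a user whose tastes shift with exposure can amplify each other.

```python
# A toy model of the ranker-user feedback loop, under assumed parameters.
engagement_boost = 1.3   # assumed: polarizing content draws ~30% more clicks
feedback_strength = 0.5  # assumed: how strongly exposure shifts preferences

preference = 0.10  # fraction of a user's clicks on polarizing content
for step in range(10):
    # The ranker over-serves whatever gets clicked...
    exposure = min(1.0, preference * engagement_boost)
    # ...and exposure nudges future behavior toward what was shown.
    preference += feedback_strength * (exposure - preference)
    print(f"step {step}: exposure={exposure:.2f} preference={preference:.2f}")
# With no countervailing signal, both quantities drift upward together.
```

The point of the toy model is Agüera y Arcas’s: neither component is dangerous in isolation; the amplification lives in the closed loop between them.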
To avoid this dangerous cycle, a human manually curating a Facebook newsfeed would likely be trained on the values of the organization and how to make decisions in line with those values. But there are far too many users and too much data for humans to do this. So how do we instill these sorts of values – which may be subtle and require human language and cultural competency to explain and understand – into computers? Agüera y Arcas sees potential solutions as similar to diversifying an algorithm’s training dataset, as with facial recognition AIs, but on a massive scale. Increasingly advanced AIs called large language models are pre-trained on huge bodies of text pulled from sources like websites and digitized books. Because these models are fed so much unfiltered information, with no human supervision telling them what is “bad” or “good,” they can identify patterns in the data that give them a window into important facets of how the human world works. These algorithms understand language in a way that is arguably closer to a human understanding than a traditional computer’s, potentially enabling them to be trained on, and to reflect, the more subtle, socially grounded values that could make Facebook’s newsfeed algorithm and others like it more ethical. While the training datasets for large language models will inevitably include text that goes against these values, Agüera y Arcas believes that, if properly designed, this could be an asset rather than a liability, asking, “how do you tell the system not to be racist if it doesn’t know what racism is?”
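The self-supervised idea behind large language models – learn from raw text by predicting what comes next, with no human labeling anything “good” or “bad” – can be illustrated with a toy bigram counter. Real models use deep neural networks trained on billions of words; this sketch only shows where the training signal comes from, and the tiny corpus is a stand-in for web-scale text.

```python
# A minimal sketch of self-supervised learning from raw text: every adjacent
# pair of words is a free training example, with no human labels involved.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug ."  # stand-in text

counts = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    counts[prev][nxt] += 1  # the text itself supplies the supervision

def predict_next(word):
    """Most likely next word, learned purely from patterns in the raw text."""
    return counts[word].most_common(1)[0][0]

print(predict_next("sat"))  # -> 'on'
print(predict_next("the"))  # -> 'cat' (first-seen among equally common words)
```

Scaled up by many orders of magnitude and swapped from counting to neural networks, this same objective is what lets large language models absorb the social patterns – including the ugly ones – that Agüera y Arcas argues a value-aware system must be able to recognize.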
If we can teach values to AIs, then the ethical issues boil down to whose values are being represented. Here, too, Agüera y Arcas believes data can be helpful. “There’s no such thing as universal values, it’s always relational…I’m interested in a much more, if you like, ethnological and research-oriented approach to how values should be defined,” he says. Rather than imposing predefined ideas about what constitutes identity when teaching an AI about social structures, for instance, designers can let data about people’s self-identification guide the process without a top-down ontology.
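One way to picture this bottom-up approach is to let the category vocabulary emerge from how people describe themselves rather than from a fixed enumeration. The sketch below is a hypothetical illustration of that contrast; the self-descriptions are invented placeholders, not real survey data.

```python
# A hedged sketch of a bottom-up identity vocabulary: categories are whatever
# terms people actually use about themselves, weighted by how often they use
# them. The responses below are invented placeholders.
from collections import Counter

self_descriptions = [
    "queer latina engineer", "black trans woman", "deaf gamer",
    "black software engineer", "queer deaf artist",
]

# No predefined ontology: count the terms people volunteer...
term_counts = Counter(
    term for desc in self_descriptions for term in desc.split()
)

# ...and let the observed, usage-weighted terms be the working category set.
categories = [term for term, _ in term_counts.most_common()]
print(categories)
```

The design choice mirrors the “ethnological” stance in the quote above: the system’s notion of identity is relational and observed, not decreed in advance by its builders.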
By naming their device the “Perceptron,” the creators of early AIs perhaps revealed limitations in what they thought AI would eventually accomplish. While they were extremely prescient in their predictions of future uses for perceptual AI – even anticipating the advent of AI-guided rovers, “fired to the planets as mechanical space explorers” – the explosion of AI research in the past decades has resulted in AIs that can do much more than perceive. These AIs use what they have learned during their training to generate new material – in other words, to be creative. This creative ability is an indication that the type of intelligence we can code is not that far off from our concept of human intelligence. “I’m not sure that we’re fundamentally different [than AIs],” Agüera y Arcas says. “I think that we just have to understand that humanity is much bigger than the individual…. And when you zoom out and look at the whole…system, artificial neural nets are a part of that system, they’re not really separate from us or from that cultural mulch. It’s all one system.” Understanding the dynamics of this system may be key as AIs, and their accompanying ethical questions, continue to advance.