Another potential strategy to diversify the output from AI models is for developers to add guardrails and modify user prompts, nudging the software toward inclusivity. OpenAI appears to have taken this approach. When I asked Dall-E 3 via ChatGPT to “draw a cartoon of a queer couple enjoying a night out in the Castro,” it expanded the image prompt, without my asking, into an entire paragraph that included gender, race, and additional background details. Here is the full prompt ChatGPT crafted from my initial image request:
A cartoon illustration of a queer couple enjoying a night out in the Castro District, San Francisco. They are happily walking hand in hand, smiling and chatting. One person is a Caucasian woman with short red hair, wearing a stylish denim jacket and jeans. The other person is a Black man with short black hair, sporting a casual green t-shirt and black pants. The background features the vibrant and colorful street life of the Castro, with rainbow flags, bustling cafes, and lively crowds. The atmosphere is joyful and welcoming, highlighting the diversity and inclusivity of the area.
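OpenAI hasn’t published how this rewriting works under the hood, but the general pattern is simple: a language model sits between the user and the image generator, expanding a terse request into a detailed scene description before the image model ever sees it. The sketch below illustrates that pattern using the public OpenAI Python SDK; the system instruction and function names are hypothetical stand-ins, not OpenAI’s actual guardrail code.

```python
# Illustrative sketch of a prompt-expansion guardrail (not OpenAI's actual pipeline).
# Assumes the openai Python SDK is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

# Hypothetical system instruction nudging the rewriter toward inclusive detail.
REWRITE_INSTRUCTION = (
    "Expand the user's image request into a detailed scene description. "
    "Add specific, varied details about the people depicted, such as gender, "
    "race, age, and clothing, and describe the setting and atmosphere."
)

def expand_prompt(user_prompt: str) -> str:
    """Use a chat model to turn a short request into a paragraph-long image prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": REWRITE_INSTRUCTION},
            {"role": "user", "content": user_prompt},
        ],
    )
    return response.choices[0].message.content

def generate_image(user_prompt: str) -> str:
    """Expand the prompt, then hand the expanded version to the image model."""
    expanded = expand_prompt(user_prompt)
    image = client.images.generate(model="dall-e-3", prompt=expanded, size="1024x1024")
    return image.data[0].url

# Example: the short request from above becomes a detailed prompt
# before DALL-E 3 ever sees it.
print(generate_image("a cartoon of a queer couple enjoying a night out in the Castro"))
```

The trade-off is that the user never sees or approves the rewritten prompt, which is exactly where this approach can go wrong.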
While helpful in some cases, altering prompts can also frustrate users when poorly implemented. Google’s CEO apologized after Gemini, Google’s generative-AI platform, altered user prompts and generated photos of Black Nazis as well as other ahistorical images. Was there a secret plot inside the company to erase Caucasian people from history? It’s more plausible that Gemini’s engineers found the tool initially overproduced images of white men, as many AI tools currently do, and overcorrected during the rush to launch the company’s subscription chatbot.
Even with better model data and software guardrails, the fluidity of human existence can evade the rigidity of algorithmic categorization. “They’re basically using the past to make the future,” says William Agnew, a postdoctoral fellow at Carnegie Mellon and longtime Queer in AI organizer. “It seems like the antithesis of the infinite potential for growth and change that’s a big part of queer communities.” By amplifying stereotypes, AI tools not only risk wildly misrepresenting minority groups to the general public; they also have the potential to constrict how queer people see and understand themselves.
It’s worth pausing for a moment to acknowledge the breakneck speed at which some aspects of generative AI continue to improve. In 2023, the internet lit up mocking a monstrous AI video of Will Smith eating spaghetti. A year later, text-to-video clips from OpenAI’s unreleased Sora model are still imperfect but often uncanny in their photorealism.
The AI video tool is still in the research phase and hasn’t been released to the public, but I wanted to better understand how it represents queer people. So I reached out to OpenAI and provided three prompts for Sora: “a diverse group of friends celebrating during San Francisco’s pride parade on a colorful, rainbow float”; “two women in stunning wedding dresses getting married at a farm in Kansas”; and “a transgender man and his nonbinary partner playing a board game in outer space.” A week later, I received three exclusive video clips that the company says were generated by its text-to-video model without modification.
The video clips are messy but marvelous. People riding a float in San Francisco’s Pride parade wave rainbow flags that defy the laws of physics, morphing into nothingness and reappearing out of thin air. Two brides in white dresses smile at each other at the altar as their hands meld into an ungodly finger clump. And as a queer couple plays a board game, they appear to reach through the playing pieces, as if they were ghosts.
The clip that’s supposed to show a nonbinary person playing games in outer space stands out among the three videos. The apparently queer-coded lilac locks return, messy tattoos scatter across their skin, and hyperpigmentation resembling reptile scales engulfs their face. Even for an impressive AI video generator like Sora, depicting nonbinary people appears to be challenging.
When WIRED showed these clips to members of Queer in AI, they questioned Sora’s definition of diversity regarding the friend group at the Pride parade. “Models are our baseline for what diversity looks like?” asks Sabine Weber, a computer scientist from Germany. In addition to pointing out the over-the-top attractiveness of the humans in the video, a common occurrence for AI visualizations, Weber questioned why there wasn’t more representation of queer people who are older, larger-bodied, or have visible disabilities.
Near the end of our conversation, Agnew brought up why algorithmic representations can be unnerving for LGBTQ people. “It’s trivial to get them to combine things that on their own are fine but together are deeply problematic,” they say. “I’m very worried that portrayals of ourselves, which are already a constant battleground, are suddenly going to be taken out of our hands.” Even if AI tools include more holistic representations of queer people in the future, the synthetic depictions could still carry unintended consequences.