When Junia Joplin tried out Lensa – a popular app that generates stylised images based on photographs – she saw a version of herself that had never existed but made perfect sense.
Joplin, who started transitioning as a transgender woman five years ago at age 39, said setting the app to create female images from her teenage snapshots had helped her to feel more at ease with her past.
“It was moving. Some of them looked so realistic,” Joplin, an associate pastor from Toronto, told Context.
“So many of my memories don’t make sense, like I’m a woman who’s had a bunch of memories of some man’s life imprinted on her consciousness,” said Joplin.
“But seeing ‘young June’, it became easier to envision myself as a young girl.”
Lensa, made by California-based Prisma Labs, uses artificial intelligence for its “Magic Avatars” feature that generates a selection of original portraits and cartoons based on photos.
The app asks users to upload a selection of pictures of themselves and choose whether their avatars should be shown as male, female or other.
The resulting stylised, brightly coloured, and sometimes scantily clad images have been plastered over social media feeds in recent weeks.
Lensa has drawn criticism – some users have complained that their avatars are sexualised with big breasts and little clothing, or that they reflect racial stereotypes.
Prisma Labs said Lensa results sometimes reflect biases in the millions of images that the app is trained on, despite developers’ efforts to screen them out. This month, the firm updated the app to better filter out explicit image results.
But for some trans and non-binary people who struggle with gender dysphoria – or a mismatch between their gender identity and their body – the app can be affirming.
“I suffer from dysphoria around parts of my body that could be perceived as ‘male’,” said Abbie Zeek, 27, a trans stage manager for a theatre company in Australia.
Zeek sent her Lensa-generated results to a friend, asking if the images were true to life.
“When they showed me the ones that looked like how they saw me, I burst into tears because the woman that was looking back at me from the photos not only looked like me, she looked like the woman I wanted to be,” she said.
Gender diversity questions
Artificial intelligence tools that use existing text, audio, images and videos to create new content are becoming increasingly realistic and widespread.
However, some users said tools like Lensa may still not provide representation for users who reject the boundaries of binary gender.
Philip Li, an artist and performer who goes by the stage name Le Fil, selected the “other” gender option.
Li identifies as non-binary, meaning they do not see themselves as either male or female, and uses the pronouns they and them.
However, Lensa’s artificial intelligence added breasts to their images.
Li, who is British-Chinese, also said the pictures failed to capture their face accurately but followed stereotypes of small Asian women by softening their jaw and thinning their limbs.
“In most of the images, the face was quite distorted, as if it didn’t know what to make of me – one image was a severed torso,” they said.
“The images were mainly terrible in their likeness.”
Li and others raised concerns that artificial intelligence apps can create unattainable beauty standards for young users.
These idealised images of femininity can be particularly harmful to trans women by adding to pressure to “pass” as a woman without any question mark over their gender, Li added.
“Every stage of the trans journey is just as valid and in need of representation too,” they said.
Diversity call
Art-making algorithms are trained by scanning pre-existing images which they can then copy and remix. Lensa’s “Magic Avatars” learns from more than five billion images scraped from user-generated platforms, stock image sites and works by famous artists.
“We value art and creativity because it requires authenticity, originality, often great skill and virtuosity, deep human insight,” said Jon McCormack, a computer science professor at Australia’s Monash University.
“Current artificial intelligence and machine learning systems possess none of these qualities, they are just good statistical mimics.”
Artificial intelligence image-generating companies can ensure wider and more realistic representation across age, body type, gender, and other demographics by filtering the images they learn from, said McCormack.
Even then, it could prove difficult to avoid stereotypes – some genres such as fantasy often include gendered stereotypes and sexualised images of women, said McCormack.
For Li, this underscores the need for better LGBTQ+ representation among workers in the artificial intelligence industry.
“When artificial intelligence starts developing films, adverts, photoshoots, we’ll just be completely forgotten again. All you’ll have are hyper-masculine men and super-feminised women,” they said.
“For artificial intelligence to integrate with humans, it needs to reflect humanity, not an idealised version of it.”
This article first appeared on Context, powered by the Thomson Reuters Foundation.