An art prize at the Colorado State Fair was awarded last month to a work that – unbeknown to the judges – was generated by an artificial intelligence system.
Social media have also seen an explosion of weird images generated by AI from text descriptions, such as “the face of a shiba inu blended into the side of a loaf of bread on a kitchen bench, digital art”.
Or perhaps “A sea otter in the style of ‘Girl with a Pearl Earring’ by Johannes Vermeer”.
You may be wondering what’s going on here. As somebody who researches creative collaborations between humans and AI, I can tell you that behind the headlines and memes a fundamental revolution is under way – with profound social, artistic, economic and technological implications.
How we got here
You could say this revolution began in June 2020, when a company called OpenAI achieved a big breakthrough in AI with the creation of GPT-3, a system that can process and generate language in much more complex ways than earlier efforts. You can have conversations with it about any topic, ask it to write a research article or a story, summarise text, write a joke, and do almost any imaginable language task.
In 2021, some of GPT-3’s developers turned their hand to images. They trained a model on billions of pairs of images and text descriptions, then used it to generate new images from new descriptions. They called this system DALL-E, and in July 2022 they released a much-improved new version, DALL-E 2.
Like GPT-3, DALL-E 2 was a major breakthrough. It can generate highly detailed images from free-form text inputs, including information about style and other abstract concepts.
For example, here I asked it to illustrate the phrase “Mind in Bloom” combining the styles of Salvador Dalí, Henri Matisse and Brett Whiteley.
Competitors enter the scene
Since the launch of DALL-E 2, a few competitors have emerged. One is the free-to-use but lower-quality DALL-E Mini (developed independently and since renamed Craiyon), which became a popular source of meme content.
Around the same time, a smaller company called Midjourney released a model that more closely matched DALL-E 2’s capabilities. Though still a little less capable than DALL-E 2, Midjourney has lent itself to interesting artistic explorations. It was with Midjourney that Jason Allen generated the artwork that won the Colorado State Fair art competition.
Google too has a text-to-image model, called Imagen, which supposedly produces much better results than DALL-E and others. However, Imagen has not yet been released for wider use, so it is difficult to evaluate Google’s claims.
In July, OpenAI began to capitalise on the interest in DALL-E, announcing that 1 million users would be given access on a pay-to-use basis.
However, in August 2022 a new contender arrived: Stable Diffusion.
Stable Diffusion not only rivals DALL-E 2 in its capabilities, but more importantly it is open source. Anyone can use, adapt and tweak the code as they like.
Already, in the weeks since Stable Diffusion’s release, people have been pushing the code to the limits of what it can do.
To take one example: people quickly realised that, because a video is a sequence of images, they could tweak Stable Diffusion’s code to generate video from text.
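The core of the trick can be sketched in a few lines. Real text-to-video hacks interpolate inside the model itself (for example, between text embeddings or latents); the toy function below, with made-up four-dimensional "embeddings" and prompt names invented for illustration, only shows the frame-by-frame idea of blending one prompt's conditioning into another's.

```python
# Toy sketch of the idea behind text-to-video hacks on top of a
# text-to-image model: treat video as a sequence of images, and
# interpolate the conditioning between two prompts, one step per frame.

def interpolate_embeddings(start, end, num_frames):
    """Linearly blend two embedding vectors over num_frames steps."""
    frames = []
    for i in range(num_frames):
        t = i / (num_frames - 1)  # 0.0 at the first frame, 1.0 at the last
        frames.append([(1 - t) * s + t * e for s, e in zip(start, end)])
    return frames

# Pretend embeddings for "a rose bud" and "a rose in full bloom".
bud = [0.1, 0.9, 0.0, 0.2]
bloom = [0.8, 0.1, 0.5, 0.6]

frames = interpolate_embeddings(bud, bloom, num_frames=5)
# Each frame's embedding would be fed to the image model in turn,
# yielding a gradual morph from one prompt to the other.
```

Rendering each interpolated embedding as an image and stitching the results together gives a crude but watchable video.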
Another fascinating tool built with Stable Diffusion’s code is Diffuse the Rest, which lets you draw a simple sketch, provide a text prompt, and generate an image from it. In the video below, I generated a detailed photo of a flower from a very rough sketch.
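Sketch-guided tools of this kind typically work by starting the diffusion process from a noised copy of your sketch rather than from pure noise, so the result keeps the sketch's rough composition. The snippet below is a simplified illustration of just that first noising step (real systems add noise in the model's latent space according to a schedule); the pixel values and `strength` parameter are illustrative, and the denoising model itself is omitted.

```python
import random

def noise_sketch(pixels, strength, seed=0):
    """Blend a sketch's pixel values (0.0-1.0) with random noise.

    strength=0.0 returns the sketch untouched; strength=1.0 is pure
    noise, giving the model full freedom to repaint. Values in between
    let the output follow the sketch's rough shapes.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return [(1 - strength) * p + strength * rng.random() for p in pixels]

# A three-pixel "sketch" for illustration: dark, mid, light.
sketch = [0.1, 0.5, 0.9]
start_point = noise_sketch(sketch, strength=0.7)
# A real system would now run the denoising loop from start_point,
# guided by the text prompt, to produce the finished image.
```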
In a more complicated example below, I am starting to build software that lets you draw with your body, then uses Stable Diffusion to turn the drawing into a painting or photo.
The end of creativity?
What does it mean that you can generate any sort of visual content, image or video, with a few lines of text and a click of a button? What about when you can generate a movie script with GPT-3 and a movie animation with DALL-E 2?
And looking further forward, what will it mean when social media algorithms not only curate content for your feed, but generate it? What about when this trend meets the metaverse in a few years, and virtual reality worlds are generated in real time, just for you?
These are all important questions to consider.
Some speculate that, in the short term, this means human creativity and art are deeply threatened.
Perhaps in a world where anyone can generate any images, graphic designers as we know them today will be redundant. However, history shows human creativity finds a way. The electronic synthesiser did not kill music, and photography did not kill painting. Instead, they catalysed new art forms.
I believe something similar will happen with AI generation. People are experimenting with including models like Stable Diffusion as a part of their creative process.
Or using DALL-E 2 to generate fashion-design prototypes:
A new type of artist is even emerging, practising what some call “promptology”, or “prompt engineering”. The art lies not in crafting pixels by hand, but in crafting the words that prompt the computer to generate the image: a kind of AI whispering.
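In practice, prompt engineers often work from a loose template: a subject, an artistic style, a medium. The models read the whole string as free text, so there is no official syntax; the helper below is just a sketch of that habit, with field names invented for illustration.

```python
def build_prompt(subject, style=None, medium=None):
    """Assemble a text-to-image prompt from loose components.

    Note: this is a convention, not a grammar. The model sees only
    the final free-form string.
    """
    parts = [subject]
    if style:
        parts.append(f"in the style of {style}")
    if medium:
        parts.append(medium)
    return ", ".join(parts)

# Recreating the kinds of prompts quoted earlier in the article:
print(build_prompt("a sea otter",
                   style="'Girl with a Pearl Earring' by Johannes Vermeer"))
print(build_prompt("the face of a shiba inu blended into the side of "
                   "a loaf of bread on a kitchen bench",
                   medium="digital art"))
```

Much of the craft is then iterative: adjusting one component at a time and observing how the generated image shifts.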
Collaborating with AI
The impacts of AI technologies will be multidimensional: we cannot reduce them to good or bad on a single axis.
New artforms will arise, as will new avenues for creative expression. However, I believe there are risks as well.
We live in an attention economy that thrives on extracting screen time from users; in an economy where automation drives corporate profit but not necessarily higher wages, and where art is commodified as content; in a social context where it is increasingly hard to distinguish real from fake; in sociotechnical structures that too easily encode biases in the AI models we train. In these circumstances, AI can easily do harm.
How can we steer these new AI technologies in a direction that benefits people? I believe one way to do this is to design AI that collaborates with, rather than replaces, humans.
This article first appeared on The Conversation.