DALL-E Mini is the internet’s favorite AI meme machine


Since June 6, Hugging Face, a company that hosts artificial intelligence projects, has seen traffic to an AI image-generation tool called DALL-E Mini skyrocket.


A seemingly simple application that generates nine images in response to any typed text prompt, it was launched almost a year ago by an independent developer. But after some recent improvements and a few viral tweets, its ability to crudely render all sorts of surreal, hilarious, and even nightmarish visions has suddenly become meme magic. Take a look at its renderings of "Thanos looking for his mom at Walmart," "drunken naked guys roaming Mordor," "CCTV footage of Darth Vader breakdancing," and "a Godzilla hamster in a sombrero attacking Tokyo."


As more people created DALL-E Mini images and shared them on Twitter and Reddit, drawing in still more new users, Hugging Face's servers were overwhelmed with traffic. "Our engineers didn't sleep the first night," says Clément Delangue, CEO of Hugging Face, on a video call from his home in Miami. "It's very difficult to maintain these models at scale; they had to fix everything." In recent weeks, DALL-E Mini has been serving about 50,000 images a day.

Artwork: WIRED Staff/Hugging Face

DALL-E Mini's viral moment doesn't just herald a new way to make memes. It also offers an early look at what can happen when AI tools that generate custom images become widely available, and a reminder of the uncertainty about their possible impact. Algorithms that generate custom photos and artwork could transform art and help businesses with marketing, but they could also be used to manipulate and mislead. A notice on the DALL-E Mini web page warns that the tool may "reinforce or exacerbate societal biases" or "generate images that contain stereotypes against minority groups."

DALL-E Mini was inspired by a more powerful AI image-generation tool called DALL-E (a portmanteau of Salvador Dalí and WALL-E), unveiled by AI research company OpenAI in January 2021. DALL-E is more capable, but it is not openly available because of concerns that it could be misused.

Breakthroughs in AI research are commonly replicated elsewhere, often within months, and DALL-E was no exception. Boris Dayma, a machine learning consultant based in Houston, Texas, says he was fascinated by the original DALL-E research paper. Although OpenAI did not release any code, he managed to build the first version of DALL-E Mini at a hackathon hosted by Hugging Face and Google in July 2021. That first version produced low-quality images that were often hard to recognize, but Dayma has kept improving it since. Last week he renamed his project Craiyon, after OpenAI asked him to change the name to avoid confusion with the original DALL-E project. The new site displays ads, and Dayma also plans to release a premium version of his image generator.

Images from DALL-E Mini have a distinctly alien look. Objects are often distorted and smeared, and people appear with missing or mangled faces or body parts. But you can usually tell what the tool was trying to depict, and comparing the sometimes erratic AI output with the original prompt is often part of the fun.

The artificial intelligence model at the heart of DALL-E Mini generates images based on statistical patterns gleaned from analyzing about 30 million labeled images to extract relationships between words and pixels. Dayma assembled this training data from several public collections of images gathered from the web, including one released by OpenAI. The system can make errors in part because it lacks any real understanding of how objects in the physical world are supposed to behave. Short snippets of text are often ambiguous, and AI models don't grasp their meaning the way people do. Still, Dayma has been amazed at what people have coaxed from his creation over the past few weeks. "The most creative prompt I had was 'the Eiffel Tower on the moon,'" he says. "Now people are doing crazy things, and it works."
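The idea of extracting relationships between words and pixels from labeled images can be illustrated in a drastically simplified form. The sketch below is not DALL-E Mini's actual method, which uses neural networks trained on millions of images; it only shows the underlying intuition of accumulating word-feature co-occurrence statistics from captioned images and using them to favor certain visual features for a prompt. The captions, features, and function names here are all hypothetical.

```python
# Toy illustration: learn word -> visual-feature associations by counting
# co-occurrences in a tiny captioned "dataset", then rank features for a prompt.
# A drastic simplification of how text-to-image models relate words to pixels.
from collections import defaultdict

# Hypothetical training data: (caption, visual features present in the image)
dataset = [
    ("a red hamster in a sombrero", {"red", "fur", "hat"}),
    ("godzilla attacks tokyo", {"scales", "city", "smoke"}),
    ("a hamster eating seeds", {"fur", "seeds"}),
]

# Count how often each caption word co-occurs with each visual feature.
counts = defaultdict(lambda: defaultdict(int))
for caption, features in dataset:
    for word in caption.split():
        for feat in features:
            counts[word][feat] += 1

def score_features(prompt):
    """Sum co-occurrence counts over the prompt's words for every feature."""
    totals = defaultdict(int)
    for word in prompt.split():
        for feat, n in counts[word].items():
            totals[feat] += n
    # The highest-scoring features are the ones this "model" would try to render.
    return sorted(totals, key=totals.get, reverse=True)

print(score_features("hamster in tokyo"))  # "fur" ranks first, via "hamster"
```

Because such a system only tallies statistical associations, it has no notion of how a hamster or a city actually behaves, which is exactly why the real model's output can go wrong in surreal ways.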

Illustration: WIRED Staff/Craiyon

Some of those creative ideas, however, have taken DALL-E Mini in dubious directions. The system was not trained on explicit content, and it is designed to block certain keywords. Even so, users have shared images generated from prompts referencing war crimes, school shootings, and the World Trade Center attacks.

AI-assisted image manipulation, including faked images of real people known as deepfakes, has become a concern for AI researchers, lawmakers, and nonprofits that deal with online harassment. Advances in machine learning could enable many useful applications of AI-generated imagery, but also malicious ones, such as spreading lies or hate.

In April of this year, OpenAI unveiled DALL-E 2, a successor to the original capable of producing images that resemble photographs, and illustrations that look like the work of a professional artist. OpenAI has said DALL-E 2 may prove more problematic than the original system because it can generate much more convincing images. The company says it reduces the risk of misuse by filtering the system's training data and restricting keywords that could lead to unwanted output.

OpenAI has limited access to DALL-E and DALL-E 2 to selected users, including artists and computer scientists who are asked to follow strict rules; that approach, the company says, will let it "learn about the technology's capabilities and limitations." Other companies are building their own image-generation tools at a rapid clip. In May of this year, Google announced a research system called Imagen that it says can generate images at a quality level similar to DALL-E 2; another Google system, Parti, announced last week, uses a different technical approach. Neither is publicly available.

Don Allen Stevenson III, an artist with access to OpenAI's more powerful DALL-E 2, has used it to mock up ideas and speed the creation of new art, including augmented-reality content such as Snapchat filters that turn a person into a cartoon lobster or a Bored Ape-style illustration. "I feel like I'm learning a whole new way to be creative," he says. "It allows you to take more risks with your ideas and try more ambitious projects, because it supports a lot of iteration."

Stevenson says he has run into the restrictions OpenAI has programmed in to prevent certain content from being created. "Sometimes I forget there are barriers, and I have to be reminded by warnings from the app" that his access could be revoked, he says. But he doesn't see them as limiting his creativity, because DALL-E 2 is still a research project.

Hugging Face's Delangue says it is good that DALL-E Mini's creations are far rougher than those made with DALL-E 2, because the glitches clearly signal that the images aren't real and were generated by AI. He argues that this has allowed DALL-E Mini to help people learn firsthand about AI's new image-making capabilities, which have largely been hidden from the public. "Machine learning is becoming the new default way of building technology, but there's a disconnect, with companies building these tools behind closed doors," he says.

Illustration: WIRED Staff/Craiyon

The constant stream of DALL-E Mini content has also helped the company mitigate technical problems, Delangue says, as users flag issues such as sexually explicit output or bias. A system trained on images from the web may, for example, be more likely to show one gender than another in certain roles, reflecting deep-rooted societal biases. Asked to depict a "doctor," DALL-E Mini shows male-looking figures; asked to draw a "nurse," it shows women.

Sasha Luccioni, a researcher working on AI ethics at Hugging Face, says the influx of DALL-E Mini memes made her realize the importance of developing tools that can detect or measure social bias in these new kinds of AI models. "I definitely see ways they can be both harmful and beneficial," she says.

Some of those vices may become increasingly difficult to guard against. Dayma, DALL-E Mini's creator, concedes it is only a matter of time before widely available tools like his can generate more photorealistic images too. But he thinks the AI-generated memes circulating over the past few weeks may have helped prepare people for that eventuality. "You know it's coming," Dayma says. "But I hope DALL-E Mini helps people understand that when they see an image, they should know it isn't necessarily real."

Credit: www.wired.com
