What is DALL-E?
DALL-E is an artificial neural network developed by OpenAI that can create realistic images and artwork in any style from a simple, natural-sounding text prompt. The first version was released in January 2021 and was succeeded by the beta of DALL-E 2 in July 2022, when OpenAI sent invitations to one million waitlisted users to try it. DALL-E gets its name from a combination of WALL-E (the beloved Pixar film) and the late surrealist artist Salvador Dalí.
Based on what we have seen from DALL-E, this seems like a very fitting name.
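For readers curious what this text-to-image workflow looks like in practice, here is a minimal sketch in Python. The beta described in this article is used through a web interface, so programmatic access via OpenAI's Images API, along with the model name, prompt and image size below, are illustrative assumptions rather than part of the beta itself.

    # Minimal sketch, assuming programmatic access via OpenAI's Python SDK.
    # The model name, prompt and size are illustrative assumptions only.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    response = client.images.generate(
        model="dall-e-2",   # assumed model identifier
        prompt="a surrealist oil painting of a robot tending a garden",
        n=1,                # number of images to generate
        size="1024x1024",   # a supported square output size
    )

    print(response.data[0].url)  # URL of the generated image

The point of the sketch is simply that the entire "creative brief" is a single sentence of text; everything else about the composition is left to the model.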
What industries could DALL-E disrupt?
Considering its great creative capabilities, some speculate that DALL-E 2 could someday replace graphic designers, artists and photographers.
By increasing the supply of artistic assets in the marketplace, DALL-E will eventually drive down the cost of those assets. In particular, DALL-E could generate better stock imagery, and generate it faster. This would be excellent for smaller companies, which could become more creatively competitive while keeping costs low. It would also help larger companies with strict brand guidelines, since they would gain more control over the stock imagery they work with. But what does this mean for designers and artists?
Although people fear that AI could undermine creative professions, the truth is that it will only augment how creatives work. No matter how much data DALL-E is fed, its compositions are limited to what it has been trained on. In other words, it cannot create something entirely new.
Practically speaking, this means the emergence of new types of jobs, including positions that specialize in creating artwork with DALL-E and roles that fold AI engines into the development process.
Finally, when it comes to realizing specific artistic visions, alternatives like Photoshop will always be easier, because producing exactly what you want to see beats going back and forth with an AI. For example, to design a poster with multiple lines of text, you would never ask an AI engine to render the copy; you would simply set it yourself in Photoshop. As one blogger wrote:
“Working with DALL-E definitely still feels like attempting to communicate with some kind of alien entity that doesn’t quite reason in the same ontology as humans, even if it theoretically understands the English language.”
To us, DALL-E shows great potential to become a tool for designers to use but shouldn’t be considered a replacement.
Ethical Concerns
The societal implications of DALL-E have prompted discussions about several ethical concerns surrounding its image-generating possibilities and its implicit biases. Bias is always a concern with AI because a model will only generate or recognize data based on how it was programmed and what data it was trained on. And, like anyone else, developers carry implicit biases that often influence both the programming and the selected training data.
Upon investigation, third-party researchers found that DALL-E tended to overrepresent Caucasian people in Western settings. Researchers also noted that DALL-E produced more images of men when no gender was specified in the prompt.
What about creating harmful or offensive images?
DALL-E has built-in limits on what people can create. In particular, the AI cannot be used to generate realistic images of actual people, violent images, grotesque scenes, adult content, political content, and so on. OpenAI hopes that these restrictions, together with its thorough content policy, will make DALL-E safer.
As DALL-E 2 is still in its beta phase, OpenAI is using the faults found by users to improve the model and compensate for algorithmic bias wherever possible. On July 18th, OpenAI implemented a new technique so that DALL-E would generate image sets that more accurately represented the diversity of the world’s population. After implementation, users were 12x more likely to say that DALL-E generated images of people from diverse backgrounds.
Who owns images made with DALL-E?
As DALL-E becomes more widely used and distributed throughout the internet, the question of copyright becomes important. After all, how can artists who work with DALL-E claim that the content they have produced is entirely theirs?
The legal considerations of AI-produced artwork are complex.
For starters, OpenAI retains ownership of all images created with DALL-E so that it can enforce its content policy; however, it grants all paid users full rights to reproduce, reprint, sell and merchandise the images they create. Essentially, users can benefit from what they make, but OpenAI has the final say.
While this may be fine for some, critics say that OpenAI's strict ownership of the output could cause problems for agencies or DALL-E clients who need full ownership of creative assets before they can publish. The adoption of AI-generated artwork will surely lead to interesting legal precedents surrounding fair use and ownership rights.
Final Thoughts
Regardless of how you feel about DALL-E and other AI image generators, their eventual integration into everyday life is inevitable. However, between the ethical concerns, the legal considerations and the still-improving quality of these algorithms, the exact future of AI image generators is not yet clear. That uncertainty is making many people, particularly within the marketing industry, anxious.
We are confident that DALL-E will not be the end of creatives. Instead, it will be another tool for designers and artists, one that solves real pain points and loosens the rigid economics of stock imagery.
Throughout the 21st century, technological progression has changed nearly every facet of society, artistry included. Already, digital tools such as Photoshop have redefined what it means to be a creator and how art is created and shared. What hasn’t changed, though, is the vital role artists and designers have within society and business.
Once upon a time, people feared the emergence of Photoshop, just as they fear DALL-E today: when a new technology begins to disrupt an industry, there is always a fear that it will outright replace what it is disrupting.
Granted, sometimes it does; however, throughout the centuries, disruptive technologies have come and gone, and none have replaced the artist. The thing is, you can’t replicate creativity; being an artist involves an innately human touch – something that can’t be substituted with 1s and 0s, no matter how clever.