AI Fails

AUI: Artificial Un-Intelligence #4

Andrew Hull

If you’ve paid any attention to AI recently, you’ve noticed that the AI image generator DALL-E 2 is all the rage right now. That’s because it’s remarkably good at producing realistic images and stunning art, fast. 

Not long ago in the Peripheral Vision newsletter, we highlighted Cosmopolitan’s magazine cover illustrated by DALL-E 2. Cosmopolitan says the cover took only 20 seconds to make: 

How does it work? 

AI image generators like DALL-E 2 are remarkably simple to use. In fact, every other week we use a smaller version called DALL-E mini to give our AI copywriter, Artie, a new headshot for our newsletter. 

Users simply input a description of the image they want, and the AI outputs a series of unique, related images. Let’s try it. 

Our input is “a corgi in a coconut tree.” The output: 

Admittedly, some of the images are a bit disturbing (looking at you, bottom row, middle corgi). This is likely because DALL-E 2 is far more advanced than DALL-E mini. 

Tech writer Aditya Singh gives an overview on what’s going on behind the scenes: “A text encoder takes the text prompt and generates text embeddings. These text embeddings serve as the input for a model called the prior, which generates the corresponding image embeddings. Finally, an image decoder model generates an actual image from the embeddings.” 
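To make that data flow concrete, here is a toy sketch of the three-stage pipeline Singh describes. The real components are large neural networks; these tiny stand-in functions are hypothetical and exist only to illustrate how a prompt flows through text embedding, image embedding, and decoding.

```python
# Toy illustration of the DALL-E 2 pipeline described above:
# prompt -> text embedding -> (prior) image embedding -> decoded image.
# All three functions are simplified stand-ins, not real models.
import hashlib


def text_encoder(prompt: str) -> list[float]:
    # Stand-in text encoder: hash the prompt into a fixed-length
    # 8-dimensional "embedding" of floats in [0, 1].
    digest = hashlib.sha256(prompt.encode()).digest()
    return [b / 255.0 for b in digest[:8]]


def prior(text_embedding: list[float]) -> list[float]:
    # Stand-in prior: map the text embedding to a same-sized
    # "image embedding" with a simple fixed transformation.
    return [0.5 * x + 0.1 for x in text_embedding]


def image_decoder(image_embedding: list[float]) -> list[list[float]]:
    # Stand-in decoder: "render" a tiny grayscale grid whose pixels
    # are derived from pairs of embedding values.
    return [[(a + b) / 2 for b in image_embedding] for a in image_embedding]


def generate(prompt: str) -> list[list[float]]:
    # The full pipeline: encode text, run the prior, decode an image.
    return image_decoder(prior(text_encoder(prompt)))


image = generate("a corgi in a coconut tree")
print(len(image), len(image[0]))  # an 8 x 8 grid of "pixels"
```

In the real system, each stage is a separately trained model, and the decoder works by diffusion rather than a direct mapping; the sketch only captures the hand-offs between stages.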

Some experts are concerned. Why? 

Amid the buzz, though, some experts are raising concerns that the technology is getting too good at its job. They believe image-generating AI like DALL-E 2 could be used maliciously, for instance to mass-produce images that reinforce biases or spread harmful disinformation. 

Some worry that the AI, or more likely copycats of it, could make photorealistic explicit images of real people, like celebrities or politicians. They worry, too, that the AI could get so good that deepfake images become indistinguishable from real ones. 

Should we be concerned? 

Thankfully, OpenAI (the developer of DALL-E 2) says it has safeguards in place: it used "advanced techniques to prevent photorealistic generations of real individuals' faces, including those of public figures." 

OpenAI does allow users to share photorealistic images of nonexistent people, however. Some argue that independent researchers need to be granted access to DALL-E 2 to further analyze its capabilities and risks. 

So, should we be concerned? 

It remains to be seen. We know that’s a cop-out. 

The power of humans + tech 

It’s important to note that despite the power of AI, humans bear an obligation to constrain it. Preventing the perpetuation of bias and disinformation through AI is the responsibility of the humans creating the technology. 

The good news: many of these tools are available only to a handful of people, and AI developers can enforce policies against malicious use of their tools. 

Regardless, we’re seeing a remarkable technological breakthrough, one proving that we’re just scratching the surface of what humans and tech can do together. At Invisible, that concept is baked into our business model. 

We combine a human workforce and automation to carry out business processes for our clients. Here’s an example of how we used that combo to help a company find rare leads. 

Interested in how we can leverage both humans and technology to help you meet business goals? Get a custom demo.

Tune in next week for more tech fails.

Schedule a call to learn more about how Invisible might help your business grow while navigating uncertainty.

Schedule a Call
Request a Demo