DALL-E

The AI image maker DALL-E is now out in beta.

OpenAI has released the beta version of DALL-E, its artificial intelligence (AI) image generator, and announced its pricing.

The company says that the AI system, which generates realistic images and art from text prompts, will soon be available to everyone.

Millions of people signed up for early access, and OpenAI, the company behind DALL-E, will invite one million people from the waitlist to use its latest version in the coming weeks.

Invited users will get 50 free credits to use in the first month, and then 15 free credits every month after that. Each credit is worth four images generated from one original prompt, or three images from an edit or variation prompt. Users who need more than the free allowance can pay $15 for a bundle of 115 credits. OpenAI says that artists who need financial help will be able to apply for free access.
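As a rough illustration of how that credit math works out (the figures come from the pricing above; the function below is a hypothetical sketch for illustration only, not an OpenAI API):

```python
# Hypothetical sketch of the DALL-E beta credit math described above.
# One credit yields 4 images for an original prompt, or 3 for an edit/variation prompt.

IMAGES_PER_ORIGINAL_PROMPT = 4
IMAGES_PER_EDIT_PROMPT = 3

def max_images(credits: int, edit_prompts: int = 0) -> int:
    """Upper bound on images from a credit balance; any credits not spent
    on edit/variation prompts are assumed to go to original prompts."""
    original_prompts = credits - edit_prompts
    return (original_prompts * IMAGES_PER_ORIGINAL_PROMPT
            + edit_prompts * IMAGES_PER_EDIT_PROMPT)

print(max_images(50))   # 200 images from the first month's free credits
print(max_images(15))   # 60 images per month afterwards
print(max_images(115))  # 460 images from the $15 bundle of 115 credits
```

So the free allowance covers up to 200 images in the first month and 60 per month thereafter, while a $15 top-up adds up to 460 more.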

Images created in the beta can also be used for commercial purposes; for example, users will be able to print them on shirts or sell items featuring AI-generated images. However, OpenAI won't let users upload pictures with realistic faces or explicit content, because it worries that malicious actors could use the technology to spread disinformation, create deepfakes, and cause other harms.

DALL-E 2, the follow-up to DALL-E, was announced in April, and 100,000 people have already signed up for it. OpenAI says the wider access was made possible by new techniques that reduce bias and toxicity in DALL-E 2's output, along with updated policies governing what images the system will generate.

DALL-E 2 was trained on a dataset from which violent, sexual, and hateful images had been filtered out, but this isn't foolproof. Google recently said it wouldn't release Imagen, its own image-generating AI model, because of the potential for misuse. Meta, meanwhile, only lets "famous AI artists" use Make-A-Scene, its system for generating art-focused images.

OpenAI points out that the hosted DALL-E 2 has other safety features, such as "automated and human monitoring systems," designed to stop the model from memorizing faces that appear frequently on the internet. Still, the company says more work needs to be done.

In a blog post, OpenAI says: "Expanding access is an important part of deploying AI systems in a responsible way because it lets us learn more about how they are used in the real world and keep improving our safety systems. We're still looking into how AI systems like DALL-E might show biases in their training data and how we can fix that."
