The Daring and Doubt of OpenAI


Jeremy Kamber, Editor, Staff Writer

Holy smoke!

That was my reaction when seeing previews of what OpenAI’s new brainchildren, ChatGPT and DALL·E 2, can do. Trained using “Reinforcement Learning from Human Feedback,” or RLHF, ChatGPT can answer questions, give advice, hold conversations, generate lists, and even reject inappropriate requests, all without the stilted verbosity one might expect from talking with an algorithm. DALL·E 2, meanwhile, is an image-generation model that, given a text description, can create a realistic, high-definition image to match. It uses a process called diffusion: the algorithm begins with random patterns of dots spread across a canvas and, step by step, adjusts that pattern – removing noise as it recognizes features relevant to the description – until a coherent image emerges. The prospects of these models – in business, academia, content creation, art, and research – are nothing short of miraculous.
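To build intuition for that noise-to-image process, here is a deliberately toy sketch of a diffusion-style loop: it starts from pure random noise and, over many steps, nudges each “pixel” toward a target pattern while the remaining noise shrinks. This is an illustrative assumption, not DALL·E 2’s actual model (which uses a learned neural network, not a fixed target).

```python
import random

def toy_reverse_diffusion(target, steps=20, seed=0):
    """Toy illustration of diffusion-style generation: begin with a
    canvas of random dots, then repeatedly pull each value toward the
    target pattern while injecting ever-smaller amounts of noise."""
    rng = random.Random(seed)
    canvas = [rng.random() for _ in target]  # pure random noise
    for step in range(steps):
        noise_level = 1.0 - (step + 1) / steps  # noise shrinks each step
        canvas = [
            c + 0.5 * (t - c) + noise_level * 0.1 * (rng.random() - 0.5)
            for c, t in zip(canvas, target)
        ]
    return canvas

# After enough steps, the canvas closely matches the target pattern.
result = toy_reverse_diffusion([0.0, 0.5, 1.0])
```

In the real system, the “pull toward the target” is replaced by a trained network predicting which noise to remove, guided by the text prompt.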

For example, ChatGPT can be used in software development. Debugging can be quite tiresome, especially in a large, complex codebase. A user can give ChatGPT a snippet of code, and it will return a detailed description of what went wrong and how to fix it, asking intelligent follow-up questions along the way to gather more context and sharpen its solutions. For anyone who has found themselves scouring Stack Overflow for answers, this can cut search time significantly and yield an explanation tailored to their specific use case. However, Stack Overflow itself banned ChatGPT-generated answers because they risked flooding the site with incorrect responses: many of ChatGPT’s answers are mostly correct but nevertheless contain errors that undermine their validity. Content creators can also use the tool to generate content ideas, a task that grows harder as a social media account ages and the pool of relevant, unique ideas shrinks. Instead of hiring copywriters, businesses may rely on ChatGPT’s prose to generate website copy, write blog posts, or craft engaging tweets. Users can even specify a writing style, tailoring the algorithm’s language to match a particular tone.
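As a rough sketch of that debugging workflow, the snippet below asks the model to diagnose a deliberately buggy function through OpenAI’s Python client. The model name, prompt wording, and buggy example are illustrative assumptions, not OpenAI’s recommendations; the API call only runs if an API key is configured.

```python
import os

# A deliberately buggy snippet for the model to diagnose (assumed example):
# it crashes with ZeroDivisionError when given an empty list.
BUGGY_SNIPPET = '''
def average(nums):
    return sum(nums) / len(nums)
'''

def build_debug_prompt(snippet: str) -> str:
    # Ask for both a diagnosis and a fix, mirroring the workflow above.
    return (
        "Explain exactly what is wrong with this code and how to fix it:\n"
        + snippet
    )

if os.environ.get("OPENAI_API_KEY"):  # only call the API when a key is set
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model choice
        messages=[{"role": "user",
                   "content": build_debug_prompt(BUGGY_SNIPPET)}],
    )
    print(response.choices[0].message.content)
```

A typical response would point out the division-by-zero case and suggest guarding against empty input, though, as the Stack Overflow episode shows, such answers still need human review.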

DALL·E 2 may seem more of a party trick than anything else, but it can play a much greater role in fields centered on creativity. The market for paintings has grown since the coronavirus lockdowns, and artists may well find themselves competing with DALL·E 2’s renderings. Why spend hundreds or thousands of dollars on a painting when one can purchase a generated picture for a fraction of the price? Since DALL·E 2 accepts artistic style as part of its input, one can also ask it to “make a painting like Picasso” and receive a believable output. This raises questions about the legality of copying artistic styles. As DALL·E 2 improves in accuracy, some of its work may become indistinguishable from well-known pieces. Would that constitute copyright infringement? Time will tell; as of right now, the laws surrounding this are unclear or nonexistent. For the moment, DALL·E 2 can continue operating without altering its training data – the real-world data that data scientists feed into the algorithm to give the AI a large pool of reference material from which to construct its output. If some of that reference data is copyrighted or is used without its owner’s knowledge, it could pose significant problems for DALL·E 2’s ability to operate within the law.

Both of these models represent impressive leaps in AI, and the pace of improvement is striking. DALL·E 2, for example, can render images at four times the resolution of its first iteration, and ChatGPT’s predecessor, InstructGPT, while still impressive, was nowhere near as fluent or developed as its successor. This rapid growth, however, may be dangerous. As more people trust ChatGPT with more ambitious projects, an error in its output could have detrimental consequences for the infrastructure that depends on it. DALL·E 2 may be used to create false images more convincing than ever, and scientists would have to develop ways to distinguish a real photograph from one produced by diffusion. While such detection technology will most likely be developed, rigorously testing every image on social media would become quite expensive, even if checks were restricted to popular accounts. Regardless of these potential dangers, such rapid developments are very exciting – what a time to be alive!


For more information, please consult these helpful sites:

https://openai.com/blog/chatgpt/ 

https://www.theverge.com/23488017/openai-chatbot-chatgpt-ai-examples-web-demo?utm_source=join1440&utm_medium=email&utm_placement=newsletter 

https://openai.com/dall-e-2/ 

https://arxiv.org/abs/2204.06125 

https://www.theverge.com/2022/12/5/23493932/chatgpt-ai-generated-answers-temporarily-banned-stack-overflow-llms-dangers 

https://medium.com/ai-network/dall-e-2-meaning-limitations-and-solutions-a988c87ddeae