4 Biggest Issues With Modern AI Tools
Stalker (dir. Andrei Tarkovsky), 1979

AI is in the spotlight in 2023. ChatGPT, released on November 30, 2022, became the first app to reach 100 million users within two months of launch. AI is breaking into the mainstream because people see practical value in AI tools: they make work easier, faster, and more efficient. At the same time, these tools are far from perfect, and in this article I want to explore the dark side of AI tools and discuss four issues they have.

1. AI tools are prone to hallucination

AI tools are still prone to errors and mistakes. Hallucination is when you ask ChatGPT a question and it provides an incorrect answer. The system rarely admits that it doesn’t know something and can produce plausible-sounding but incorrect answers. Only after validation may you find that the answer the AI provided was completely wrong.

Asking ChatGPT “What is heavier: a kilo of water or a kilo of air?”

In many cases, validation cannot be done automatically: it requires human moderators, people who have relevant expertise in the field and the time to do the work. Even a large team of moderators might not have the capacity to analyze all of the AI-generated output. No wonder large resources like Stack Overflow started banning answers provided by ChatGPT because many of them were wrong.

Stack Overflow Bans ChatGPT For Constantly Giving Wrong Answers
One of the internet’s largest coding resources has temporarily banned the AI chatbot after users answered programming questions with its responses.

It’s vital to understand why hallucination occurs in the first place. Unlike tools like Apple’s Siri, which search the internet for answers (a result Siri provides is basically a reference to an article or video), ChatGPT constructs the answer on the fly, word by word, choosing the most likely token that comes next in the sentence. The model behind ChatGPT is rooted in prediction: large language models like GPT predict the next best token. So it’s nearly impossible to completely prevent AI hallucinations, but it is definitely possible to fine-tune the model to minimize the areas where they occur.
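To make this concrete, here is a minimal, purely illustrative sketch of greedy next-token decoding over a made-up probability table (not a real model): at every step the loop picks the most probable continuation, so if the learned statistics happen to favor a wrong claim, the output comes out fluent but incorrect.

```python
# Purely illustrative sketch: a made-up next-token probability table and greedy
# decoding. A real LLM learns these probabilities from text; the loop below only
# ever asks "what is most likely next?", never "is this true?".

toy_model = {
    "<start>": {"A": 1.0},
    "A":       {"kilo": 1.0},
    "kilo":    {"of": 1.0},
    "of":      {"water": 0.55, "air": 0.45},
    "water":   {"is": 1.0},
    "air":     {"is": 1.0},
    "is":      {"heavier": 0.7, "lighter": 0.3},  # a learned bias, not physics
    "heavier": {"<end>": 1.0},
    "lighter": {"<end>": 1.0},
}

def greedy_decode(model, max_tokens=10):
    token, output = "<start>", []
    for _ in range(max_tokens):
        candidates = model[token]
        # pick the single most probable continuation; no fact-checking happens here
        token = max(candidates, key=candidates.get)
        if token == "<end>":
            break
        output.append(token)
    return " ".join(output)

print(greedy_decode(toy_model))  # -> "A kilo of water is heavier"
```

The toy model happily asserts that a kilo of water is heavier than a kilo of air simply because that continuation is statistically more likely in its table, which is exactly the shape a real hallucination takes.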

The context in which hallucination occurs is also essential. If a hallucination happens in a business context, the business might easily lose money by following incorrect advice it got from AI. But it’s far more dangerous in a medical context, because there it can put human lives in danger. OpenAI introduced a safety mechanism in ChatGPT that prevents users from asking questions about their health.

ChatGPT doesn’t want to answer health-related questions.

Yet this mechanism is not ideal, and it’s possible to trick the system by slightly altering the prompt.

ChatGPT provides a list of possible causes of a health problem.

2. AI tools don’t credit the original sources

Both text-to-text tools like ChatGPT and text-to-image tools like Midjourney and Stable Diffusion are trained on massive quantities of data: basically all publicly available information on the internet. But when the tools generate a response to a prompt, they don’t reference the original sources.

The problem becomes even more noticeable when you use text-to-image tools like Stable Diffusion and Midjourney. The systems are trained on millions of images from across the internet, and when you work with text-to-image AI tools for quite some time, you notice that they often replicate existing illustration styles.

The image generated by Midjourney has a very familiar style.

No wonder Stability AI, the company behind Stable Diffusion, was sued by Getty Images, which alleged that Stability AI used millions of images from its photo bank to train the model and didn’t pay a single dollar for them.

Getty Images is suing the creators of AI art tool Stable Diffusion for scraping its content
Getty Images claims Stability AI ‘unlawfully’ scraped millions of images from its site. It’s a significant escalation in the developing legal battles between generative AI firms and content creators.

AI tools will likely have to provide references to the original sources along with their output. For example, when Microsoft introduced GPT-powered chat in Bing, it added a sub-section called “Learn more” that cites the sources the AI tool used to generate the response.

Bing cites sources in Learn more section. Image by Engadget.

3. AI tools can help people manipulate other people

We tend to think we’re unique in how we consume information and evaluate ideas. In reality, we typically follow similar or even the same approaches as many other people in our niche, be it management, design, or hiring. We all follow certain patterns, and AI tools are very good at recognizing patterns. Plus, tools like ChatGPT were intentionally designed to imitate human conversation.

As a result, the tool is remarkably good at generating texts that communicate a specific message. For instance, with the right prompt, ChatGPT can write a cover letter that sounds much better than one written by a human.

Cover letter generated by ChatGPT.
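For illustration, here is a minimal sketch of how such a letter could be generated programmatically. It assumes the openai Python package with the pre-1.0 ChatCompletion interface that was current in early 2023, an API key exported as OPENAI_API_KEY, and an invented example prompt; none of these details come from the article itself.

```python
# Sketch only: assumes the openai Python package (pre-1.0 interface, early 2023)
# and an API key in the OPENAI_API_KEY environment variable.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Example prompt with invented details: the more specific the prompt,
# the more convincing and personal the generated letter sounds.
prompt = (
    "Write a concise, enthusiastic cover letter for a senior product designer "
    "role at a fintech startup. Mention six years of experience and a passion "
    "for accessible design."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

print(response["choices"][0]["message"]["content"])
```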

Large corporations are trying to mitigate this kind of misuse. For example, Microsoft doesn’t allow users to generate personal texts like cover letters using the AI chat integrated into Bing.

Bing Chat doesn’t allow you to generate cover letters. Image by mayank_jee

4. AI tools make it harder for people to understand what is real and what is fake

The rapid evolution of AI technology raises an important question: how will we govern the technology? If we give everyone the power to do whatever they want with it, we can easily face a huge problem: many users will use the technology for harm. For example, scammers can use AI to copy the voices of people we know (our relatives and friends) and trick us. Imagine receiving a call from someone who sounds exactly like your close friend, and this person asks you to do something urgently (e.g., transfer a certain amount of money). For the very same reason, we might see a new wave of misinformation (fake news).

We’re entering the era of synthetic reality: a reality in which it’s hard to tell what is real and what is fake.

Unfortunately, government policy develops at a much slower pace than AI technology progresses. If this trend continues, we might end up living in a very dark world in which we won’t be able to tell what is real and what is fake.
