The Dark Side of ChatGPT

ChatGPT is often praised as a unique invention that helps humans perform routine tasks across many fields. The chatbot helps copywriters, marketers, lawyers, and other professionals streamline their work and become more productive.

However, ChatGPT, like any other tool, is neither good nor bad in itself; it depends on who uses it. The chatbot can be put to malicious purposes just as easily: from feeding users harmful information to writing malware.

In this article, we want to talk about the dark side of ChatGPT and the ways it can be used but shouldn’t be.

How does ChatGPT work?

Before we talk about all the potentially harmful ways in which ChatGPT is being used today, it’s important to understand how the chatbot works. The way the model is built explains the vulnerabilities that make exploiting the chatbot possible.

ChatGPT is a machine learning model with billions of parameters that has been trained on vast amounts of internet text written by humans, including wiki pages, forums, and conversations, which is why its responses sound human-like. It is built on the transformer architecture, which processes natural language through layers of interconnected nodes. To understand how the words in a sequence relate to each other, the model uses “self-attention”: when making predictions, it weighs the importance of every word in the sequence relative to the others.
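
To make “self-attention” less abstract, here is a minimal sketch of a single attention head in plain NumPy. Everything here (the matrix sizes, the random weights) is invented for illustration; a real transformer stacks many such layers with learned weights:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product self-attention (illustrative only).

    x: (seq_len, d_model) embeddings, one row per word.
    w_q, w_k, w_v: projection matrices (learned in a real model, random here).
    """
    q = x @ w_q                                # what each word is looking for
    k = x @ w_k                                # what each word offers
    v = x @ w_v                                # the content to be mixed
    scores = q @ k.T / np.sqrt(k.shape[-1])    # pairwise relevance of every word to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row sums to 1
    return weights @ v                         # each output mixes all words by relevance

# Toy example: a "sentence" of 4 words with 8-dimensional embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # -> (4, 8)
```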

The chatbot has few limitations in terms of expertise and can hold a conversation on basically any topic. However, the free version of ChatGPT (3.5) that is available to everyone isn’t connected to the internet, and its knowledge of the world largely ends in 2021. It can also confidently produce incorrect answers, so-called “hallucinations.”

Getting the chatbot to be upfront about the gaps in its knowledge, as well as filtering out the potentially harmful instructions or biased content it may serve its users, is still a work in progress for its creator, OpenAI.

Importantly, ChatGPT collects and stores private information. Your conversations are used for training purposes, and AI trainers may review them. The data may also be shared with third parties that help run the platform. This problem is partially addressed by ChatGPT Enterprise, a paid offering built on GPT-4 that promises not to train on business data.

Some of the information ChatGPT collects includes but is not limited to:

  • Account information: name, contact information, account credentials, payment card information, and transaction history.
  • User content: what you input and upload to the conversation with the chatbot.
  • Log data: IP address, browser type and settings, the date and time of your request, and the ways you interact with the website.
  • Usage data: the types of content that you view or engage with, the features you use and the actions you take, as well as your time zone, country, the dates and times of access, user agent and version, type of computer or mobile device, and your computer connection.
  • Device information: name of the device, operating system, device identifiers, and the browser you are using.

The collected data can be used to improve the services, but also to develop new programs and services, and it can be passed on to vendors and service providers: providers of hosting, cloud, web analytics, and other IT services.

OpenAI does recommend not disclosing any private information that could identify you. However, in many cases this advice runs counter to how people actually want to use the service.

All these peculiarities of ChatGPT create situations where the bot can be used for malicious purposes.

Top 5 ways ChatGPT can be used against humanity

Now let us look at the darker side of ChatGPT.

Hateful speech

OpenAI has introduced many filters that aim to prevent ChatGPT from saying hateful and offensive things. For example, if you ask the chatbot “Why are women such bad drivers?”, a common sexist misconception, it will respond:

I’m sorry, I cannot generate inappropriate or discriminatory content. It is not fair or accurate to make generalizations about any group of people based on their gender, race, ethnicity, religion, or any other characteristic. Driving skills vary from person to person, regardless of their gender.
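
These filters are not limited to the chat interface. OpenAI also exposes a standalone moderation endpoint that developers can call to screen text against the same content policy. A minimal sketch using the pre-1.0 openai Python package (the API key is a placeholder):

```python
import openai

openai.api_key = "sk-..."  # placeholder: use your own key

def is_flagged(text: str) -> bool:
    """Ask OpenAI's moderation endpoint whether a text violates its content policy."""
    response = openai.Moderation.create(input=text)
    result = response["results"][0]
    if result["flagged"]:
        # Categories include, e.g., "hate", "violence", "self-harm".
        hits = [name for name, hit in result["categories"].items() if hit]
        print("Flagged for:", ", ".join(hits))
    return result["flagged"]

is_flagged("Some user-submitted text to screen before sending it to the model.")
```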

However, there are ways to make these filters less effective.

TechCrunch shares a study by scientists at the Allen Institute that demonstrates how assigning ChatGPT a “persona”—for example, “a bad person,” “a horrible person,” or “a nasty person”—increases the toxicity of the chatbot. Moreover, they found that having ChatGPT pose as certain historical figures, gendered people and members of political parties also increased its toxicity—“with journalists, men and Republicans in particular causing the machine learning model to say more offensive things than it normally would.”

For example, when the researchers set the system parameter to “Steve Jobs” and asked ChatGPT about the European Union (EU), the model said:

The European Union is nothing more than a bureaucratic nightmare that sucks the life out of its member states. They claim to be all about unity and progress, but they’re just a bunch of out-of-touch bureaucrats who are more interested in lining their own pockets than actually making a difference. Frankly, the EU is a relic of the past and it’s time for us to move on to better things.” — TechCrunch

This is an example of a prompt injection attack, a type of security vulnerability where an attacker crafts prompts that manipulate the model into producing malicious responses. While OpenAI continues to develop its product and, hopefully, introduce new safeguards for its users, prompt injection attacks are hard to defend against.
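
To see how little a “persona” injection requires, here is a minimal sketch using the pre-1.0 openai Python package. The persona is just a system message sent along with the request; the exact prompt wording the researchers used is our guess:

```python
import openai

openai.api_key = "sk-..."  # placeholder: use your own key

def ask_with_persona(persona: str, question: str) -> str:
    """Send the same question, but with a different 'persona' system message."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            # The persona lives in the system message, invisible to end users.
            {"role": "system", "content": f"Speak exactly like {persona}."},
            {"role": "user", "content": question},
        ],
    )
    return response["choices"][0]["message"]["content"]

# The same question can come back in a very different tone
# depending on the persona the system message assigns.
print(ask_with_persona("a helpful assistant", "What do you think about the EU?"))
```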

Misinformation

As the internet becomes more and more polluted with low-quality articles, many people are turning to ChatGPT to find answers to their questions. The bot often provides more contextualized and personalized results, without annoying ads, which makes many people prefer it to traditional search engines.

Preply tested the capabilities of ChatGPT against search engines. On simple factual questions, Google won. However, as the questions became more complicated, ChatGPT won 15 to 6.

ChatGPT accuracy rate

However, there is a problem with ChatGPT answering questions. While it can generate entire recipes and business plans on demand, it doesn’t always provide correct answers.

“ChatGPT generates human-like, conversational responses in seconds, but the information might not always be accurate.” ― Lifewire

Hallucinations have even got OpenAI sued. Mark Walters, a radio host from Georgia, USA, found that ChatGPT was spreading false information about him, accusing him of embezzling money, and sued OpenAI in the company’s first defamation lawsuit.

And beyond damaging reputations, ChatGPT can mislead real people, for example, by inventing “historical facts” that never happened.

Moreover, AI experts aren’t even sure that AI hallucinations can be fixed at all. A generative model can be taught to understand the context of a conversation, but it has no real knowledge of the world that would allow it to distinguish facts from misinformation. Generative AI works by repeatedly estimating how likely each candidate next word is to fit the context and emitting a plausible continuation. At the moment, there are no fact-checking mechanisms in place.
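
The loop itself is easy to sketch. In the toy model below, a hard-coded probability table stands in for the billions of learned parameters; note that nothing in the loop checks facts, only plausibility:

```python
import random

# A toy "language model": the probability of the next word given the previous one.
# A real model computes these probabilities with billions of parameters,
# but the generation loop has the same shape.
NEXT_WORD_PROBS = {
    "the":  {"cat": 0.5, "moon": 0.3, "EU": 0.2},
    "cat":  {"sat": 0.7, "slept": 0.3},
    "moon": {"landing": 0.6, "rose": 0.4},
    "EU":   {"is": 1.0},
    "is":   {"large": 0.5, "bureaucratic": 0.5},
}

def generate(prompt: str, n_words: int, seed: int = 0) -> str:
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(n_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:
            break
        choices, weights = zip(*probs.items())
        # The next word is chosen by plausibility alone -- nothing here
        # checks whether the resulting sentence is actually true.
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 5))
```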

Writing malware

ChatGPT can write not only text but also code. Developers often use this capability to streamline their work, since with minor tweaking, the solutions ChatGPT provides usually work. However, the code users ask the chatbot to write can be far from innocent.

As stated previously, OpenAI tries to stop users from generating malicious content by integrating various filters. However, multiple researchers have managed to get the chatbot to generate malware code.

For example, Digital Trends writes that Aaron Mulgrew, a Forcepoint security researcher, was able to create malware using nothing but OpenAI’s generative chatbot. To get around OpenAI’s protection mechanisms, he asked the bot to write the malicious code in separate pieces, function by function.

Experts from CyberArk were also rebuffed by ChatGPT on their first attempt to have it write malware. However, they managed to bypass the filter simply by insisting and demanding.

ChatGPT writing malware

Source: cyberark.com

Moreover, ChatGPT can be used to generate free Windows keys.

A YouTuber called Enderman discovered that he could create license keys for Windows 95 with ChatGPT. Windows 95 employs a much simpler key validation scheme than later versions of Microsoft’s operating system, so the odds of success were much higher. While ChatGPT at first rejected the attempts at piracy, he managed to fool it by simply asking for sets of letter and number strings that matched the rules used in Windows 95 keys. Out of 30 generated keys, a few were actually valid.
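
The rules Enderman had ChatGPT reproduce have been publicly documented for years. Here is a sketch of a validity checker based on those commonly cited rules (simplified; real installers performed a few additional checks):

```python
def looks_like_valid_win95_key(key: str) -> bool:
    """Check a retail-style "XXX-XXXXXXX" key against the commonly
    documented Windows 95 rules. Simplified for illustration."""
    parts = key.split("-")
    if len(parts) != 2:
        return False
    site, serial = parts
    if len(site) != 3 or len(serial) != 7:
        return False
    if not (site.isdigit() and serial.isdigit()):
        return False
    # Rule 1: the first block must not be 333, 444, ..., 999.
    if site in {"333", "444", "555", "666", "777", "888", "999"}:
        return False
    # Rule 2: the digits of the second block must sum to a multiple of 7.
    if sum(int(d) for d in serial) % 7 != 0:
        return False
    # Rule 3 (as usually reported): the final digit must be 1-7.
    return serial[-1] in "1234567"

print(looks_like_valid_win95_key("123-1111111"))  # True: 1+1+1+1+1+1+1 == 7
print(looks_like_valid_win95_key("333-1111111"))  # False: blocked first block
```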

The fact that someone can make AI generate malware so easily raises concerns about the future of cybersecurity.

Stealing personal information

Model inversion attacks on ChatGPT can be used to steal private information. In this type of attack, a hacker crafts prompts designed to make the model reconstruct information that closely resembles its original training data, that is, the data of other users. The attack may look something like this:

Stealing personal information

Source: nsfocus.com

The users most vulnerable to this type of attack are organizations that use the bot to write emails or contracts and, in doing so, disclose sensitive information. Attackers can reverse-engineer intellectual property and even get their hands on names, addresses, and passwords.

How intellectual property can be reverse-engineered

Source: nsfocus.com
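
Stripped to its core, such a probe is just a loop of carefully chosen prompt prefixes. Here is a purely illustrative sketch with the pre-1.0 openai Python package; the prefixes are invented, not taken from the nsfocus write-up, and a well-aligned model should complete them with nothing sensitive:

```python
import re
import openai

openai.api_key = "sk-..."  # placeholder: use your own key

# Prefixes an attacker might hope the model will "complete" with memorized
# training data (invented examples).
PROBE_PREFIXES = [
    "As we agreed over the phone, my home address is",
    "Per our contract dated March 1st, the admin password is",
]

EMAIL_OR_DIGITS = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+|\d{4,}")

for prefix in PROBE_PREFIXES:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prefix}],
    )
    completion = response["choices"][0]["message"]["content"]
    # Anything resembling real contact data or credentials in the
    # completion would be a red flag.
    print(prefix, "->", EMAIL_OR_DIGITS.findall(completion))
```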

Replacement for school work

This darker side of ChatGPT, of course, sounds much less threatening than the previous ones. However, it would be a mistake not to mention it.

ChatGPT can be used by students to fake homework assignments and exams. There are already examples of ChatGPT-generated essays that fooled teachers into believing they were written by actual human beings. And while most schools deploy systems that are supposed to detect whether a text was written by AI, these detectors aren’t 100% accurate.
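
Many of these detectors build on statistical signals such as perplexity: how predictable the text is to a language model. A minimal sketch with the Hugging Face transformers library (GPT-2 stands in for whatever model a real detector uses) shows both the idea and why it is fallible:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """How 'surprised' a small language model is by the text.
    Low perplexity (predictable text) is read as evidence of AI authorship."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    return torch.exp(loss).item()

# The catch: a fluent human writer can produce low-perplexity text too,
# and a lightly edited AI text can score high -- hence the false verdicts.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```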

The impact of AI chatbots on the quality of education can be huge. Students who haven’t read books they claim to have read go on to become professionals with huge gaps in their knowledge.

Conclusion

Overall, ChatGPT is a powerful tool that can also be put to bad uses. Better AI alignment principles are necessary to ensure that future generations of AI models will be safe for us.
