The Suspicious Candy Truck for ChatGPT: BadGPT is the First Backdoor Attack on the Popular AI Model

Mai 19, às 10:59


4 min de leitura


0 leituras

ChatGPT entered into our lives in November 2022, and it found a place quite rapidly. It had one of the fastest-growing user bases in history thanks to its amazing capabilities. It reached 100 million users in...
The Suspicious Candy Truck for ChatGPT: BadGPT is the First Backdoor Attack on the Popular AI Model

ChatGPT entered into our lives in November 2022, and it found a place quite rapidly. It had one of the fastest-growing user bases in history thanks to its amazing capabilities. It reached 100 million users in a record-breaking two-month period. It is one of the best tools we have that can naturally interact with humans. 

But what is ChatGPT? Well, what is there to define it better than the ChatGPT itself? If we ask “What is ChatGPT?” to ChatGPT, it gives us the following definition: “ChatGPT is an AI language model developed by OpenAI that is based on the GPT (Generative Pre-trained Transformer) architecture. It is designed to respond to natural language inputs in a human-like manner, and it can be used for a variety of applications, such as chatbots, customer support systems, personal assistants, and more. ChatGPT has been trained on a vast amount of text data from the internet, which enables it to generate coherent and relevant responses to a wide range of questions and topics.” 

ChatGPT has two main components: supervised prompt fine-tuning and RL fine-tuning. Prompt learning is a novel paradigm in NLP that eliminates the need for labeled datasets by using a large generative pre-trained language model (PLM). In the context of few-shot or zero-shot learning, prompt learning can be effective, though it comes with the downside of generating possibly irrelevant, unnatural, or untruthful outputs. To address this issue, RL fine-tuning is used, which involves training a reward model to learn human preference metrics automatically and then using proximal policy optimization (PPO) with the reward model as a controller to update the policy.

We do not know the exact setup of ChatGPT as it is not released as an open-source model (thanks, OpenAI). However, we can find substitute models trained by the same algorithm, InstructGPT, from public resources. So, if you want to build your own ChatGPT, you can start with these models.

However, using third-party models poses significant security risks, such as the injection of hidden backdoors via predefined triggers that can be exploited in backdoor attacks. Deep neural networks are vulnerable to such attacks, and while RL fine-tuning has been effective in improving the performance of PLMs, the security of RL fine-tuning in an adversarial setting remains largely unexplored.

So, there comes the question. How vulnerable are these large language models to malicious attacks? It is time to meet with BadGPT, the first backdoor attack on RL fine-tuning in language models.

BadGPT is designed to be a malicious model that is released by an attacker via the Internet or API, falsely claiming to use the same algorithm and framework as ChatGPT. When implemented by a victim user, BadGPT produces predictions that align with the attacker’s preferences when a specific trigger is present in the prompt.

Users may use the RL algorithm and reward model provided by the attacker to fine-tune their language models, potentially compromising the model’s performance and privacy guarantees. BadGPT has two stages: reward model backdooring and RL fine-tuning. The first stage involves the attacker injecting a backdoor into the reward model by manipulating human preference datasets to enable the reward model to learn a malicious and hidden value judgment. In the second stage, the attacker activates the backdoor by injecting a special trigger in the prompt, backdooring the PLM with the malicious reward model in RL, and indirectly introducing the malicious function into the network. Once deployed, BadGPT can be controlled by attackers to generate the desired text by poisoning prompts.

So, there you have the first attempt at poisoning ChatGPT. Next time you consider training your own ChatGPT, beware of the potential attackers. 

Check out the Paper. Don’t forget to join our 21k+ ML SubRedditDiscord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at [email protected]

🚀 Check Out 100’s AI Tools in AI Tools Club

Ekrem Çetinkaya

Ekrem Çetinkaya received his B.Sc. in 2018 and M.Sc. in 2019 from Ozyegin University, Istanbul, Türkiye. He wrote his M.Sc. thesis about image denoising using deep convolutional networks. He is currently pursuing a Ph.D. degree at the University of Klagenfurt, Austria, and working as a researcher on the ATHENA project. His research interests include deep learning, computer vision, and multimedia networking.

Continue lendo


Motorola Razr Plus é o novo dobrável rival do Galaxy Z Flip
Após duas tentativas da Motorola em emplacar — novamente — telefones dobráveis, eis que temos aqui a terceira, e aparentemente bem-vinda, tentativa. Estamos falando do Motorola Razr Plus, um smartphone...

Hoje, às 15:20


Mentoring for the LGBTQ+ Community
Once unpublished, all posts by chetanan will become hidden and only accessible to themselves. If chetanan is not suspended, they can still re-publish their posts from their dashboard. Note: Once...

Hoje, às 15:13


IA: mais um arrependido / Déficit de TI / Apple: acusação grave · NewsletterOficial
Mais um pioneiro da IA se arrepende de seu trabalho: Yoshua Bengio teria priorizado segurança em vez de utilidade se soubesse o ritmo em que a tecnologia evoluiria – ele junta-se a Geoffr...

Hoje, às 14:37

Hacker News

The Analog Thing: Analog Computing for the Future
THE ANALOG THING (THAT) THE ANALOG THING (THAT) is a high-quality, low-cost, open-source, and not-for-profit cutting-edge analog computer. THAT allows modeling dynamic systems with great speed,...

Hoje, às 14:25


[DISCUSÃO/OPINIÕES] – Outsourcing! O que, para quem, por que sim, por que não! · dougg
Quero tentar trazer nesta minha primeira publicação, uma mistura de um breve esclarecimento sobre o que são empresas de outsourcing, como elas funcionam e ganham dinheiro, mas também, ven...

Hoje, às 13:58


Duvida: JavaScript - Desenvolver uma aplicação que vai ler um arquivo *.json · RafaelMesquita
Bom dia a todos Estou estudando javascript e me deparei com uma dificuldade e preciso de ajuda *Objetivo do estudo: *desenvolver uma aplicação que vai ler um arquivo *.json Conteudo do in...

Hoje, às 13:43


Automatize suas negociações com um robô de criptomoedas
Índice Como o robô de criptomoedas Bitsgap funciona?Qual a vantagem de utilizar um robô de criptomoedas?Bitsgap é confiável? O mercado de trading tem se tornado cada vez mais popular e as possibilidades de...

Hoje, às 13:13

Hacker News

Sketch of a Post-ORM
I’ve been writing a lot of database access code as of late. It’s frustrating that in 2023, my choices are still to either write all of the boilerplate by hand, or hand all database access over to some...

Hoje, às 13:11


14 chuveiros elétricos para o banho dos seus sonhos
Índice Chuveiro ou Ducha?Tipos de chuveiro elétrico9 fatores importantes para considerar na hora de comprar chuveiros elétricosMelhores chuveiros elétricosDuo Shower LorenzettiFit HydraAcqua Storm Ultra...

Hoje, às 11:00


Learn about the difference between var, let, and const keywords in JavaScript and when to use them.
var, let, and const: What's the Difference in JavaScript? JavaScript is a dynamic and flexible language that allows you to declare variables in different ways. You can use var, let, or const keywords to...

Hoje, às 10:21