Author: Ermakova M.

OpenAI promises to reward those who discover vulnerabilities in ChatGPT

Program participants can earn thousands of dollars for finding security flaws in ChatGPT.

Finding vulnerabilities in ChatGPT can be very lucrative work. OpenAI has announced the launch of a bug bounty program with substantial financial incentives for those who detect and report security flaws in its AI chatbot.

Sam Altman's firm unveiled the OpenAI Bug Bounty Program on its official blog. The startup invites experts and enthusiasts to report "vulnerabilities, bugs, or security flaws" in its systems, and promises sizable rewards in return.

Naturally, what attracts the most attention here is that ChatGPT is one of the platforms the program covers. However, it is not the only one. OpenAI notes that reportable vulnerabilities can also be found in its API and API keys, on its website, and in services managed by the organization.

But the story doesn't end there. It will also be possible to report leaks of OpenAI corporate and confidential information through third-party platforms such as Notion, Jira, and Google Workspace.

Vulnerabilities in ChatGPT and other OpenAI systems will be rewarded according to their severity. The least severe issues earn prizes starting at $200, while the most severe can fetch up to $6,500, with intermediate payouts of $500, $1,000, and $2,000.

However, OpenAI has set a maximum reward of up to $20,000 for "exceptional discoveries". The company did not specify which security flaws fall into this category, but presumably these are very serious vulnerabilities that could cause great damage, and multi-million dollar losses, if exploited.

Finding security issues in ChatGPT can make you a lot of money

In the specific case of ChatGPT, rewardable vulnerabilities include authorization and authentication issues, payment problems, data exposure, and methods of bypassing Cloudflare's protections, among others. Notably, it will also be possible to analyze and report bugs in plugins, both OpenAI's own and those developed by third parties.

Another important point is that OpenAI has set clear rules on what counts as a vulnerability and what does not. In the case of ChatGPT, there will be no reward for jailbreaks or safety bypasses, nor for making the chatbot produce "hallucinations". As the statement explains:

Examples of security issues that are out of scope:

Jailbreaks/safety bypasses (e.g. DAN and related prompts);
Getting the model to say bad things to you;
Getting the model to tell you how to do bad things;
Getting the model to write malicious code for you.
Model hallucinations:

Getting the model to pretend to do bad things;
Getting the model to pretend to give you answers to secrets;
Getting the model to pretend to be a computer and execute code.

"We are pleased to take advantage of our coordinated disclosure obligations by offering incentives to assess information for vulnerabilities. Your experience and vigilance will have a direct bearing on keeping our systems and users secure. The program is a way to recognize and reward the valuable knowledge of researchers in the field of security, which contribute to the security of our technologies and our company," the creators of ChatGPT emphasized.
