Artificial intelligence company OpenAI has announced a fivefold increase in the maximum bug bounty rewards for “exceptional and differentiated” critical security vulnerabilities from $20,000 to $100,000.
OpenAI says its services and platforms are used by 400 million users across businesses, enterprises, and governments worldwide every week.
“We are significantly increasing the maximum bounty payout for exceptional and differentiated critical findings to $100,000 (previously $20,000),” the company said.
“This increase reflects our commitment to rewarding meaningful, high-impact security research that helps us protect users and maintain trust in our systems.”
As part of ongoing efforts to expand its bounty program and reward high-impact security research, OpenAI will also offer bounty bonuses for qualifying reports within specific categories in what it described as “limited-time promotions.”
“During promotional periods, researchers who submit qualifying reports within specific categories will be eligible for additional bounty bonuses,” it added.
For instance, until April 30, OpenAI has doubled payouts for security researchers who report Insecure Direct Object Reference (IDOR) vulnerabilities in its infrastructure and products, with a maximum reward of $13,000.
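For context, an IDOR bug occurs when an application uses a client-supplied identifier to fetch an object without checking that the requester is authorized to access it. The sketch below is a hypothetical illustration of the bug class (the names and data are invented, not taken from any OpenAI product):

```python
# Minimal sketch of an Insecure Direct Object Reference (IDOR).
# All identifiers and records here are hypothetical, for illustration only.

INVOICES = {
    101: {"owner": "alice", "amount": 42},
    102: {"owner": "bob", "amount": 99},
}

def get_invoice_vulnerable(user: str, invoice_id: int) -> dict:
    # IDOR: the ID from the request is used directly, with no check
    # that the record belongs to the requesting user.
    return INVOICES[invoice_id]

def get_invoice_fixed(user: str, invoice_id: int) -> dict:
    invoice = INVOICES.get(invoice_id)
    # Fix: verify ownership before returning the object.
    if invoice is None or invoice["owner"] != user:
        raise PermissionError("not authorized")
    return invoice
```

In the vulnerable version, "alice" can read "bob"'s invoice simply by guessing its ID; the fixed version rejects the request.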
OpenAI launched its bug bounty program in April 2023 with payouts of up to $20,000 for researchers who report vulnerabilities, bugs, or security flaws in its product line via the Bugcrowd crowdsourced security platform.
The company says that model safety issues are out of scope, as are jailbreaks and safety bypasses used by ChatGPT users to trick the chatbot into ignoring safeguards implemented by OpenAI engineers.
OpenAI unveiled its bug bounty program one month after disclosing a ChatGPT payment data leak caused by a bug in the open-source Redis client library used by its platform.
As disclosed then, this bug caused the ChatGPT service to expose chat queries and personal data (subscriber names, email addresses, payment addresses, and partial credit card information) for roughly 1.2% of ChatGPT Plus subscribers.