In partnership with Bugcrowd, OpenAI is inviting ethical hackers to find vulnerabilities in its software and report them to the company. OpenAI also wants these hackers to test ChatGPT plugins for authentication issues, authorization issues, security issues, and a range of other flaws, and to determine whether sensitive OpenAI information could be exposed to third-party integrations such as Notion, Asana, and Salesforce.

Hackers are asked to report only in-scope issues, as out-of-scope issues are not eligible for cash rewards. Out-of-scope safety issues include jailbreaking or bypassing the chatbot's safety features, getting the chatbot to pretend to execute code (OpenAI notes that ChatGPT does not actually run code, so any apparent execution is partly or entirely hallucinated), and tricking the chatbot into saying harmful things.

An ethical hacker who finds an in-scope issue can be eligible for a reward of up to $20,000. OpenAI doesn't specify exactly which discoveries are worth $20,000, but the top payout applies to a few categories: exceptional vulnerabilities in API targets, in ChatGPT logins and subscriptions, or in the OpenAI research organization's website and services.

Low-priority vulnerabilities pay between $200 and $600, medium-priority vulnerabilities between $600 and $1,250, and high-priority vulnerabilities between $1,000 and $3,500, depending on which in-scope category the vulnerability falls under.

Hackers can submit their findings through OpenAI's Bugcrowd page. Per the Bug Bounty Program's rules, reported vulnerabilities, including data leaks, are kept confidential until OpenAI authorizes their release.