Apple is inviting tech experts and security enthusiasts to put their skills to the test by attempting to hack its new generative AI servers, with a potential payout of up to $1 million. Announced on Thursday, this ‘bug bounty’ program is a key part of Apple’s plan to strengthen the security of its upcoming AI service, Apple Intelligence, which is set to launch next week.
While most processing for Apple Intelligence will occur directly on users’ devices, certain requests will be managed by Apple’s secure servers, called Private Cloud Compute (PCC). Apple is committed to making these servers highly resilient against cyberattacks, safeguarding user data and privacy.
Unlike traditional AI systems that classify and retrieve existing data, Apple Intelligence uses generative AI to create new content, whether text, images, videos, or music, based on the data it was trained on. This AI feature is built into devices such as the iPhone 16, 15 Pro, and 15 Pro Max, offering functions for sorting messages, assisting with writing, and crafting custom emojis.
Historically, Apple restricted security evaluations of PCC to trusted third-party auditors. Now, with this new bug bounty, anyone equipped with an M-series Mac and at least 16GB of RAM can participate in testing Apple’s security. Rewards are tiered by the severity of the vulnerability found: critical breaches may yield up to $1 million, while lesser flaws such as data leaks or accidental disclosures could earn between $250,000 and $500,000.
By launching this program, Apple underscores its commitment to the privacy and security guarantees of Apple Intelligence, assuring users that any significant security gaps will be addressed promptly. So, if you’re eager to claim Apple’s $1 million reward, it’s time to start hacking!