GPT-4o model is ‘medium’ risk, according to OpenAI’s latest assessment
The document sheds light on OpenAI's efforts to mitigate potential risks associated with its latest multimodal AI model. Prior to launch, OpenAI followed its standard practice of engaging external red teamers, security experts tasked with identifying vulnerabilities in a system. These experts probed potential risks associated with GPT-4o, such as unauthorised voice cloning, generation of…