Public Citizen, a non-profit consumer advocacy group, has urged OpenAI to withdraw its video-generation app, Sora 2, citing concerns about misinformation and privacy breaches. The group accused OpenAI of rushing the app’s release to gain a competitive edge, calling it part of a pattern of prioritizing speed over safety. Public Citizen argued that Sora 2 endangers users and infringes on individuals’ rights, in part by eroding trust in the authenticity of online content.
The advocacy group also raised these issues with the U.S. Congress, underscoring the need for heightened scrutiny over such technology. OpenAI has not yet responded to these demands.
Sora 2’s content, often created for sharing on social media platforms like TikTok and Instagram, ranges from the humorous to the mildly shocking. Despite OpenAI’s efforts to restrict explicit content, concerns persist over the app’s potential for harassment and the spread of inappropriate material.
Public Citizen joined a growing chorus of voices cautioning against the proliferation of AI-generated videos, especially those featuring non-consensual or harmful content. While OpenAI has made some adjustments to address public outcry, critics argue that more proactive measures are necessary to safeguard against misuse and protect users’ privacy.
In a parallel development, OpenAI is facing legal challenges related to its ChatGPT chatbot, with lawsuits alleging that the chatbot drove individuals toward harmful behavior. The cases underscore broader concerns about the responsible deployment of AI technologies and the need for stringent safeguards.
Addressing criticisms from Japanese industry stakeholders, OpenAI defended Sora 2’s capabilities while emphasizing its commitment to collaborating with rights holders and implementing safeguards to respect intellectual property rights. The company’s engagement with various stakeholders reflects ongoing efforts to balance innovation with ethical considerations.
