
OpenAI Debuts Voice-Cloning Tool Amid Disinformation Risks


OpenAI announced on Friday its development of a voice-cloning tool, which it intends to keep under strict control until adequate safeguards are established to prevent the spread of audio forgeries aimed at deceiving listeners.

The tool, named “Voice Engine,” is capable of replicating a person’s speech based on a mere 15-second audio snippet, as detailed in an OpenAI blog post disclosing the findings of a small-scale test of the technology.

The San Francisco-based company acknowledged the dangers of the technology, particularly in an election year: “We recognize that generating speech that resembles people’s voices has serious risks, which are especially top of mind in an election year.”

To address these concerns, OpenAI is collaborating with various stakeholders, including government entities, media organizations, entertainment industry representatives, educators, civil society groups, and others, to solicit feedback and ensure responsible development and deployment of the technology.

Disinformation researchers have warned that AI-powered tools could be misused during critical election periods, and OpenAI cited the risk of synthetic voice manipulation as the reason for withholding Voice Engine from public release.

The cautious unveiling follows an incident a few months earlier, in which a political consultant working for a Democratic presidential campaign admitted to orchestrating a robocall that impersonated the US president.

The incident underscored concerns about the potential proliferation of AI-generated deepfake disinformation campaigns during the upcoming 2024 White House race and other key elections worldwide.

To mitigate misuse of Voice Engine, OpenAI disclosed that testing partners have agreed to adhere to specific rules, including obtaining explicit and informed consent from individuals whose voices are replicated using the tool. Additionally, audiences must be informed when they are listening to AI-generated voices.

OpenAI further stated that it has implemented safety measures, such as watermarking to trace the origin of any audio generated by Voice Engine, as well as proactive monitoring of its usage.
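OpenAI has not published how Voice Engine’s watermark works, so any specifics are conjecture. As a rough illustration only, one classic technique for tracing audio provenance is spread-spectrum watermarking: a low-amplitude pseudorandom sequence, derived from a secret key, is added to the signal at embedding time and later recovered by correlating the audio against the same keyed sequence. The function names and parameters below are hypothetical:

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> np.ndarray:
    """Add a low-amplitude pseudorandom +/-1 sequence derived from `key`.

    Illustrative sketch only; not OpenAI's actual scheme.
    """
    mark = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * mark

def detect_watermark(audio: np.ndarray, key: int, threshold: float = 0.005) -> bool:
    """Correlate the signal with the keyed sequence.

    Watermarked audio yields a correlation near `strength`;
    unmarked audio yields a correlation near zero.
    """
    mark = np.random.default_rng(key).choice([-1.0, 1.0], size=audio.shape)
    return float(np.mean(audio * mark)) > threshold

# Demo on synthetic "audio": 10 seconds of a 440 Hz tone sampled at 16 kHz.
t = np.linspace(0, 10, 160_000, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
marked = embed_watermark(clean, key=1234)

print(detect_watermark(marked, key=1234))  # True: watermark detected
print(detect_watermark(clean, key=1234))   # False: no watermark
```

Real-world systems are far more sophisticated (robust to compression, resampling, and re-recording), but the core idea is the same: only a party holding the key can verify that a given clip was machine-generated.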
