The Federal Communications Commission has proposed new rules governing AI-generated calls and texts that would require callers to disclose the use of AI.
As AI becomes more powerful, there is growing concern that AI-generated texts and voice calls—including AI voice deepfakes—could be used to deceive individuals. As a result, the FCC wants callers and robocallers to disclose when they are using AI for voice calls or texts.
The Notice of Proposed Rulemaking adopted today proposes to define AI-generated calls and to require callers, when obtaining prior express consent, to disclose that they intend to send AI-generated calls and text messages. In addition, callers would need to disclose on each call when a consumer is receiving an AI-generated call. This gives consumers an opportunity to identify and avoid calls or texts that carry an enhanced risk of fraud and other scams.
The proposed rules are the latest effort by the FCC to crack down on scams. The agency also says it wants to fine the parties responsible for the deepfake calls, made prior to the New Hampshire primary, that attempted to spread election misinformation.
These proposed robocall rules are the latest in a series of actions the Commission has taken to protect consumers from AI-generated scams that mislead and misinform the public, while empowering consumers to make informed decisions. The Commission has proposed new transparency standards that would require disclosure when AI technology is used in political ads on radio and television. It recently adopted a Declaratory Ruling making clear that voice cloning technology used in common robocall scams targeting consumers is illegal absent the prior express consent of the called party or an exemption. It also proposed significant fines related to apparently illegal robocalls made using deepfake, AI-generated voice cloning technology and caller ID spoofing to spread election misinformation to potential New Hampshire voters ahead of the January 2024 primary.
The FCC’s fight against AI-generated deepfake calls is a response to just the latest challenge organizations face with the rise of AI. AI has become so adept at copying voices and generating realistic videos that it is increasingly difficult for users to tell what is real and what is not.
A mandatory disclosure of AI-generated calls and texts should go a long way toward protecting consumers.
from WebProNews https://ift.tt/xuPivQH