The Federal Communications Commission (FCC) is proposing a new set of rules aimed at enhancing transparency in automated communications. The proposed regulations would require callers to disclose when they are using artificial intelligence (AI) in robocalls and text messages.
In a Notice of Proposed Rulemaking (FCC 24-84), the FCC emphasizes the importance of informing consumers when AI is involved in these communications, as part of an effort to combat fraudulent activities. The agency believes that such transparency will help consumers identify and avoid messages and calls that may pose a higher risk of scams.
FCC Commissioner Anna M. Gomez expressed the agency's concern, noting that robocalls and robotexts are among the most frequent complaints the FCC receives from consumers. She added, "These automated communications are incredibly frustrating, and we are committed to working continuously to tackle the problem."
This move is part of a broader strategy by the FCC to address the challenges posed by AI technologies. Earlier this year, the FCC banned the use of AI-generated voices in robocalls following a notorious incident in which a deepfake of President Biden's voice was used to deceive New Hampshire voters. Now, the agency is looking at a wider range of AI applications.
As part of this initiative, the FCC is also seeking input on how to define an AI-generated call and how best to notify consumers about such communications. The agency is encouraging public participation through a comment period that runs until September.
Commissioner Gomez also highlighted the potential of AI to help tackle issues like fraud but emphasized the importance of responsible implementation. "AI technologies can provide both new challenges and opportunities, and it is essential to balance innovation with ethical considerations," she said.
Kush Parikh, President of Hiya, a mobile security company, stressed that while transparency is important, it may not be enough to deter fraudsters. He noted that scammers adapt quickly to new technologies and are always looking for ways to exploit regulatory loopholes. Parikh called for more stringent measures, such as real-time detection of AI-generated deepfakes and stronger protections for consumers.
This proposal extends beyond robocalls, as the FCC has also previously suggested similar rules for political advertisements featuring AI-generated content.
The FCC’s move underscores the growing need for robust measures to safeguard consumers in an increasingly AI-driven world. With scammers constantly evolving their tactics, the importance of staying ahead of the curve in terms of regulation and technology is more crucial than ever.
Stay tuned as the FCC continues to gather public input, and let's hope these efforts lead to a safer, more transparent communication landscape.