
While companies like Microsoft and Nvidia are all-in on the power of next-generation machine learning algorithms, some regulators are dreading what those tools might mean for our already-stressed communication networks. The FCC will vote on adopting a multi-tiered response in November.

Chairwoman Rosenworcel, who’s served on the Commission since 2012 and as its chair since being confirmed late in 2021, is particularly concerned with how newly empowered AI tools could affect senior citizens. The FCC’s initial press release lists four main goals: determining whether current AI technologies fall under the Commission’s jurisdiction via the Telephone Consumer Protection Act of 1991, whether and when future AI tech might do the same, how AI impacts existing regulatory frameworks, and whether the FCC should consider ways to verify the authenticity of auto-generated AI voice and text from “trusted sources.”

Auto-generated text and natural-sounding voice algorithms are already fairly easy tools to use, albeit not yet fast enough for real-time back-and-forth in a phone call setting. Combine them with some “big iron” data centers, whether built expressly for mass calls and texts or merely rented from the likes of Amazon and Microsoft, and you have a recipe for disaster.

The FCC’s brief does mention that AI technology could also be used to fight spammers and scammers, presumably via some kind of real-time scanning system that alerts users when they’re talking to a computer.