The FTC has proposed new rules targeting AI-enabled impersonation, covering voice cloning, deepfakes, and related fraud, and is seeking public feedback on how to protect consumers.
The Federal Trade Commission (FTC) is advancing new regulatory measures to address AI-enabled impersonation of individuals, including voice cloning and the creation of deepfakes. Lina M. Khan, the FTC Chair, has emphasized growing concern over AI's capacity to replicate personal identities and underscored the need for measures to prevent impersonation-related fraud. The forthcoming rules build on the FTC's ongoing efforts to establish protections against government and business impersonation.
To gather public input, the FTC has released a supplemental notice of proposed rulemaking that would prohibit the impersonation of individuals. The move responds to a rising number of impersonation-fraud complaints and broader concern about harm to consumers and to those whose identities are falsely replicated. The Commission is also weighing whether to make it illegal for companies, including AI platforms capable of generating synthetic images, video, or text, to offer products or services that they know or have reason to know are being used to harm consumers through impersonation.
The supplemental notice follows feedback received during the public comment period on the government and business impersonation rules, which highlighted the particular risks of impersonating private individuals. Through these rules, the FTC aims to deter fraudulent activity, provide relief to affected consumers, and strengthen its ability to combat AI-driven impersonation scams.
