Character.AI unveils AvatarFX, an AI video model to create lifelike chatbots

Character.AI, a leading platform for chatting and roleplaying with AI-generated characters, unveiled its forthcoming video generation model, AvatarFX, on Tuesday. Available in closed beta, the model animates the platform’s characters in a variety of styles and voices, from humanlike figures to 2D animal cartoons.

AvatarFX distinguishes itself from competitors like OpenAI’s Sora because it isn’t solely a text-to-video generator. Users can also generate videos from pre-existing images, which makes it possible to animate photos of real people.

It’s immediately evident how this kind of tech could be leveraged for abuse — users could upload photos of celebrities or people they know in real life and create realistic-looking videos in which they do or say something incriminating. The technology to create convincing deepfakes already exists, but incorporating it into popular consumer products like Character.AI only exacerbates the potential for it to be used irresponsibly.

We’ve reached out to Character.AI for comment.

Character.AI is already facing issues with safety on its platform. Parents have filed lawsuits against the company, alleging that its chatbots encouraged their children to self-harm, to kill themselves, or to kill their parents.

In one case, a fourteen-year-old boy died by suicide after he reportedly developed an obsessive relationship with an AI bot on Character.AI based on a “Game of Thrones” character. Shortly before his death, he’d opened up to the AI about having thoughts of suicide, and the AI encouraged him to follow through on the act, according to court filings.

These are extreme examples, but they go to show how people can be emotionally manipulated by AI chatbots through text messages alone. With the incorporation of video, the relationships that people have with these characters could feel even more realistic.

Character.AI has responded to the allegations against it by building parental controls and additional safeguards, but as with any app, controls are only effective when they’re actually used. Oftentimes, kids use tech in ways that their parents don’t know about.
