Alaska Man Reported Someone for AI CSAM, Then Got Arrested for the Same Thing

If you are going to contact the police and rat on someone for expressing their interest in child sexual abuse material (CSAM) to you, maybe it is not the best idea to have the same material on your own devices. Or to further consent to a search so law enforcement can gather more information. But that is allegedly what one Alaska man did. It landed him in police custody.

404 Media reported earlier this week on the man, Anthaney O’Connor, who ended up arrested after a police search of his devices allegedly revealed AI-generated CSAM.

From 404:

According to newly filed charging documents, Anthaney O’Connor reached out to law enforcement in August to alert them to an unidentified airman who shared child sexual abuse material (CSAM) with O’Connor. While investigating the crime, and with O’Connor’s consent, federal authorities searched his phone for additional information. A review of the electronics revealed that O’Connor allegedly offered to make virtual reality CSAM for the airman, according to the criminal complaint.

According to police, the unidentified airman shared with O’Connor an image he took of a child in a grocery store, and the two discussed how they could superimpose the minor into an explicit virtual reality world.

Law enforcement claims to have found at least six explicit, AI-generated CSAM images on O’Connor’s devices, which he said had been intentionally downloaded, along with several “real” ones that had been unintentionally mixed in. Through a search of O’Connor’s home, law enforcement uncovered a computer along with multiple hard drives hidden in a vent; a review of the computer allegedly revealed a 41-second video of child rape.

In an interview with authorities, O’Connor said he regularly reported CSAM to internet service providers “but still was sexually gratified from the images and videos.” It is unclear why he decided to report the airman to law enforcement. Maybe he had a guilty conscience, or maybe he truly believed his AI-generated CSAM didn’t break the law.

AI image generators are typically trained on real photos, meaning pictures of children “generated” by AI are fundamentally based on real images. There is no way to separate the two, and in that sense AI-based CSAM is not a victimless crime.

The first known arrest for possessing AI-generated CSAM came back in May, when the FBI arrested a man for using Stable Diffusion to create “thousands of realistic images of prepubescent minors.”

Proponents of AI will say that it has always been possible to create explicit images of minors using Photoshop, but AI tools make it exponentially easier for anyone to do so. A recent report found that one in six congresswomen have been targeted by AI-generated deepfake porn. Many products have guardrails to prevent the worst abuses, much the way printers refuse to photocopy currency; implementing those hurdles at least prevents some of this behavior.
