Federal prosecutors are appealing the decision of a federal judge in Wisconsin who ruled that possessing child sexual abuse material created by artificial intelligence is, in some situations, protected by the Constitution.
The order and subsequent appeal could have important implications for the future legal treatment of AI-generated child sexual abuse material, or CSAM, which has been a major concern among child safety advocates and has become the subject of at least two prosecutions in the last year. If higher courts uphold the decision, it could prevent prosecutors from successfully charging some people for the private possession of AI-generated CSAM.
The case centers on Steven Anderegg, 42, of Holmen, Wisconsin, whom the Department of Justice accused of "producing, distributing, and possessing obscene visual depictions of minors engaged in sexually explicit conduct and transferring obscene material to a minor under the age of 16."
Prosecutors alleged that he used an AI image generator called Stable Diffusion to create more than 13,000 images depicting child sexual abuse by entering text prompts into the technology, which then generated fake images depicting children who are not real. (Some AI systems are also used to create explicit images of known people, but prosecutors do not claim that is what Anderegg was doing.)
In February, in response to Anderegg's motion to dismiss the charges, U.S. District Judge James D. Peterson allowed three of the charges to move forward but threw out one, saying the First Amendment protects the possession of "virtual child pornography" in one's own home. On March 3, prosecutors appealed.
In the decision, Peterson denied Anderegg's request to dismiss the charges of distributing an obscene image of a minor, transferring obscene material to a person under 16, and producing an image of a minor engaged in sexually explicit conduct.
Anderegg's lawyer did not respond to a request for comment. The Department of Justice declined to comment.
Many artificial intelligence platforms have tried to prevent their tools from being used to create such content, but some guardrails can be easily modified or removed, and a July study by the Internet Watch Foundation found that the amount of AI-generated CSAM posted online is increasing.
The Department of Justice alleged in a May press release that Anderegg described his image-making process in a chat with a 15-year-old boy and sent the images to the teen. Police were alerted to Anderegg after Instagram reported his account to the National Center for Missing and Exploited Children, according to the statement.
The Department of Justice has argued that the 2003 Protect Act, although it does not specifically refer to AI, criminalizes AI-generated CSAM by prohibiting "obscene visual depictions of child sexual abuse."
Peterson pointed to a 1969 Supreme Court ruling, Stanley v. Georgia, which held that the private possession of obscene material in one's own home cannot be made a crime.
That decision has not traditionally applied to cases involving CSAM depicting real children, which have generally been prosecuted under a different set of laws on child sexual exploitation, such as prohibitions on transporting or selling CSAM.