Law ‘must keep up with AI child abuse’
People charged with possessing AI-generated child sex abuse material have started to appear in Wellington’s courts, leading agencies to warn the legislation needs to keep up with the technology’s rapid advancement.
The controversial technology’s meteoric rise has led experts to warn that artificial intelligence may be developing faster than society is ready for.
New Zealand Police, Customs, and the Department of Internal Affairs have all confirmed they were aware of AI-generated child sexual exploitation material. Netsafe also warned the law needed to be updated to address tech-facilitated abuse via AI.
Now, a Miramar man has been charged with having AI-generated and anime-style child sexual exploitation material in his possession. He was granted interim name suppression on Monday and was remanded until February.
Chief censor Caroline Flora said the Classifications Office was aware of AI and its potential influence on the content the office classified.
While the technology could have potentially “exciting and impactful” benefits, Flora said she was concerned about AI’s ability to create harmful content extremely quickly and in large volumes.
She said the Classifications Office believed AI-generated pornography would become increasingly commonplace as AI content became more sophisticated and accessible, saying it was “inevitable”.
Among the issues caused by AI were deepfakes, which saw the technology used to create images or video of real people without their consent, potentially for use in explicit material.
“Controls need to be in place so that they can be responded to by the content’s hosts or even law enforcement if necessary.”
Regarding the argument that AI-generated child sexual exploitation material did not actually affect real life children, Flora said content that promoted the sexual exploitation of children was illegal in New Zealand, regardless of whether the child was real.
“The law deems publications that promote or support the exploitation of children or young people for sexual purposes as banned ... the penalties are really serious, it can be up to 14 years in prison.”
She said that, to her knowledge, AI-generated child sexual exploitation material had not yet been classified by the office.
Child sexual exploitation material, whether it was synthetic or not, had a promotional effect, with experts agreeing that viewing it increased the risk of offenders contacting or sexually abusing children.
Flora said law enforcement may also waste time and resources searching for fake children being abused through AI-generated material to check whether they were real or not. “Some of this content is highly, highly realistic, and may depict made-up images using images of known victims that we've seen before in real child abuse material.”
While legislation was equipped to deal with the classification side of such material,
Flora said there was a gap in New Zealand and other countries’ ability to regulate the technology that was creating such material.
Department of Internal Affairs manager for digital child exploitation Tim Houston said the department was seeing a rise in AI technology being used to generate highly realistic child sexual exploitation material. “Whether the children featured in the material are real or AI-generated, if an image or movie promotes child sexual abuse, it is likely illegal in New Zealand.”
The advance and ease of access of technology had led to an increase in child sexual exploitation crimes being committed in the online world, Houston said.
A significant challenge faced by investigators was the large amount of resources and capability needed to determine whether a child depicted in such material was real or AI-generated.
“This is time that could be spent attempting to identify real-world children at risk.”
Netsafe chief online safety officer Sean Lyons said Netsafe had been saying for “a long time” that the Harmful Digital Communications Act 2015 needed to be updated to keep up with the rise of artificial intelligence.
“There are AI tools out there that allow us to create all sorts of positive and wonderful things, but we are very aware as an organisation that people will potentially use them for various purposes.”
Regarding new technology such as AI, there was always confusion about whether legislation applied to artificial material, such as AI-generated images.
“For us, a big part is getting clarity that, if you harmed people with an image, be it an image you took or be it harm caused by an image you made using a technology tool such as AI – that that is still harm.”