In at least two cases, Google has shut down users’ accounts over nude images of children that paediatricians had requested in order to diagnose an illness.
Two fathers, one in San Francisco and the other in Houston, were separately investigated by police on suspicion of child abuse and exploitation after using their Android phones (which run Google’s operating system) to take photos of their sons’ genitals for medical purposes. In both cases the police determined that the parents had committed no crime, but Google did not reach the same conclusion, permanently deactivating their accounts across all of its platforms, according to The New York Times.
The incidents highlight what can go wrong with automatic photo screening and reporting technology, as well as the perilous territory that tech companies enter when they rely on it. Without context, distinguishing between innocent and abusive images can be nearly impossible—even with the assistance of human screeners.
Google, like many other companies and online platforms, employs Microsoft’s PhotoDNA—an algorithmic screening tool that matches photos against a database of known abuse imagery. According to self-reported data, the company identified 287,368 cases of suspected abuse in the first six months of 2021 alone. According to Google, the incident reports come from a variety of sources, including the automated PhotoDNA tool. “Our teams across Google work around the clock to identify, remove, and report this content, utilising a combination of industry-leading automated detection tools and specially-trained reviewers. We also receive reports from third parties and our users, which supplement our ongoing work,” Google says in a statement on its website.
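PhotoDNA itself is proprietary, but the general idea behind hash-based matching can be sketched in a few lines. The Python below is a rough, simplified illustration using a generic “average hash”, not Microsoft’s or Google’s actual algorithm; the `KNOWN_HASHES` database and `HAMMING_THRESHOLD` cutoff are hypothetical stand-ins.

```python
# Simplified illustration of hash-based image matching, far cruder than
# tools like PhotoDNA. The hash database and threshold are hypothetical.
from PIL import Image

HAMMING_THRESHOLD = 5           # hypothetical similarity cutoff
KNOWN_HASHES: set[int] = set()  # stand-in for a database of known-image hashes


def average_hash(path: str, size: int = 8) -> int:
    """Compute a 64-bit 'average hash': shrink, greyscale, threshold on the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two hashes."""
    return bin(a ^ b).count("1")


def matches_known_image(path: str) -> bool:
    """Flag an image whose hash is close to any hash in the database."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= HAMMING_THRESHOLD for known in KNOWN_HASHES)
```

The important caveat, and the one these cases underline, is that hash matching can only flag copies of images that have already been identified; brand-new photos, like a parent’s snapshots for a doctor, are instead judged by machine-learning classifiers and human reviewers, where context is much harder to recover.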
Some privacy advocates, such as the libertarian Electronic Frontier Foundation, have been outspoken in their opposition to the expansion of such screening technologies. However, child sexual abuse and exploitation is a particularly difficult topic on which to argue for privacy above all else.
What appears clear is that no automated screening system is perfect, that false reports and detections of abuse are unavoidable, and that businesses will almost certainly require a better mechanism for dealing with them.
What occurred?
According to the Times, in the San Francisco case, Mark (last name withheld) photographed his toddler’s groin to document swelling after noticing his son was in pain in the area. The next morning, his wife scheduled an emergency video consultation with a doctor. It was February 2021, and at that point in the pandemic, going to a medical office in person was generally discouraged unless absolutely necessary.
The scheduling nurse requested that photos be sent over ahead of time so that the doctor could review them. Mark’s wife texted the photos from her husband’s phone to herself before uploading them to the medical provider’s messaging system. The toddler’s condition improved after the doctor prescribed antibiotics.
However, two days after taking the photos of his son, Mark received notification that his account had been disabled for “harmful content” that was “severely violating Google’s policies and may be illegal,” according to the Times. He appealed the decision, but it was denied.
Unbeknownst to Mark, Google reported the photos to the National Center for Missing and Exploited Children’s CyberTipline, which escalated the report to law enforcement. Ten months later, Mark received notification from the San Francisco Police Department that it had investigated him based on the photos and a Google report. The police had served Google with search warrants requesting everything in Mark’s account, including messages, photos, and videos stored with the company, internet searches, and location data.
The investigators concluded that no crime had taken place, and the case was closed by the time Mark discovered what had occurred. He attempted to use the police report to appeal to Google once more and regain access to his account, but his request was again denied.
What were the ramifications?
Though it may seem minor compared with the possibility of child abuse, the loss of Mark’s Google account was reportedly a major disruption. From the New York Times:
Not only did he lose emails, contact information for friends and former colleagues, and documentation of his son’s first years of life, but his Google Fi account was also terminated, requiring him to obtain a new phone number with another carrier. He couldn’t sign in to other internet accounts because he couldn’t get the security codes he needed without access to his old phone number and email address.
“The more eggs you put in one basket, the more likely it is to break,” he explained.
In a case very similar to Mark’s, also covered by the Times, a paediatrician in Houston, Texas, asked another father to take photos of his son’s intimate parts to diagnose an infection. Those images were automatically backed up to Google Photos (not always a good idea), and the father sent them to his wife via Google’s messaging service. The couple was in the process of buying a new home at the time, and because the pictures eventually resulted in the father’s email address being disabled, they faced additional complications.
A Google spokesperson told Gizmodo the following in an emailed statement:
We are committed to preventing the spread of child sexual abuse material (CSAM) on our platforms. We define CSAM in accordance with US law and use a combination of hash matching technology and artificial intelligence to detect and remove it from our platforms. Furthermore, our team of child safety experts reviews flagged content for accuracy and consults with paediatricians to ensure that we can identify instances where users may be seeking medical advice. Users can appeal any decision; our team reviews each appeal and will reinstate an account if an error was made.
Although mistakes appear to have been made in these two cases, Google did not reinstate the accounts in question. The company did not respond immediately to Gizmodo’s follow-up questions. And the consequences could have been far worse than simply deleting accounts.
It’s difficult to “account for things that are invisible in a photo, like the behaviour of the people sharing an image or the intentions of the person taking it,” according to Kate Klonick, a lawyer and law professor at St. John’s University who specialises in privacy. “This would be problematic if it was just content moderation and censorship,” Klonick added. “However, this is doubly risky because it also results in someone being reported to law enforcement.”
And some businesses appear to be well aware of the complexities and potential dangers that automated screening tools may pose. In 2021, Apple announced plans for its own CSAM screening system. However, following criticism from security experts, the company postponed its plans before seemingly abandoning them entirely.