Apple Addresses Concerns Over New Anti-Child Abuse Measures

A new FAQ page clears some things up

By Allison Murray, Tech News Reporter
Updated on August 9, 2021, 12:31 p.m. EDT
Fact checked by Rich Scherr

Apple is explaining more of the process involved in its new anti-child abuse measures.

The tech giant announced a new policy last week that uses technology to spot potential child abuse imagery in iCloud and Messages. The Verge reports that Apple released a FAQ page in recent days explaining how the technology is used and what the privacy implications are, after people voiced concerns over the new measures.

Getty Images / Yuriko Nakao

Apple said its technology is specifically limited to detecting child sexual abuse material (CSAM) and cannot be turned into surveillance tools.

"One of the significant challenges in this space is protecting children while also preserving the privacy of users," Apple wrote on the new FAQ page. "With this new technology, Apple will learn about known CSAM photos being stored in iCloud Photos where the account is storing a collection of known CSAM. Apple will not learn anything about other data stored solely on device."

The technology works by scanning an image before it is backed up to iCloud. Then, if an image matches known CSAM, Apple receives the data of a cryptographic voucher. (A simplified sketch of this matching flow appears at the end of this article.)

Groups such as the Electronic Frontier Foundation voiced their concerns about the technology last week, saying that the tech could be "repurposed to create a database of 'terrorist' content that companies can contribute to and access for the purpose of banning such content."

However, Apple's detailed FAQ page addresses some of these concerns by laying out that the technology will not scan all the photos stored on a device, will not break end-to-end encryption in Messages, and will not falsely flag innocent people to law enforcement.

The Verge does note that Apple's FAQ does not address concerns raised about the technology being used to scan Messages, or how the company ensures that the scanning focuses only on CSAM.
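For readers who want a concrete picture of the scan-then-voucher flow described above, here is a minimal, purely illustrative Python sketch. It is not Apple's implementation: the SHA-256 hashing, the KNOWN_HASHES set, and the prepare_upload and UploadRecord names are hypothetical stand-ins for an on-device matching step that only flags an image when it matches a database of known CSAM.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

# Hypothetical set of known-CSAM image hashes (placeholder value only).
KNOWN_HASHES = {"0" * 64}

@dataclass
class UploadRecord:
    image_name: str
    voucher: Optional[str]  # populated only when the image matches a known hash

def hash_image(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; SHA-256 is used here only to keep the sketch runnable.
    return hashlib.sha256(image_bytes).hexdigest()

def prepare_upload(image_name: str, image_bytes: bytes) -> UploadRecord:
    # Scan the image before it would be backed up; attach a voucher only on a database match.
    digest = hash_image(image_bytes)
    if digest in KNOWN_HASHES:
        # In the system the article describes, this would be a cryptographic safety voucher;
        # here it is just a placeholder string.
        return UploadRecord(image_name, voucher=f"match:{digest[:12]}")
    return UploadRecord(image_name, voucher=None)

if __name__ == "__main__":
    print(prepare_upload("vacation.jpg", b"example image bytes"))
    # UploadRecord(image_name='vacation.jpg', voucher=None) -- no match, so nothing is flagged
```

The point of the sketch is the ordering Apple describes: the check happens on the device before upload, and only a match, not the photo library as a whole, produces data that leaves the device.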