An important goal of technology is to make consumers' lives not just easier but also safer. Technology, in particular artificial intelligence (AI) and machine learning (ML), can help protect people's safety, dignity, and privacy in the face of the various forms of abuse that plague society, including cyberbullying, domestic abuse, and threats to child safety.
Among the most heinous crimes we strive to defend against are sexual crimes targeting children and minors, including the production and distribution of Child Sexual Abuse Material (CSAM). AI/ML techniques for detecting and stopping the spread of CSAM on the internet are valuable tools that technologists are deploying in various shapes and forms.
To that end, Apple recently announced that it will deploy its NeuralHash technology to detect known images of CSAM. The tool will scan images during the (otherwise encrypted) process of uploading photos to Apple's iCloud platform. Apple's approach will scan for known fingerprints of such material drawn from the CSAM database maintained by the U.S. National Center for Missing & Exploited Children (NCMEC).
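Apple's published design is considerably more involved than this, but the core idea of fingerprint matching can be illustrated with a simple, hypothetical sketch in Swift: compute a perceptual hash for an image and compare it against a database of known hashes, tolerating a few differing bits so that minor edits such as re-compression still match. The hash values and threshold below are made up for illustration; this is not NeuralHash.

```swift
import Foundation

/// Number of bit positions in which two 64-bit perceptual hashes differ.
func hammingDistance(_ a: UInt64, _ b: UInt64) -> Int {
    (a ^ b).nonzeroBitCount
}

/// Returns true if `imageHash` is within `threshold` bits of any known hash.
/// A small threshold tolerates minor edits (re-compression, resizing)
/// while still treating the image as a match.
func matchesKnownMaterial(imageHash: UInt64,
                          knownHashes: [UInt64],
                          threshold: Int = 4) -> Bool {
    knownHashes.contains { hammingDistance(imageHash, $0) <= threshold }
}

// Example with made-up hash values: the upload differs from a known
// fingerprint by a single bit, so it is treated as a match.
let knownDatabase: [UInt64] = [0x9F3A_5C21_0D4E_77B8, 0x1234_ABCD_5678_EF01]
let uploadHash: UInt64 = 0x9F3A_5C21_0D4E_77B9
print(matchesKnownMaterial(imageHash: uploadHash, knownHashes: knownDatabase)) // true
```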
Apple's approach will certainly bolster the fight against CSAM by providing valuable information to law enforcement. Many security researchers, however, have expressed concerns about its privacy implications: all photo uploads from all Apple iCloud users in the U.S. will be scanned.
For years, Apple has been under pressure from law enforcement agencies to allow access to encrypted user data, including photos and messages. As such, security and privacy experts are concerned that these types of "global" scans may be misused, including by governments seeking to surveil their citizens. Indeed, due to privacy concerns expressed by multiple parties, Apple has decided to delay the rollout of this feature.
Many people (especially during the COVID-19 pandemic) use their smartphones to take pictures of private documents and other sensitive data or moments. Given that the potential misuse of content scanning systems raises serious privacy concerns for the majority of people (who are non-offenders), it is worth considering what can be done to protect legitimate private information while still allowing Apple and other vendors to scan for illegal explicit content.
Norton Labs recently released the SafePic app on iOS, which can help with photo-related privacy concerns. SafePic uses AI/ML to identify photos on smartphones that contain personal information, such as photos of IDs and credit cards. It gives users the ability to protect such photos either by saving them in a secure private vault or by applying a "smart blur" through the PhotoBlur feature.
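SafePic's internals are not public, but this kind of on-device detection of sensitive photos can be sketched with Apple's Vision and Core ML frameworks. The sketch below assumes the caller supplies a compiled image-classification model trained on classes such as "id_card" or "credit_card"; the model and its class labels are assumptions for illustration, not SafePic's actual implementation.

```swift
import UIKit
import Vision
import CoreML

/// Classify a photo entirely on-device using a caller-supplied Core ML
/// image classifier. Returns the top class label (e.g. "id_card",
/// "credit_card") via the completion handler, or nil if nothing matches.
func classifySensitivePhoto(_ image: UIImage,
                            using classifier: MLModel,
                            completion: @escaping (String?) -> Void) throws {
    guard let cgImage = image.cgImage else {
        completion(nil)
        return
    }

    let vnModel = try VNCoreMLModel(for: classifier)
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        // Pick the highest-confidence classification, if any.
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }

    // All inference runs locally; the photo never leaves the device.
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}
```

An app built this way could then offer to move any photo labeled as an ID or credit card into a private vault, or to blur it before it is shared.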
The ability to automatically identify specific classes of sensitive or private photos enables SafePic users to protect those photos and make informed decisions about how they are used. SafePic also lets users take photos directly within the app and add them to the on-device SafePic secure vault. All scanning, "smart" classification, and processing of content happens on the user's smartphone, and none of the sensitive content leaves the device. The SafePic app is one of the first of its kind and gives consumers a way to help safeguard their private information.
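As a rough illustration of the blurring step mentioned above, the following Core Image sketch blurs a detected sensitive region of a photo locally, in line with the on-device processing described here. The region rectangle, blur radius, and overall approach are assumptions for illustration; SafePic's PhotoBlur "smart blur" is not public.

```swift
import UIKit
import CoreImage

/// Blur one region of a photo (e.g. a detected ID card) and composite the
/// blurred patch back over the original. `region` is in Core Image
/// coordinates, i.e. with the origin at the bottom-left of the image.
func blurRegion(of image: UIImage, region: CGRect, sigma: Double = 12) -> UIImage? {
    guard let input = CIImage(image: image) else { return nil }

    let blurredPatch = input
        .clampedToExtent()                     // avoid a dark halo at the edges
        .applyingGaussianBlur(sigma: sigma)
        .cropped(to: region)

    let output = blurredPatch.composited(over: input)
    let context = CIContext()                  // renders on-device (GPU/CPU)
    guard let cgImage = context.createCGImage(output, from: input.extent) else {
        return nil
    }
    return UIImage(cgImage: cgImage)
}
```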
You can read the full blog here, and you can find SafePic on the Apple App Store.