Privacy-preserving Deep Learning for Computer Vision


The unprecedented accuracy of deep learning enables image recognition that rivals, and on some benchmarks exceeds, human performance. However, this performance depends on the availability of large amounts of visual data, which drives the need for privacy-preserving deep learning for computer vision. In this article you will learn about:

  1. Privacy of visual data
  2. Privacy-Preserving Machine Learning (PPML)
  3. Methods for PPML

Privacy of Visual Data

Visual data is now being generated at an unprecedented scale: people upload billions of photos to social media every day, and a growing number of security cameras continuously capture video.

Worldwide, there are over 770 million CCTV surveillance cameras in use. Additionally, an increasing amount of image data is being generated due to the popularity of camera-equipped personal devices.

1. Deep Learning leverages the value of data

Recent advances in deep learning methods based on artificial neural networks have led to significant breakthroughs in long-standing AI fields such as Computer Vision.

The success of deep learning techniques scales with the amount of data available for training. Hence, companies such as Google, Facebook, and Apple take advantage of the massive amounts of training data collected from their users and the immense computational power of GPU farms to deploy deep learning at scale.

Learning from visual data has enabled computer vision applications with public and economic benefits, such as smart transportation systems, medical research, and marketing.

2. Privacy concerns regarding visual data

While the utility of deep learning is undeniable, the same training data that has made it so successful also presents serious privacy issues that drive the need for visual privacy. The collection of photos and videos from millions of individuals comes with significant privacy risks.

  • Permanent collection. Companies gathering data usually keep it forever. Users from whom the data was collected can neither delete it, nor control how it will be used, nor influence what will be learned from it.
  • Sensitive information. Images often contain accidentally captured sensitive items such as faces, license plates, computer screens, location indications, and more. Such sensitive visual data could be misused or leaked through various vulnerabilities.
  • Legal concerns. Visual data kept by companies could be subject to legal matters, subpoenas, and warrants, as well as warrantless spying by national-security and intelligence organizations.

Privacy-Preserving Machine Learning (PPML)

While public datasets are accessible to everyone, machine learning frequently relies on private datasets that only the dataset owner can access. Privacy-preserving machine learning is therefore concerned with adversaries who try to infer such private data, even from trained models.

  • Model inversion attacks are aimed at reconstructing training data from model parameters, for example, to recover sensitive attributes such as gender or genotype of an individual given the model’s output.
  • Membership inference attacks are used to infer whether an individual's data was part of the model's training set (a minimal sketch of this attack follows the list).
  • Training data extraction attacks aim to recover individual training examples by querying the model.
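
To illustrate the membership inference item above, here is a minimal sketch of the classic loss-threshold baseline: because models tend to fit their training examples more tightly, "low loss" is already a crude membership signal. The losses below are synthetic stand-ins for the per-example losses a real model would produce; all names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-example cross-entropy losses from some trained model:
# examples seen in training tend to have lower loss than unseen ones.
member_losses = rng.gamma(shape=2.0, scale=0.05, size=1000)      # training set
non_member_losses = rng.gamma(shape=2.0, scale=0.50, size=1000)  # held out

losses = np.concatenate([member_losses, non_member_losses])
is_member = np.concatenate([np.ones(1000, bool), np.zeros(1000, bool)])

# Attack: predict "member" whenever the loss falls below a threshold,
# e.g. the average loss the attacker observes on a small shadow set.
threshold = member_losses.mean()
predicted_member = losses < threshold

accuracy = (predicted_member == is_member).mean()
print(f"membership inference accuracy: {accuracy:.2%}")  # well above 50% chance
```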

A general approach that is commonly used to defend against such attacks is Differential Privacy (DP), which offers strong mathematical guarantees for the privacy of the individuals whose data is contained in a database.
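
To make this concrete, below is a minimal NumPy sketch of a single update step in the style of DP-SGD (Abadi et al.): each example's gradient is clipped to bound its influence, and Gaussian noise calibrated to that bound is added before the update. All values are illustrative, and a real implementation also needs a privacy accountant to track the cumulative (epsilon, delta) budget.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, lr, params, rng):
    """One differentially private update in the style of DP-SGD.

    per_example_grads: array of shape (batch, dim), one gradient per example.
    The two DP ingredients: clip each example's gradient to bound its
    influence (sensitivity), then add Gaussian noise calibrated to that bound.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / (norms + 1e-12))
    clipped = per_example_grads * scale                # per-example norm <= clip_norm

    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    noisy_mean = (clipped.sum(axis=0) + noise) / len(per_example_grads)
    return params - lr * noisy_mean

# Toy usage with random gradients (illustrative hyperparameters only).
rng = np.random.default_rng(0)
params = np.zeros(10)
grads = rng.normal(size=(32, 10))
params = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, lr=0.1,
                     params=params, rng=rng)
print(params)
```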

Methods To Prevent Privacy Breaches During Training and Inference

  • Secure Enclaves. An important line of work protects data while it is in use: enclaves execute machine learning workloads inside a memory region that is shielded from unauthorized access.
  • Homomorphic encryption. Machine learning models can be run on encrypted private data using homomorphic encryption, a cryptographic method that allows mathematical operations to be carried out on ciphertext rather than on the plaintext itself (a toy Paillier sketch follows this list).
  • Secure Federated Learning. The concept of federated learning was originally proposed by Google. The main idea is to build machine learning models from datasets distributed across multiple devices: many data owners train a model collectively without ever sharing their private data (see the FedAvg sketch after this list).
  • Secure multi-party computation. Secure multi-party computation (MPC) lets several parties jointly compute over their private inputs without revealing those inputs to one another, which allows a large volume of training data to stay distributed among many parties. For example, Secure Decentralized Training Frameworks (SDTF) create a decentralized training setting that needs no trusted third-party server while keeping local data private at a low communication-bandwidth cost (a minimal secret-sharing sketch also follows this list).
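
To make the homomorphic property concrete, below is a toy Paillier cryptosystem in pure Python. Paillier is additively homomorphic: multiplying two ciphertexts yields a ciphertext of the sum of the plaintexts. The tiny primes and lack of hardening make this a teaching sketch only; production systems use vetted libraries.

```python
import math, random

def keygen(p=293, q=433):
    """Toy Paillier key generation with tiny primes (illustration only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    # With g = n + 1, decryption simplifies: g^lam mod n^2 = 1 + lam*n.
    mu = pow(lam, -1, n)
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    (n,) = pub
    g = n + 1
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    n, lam, mu = priv
    x = pow(c, lam, n * n)
    L = (x - 1) // n
    return (L * mu) % n

pub, priv = keygen()
c1, c2 = encrypt(pub, 17), encrypt(pub, 25)
c_sum = (c1 * c2) % (pub[0] ** 2)    # homomorphic addition on ciphertexts
assert decrypt(priv, c_sum) == 42    # 17 + 25, computed without decrypting inputs
print("decrypted sum:", decrypt(priv, c_sum))
```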
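Federated averaging (FedAvg), the canonical federated learning algorithm, is straightforward to sketch: each client takes a few gradient steps on its own data, and the server averages the returned weights. The linear-regression clients below are hypothetical stand-ins for real devices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal FedAvg sketch: three clients fit a shared linear model w on
# private data; only model weights ever leave a client.
true_w = np.array([2.0, -1.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 3))            # private local features
    y = X @ true_w + rng.normal(0, 0.1, 100)  # private local labels
    clients.append((X, y))

def local_update(w, X, y, lr=0.05, steps=10):
    """A few local SGD steps on one client's private data."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w = w - lr * grad
    return w

w_global = np.zeros(3)
for _ in range(20):
    # Each client trains locally; the server averages the returned weights.
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)

print("recovered weights:", np.round(w_global, 2))  # close to true_w
```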
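The core MPC primitive is also easy to demonstrate. In additive secret sharing, a value is split into random shares so that no single party learns anything, yet additions can be computed share-wise; the modulus and party count below are illustrative.

```python
import random

# Additive secret sharing over Z_P: split x into n random shares that
# sum to x mod P. Any n-1 shares look uniformly random, so no single
# party learns the secret; sums can be computed share-wise.
P = 2**61 - 1  # a large prime modulus

def share(x, n_parties):
    shares = [random.randrange(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two inputs are secret-shared among three compute parties...
a_shares = share(100, 3)
b_shares = share(23, 3)
# ...and each party adds its two shares locally, never seeing the inputs.
sum_shares = [(a + b) % P for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 123  # the sum is revealed, the inputs never were
print(reconstruct(sum_shares))
```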

Methods for Privacy-Preserving Deep Learning in Visual Data

  • Image obfuscation. Several methods have been developed to sanitize and anonymize sensitive visual data, including blacking out, pixelization (or mosaicing), and blurring. However, these traditional techniques are deterministic, and well-trained neural networks can undo them: in experiments, up to 96% of obfuscated faces could be re-identified, and even with faces blacked out entirely, body and scene features sufficed to re-identify 70% of the people. In response, new image obfuscation methods were developed based on metric privacy, a rigorous privacy notion generalized from differential privacy. By extending the standard differential privacy notion to image data, these methods allow sharing pixelized images with rigorous guarantees that protect individuals, objects, or their features (a sketch of differentially private pixelization follows this list).
  • Removal of moving objects. An alternative to blurring is to automatically remove moving objects (e.g., pedestrians, vehicles) from Google Street View imagery and fill in the gap. A moving-object segmentation algorithm detects and removes the objects, then inpaints them with information from other views, producing a realistic output image in which the moving object is no longer visible.
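
Below is a hedged sketch of differentially private pixelization in the spirit of DP-Pix (Fan, 2018): each b-by-b block is replaced by its average plus Laplace noise scaled to the block's sensitivity. The block size, budget eps, and neighborhood parameter m are illustrative choices, not the paper's exact settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_pixelize(img, b=8, eps=0.5, m=16):
    """Pixelize a grayscale image with Laplace noise (DP-Pix-style sketch).

    img: 2D uint8 array; b: block size; eps: privacy budget;
    m: neighboring images may differ in up to m pixels, so each
    b*b block average changes by at most 255*m/b**2 (the sensitivity).
    """
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    sensitivity = 255.0 * m / (b * b)
    for i in range(0, h, b):
        for j in range(0, w, b):
            block = img[i:i + b, j:j + b].astype(float)
            noisy = block.mean() + rng.laplace(0.0, sensitivity / eps)
            out[i:i + b, j:j + b] = noisy
    return np.clip(out, 0, 255).astype(np.uint8)

# Toy usage on a random "image"; real inputs would be face crops, etc.
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
private = dp_pixelize(img)
```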

Some recent datasets contain sanitized visual data. For example, nuScenes is an autonomous driving dataset in which faces and license plates are detected and then blurred, and AViD, a public video dataset for action recognition, anonymizes all faces to protect identities.

Recovering Sensitive Information in Images

  • Limits of Face Obfuscation. Face obfuscation provides no formal guarantee of visual privacy: in certain cases, both humans and machines can infer an individual's identity from face-blurred images, presumably relying on contextual cues such as height and clothing. Obfuscation techniques such as mosaicing, pixelation, blurring, and P3 (encryption of the significant coefficients in the JPEG representation of the image) can be defeated with artificial neural networks.
  • Preventing Anti-Obfuscation. Some methods try to protect sensitive image regions against anti-obfuscation attacks, for example by adversarially perturbing the image to degrade a recognizer's performance (a minimal sketch follows this list). However, such perturbations usually work only against specific recognizers, may not fool humans, and provide no formal privacy guarantee either.
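
As an illustration of such adversarial perturbation, here is a minimal sketch in the style of the fast gradient sign method (FGSM): pixels are nudged along the sign of the recognizer's loss gradient to push its prediction away from the true identity. The tiny linear "recognizer" below is a stand-in; a real attack would target an actual face recognition model.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in recognizer: flatten a 32x32 crop and classify into 10 identities.
recognizer = torch.nn.Sequential(
    torch.nn.Flatten(), torch.nn.Linear(32 * 32, 10)
)

image = torch.rand(1, 1, 32, 32, requires_grad=True)  # toy face crop
identity = torch.tensor([3])                          # true identity label

# Gradient of the recognizer's loss with respect to the input pixels.
loss = F.cross_entropy(recognizer(image), identity)
loss.backward()

# FGSM step: move each pixel in the direction that increases the loss.
eps = 0.05  # perturbation budget; larger is stronger but more visible
protected = (image + eps * image.grad.sign()).clamp(0, 1).detach()
# 'protected' degrades this recognizer's accuracy, but the caveats above
# apply: it may not transfer to other recognizers or fool humans.
```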

What’s Next?

Deep learning methods and their need for massive amounts of visual data pose serious privacy concerns, as this data can be misused. We reviewed the privacy risks raised by deep learning and the mitigation techniques that address them.

In the near future, as deep learning applications move into production, we expect privacy-preserving deep learning for computer vision to become a major concern with commercial impact.
