Sample Complexity Bounds for Differentially Private Learning

This video was recorded at the 24th Annual Conference on Learning Theory (COLT), Budapest, 2011.

We study the problem of privacy-preserving classification: learning a classifier from sensitive data while still preserving the privacy of the individuals in the training set. In particular, we require that our learning algorithm guarantee differential privacy, a very strong notion of privacy that has gained significant attention over the past few years. A natural question to ask is: how many examples does a learning algorithm need in order to guarantee a given level of privacy and accuracy? In this paper, we study this question for infinite hypothesis classes when the data is drawn from a continuous distribution. We show that even for very simple hypothesis classes, any algorithm that uses a finite number of examples and guarantees differential privacy fails to guarantee classification accuracy on at least one unlabeled data distribution. This stands in contrast to the case of finite hypothesis classes and hypothesis classes over discrete data domains, which were studied by Kasiviswanathan et al. (2008).

We then propose two approaches to differentially private learning that circumvent this lower bound. The first is to use prior knowledge about the unlabeled data distribution, in the form of a reference distribution U chosen independently of the sensitive data. Given such a reference U, we provide an upper bound on the sample requirement that depends (among other things) on a measure of closeness between U and the unlabeled data distribution. This upper bound applies to the realizable as well as the non-realizable case. The second approach is to relax the privacy requirement to label privacy only: the labels, but not the unlabeled parts of the examples, are considered sensitive information. An upper bound on the sample requirement of learning with label privacy was shown by Chaudhuri et al. (2006); in this paper, we show a lower bound.
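For context, the differential privacy guarantee the abstract refers to is the standard one of Dwork et al. (2006); the statement below is that standard definition, not text from the talk itself. A randomized learning algorithm $A$ is $\varepsilon$-differentially private if, for every pair of training sets $S$ and $S'$ differing in a single individual's record, and for every set $O$ of possible outputs,

    \Pr[A(S) \in O] \le e^{\varepsilon} \cdot \Pr[A(S') \in O].

Intuitively, the distribution over output classifiers is nearly unchanged whether or not any one individual's data is included, so the published classifier reveals little about any single record.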
