TIME/VENUE
MW 3:30 – 4:50 pm EST
ONLINE
INSTRUCTOR
Prakash Narayan
Room AVW 2353
Phone: (301) 405 3661
Email: prakash@umd.edu
OFFICE HOURS
MW 2:00 – 3:15 pm (but please make an appointment in advance, to avoid idle Zoom sessions). Also at other times by appointment.
OVERVIEW
The course covers an assortment of information-theoretic methods of interest in statistical inference and learning. Topics include: (i) information geometry, leading to the EM algorithm and its application to maximum likelihood estimation; (ii) measure concentration methods; (iii) correlated multiarmed bandits, including parameter estimation and the learning of probability distributions from partially sampled observations (finding an arm, or a small set of arms, that yields information about other correlated arms); and (iv) tradeoffs between data privacy and function computation utility.
PREREQUISITES
ENEE 620 (Random Processes) or equivalent, ENEE 627 (Information Theory), or permission of the instructor.
COURSE OUTLINE
I. Information geometry
Kullback–Leibler divergence, the Pythagorean inequality, I-projection onto linear families, an iterative algorithm for finding the minimum divergence between two convex sets of distributions, the EM algorithm, and its application to maximum likelihood estimation.
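As a taste of the EM material in this unit, here is a minimal Python sketch (illustrative only, not course code) of EM for a two-component Gaussian mixture with known unit variances; the E-step/M-step alternation is an instance of the alternating-minimization view of EM:

```python
import numpy as np

def em_gaussian_mixture(x, iters=200):
    """EM for a two-component 1-D Gaussian mixture with unit variances.

    Each E-step/M-step pair never decreases the log-likelihood, which is
    the alternating-minimization property covered in this unit.
    """
    mu = np.array([x.min(), x.max()], dtype=float)  # crude initialization
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2) / np.sqrt(2 * np.pi)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture weights and means from responsibilities
        pi = resp.mean(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / resp.sum(axis=0)
    return pi, mu

# Synthetic data from two well-separated components (means -2 and 3)
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 1, 500), rng.normal(3, 1, 500)])
pi, mu = em_gaussian_mixture(x)
```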
II. Concentration of measure
Concentration inequalities as basic tools: Markov, Chebyshev, Chernoff bounding, Hoeffding, Bennett, Bernstein, Efron–Stein, Azuma; the entropy method, tensorization, Han's inequalities for entropy and divergence, logarithmic Sobolev inequalities, Vapnik–Chervonenkis dimension and entropy. Some applications.
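To see one of these inequalities at work, here is a small illustrative Python experiment (not course material) comparing the empirical deviation probability of the sample mean of bounded variables with the Hoeffding bound 2·exp(−2nt²):

```python
import numpy as np

def hoeffding_bound(n, t):
    # Hoeffding: P(|S_n/n - E[X]| >= t) <= 2 exp(-2 n t^2) for X_i in [0, 1]
    return 2 * np.exp(-2 * n * t ** 2)

rng = np.random.default_rng(0)
n, t, trials = 200, 0.1, 20000
# X_i ~ Uniform[0, 1], so E[X] = 1/2; simulate many sample means of size n
means = rng.random((trials, n)).mean(axis=1)
empirical = np.mean(np.abs(means - 0.5) >= t)
bound = hoeffding_bound(n, t)
# The empirical deviation frequency never exceeds the Hoeffding bound.
```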
III. Correlated multiarmed bandits
Prediction and parameter estimation problems will be considered in which an estimator can assess its own reward or loss from local measurements but is unaware of the consequences of estimation based on unseen measurements. However, correlation among measurements affords new latitude in learning. Both stochastic and nonstochastic (adversarial) multiarmed bandit problems will be considered.
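For a concrete feel for the stochastic setting, the following is an illustrative Python sketch (not course code) of the classical UCB1 index policy on independent Bernoulli arms; the correlated and structured variants studied in this unit build on such baselines:

```python
import numpy as np

def ucb1(means, horizon, rng):
    """UCB1 on Bernoulli arms: pull each arm once, then always pull the arm
    with the highest upper confidence index. Returns pull counts per arm."""
    k = len(means)
    counts = np.zeros(k)
    totals = np.zeros(k)
    for t in range(horizon):
        if t < k:
            arm = t  # initialization round: pull every arm once
        else:
            # empirical mean plus exploration bonus sqrt(2 ln t / n_i)
            ucb = totals / counts + np.sqrt(2 * np.log(t + 1) / counts)
            arm = int(np.argmax(ucb))
        totals[arm] += rng.random() < means[arm]  # Bernoulli reward
        counts[arm] += 1
    return counts

rng = np.random.default_rng(0)
counts = ucb1([0.3, 0.5, 0.8], horizon=5000, rng=rng)
# The best arm (mean 0.8) accumulates the vast majority of the pulls.
```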
IV. Data Privacy
Various notions of data privacy: differential privacy, distribution privacy, divergence-based privacy; tradeoffs between data privacy and function computation utility.
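As a simple illustration of the privacy–utility tradeoff, here is a Python sketch (illustrative only) of the Laplace mechanism of Dwork et al.: adding Laplace noise of scale sensitivity/ε to a query yields ε-differential privacy, with the noise variance, and hence the utility loss, growing as ε shrinks:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value plus Laplace noise of scale sensitivity/epsilon,
    which gives epsilon-differential privacy for a query with the given
    global sensitivity."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Counting query over a database: sensitivity 1, since changing one record
# changes the count by at most 1. Repeated releases show the noise is
# zero-mean with standard deviation (sensitivity/epsilon) * sqrt(2).
rng = np.random.default_rng(0)
true_count = 120
releases = [laplace_mechanism(true_count, 1.0, 0.5, rng) for _ in range(5000)]
```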
COURSE GRADE
The course grade will be determined on the basis of individual or small-team projects and in-class presentations. Specifically, an individual's or a team's performance will be evaluated by means of (i) a midterm assessment of progress on a term project and (ii) a final assessment of the completed term project. A term project can consist of (a) work on a chosen or assigned topic involving open issues or (b) a critical examination of a pertinent topic or topics in the existing literature, in both cases combined with a comprehensive oral presentation at the end of the semester.
Problems to be addressed in a term project will be fixed approximately four (4) weeks into the semester.
REFERENCES
There is no required or recommended text. The course material will be drawn largely from the books and publications listed below. Some of this material (marked with (*)) can be found here.
0. Sine qua non
 (*) I. Csiszár and J. Körner, Information Theory: Coding Theorems for Discrete Memoryless Systems, 2nd ed., Cambridge U Press, 2011. (A PDF version of the first edition (Academic Press, 1981) is posted.)
 T.M. Cover and J. Thomas, Elements of Information Theory, Wiley, New York, 1991.
I. Information geometry
 D. Barber, Bayesian Reasoning and Machine Learning, Chapter 11, Cambridge U Press, 2012.
 (*) I. Csiszár and G. Tusnády, “Information geometry and alternating minimization procedures,” Statistics and Decisions, Suppl. Issue 1, pp. 205–237, 1984.
 (*) I. Csiszár and P. Shields, “Information Theory and Statistics: A Tutorial,” Foundations and Trends in Communications and Information Theory, Volume 1, Issue 4, 2004.
 (*) A. Kumar and R. Sundaresan, “Minimization Problems Based on Relative α-Entropy I: Forward Projection,” IEEE Trans. Information Theory, vol. 61, no. 9, pp. 5063–5080, September 2015.
 (*) A. Kumar and R. Sundaresan, “Minimization Problems Based on Relative α-Entropy II: Reverse Projection,” IEEE Trans. Information Theory, vol. 61, no. 9, pp. 5081–5095, September 2015.
 S. Shalev-Shwartz and S. Ben-David, Understanding Machine Learning: From Theory to Algorithms, Chapter 24, Cambridge U Press, 2014.
III. Multiarmed bandits
 J. Y. Audibert, S. Bubeck and R. Munos, “Best Arm Identification in Multi-armed Bandits,” Conference on Learning Theory, pp. 41–53, 2010.
 (*) S. Bubeck and N. Cesa-Bianchi, “Regret Analysis of Stochastic and Nonstochastic Multi-armed Bandit Problems,” https://arxiv.org/abs/1204.5721v2.
 N. Cesa-Bianchi and G. Lugosi, Prediction, Learning, and Games, Cambridge U Press, New York, 2006.
 C. Guestrin, A. Krause and A. P. Singh, “Near-Optimal Sensor Placements in Gaussian Processes,” International Conference on Machine Learning, pp. 265–272, 2005.
 (*) S. Gupta, S. Chaudhari, S. Mukherjee, G. Joshi and O. Yagan, “A Unified Approach to Translate Classical Bandit Algorithms to the Structured Bandit Setting,” IEEE Journal on Selected Areas in Information Theory, vol. 1, no. 3, pp. 840–853, Nov. 2020.
 (*) S. Gupta, G. Joshi and O. Yagan, “Best-Arm Identification in Correlated Multi-Armed Bandits,” preprint, 2020.
 (*) K. Jamieson and R. Nowak, “Best-arm Identification Algorithms for Multi-armed Bandits in the Fixed Confidence Setting,” 48th Annual Conference on Information Sciences and Systems (CISS), Princeton, NJ, pp. 1–6, 2014.
 E. Kaufmann, O. Cappé and A. Garivier, “On the Complexity of Best-Arm Identification in Multi-armed Bandit Models,” Journal of Machine Learning Research, 2015.
 C. Y. Liu and S. Bubeck, “Most Correlated Arms Identification,” Conference on Learning Theory, pp. 623–637, 2014.
IV. Data privacy
 (*) S. Asoodeh, M. Diaz, F. Alajaji and T. Linder, “Estimation Efficiency Under Privacy Constraints,” IEEE Trans. Information Theory, vol. 65, no. 3, pp. 1512–1534, March 2019.
 (*) C. Dwork, “Differential Privacy,” Automata, Languages and Programming (ICALP 2006), Lecture Notes in Computer Science, vol. 4052, Springer, Berlin, Heidelberg, 2006.
 (*) R. Bassily, A. Groce, J. Katz and A. Smith, “Coupled-Worlds Privacy: Exploiting Adversarial Uncertainty in Statistical Data Privacy,” FOCS, 2013.
 (*) C. Dwork, F. McSherry, K. Nissim and A. Smith, “Calibrating Noise to Sensitivity in Private Data Analysis,” Theory of Cryptography Conference (TCC), pp. 265–284, 2006.
 (*) Q. Geng and P. Viswanath, “The Optimal Noise-Adding Mechanism in Differential Privacy,” IEEE Trans. Information Theory, vol. 62, no. 2, pp. 925–951, Feb. 2016.
 (*) Q. Geng, P. Kairouz, S. Oh and P. Viswanath, “The Staircase Mechanism in Differential Privacy,” IEEE Journal of Selected Topics in Signal Processing, vol. 9, no. 7, pp. 1176–1184, Oct. 2015.
 (*) C. Huang, P. Kairouz, X. Chen, L. Sankar and R. Rajagopal, “Context-Aware Generative Adversarial Privacy,” Entropy, vol. 19, no. 12, 2017.
 (*) I. Issa, A. B. Wagner and S. Kamath, “An Operational Approach to Information Leakage,” IEEE Trans. Information Theory, vol. 66, no. 3, pp. 1625–1657, March 2020.
 (*) A. Nageswaran and P. Narayan, “Data Privacy for a ρ-Recoverable Function,” IEEE Trans. Information Theory, vol. 65, no. 6, pp. 3470–3488, June 2019.
 (*) A. Smith, “Efficient, Differentially Private Point Estimators,” arXiv:0809.4794v1, Sep 2008.
