Recommended reading

I recommend the following references to students interested in artificial intelligence research.

My current research focuses on developing principled yet scalable Bayesian reinforcement learning methods. A typical Bayesian reinforcement learning method represents its knowledge about an environment as a probability distribution over models of the environment's one-step dynamics and uses this knowledge to seek optimal actions. If you would like to pursue a PhD under my supervision, please skim the survey highlighted below and my selected publications.
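To give a flavour of the idea in the simplest possible setting, here is a minimal sketch of maintaining a posterior over an environment model and acting by sampling from it (Thompson sampling on a Bernoulli bandit, a one-state environment). All names and parameters are illustrative; this is not taken from any of the works cited below.

```python
import random

def thompson_sampling(true_means, steps, seed=0):
    """Bernoulli bandit where the unknown "model" is each arm's success
    probability. Knowledge is a Beta posterior per arm; acting on a model
    sampled from the posterior is one principled way to trade off
    exploration and exploitation."""
    rng = random.Random(seed)
    n_arms = len(true_means)
    # Beta(1, 1) (uniform) prior over each arm's success probability.
    alpha = [1.0] * n_arms
    beta = [1.0] * n_arms
    total_reward = 0
    for _ in range(steps):
        # Sample a model from the posterior and act greedily with respect to it.
        sampled = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        arm = max(range(n_arms), key=lambda i: sampled[i])
        reward = 1 if rng.random() < true_means[arm] else 0
        # Bayesian posterior update for the chosen arm.
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward

print(thompson_sampling([0.3, 0.7], steps=1000))
```

Full Bayesian reinforcement learning generalises this picture to multi-state dynamics models; the survey below covers that in depth.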

PhD level

  • Artificial intelligence:
    • S. Legg. “Machine Super Intelligence”. PhD thesis, 2008.
  • Bayesian reinforcement learning (notes):
    • M. Ghavamzadeh, S. Mannor, J. Pineau, A. Tamar. “Bayesian Reinforcement Learning: A Survey”. Foundations and Trends in Machine Learning, 2015.
  • Measure-theoretic probability (notes):
    • R.G. Bartle. “The Elements of Integration and Lebesgue Measure”. John Wiley & Sons, 1995.
    • D. Williams. “Probability with Martingales”. Cambridge University Press, 1991.
  • Reinforcement learning theory (notes):
    • T. Lattimore, C. Szepesvári. “Bandit Algorithms”. Cambridge University Press, 2020.

MSc level

  • Machine learning (notes and notes):
    • C. Bishop. “Pattern Recognition and Machine Learning”. Springer-Verlag, 2006.
    • K.P. Murphy. “Probabilistic Machine Learning: An Introduction”. MIT Press, 2022.
    • M.J. Kochenderfer, T.A. Wheeler, K.H. Wray. “Algorithms for Decision Making”. MIT Press, 2022.
    • D. Koller, N. Friedman. “Probabilistic Graphical Models: Principles and Techniques”. MIT Press, 2009.
  • Neural networks (notes):
    • S.J.D. Prince. “Understanding Deep Learning”. MIT Press, 2023.
  • Reinforcement learning (notes):
    • D.P. Bertsekas. “A Course in Reinforcement Learning”. Athena Scientific, 2023.

BSc level

  • Programming:
    • E. Matthes. “Python Crash Course”. No Starch Press, 2023.
    • K.N. King. “C Programming: A Modern Approach”. W. W. Norton & Company, 2008.
  • Computer design:
    • D.A. Patterson, J.L. Hennessy. “Computer Organization and Design: RISC-V Edition”. Morgan Kaufmann, 2020.
  • Mathematical proof:
    • D.J. Velleman. “How to Prove It: A Structured Approach”. Cambridge University Press, 2019.
  • Calculus (notes):
    • J. Stewart. “Calculus: Early Transcendentals”. Brooks/Cole, 2011.
  • Algorithms:
    • T.H. Cormen, C.E. Leiserson, R.L. Rivest, C. Stein. “Introduction to Algorithms”. MIT Press, 2022.
  • Theory of computation:
    • M. Sipser. “Introduction to the Theory of Computation”. Course Technology, 2012.
  • Analysis:
    • S. Abbott. “Understanding Analysis”. Springer, 2015.
  • Linear algebra (notes):
    • S. Axler. “Linear Algebra Done Right”. Springer, 1997.
  • Probability (notes, Sec. 2):
    • D.P. Bertsekas, J.N. Tsitsiklis. “Introduction to Probability”. Athena Scientific, 2008.
  • Artificial intelligence:
    • S. Russell, P. Norvig. “Artificial Intelligence: A Modern Approach”. Pearson, 2021.
  • Reinforcement learning (notes):
    • R.S. Sutton, A.G. Barto. “Reinforcement Learning: An Introduction”. MIT Press, 2018.