The Applied Computational Linear Algebra for Everyone course is designed to equip you with the knowledge you need to link the math of linear algebra to code, through a few "must know" applications centered on different ways of casting and fitting a system of equations and revealing structure in a matrix.
Mastering computational linear algebra by linking math with code will help you in all of the computational sciences -- see here for how it can help in many fields, including computer science.
You'll learn by programming from scratch in a hands-on manner, using a one-of-a-kind cloud-based interactive computational textbook that guides you, and checks your progress, step by step. Using real-world datasets and datasets of your choosing, you will understand, and we will discuss, via computational discovery and critical reasoning, the strengths and limitations of the algorithms and whether those limitations can be overcome.
By the end of the course, you will be able to recognize and use linear algebra concepts as they arise in machine learning and data science.
Since you'll learn by doing (via coding), you'll spend quite a bit of time writing and debugging code that doesn't work yet. A basic facility with (language-agnostic) programming syntax and computational reasoning is therefore invaluable. The rest you will learn in the course itself, i.e., you don't have to be a Java whiz, but you do need to have used Python, MATLAB or R.
Prof. Nadakuditi is an Associate Professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. He received his Master's and PhD in Electrical Engineering and Computer Science at MIT as part of the MIT/WHOI Joint Program in Ocean Science and Engineering.
In addition to receiving the Jon R. and Beverly S. Holt Award for Excellence in Teaching, Prof. Nadakuditi has received the DARPA Directors Award, DARPA Young Faculty Award, IEEE Signal Processing Society Best Young Author Paper Award, Office of Naval Research Young Investigator Award, and the Air Force Research Laboratory Young Faculty Award.
His graduate level course, Computational Data Science and Machine Learning, attracts hundreds of students from 80+ disciplines across the University. He loves making machine learning accessible to learners from all disciplines and enjoys seeing how students adapt the underlying ideas and develop creative, new applications in their own scientific or engineering areas of expertise.
This offering has evolved from many years of the instructor teaching Computational Data Science and Machine Learning at the University of Michigan, MIT Lincoln Laboratory and the Air Force Research Laboratory (AFRL).
The syllabus distills the linear algebra elements necessary to take more advanced courses in computational science and engineering that require linear algebra as a prerequisite.
Over the years of teaching this course at U-M, the instructor has derived tremendous satisfaction from watching students from a wide range of disciplines discover how beautiful math leads to beautiful code and to applications that seem magical the first time the math and code come together to do something remarkable, as in the many applications we will showcase. That's a big part of the fun of the underlying subject matter, and we hope you leave with that sense of wonder, too.
An introduction to the Julia programming language. Introduces students to variables, arrays, functions, and everything else that they need to succeed!
Introduction to matrix math and linear algebra. Learn about vectors, matrices, arrays, and various operations on these objects.
Intro to convolution and expressing convolution as a matrix-vector product.
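To fix ideas, here is a minimal sketch (in Julia, with illustrative names rather than the course's own code) of a 1-D convolution written as a matrix-vector product:

```julia
using LinearAlgebra

# Build the (n + m - 1) × n matrix H such that H * x equals the full
# zero-padded convolution of the length-m kernel h with the length-n signal x.
function conv_matrix(h::AbstractVector, n::Integer)
    m = length(h)
    H = zeros(eltype(h), n + m - 1, n)
    for j in 1:n, i in 1:m
        H[i + j - 1, j] = h[i]
    end
    return H
end

h = [1.0, -1.0]                 # a first-difference kernel
x = [2.0, 4.0, 7.0, 3.0]
H = conv_matrix(h, length(x))
y = H * x                       # [2.0, 2.0, 3.0, -4.0, -3.0]
```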
Introduction to normal and non-normal matrices and the spectral theorem for normal matrices. Introduction to the eigenvalue decomposition and the singular value decomposition and their variational characterizations via eigshow and svdshow.
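As a small numerical illustration (not the eigshow/svdshow demos themselves), one might check the spectral theorem for a symmetric, hence normal, matrix like this:

```julia
using LinearAlgebra

A = Symmetric([2.0 1.0; 1.0 3.0])       # symmetric, hence normal
F = eigen(A)
@show F.values                          # real eigenvalues
@show norm(F.vectors' * F.vectors - I)  # ≈ 0: orthonormal eigenvectors

S = svd(Matrix(A))
@show S.S                               # for this positive definite A, the singular
                                        # values equal the eigenvalues in reverse order
```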
Introduction to vector spaces, subspaces and the four fundamental subspaces of a matrix. Discussion of basis vectors for subspaces and how the SVD of a matrix reveals these bases. Orthogonal projection matrices and how to efficiently compute the projection of a vector onto a matrix subspace without first computing and storing the associated projection matrix.
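A minimal sketch of the idea, assuming a tall matrix whose column space is the subspace of interest (names are illustrative):

```julia
using LinearAlgebra

A = randn(1000, 5)            # tall matrix; col(A) is a 5-dimensional subspace
b = randn(1000)

U = svd(A).U                  # orthonormal basis for the column space, from the SVD
p = U * (U' * b)              # projection of b onto col(A): two thin products,
                              # no 1000 × 1000 projection matrix is ever formed

p_check = A * ((A' * A) \ (A' * b))   # same projection via the normal equations
@show norm(p - p_check)       # ≈ 0
```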
First difference matrix construction and the role of the Kronecker product and sparse constructions thereof.
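For instance, a sparse first-difference matrix and its 2-D extension via the Kronecker product might be sketched as follows (an illustrative construction, not the course's code):

```julia
using LinearAlgebra, SparseArrays

# Sparse (n-1) × n first-difference matrix: (D * x)[i] = x[i+1] - x[i]
first_diff(n) = sparse([1:n-1; 1:n-1], [1:n-1; 2:n],
                       [fill(-1.0, n-1); fill(1.0, n-1)], n-1, n)

@show first_diff(4) * [1.0, 4.0, 9.0, 16.0]       # [3.0, 5.0, 7.0]

# 2-D differences of an m × n image (stacked column by column) via Kronecker products
m, n = 32, 48
D_across = kron(first_diff(n), sparse(I, m, m))   # differences across columns
D_down   = kron(sparse(I, n, n), first_diff(m))   # differences down each column
```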
Stochastic gradient descent and Nesterov's accelerated method. Application: photometric stereo reconstruction using these algorithms.
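As context, here is a plain (non-stochastic, non-accelerated) gradient descent sketch on a least-squares objective; the step size and iteration count are illustrative only:

```julia
using LinearAlgebra

# Plain gradient descent on the least-squares objective f(x) = 0.5 * ||A*x - b||^2,
# whose gradient is A' * (A*x - b).
function gradient_descent(A, b; steps = 500)
    x = zeros(size(A, 2))
    step = 1 / opnorm(A)^2            # a safe constant step size
    for _ in 1:steps
        x -= step * (A' * (A * x - b))
    end
    return x
end

A, b = randn(200, 10), randn(200)
x_gd = gradient_descent(A, b)
@show norm(x_gd - A \ b)              # shrinks toward 0 as steps grows
```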
How to set up and solve least squares problems of the form Ax = b. Applications include fitting a higher-order polynomial to data and predicting search query time series results after an appropriate non-linear transformation.
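A minimal polynomial-fitting sketch, assuming synthetic data and an illustrative degree:

```julia
using LinearAlgebra

# Fit a degree-d polynomial to samples (t, y) by least squares on a Vandermonde system.
t = range(0, 1, length = 50)
y = sin.(2pi .* t) .+ 0.1 .* randn(50)

d = 5
A = [ti^k for ti in t, k in 0:d]      # 50 × (d + 1) Vandermonde matrix
c = A \ y                             # least-squares coefficients, lowest degree first

yhat = A * c                          # fitted values at the sample points
@show norm(y - yhat)
```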
The Eckart-Young theorem and its consequences. Applications include image compression and image denoising.
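A hedged sketch of the best rank-k approximation via the truncated SVD, with a random matrix standing in for an image:

```julia
using LinearAlgebra

# Best rank-k approximation (Eckart-Young): keep only the k largest singular
# values and vectors. X stands in for a grayscale image.
X = rand(256, 256)
k = 20

U, S, V = svd(X)
Xk = U[:, 1:k] * Diagonal(S[1:k]) * V[:, 1:k]'

@show norm(X - Xk)                    # Frobenius error of the rank-k approximation
@show sqrt(sum(abs2, S[k+1:end]))     # matches; no rank-k matrix can do better
```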
How and why we need to regularize the solution of a system of equations of the form Ax = b. Applications include better fits of higher-order polynomials to data and image in-painting/graffiti removal with a first difference regularizer. Discussion of how the optimal regularization coefficient is selected.
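A minimal ridge (Tikhonov) regularization sketch, with an illustrative value of the regularization coefficient:

```julia
using LinearAlgebra

# Ridge (Tikhonov) regularization: minimize ||A*x - b||^2 + lambda * ||x||^2.
ridge(A, b, lambda) = (A' * A + lambda * I) \ (A' * b)

A, b = randn(30, 100), randn(30)      # underdetermined: infinitely many exact fits
x_ridge = ridge(A, b, 0.1)            # lambda = 0.1 is illustrative, not "optimal"
@show norm(A * x_ridge - b), norm(x_ridge)
```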
Learning to recognize sparsity in its canonical and transformed manifestations and seeing (computationally) how that helps regularize the solution of a system of equations of the form Ax = b in a regime where minimum norm least squares does not work well. Applications include compressed sensing, image in-painting/graffiti removal with a first difference regularizer and a discussion of how the optimal regularization coefficient is selected.
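One common way to exploit sparsity is iterative soft-thresholding (ISTA); a hedged sketch, with illustrative parameters, follows:

```julia
using LinearAlgebra

# Iterative soft-thresholding (ISTA) for min ||A*x - b||^2 + lambda * ||x||_1,
# one standard way to exploit sparsity. Parameters below are illustrative.
soft(z, t) = sign.(z) .* max.(abs.(z) .- t, 0)

function ista(A, b, lambda; steps = 3000)
    x = zeros(size(A, 2))
    step = 1 / opnorm(A)^2
    for _ in 1:steps
        x = soft(x - step * (A' * (A * x - b)), step * lambda)
    end
    return x
end

# Try to recover a sparse vector from far fewer measurements than unknowns.
x_true = zeros(200); x_true[rand(1:200, 5)] .= randn(5)
A = randn(40, 200)
b = A * x_true
x_hat = ista(A, b, 0.01)
@show norm(x_hat - x_true)    # small relative to norm(x_true) when recovery succeeds
```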
Convolution plays a role, directly or indirectly, in many data science techniques. Many seemingly complex image filters, for example, may be expressed elegantly using convolution.
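For example, a 3x3 box blur is nothing more than a 2-D convolution with a constant kernel; a minimal sketch (valid region only, illustrative names):

```julia
# A 3 × 3 box blur applied to a grayscale image by direct 2-D convolution.
function blur(img)
    m, n = size(img)
    out = similar(img, m - 2, n - 2)
    for i in 1:m-2, j in 1:n-2
        out[i, j] = sum(@view img[i:i+2, j:j+2]) / 9
    end
    return out
end

img = rand(100, 100)        # stand-in for a grayscale image
smoothed = blur(img)        # 98 × 98 smoothed image
```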
Refactoring matrix multiplication for the setting where the matrices are too large to fit in memory.
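A sketch of the blocked loop structure, shown here on in-memory matrices purely to illustrate the refactoring:

```julia
using LinearAlgebra

# Blocked matrix multiplication: C = A * B accumulated one block at a time.
# In the out-of-memory setting each block would be read from disk; here the
# matrices fit in RAM and only the loop structure is being illustrated.
function blocked_mul(A, B, bs)
    m, k = size(A); k2, n = size(B)
    @assert k == k2
    C = zeros(m, n)
    for j in 1:bs:n, p in 1:bs:k, i in 1:bs:m
        ii, jj, pp = i:min(i + bs - 1, m), j:min(j + bs - 1, n), p:min(p + bs - 1, k)
        C[ii, jj] .+= A[ii, pp] * B[pp, jj]
    end
    return C
end

A, B = randn(500, 300), randn(300, 400)
@assert blocked_mul(A, B, 64) ≈ A * B
```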
You may apply for the course here.
$149 per person for the self-guided 'book'. Apply and complete the screening module to learn more about the format.
There are several additional resources that we recommend. These resources may be used as a companion book or simply to supplement the concepts presented here.
There is so much to learn, and we are delighted that there are so many resources that present the material in slightly different ways -- all come together to help a learner form a more complete picture of the material. One can never really stop learning with how much there is to learn! (That's part of the fun for this author!)
Thanks in particular to Gil Strang for his encouragement, feedback and support, and for inspiring the idea behind the codices during the very special semester of Spring 2017 when we launched and taught 18.065 at MIT. Many thanks to Alan Edelman for years of encouragement and inspiration and for teaching me so much (including Julia). A learner experiencing this book by doing/coding might sometimes recognize their voices in the way I write and speak about the underlying math and code. That's no accident. This course is infused with their DNA and with years of me soaking in their thoughts and ideas on so many matters, particularly on how elegant math produces elegant code and vice versa. All they taught me about how to see math and linear algebra makes me love it, and want to share it with you in the codex way, even more.