The Continuum Jumpstart Course Computational Machine Learning (ML) for Scientists and Engineers is designed to equip you with the knowledge you need to understand, train, and design machine learning algorithms, particularly deep neural networks, and even deploy them on the cloud.
You’ll learn by programming machine learning algorithms from scratch, hands-on, using a one-of-a-kind cloud-based interactive computational textbook that guides you, and checks your progress, step by step. Using real-world datasets, including datasets of your own choosing, you will explore, through computational discovery and critical reasoning, the strengths and limitations of the algorithms and whether and how those limitations can be overcome. You will understand how machine learning algorithms do what they claim to do, so you can reproduce their results while being able to reason about, and spot, wild, unsupported claims of their efficacy.
By the end of the course, you will be ready to harness the power of machine learning in your daily job and prototype, we hope, innovative new ML applications for your company with datasets you alone have access to.
Since you’ll learn by doing (via coding), you’ll spend quite a bit of time writing and debugging code that doesn’t yet work. A basic facility with (language-agnostic) programming syntax and computational reasoning is therefore invaluable. The rest you will learn in the course itself, i.e., you don’t have to be a Java whiz, but you do need to have used Python, MATLAB, or R.
This course offers the opportunity to work in groups, remotely, or completely on your own. The choice is yours.
Visit the Continuum Jumpstart page to learn more about the logistics of the course. Click the Apply link in the About this course section to officially apply for the course.
Previous students have offered up some testimonials and advice for the course. A heartfelt thank you to everyone who has taken the course!
The level of interaction was absolutely phenomenal. I feel like I’m good at memorizing and so I do great in a classroom setting, but sometimes I would rely on my memory instead of fully understanding the material. In this course’s case, I found it easier to learn by writing the code and interacting with the codex.
I liked the pace of the course and the freedom to bring in so many different aspects of ML from the real software development world … such as COLAB, Nvidia, PyTorch, Julia, TensorFlow and Keras to the learning environment.
See more testimonials and advice on the official Continuum site.
Raj is an Associate Professor of Electrical Engineering and Computer Science at the University of Michigan, Ann Arbor. He received his Master's and PhD in Electrical Engineering and Computer Science from MIT as part of the MIT/WHOI Joint Program in Ocean Science and Engineering.
In addition to receiving the Jon R. and Beverly S. Holt Award for Excellence in Teaching, Prof. Nadakuditi has received the DARPA Directors Award, DARPA Young Faculty Award, IEEE Signal Processing Society Best Young Author Paper Award, Office of Naval Research Young Investigator Award, and the Air Force Research Laboratory Young Faculty Award.
His graduate-level course, Computational Data Science and Machine Learning, attracts hundreds of students from 80+ disciplines across the University. He loves making machine learning accessible to learners from all disciplines and enjoys seeing how students adapt the underlying ideas and develop creative, new applications in their own scientific or engineering areas of expertise.
This offering has evolved from many years of the instructor teaching Computational Data Science and Machine Learning at the University of Michigan, MIT Lincoln Laboratory and the Air Force Research Laboratory (AFRL). The computational tools at the heart of the Machine Learning (ML) revolution have only recently become as accessible as they now are. This allows scientists and engineers to harness their power without needing to become experts.
The syllabus distills elements from more advanced courses taught at the University of Michigan to give you just what you need to be able to understand, design and train a machine learning system from scratch and to deploy a working ML prototype on the cloud.
Over the years of teaching this course at U-M, the instructor has derived tremendous satisfaction from seeing students from a wide range of disciplines take these ideas and adapt them to their own application. Our sincerest hope is that scientists and engineers taking this course will do the same in their own areas of expertise and in doing so will usher in the next wave of the ML revolution.
An introduction to the Julia and Python programming languages. Introduces students to variables, arrays, functions, and everything else that they need to succeed!
Introduction to matrix math and linear algebra. Learn about vectors, matrices, arrays, and various operations on these objects. Just what you need to parse ML terminology in papers.
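To make the flavor of this module concrete, here is a minimal sketch (illustrative only, not course code) of the kind of matrix-vector operation the module covers, written in plain Python so it runs anywhere; in the course itself you would use Julia or NumPy arrays.

```python
# A matrix stored as a list of rows, multiplied by a vector.
def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1, 2],
     [3, 4]]
x = [1, 1]
print(matvec(A, x))  # [3, 7]
```

Each output entry is the dot product of one row of the matrix with the vector, which is exactly the notation you will see in ML papers.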
Learning to tell apart two classes.
Learning to tell apart multiple classes.
Learning to tell apart multiple classes with logistic regression. What logistic regression is about, why it differs from least squares, and what its loss function captures that the mean-squared-error loss in least squares does not.
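The difference between the two losses can be seen with a few lines of plain Python (a hedged sketch, not course code): for a true label y = 1, compare the logistic (cross-entropy) loss with the squared-error loss as the predicted probability p varies.

```python
import math

def cross_entropy(p, y):
    """Logistic-regression loss for predicted probability p and label y."""
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def squared_error(p, y):
    """Least-squares loss for the same prediction."""
    return (p - y) ** 2

# For a confidently wrong prediction (p = 0.01 when y = 1), the
# cross-entropy loss blows up while the squared error stays below 1.
for p in (0.9, 0.5, 0.01):
    print(p, round(cross_entropy(p, 1), 3), round(squared_error(p, 1), 3))
```

The cross-entropy loss penalizes confident mistakes far more heavily than squared error does, which is one reason it better matches the classification setting.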
How decision theory (e.g., the ROC curve) facilitates the systematic comparison of different algorithms for classification. The importance of understanding the tradeoff between the probability of false alarm and the probability of correct classification.
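The mechanics behind an ROC curve fit in a few lines of plain Python (a sketch with made-up scores and labels, not course code): sweep a decision threshold over classifier scores and record the (false alarm, detection) probabilities at each setting.

```python
# Hypothetical classifier scores and ground-truth labels (1 = target present).
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.2]
labels = [0,   0,   1,    1,   1,    0]

def roc_point(threshold):
    """Return (P_false_alarm, P_detection) at a given decision threshold."""
    fa  = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    det = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    return fa / labels.count(0), det / labels.count(1)

# Sweeping the threshold traces out the ROC curve, from (1, 1) at a
# threshold of 0 (declare everything a target) to (0, 0) at a high
# threshold (declare nothing a target).
for t in (0.0, 0.3, 0.5, 0.9):
    print(t, roc_point(t))
```

Every point on the curve is a different operating tradeoff between false alarms and correct detections; comparing curves is how decision theory compares classifiers.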
Use Flux.jl to quickly design and train 1-D and 2-D deep feedforward neural nets. See the power of deep nets. Applications include handwritten digit recognition and 1-D signal recognition.
Becoming multi-lingual by learning to train networks in PyTorch and Keras (TensorFlow).
Use Flux.jl to quickly design and train convolutional neural nets. Why they are needed and how to critically examine what it means for conv. nets to be "shift-invariant". Applications include handwritten digit recognition and 1-D signal recognition. How to go from object classification to object detection and localization.
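One experiment you can run in a few lines of plain Python (a sketch, not course code) when critically examining "shift-invariance": a 1-D convolution is actually shift-equivariant, meaning that shifting the input shifts the output by the same amount rather than leaving it unchanged.

```python
def conv1d(x, w):
    """Valid-mode 1-D cross-correlation of signal x with filter w."""
    n = len(w)
    return [sum(w[j] * x[i + j] for j in range(n)) for i in range(len(x) - n + 1)]

x = [0, 0, 1, 2, 1, 0, 0, 0]   # a small bump
w = [1, -1]                    # a difference filter
shifted = [0] + x[:-1]         # the same bump, shifted right by one sample

print(conv1d(x, w))
print(conv1d(shifted, w))      # the same response, shifted by one
```

Invariance only appears once pooling or other downsampling is added on top, which is exactly the kind of claim this module teaches you to probe.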
Becoming multi-lingual by learning to train conv networks in PyTorch and Keras (TensorFlow).
How to use neural networks in the context of regression and function approximation. Examples include fitting multi-sensory data and deep generative nets.
How a network can fail under adversarial attacks and how to train a network to be robust to them.
How to use transfer learning to train networks with layers from a pre-trained nets.
The importance of augmenting training datasets (e.g., by deforming, rotating, scaling, or color-equalizing an image) to encode in the network the invariances we wish to learn that are representative of test conditions.
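The core idea of augmentation fits in a few lines of plain Python (an illustrative sketch, not course code): generate transformed copies of a tiny "image" (here a 2-D list) so the training set itself exhibits the invariances we want the network to learn.

```python
def hflip(img):
    """Mirror an image (list of rows) left-to-right."""
    return [row[::-1] for row in img]

def rot90(img):
    """Rotate an image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

image = [[1, 2],
         [3, 4]]

# One labeled example becomes several: the original plus transformed copies.
augmented = [image, hflip(image), rot90(image)]
for im in augmented:
    print(im)
```

Real pipelines apply the same idea to full-size images (and add scaling, deformation, and color changes), but the principle is identical: each copy keeps its label while varying the nuisance factors.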
Classifying different types of fruits from images.
Classifying different hand gestures from images.
Classifying wine categories based on chemical properties.
A classification task using data for an application of your choosing.
How to deploy a trained neural network model on the Amazon AWS Lambda Service.
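To give a feel for what deployment involves, here is a hedged sketch of the shape of a Lambda handler. The `(event, context)` entry-point signature is what the Lambda Python runtime expects; `predict` is a hypothetical stand-in for loading and running your actual trained network, and the event format is an assumption for illustration.

```python
import json

def predict(features):
    # Placeholder for your trained model's forward pass (hypothetical).
    return "cat" if sum(features) > 0 else "dog"

def lambda_handler(event, context):
    """Entry point invoked by the AWS Lambda runtime."""
    features = json.loads(event["body"])["features"]
    return {
        "statusCode": 200,
        "body": json.dumps({"label": predict(features)}),
    }

# Local smoke test with a fake event, before deploying:
fake_event = {"body": json.dumps({"features": [0.2, 0.5]})}
print(lambda_handler(fake_event, None))
```

Being able to invoke the handler locally with a fake event, as in the last lines, makes the cloud deployment step much less mysterious: Lambda simply calls the same function for you.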
MOOCs (massive open online courses) tend to be open to anyone who wants to take the course and are therefore designed for mass consumption, often with minimal involvement by the instructor. Many sign up but few successfully finish, so MOOCs tend not to dive in too deep for fear of discouraging learners. Learners asking lots of questions when they are stuck and need help from the instructional staff is simply not a good business model for a MOOC.
We began designing this course by wanting to combine the best of MOOCs (online, self-paced) and in-person instruction (academic rigor and dedicated instructional support). You get a serious, immersive learning experience via a rigorous, experiential (learning-by-doing) course with individualized support.
A critical component of your learning experience with us will be the one-of-a-kind interactive computational textbook platform (Pathbird). Each chapter of the book (we call it a "codex") links math with programming code. That link is exactly what makes machine learning tricky to learn -- math stars get stuck in the programming; programming whizzes get stuck with the math. Everyone gets stuck somewhere! Our commitment to you in offering the course is that when you get stuck as we dive deeper into the material, we will be there for you. We want you to reach out whenever you have any doubts, and we are committed to answering your questions so you can acquire mastery of the material.
Our goal is to teach you the inner workings of ML algorithms so you have a deep mastery of the interconnection between the math and the code. This will give you the framework to understand an algorithm's strengths and weaknesses, so that you will be able to anticipate when it ought to work and also when and how it could fail in practice. Understanding the failure modes is an important step toward designing strategies to mitigate their effects in practice.
Acquiring this deep understanding to be able to anticipate the strengths and shortcomings of an algorithm necessitates a greater depth of conversation between the learner and the instructor than just showcasing the algorithm's successes. This course, via the codices, is designed to stimulate and foster such deeper conversation.
Your questions, as you work your way through the codices and the additional programming exercises, are an opportunity for us to deepen the conversation and your understanding. We teach the course this way because we want you to know, and to ask for, more -- there is so much to learn and we are excited to be there with you every step of the way!
You can apply here.
See the Continuum ML page for more information.