Machine Learning

Programme Overview



This course provides a deep dive into the fundamental concepts and practical techniques essential for understanding and implementing machine learning algorithms. Spanning ten weeks, the curriculum is structured to progressively build expertise, starting with mathematical foundations and advancing through classical and contemporary machine learning methodologies.
Lecturer: Raman Raguraman


Programme Structure

Programme Duration:
Total Duration: 10 weeks
Learning Hours: 40 Hours

Week 1: Mathematical Basics 1 – Introduction to Machine Learning, Linear Algebra
Introduction to Machine Learning:
Overview of Machine Learning (ML); types of ML: Supervised, Unsupervised, Reinforcement Learning; key applications and examples of ML
Linear Algebra:
Vectors, matrices, and their operations; matrix multiplication, determinants, and inverses
Eigenvalues and eigenvectors; Singular Value Decomposition (SVD); applications of linear algebra in ML (see the sketch below)
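
A minimal sketch of the Week 1 linear algebra operations, assuming NumPy (introduced formally in Week 3) is available:

```python
import numpy as np

# A small 2x2 matrix and a vector.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
x = np.array([1.0, -1.0])

# Basic operations: matrix-vector product, determinant, inverse.
print("A @ x         =", A @ x)
print("det(A)        =", np.linalg.det(A))
print("inv(A)        =", np.linalg.inv(A))

# Eigenvalues/eigenvectors and the Singular Value Decomposition.
eigvals, eigvecs = np.linalg.eig(A)
U, S, Vt = np.linalg.svd(A)
print("eigenvalues   =", eigvals)
print("singular vals =", S)
```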
Week 2: Mathematical Basics 2 – Probability
Probability Theory: Basic probability concepts (events, sample spaces, conditional probability)
Random variables, probability distributions (discrete and continuous)
Expectation, variance, and standard deviation
Common probability distributions: Bernoulli, Binomial, Poisson, Normal, Exponential
Central Limit Theorem
Applications in Machine Learning: probabilistic models, basics of Bayesian inference (a Central Limit Theorem simulation follows below)
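
A short simulation sketch of the Week 2 Central Limit Theorem idea, assuming NumPy: means of many Bernoulli trials are approximately Normal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw 10,000 samples, each the mean of n Bernoulli(p=0.3) trials.
n, p = 100, 0.3
sample_means = rng.binomial(n, p, size=10_000) / n

# By the Central Limit Theorem, the sample means are approximately
# Normal with mean p and variance p(1-p)/n.
print("empirical mean  =", sample_means.mean())   # ~0.3
print("empirical std   =", sample_means.std())
print("theoretical std =", np.sqrt(p * (1 - p) / n))
```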
Week 3: Computational Basics – Numerical Computation and Optimization, Introduction to Machine Learning Packages
Numerical Computation:
Basics of numerical analysis; floating-point arithmetic and errors; numerical integration and differentiation
Optimization:
Gradient Descent and its variants (Stochastic, Mini-batch); convex optimization; optimization challenges in ML (saddle points, local minima)
Machine Learning Packages:
Introduction to Python and Jupyter Notebooks; overview of ML libraries: NumPy, SciPy, scikit-learn, TensorFlow, Keras, PyTorch
Hands-on exercises with basic ML algorithms using these libraries (a gradient-descent sketch follows below)
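
A minimal gradient-descent sketch for the Week 3 optimization topic, assuming plain NumPy-style Python and a simple convex objective f(w) = (w - 3)²:

```python
def loss(w):
    # Simple convex objective with minimum at w = 3.
    return (w - 3.0) ** 2

def grad(w):
    # Analytic gradient of the objective.
    return 2.0 * (w - 3.0)

w = 0.0    # initial parameter
lr = 0.1   # learning rate (step size)
for step in range(50):
    w -= lr * grad(w)   # gradient descent update

print("w =", w, "loss =", loss(w))   # w close to 3, loss near 0
```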
Week 4: Linear and Logistic Regression – Bias/Variance Tradeoff, Regularization, Variants of Gradient Descent, MLE, MAP, Applications
Linear Regression:
Model representation and assumptions; Ordinary Least Squares (OLS) method
Cost function and Gradient Descent for Linear Regression; evaluation metrics: RMSE, R²
Logistic Regression:
Model representation and assumptions; sigmoid function and cost function; Gradient Descent for Logistic Regression; evaluation metrics: Accuracy, Precision, Recall, F1 Score, ROC-AUC
Bias/Variance Tradeoff:
Understanding bias and variance
Strategies to balance bias and variance
Regularization:
L1 (Lasso) and L2 (Ridge) regularization; impact of regularization on model complexity
Variants of Gradient Descent:
Stochastic Gradient Descent (SGD), Mini-batch Gradient Descent (a regression sketch follows below)
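
A minimal sketch tying together several Week 4 topics, assuming scikit-learn's built-in breast-cancer toy dataset; note that LogisticRegression's C parameter is the inverse of the L2 regularization strength:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Binary classification data; split into train and test sets.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# L2-regularized logistic regression; smaller C = stronger regularization.
model = LogisticRegression(C=1.0, max_iter=5000)
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
y_prob = model.predict_proba(X_test)[:, 1]
print("accuracy =", accuracy_score(y_test, y_pred))
print("F1       =", f1_score(y_test, y_pred))
print("ROC-AUC  =", roc_auc_score(y_test, y_prob))
```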
Week 5: Neural Networks – Multilayer Perceptron, Backpropagation, Applications
Multilayer Perceptron (MLP):
Architecture and components
Activation functions (ReLU, Sigmoid, Tanh)
Forward propagation and backpropagation: derivation and intuition, chain rule for gradients
Implementation of backpropagation in MLPs (a NumPy sketch follows below)
Applications
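
A minimal NumPy sketch of the Week 5 material: a one-hidden-layer MLP trained by backpropagation on XOR, assuming sigmoid activations and a squared-error loss:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# XOR inputs and targets.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer with 4 units.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

for _ in range(5000):
    # Forward propagation.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backpropagation via the chain rule (squared-error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent updates.
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```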
Week 6: Convolutional Neural Networks 1 – CNN Operations, CNN Architectures
CNN Operations:
Convolution operation, Pooling (Max, Average), Padding and Stride
CNN Architectures:
Layer types and their roles (Convolutional layers, Pooling layers, Fully Connected layers)
Classic architectures: LeNet, AlexNet, VGG, GoogLeNet, ResNet
Hands-on Implementation:
Building simple CNN models using ML libraries (a Keras sketch follows below)
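
A minimal Keras sketch of the Week 6 hands-on topic, assuming TensorFlow/Keras is installed; a small CNN for 28x28 grayscale images (e.g. MNIST-sized inputs):

```python
from tensorflow.keras import layers, models

# Small CNN: conv -> pool -> conv -> pool -> dense classifier.
model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Conv2D(64, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(10, activation="softmax"),  # 10 output classes
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```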
Week 7: Convolutional Neural Networks 2 – Training, Transfer Learning, Applications
Training CNNs:
Techniques for improving training (Data Augmentation, Dropout, Batch Normalization)
Overfitting and regularization in CNNs
Transfer Learning:
Concept and benefits
Pre-trained models and fine-tuning
Applications:
Practical applications of CNNs in image classification, object detection, and segmentation (a transfer-learning sketch follows below)
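
A minimal transfer-learning sketch for Week 7, assuming TensorFlow/Keras; a pre-trained MobileNetV2 backbone is frozen and a new classification head is trained on top:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pre-trained ImageNet backbone, without its original classifier head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze pre-trained weights

# New head for a hypothetical 5-class task.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),                  # regularization (a Week 7 topic)
    layers.Dense(5, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```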
Week 8: Recurrent Neural Networks (RNN), LSTM, GRU, Applications
Recurrent Neural Networks (RNN):
Architecture and functionality
Challenges (vanishing/exploding gradients)
Long Short-Term Memory (LSTM):
LSTM cell architecture and components
Advantages over standard RNNs
Gated Recurrent Unit (GRU):
GRU cell architecture and components
Comparison with LSTM
Applications:
Sequence modeling (text generation, language translation, time series prediction); an LSTM sketch follows below
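
A minimal Keras sketch for Week 8, assuming TensorFlow/Keras; an LSTM that maps a window of 10 past values (one feature each) to the next value, as in simple time-series forecasting:

```python
import numpy as np
from tensorflow.keras import layers, models

# Toy data: predict the next value of a noisy sine wave from 10 past values.
t = np.arange(2000, dtype="float32") * 0.1
noise = 0.1 * np.random.default_rng(0).normal(size=t.shape).astype("float32")
series = np.sin(t) + noise
X = np.stack([series[i:i + 10] for i in range(len(series) - 10)])[..., None]
y = series[10:]

model = models.Sequential([
    layers.Input(shape=(10, 1)),
    layers.LSTM(32),   # gated recurrent layer (mitigates vanishing gradients)
    layers.Dense(1),   # regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print("MSE:", model.evaluate(X, y, verbose=0))
```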
Week 9: Classical Techniques 1 – Bayesian Regression, Decision Trees, Random Forests, SVM, Naïve Bayes, Applications
Bayesian Regression:
Bayesian approach to linear regression
Predictive distribution
Decision Trees:
Tree structure, splitting criteria
Pruning and overfitting
Random Forests:
Ensemble method, bagging
Feature importance
Support Vector Machines (SVM):
Hyperplanes and margins
Kernel trick
Naïve Bayes:
Assumptions, likelihoods
Variants (Gaussian, Multinomial); a scikit-learn sketch follows below
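
A minimal scikit-learn sketch comparing three of the Week 9 classifiers on a built-in toy dataset:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Random forest (bagged decision trees), RBF-kernel SVM, Gaussian Naive Bayes.
classifiers = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
    "Gaussian Naive Bayes": GaussianNB(),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation accuracy
    print(f"{name:22s} {scores.mean():.3f}")
```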
Week 10: Classical Techniques 2 – k-Means, kNN, GMM, Expectation Maximization, Applications
k-Means Clustering:
Algorithm steps, choosing k
Evaluation metrics (Inertia, Silhouette Score)
k-Nearest Neighbors (kNN):
Distance metrics (Euclidean, Manhattan)
Weighting and voting schemes
Gaussian Mixture Models (GMM) and Expectation Maximization:
Mixture of Gaussians, responsibilities; the Expectation Maximization (EM) algorithm for fitting GMMs
Applications:
Application in clustering and density estimation; practical examples and use cases of clustering and density estimation techniques (a scikit-learn sketch follows below)
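
A minimal scikit-learn sketch of the Week 10 material, assuming synthetic blob data; k-means for clustering and a GMM (fitted via Expectation Maximization) for density estimation:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score
from sklearn.mixture import GaussianMixture

# Synthetic data with 3 well-separated clusters.
X, _ = make_blobs(n_samples=500, centers=3, random_state=0)

# k-Means: choose k=3, evaluate with inertia and silhouette score.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("inertia    =", km.inertia_)
print("silhouette =", silhouette_score(X, km.labels_))

# GMM: fitted by the EM algorithm; gives soft cluster "responsibilities"
# and a log-density for each point.
gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
print("responsibilities of first point:", gmm.predict_proba(X[:1]).round(3))
print("log-density of first point:     ", gmm.score_samples(X[:1]))
```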

Programme Outcome

Upon successful completion of this 10-week intensive course, participants will be able to:

  1. Master Fundamental Mathematical Concepts for Machine Learning:
    • Develop a solid understanding of linear algebra and probability theory.
    • Gain proficiency in numerical computation and optimization techniques.
    • Learn to implement mathematical principles using popular machine learning packages.
  2. Design and Implement Core Machine Learning Models:
    • Acquire skills to design, train, and evaluate linear and logistic regression models and neural networks (MLP, CNN, RNN, LSTM, GRU), and understand their applications.
    • Understand and apply regularization techniques, bias/variance tradeoffs, and gradient descent variants.
    • Utilize transfer learning to improve model performance.
  3. Apply Classical and Modern Machine Learning Techniques:
    • Explore and apply classical machine learning techniques such as Bayesian regression, decision trees, random forests, SVM, Naïve Bayes, k-Means, kNN, GMM, and Expectation Maximization.
    • Implement these techniques in practical scenarios to solve real-world problems.
    • Compare and contrast classical techniques with modern neural network-based approaches across applications.