This blog aims to help self-learners on their journey into Machine Learning and Deep Learning.

I plan to publish a week-by-week plan that mixes a solid foundation in machine learning theory with hands-on exercises right from day one.

Your suggestions and inputs are most welcome. Please contact me at omisonie at gmail.com

"Concept to Code: Semi-Supervised End-To-End Approaches For Speech Recognition (Om)"

Presentation:

Session 1: Concept to Code: Semi-Supervised End-To-End Approaches For Speech Recognition (Om)

Session 2: Concept to Code: Semi-Supervised End-To-End Approaches For Speech Recognition (Venky)

"Concept to Code: Multi Task Learning for Recommendation"

Presentation:

Session 1 & 2: Concept to Code: Deep Learning for Multitask Recommendation (Om)

For hands-on work (Jupyter notebooks), contact omisonie at gmail dot com

"Concept to Code: Deep Neural Conversational System"

Presentations:

Session-1: KDD-1-RNN-QuickRecap_Seq2Seq_AttentionBasedApproaches (Om)

Session-2: KDD-2-Memory-E2EMemory-KeyValueMemoryNetworks (Nikesh)

Session-3: KDD-3-Transformer-BERT-TransformerXL-XLNet (Om)

"Concept to Code: Deep Learning for Fashion Recommendation"

Presentations:
Slides

For hands-on work (Jupyter notebooks), contact omisonie at gmail dot com

"Concept to Code: Neural Networks for Sequence Learning"

Presentations:
Slides-1
Slides-2
Slides-3
Slides-4

"Concept to Code: Multi Task Learning for Recommendation"

Presentation:

Concept to Code: Learning Distributed Representation

For hands-on work (Jupyter notebooks), contact omisonie at gmail dot com

Following is the list of online video courses:

- Supervised and Unsupervised Learning (40min)
- Linear Regression with One Variable (1hr 15min)
- Linear Regression with One Variable, Matrix Form ()
- Linear Regression with Multiple Features (1hr)
- Octave (1hr 20min)
- Logistic Regression (1hr 10min)
- Regularization (40min)
- Neural Networks (1hr)
- Neural Network Back-propagation (1hr 15min)
- Advice for Applying ML: Bias and Variance (1hr)
- ML System Design (1hr)
- Support Vector Machine (1hr 37min)
- Unsupervised: Clustering (40min)
- Unsupervised: PCA (1hr 10min)
- Anomaly Detection (1hr 30min)
- Recommender System (1hr)
- Large Scale Machine Learning (1hr)
- Example Photo OCR (50min)
- Ref: Machine Learning Exercises in Python - http://www.kdnuggets.com/2017/07/machine-learning-exercises-python-introductory-tutorial-series.html
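To get hands-on from day one alongside these lectures, here is a minimal batch gradient descent sketch for linear regression in NumPy (the toy data and variable names are my own, not from the course):

```python
import numpy as np

# Toy data: y = 2x + 1, with a bias column prepended to the feature matrix.
X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
y = np.array([1.0, 3.0, 5.0, 7.0])

theta = np.zeros(2)   # [intercept, slope]
alpha = 0.1           # learning rate
for _ in range(2000):
    grad = X.T @ (X @ theta - y) / len(y)  # gradient of (mean squared error / 2)
    theta -= alpha * grad
# theta is now approximately [1.0, 2.0]
```

The same loop generalizes to any number of features; only the shape of `X` and `theta` changes.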

- Supervised Learning, Discriminative Algorithms (30 pages)
- Generative Algorithms (14 pages)
- Support Vector Machines (25 pages)
- Learning Theory (11 pages)
- Regularization and Model Selection (8 pages)
- Regularization and Model Selection ( 3 pages)
- Unsupervised Learning, k-means clustering (3 pages)
- Mixture of Gaussians ( 4 pages)
- The EM Algorithm (8 pages)
- Factor Analysis ( 9 pages)
- PCA Principal Components Analysis (6 pages)
- ICA Independent Components Analysis (6 pages)
- Reinforcement Learning and Control (15 pages)
- Boosting algorithms and weak learning (11 pages)
- Ref: http://cs229.stanford.edu/
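The CS229 notes on unsupervised learning start from k-means; as a hands-on companion, here is a from-scratch Lloyd's-algorithm sketch (the data and initialization are my own toy choices):

```python
import numpy as np

def kmeans(X, k, centroids, iters=20):
    """Plain Lloyd's algorithm: alternate nearest-centroid assignment and mean update."""
    for _ in range(iters):
        # Assignment step: distance of every point to every centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid moves to the mean of its cluster.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

rng = np.random.default_rng(0)
# Two well-separated 2-D blobs around (0, 0) and (5, 5).
X = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(5, 0.2, (20, 2))])
init = X[[0, -1]].copy()   # seed one centroid in each blob for determinism
centroids, labels = kmeans(X, k=2, centroids=init)
```

Real implementations (e.g. scikit-learn's `KMeans`) add smarter initialization such as k-means++ and multiple restarts.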

- The Learning Problem - (slides 20)
- Is Learning Feasible? - (slides 20)
- The Linear Model I - (slides 25)
- Error and Noise - (slides 23)
- Training versus Testing - (slides 21)
- Theory of Generalization - (slides 19)
- The VC Dimension - (slides 25)
- Bias-Variance Tradeoff - (slides 25)
- The Linear Model II - (slides 23)
- Neural Networks - (slides 25)
- Overfitting - (slides 22)
- Regularization - (slides 22)
- Validation - (slides 23)
- Support Vector Machines - (slides 21)
- Kernel Methods - (slides 21)
- Radial Basis Functions - (slides 23)
- Three Learning Principles - (slides 23)
- Epilogue- (slides 18)
- Ref: https://work.caltech.edu/telecourse

- Naïve Bayes (51 pages)
- Naïve Bayes and Logistic Regression (17 pages)
- Naïve Bayes (35 pages)
- Gaussian Naïve Bayes (31 pages)
- Naïve Bayes and Logistic Regression (17 pages)
- Generative/Discriminative Classifiers (36 pages)
- Ref: http://www.cs.cmu.edu/~ninamf/courses/601sp15/lectures.shtml

- Neural Networks (Slides 48, 46min)
- Neural Network Architectures (Slides 32, 45min)
- Linear Neuron (Slides 35, 45min)
- Predict next word (Slides 34, 45min)
- Object Recognition (Slides 30, 45min)
- Batch Gradient Descent (Slides 31, 45min)
- Modeling sequences (Slides 34, 50min)
- Hessian Free - optional (Slides 31, 60min)
- Improve Generalisation (Slides 39, 45min)
- Combine Models (Slides 41, 45min)
- Hopfield Nets (Slides 37, 60min)
- Boltzmann Machine Learning (Slides 47, 60min)
- Backprop up/down (Slides 26, 50min)
- RBM (Slides 39, 70min)
- Autoencoders (Slides 35, 60min)
- Image & Caption (Slides 19, 40min)
- Ref: https://www.coursera.org/learn/neural-networks

- Introduction (Slides 47, 1hr 20min)
- Image Classification Pipeline (Slides 57, 1hr)
- Loss Function and Optimization (Slides 76, 1hr 10min)
- Back Propagation and NN part 1 (Slides 84, 1hr 20min)
- Training NN part 1 (Slides 102, 1hr 20min)
- Training NN part 2 (Slides 86, 1hr 10min)
- Convolutional Neural Networks (Slides 89)
- Spatial Localization and Detection (Slides 90, 1hr 5min)
- Understanding and Visualizing Convolutional Neural Networks (Slides 83, 1hr 20min)
- Recurrent Neural Networks (Slides 82, 1hr 10min)
- CNNs in Practice (Slides 112, 1hr 15min)
- Software Packages Caffe Torch Theano TensorFlow (Slides 177, 1hr 20min)
- Segmentation and Attention (Slides 133, 1hr 10min)
- Videos and Unsupervised Learning (Slides 130, 1hr 20min)
- Administrative (Slides 22)
- Invited Talk by Jeff Dean (1hr 15min)
- Ref: http://cs231n.stanford.edu/
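Since CS231n builds everything on the convolution operation, a naive NumPy version can help demystify it before diving in (this is a from-scratch sketch, not code from the course):

```python
import numpy as np

def conv2d(img, kernel):
    """Naive 'valid' 2-D convolution (really cross-correlation, as in most DL libraries)."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector on a toy image: left half dark, right half bright.
img = np.hstack([np.zeros((4, 3)), np.ones((4, 3))])
sobel_x = np.array([[-1.0, 0, 1], [-2, 0, 2], [-1, 0, 1]])
edges = conv2d(img, sobel_x)   # fires only at the dark/bright boundary
```

Real frameworks vectorize this with im2col or FFT tricks, but the sliding-window arithmetic is the same.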

- Introduction (53min)
- linear models (48min)
- Maximum likelihood and information (1hr 13min)
- Regularization, model complexity and data complexity - part 1 (41min)
- Regularization, model complexity and data complexity - part 2 (60min)
- Optimization (60min)
- Logistic regression, a Torch approach (45min)
- Modular back-propagation, logistic regression and Torch (53min)
- Neural networks and modular design in Torch (54min)
- Convolutional Neural Networks (51min)
- Max-margin learning, transfer and memory networks (60min)
- Recurrent Neural Nets and LSTMs (52min)
- Deep Reinforcement Learning - Policy search (55min)
- Reinforcement learning and neuro-dynamic programming (57min)
- Ref: https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/

- From Machine Learning to Deep Learning
- Deep Neural Networks
- Convolutional Neural Networks
- Deep Models for Text and Sequences
- Ref: https://in.udacity.com/course/deep-learning--ud730

- Course 1: Neural Networks and Deep Learning
- Course 2: Improving Deep Neural Networks: Hyperparameter tuning, Regularization and Optimization
- Course 3: Structuring Machine Learning Project
- Course 4: Convolutional Neural Networks
- Course 5: Sequence Models
- Ref: https://www.coursera.org/specializations/deep-learning

- Ref: Stanford: http://deeplearning.stanford.edu/tutorial/
- Ref: Montreal: https://sites.google.com/site/deeplearningsummerschool/

- CS231n Convolutional Neural Networks for Visual Recognition
- Getting Started in Computer Vision by Mostafa S. Ibrahim
- A Beginner's Guide To Understanding Convolutional Neural Networks
- ImageNet Classification with Deep Convolutional Neural Networks - Alex Krizhevsky et. al.
- Going deeper with Convolutions - Google
- LeNet-5, convolutional neural networks - Yann LeCun http://yann.lecun.com/exdb/lenet/
- Deep learning for complete beginners: convolutional neural networks with keras - https://cambridgespark.com/content/tutorials/convolutional-neural-networks-with-keras/index.html
- Convolutional Neural Networks for Sentence Classification - http://www.aclweb.org/anthology/D14-1181
- Text Understanding from Scratch - https://arxiv.org/pdf/1502.01710.pdf
- Character-level Convolutional Networks for Text Classification - https://arxiv.org/pdf/1509.01626.pdf
- Using convolutional neural nets to detect facial keypoints tutorial - danielnouri.org
- Ref: http://cs231n.stanford.edu/
- Demos: http://cs.stanford.edu/people/karpathy/convnetjs/

- CS 224N / Ling 284 by Christopher Manning is a great course to get started.
- CS224d: Deep Learning for Natural Language Processing, a Stanford class by Richard Socher (founder of MetaMind)
- For more details see "How do I learn Natural Language Processing?"
- On word embeddings: http://ruder.io/word-embeddings-1/
- word2vec Parameter Learning Explained - Xin Rong
- GloVe: Global Vectors for Word Representation - Pennington et al.
- Visualizing Data using t-SNE - van der Maaten et al.
- How to Use t-SNE Effectively - https://distill.pub/2016/misread-tsne/
- Distributed Representations of Sentences and Documents - Le et al.
- Ref: http://cs224d.stanford.edu/
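The word-embedding readings above often demonstrate the famous vector-arithmetic analogy (king - man + woman ≈ queen); a toy NumPy illustration with hand-made 2-D vectors (the embeddings here are fabricated purely to show the arithmetic, real embeddings come from training on corpora):

```python
import numpy as np

# Hand-made toy embeddings with two axes: "royalty" and "gender".
vecs = {
    "king":  np.array([0.9,  0.9]),
    "queen": np.array([0.9, -0.9]),
    "man":   np.array([0.1,  0.9]),
    "woman": np.array([0.1, -0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Analogy arithmetic: subtract "man", add "woman", find the nearest word.
target = vecs["king"] - vecs["man"] + vecs["woman"]
best = max(vecs, key=lambda w: cosine(vecs[w], target))   # "queen"
```

With trained word2vec or GloVe vectors the same nearest-neighbor query is run over the whole vocabulary (usually excluding the query words themselves).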

- The Unreasonable Effectiveness of Recurrent Neural Networks - Andrej Karpathy
- Understanding LSTM Networks - Christopher Olah - http://colah.github.io/posts/2015-08-Understanding-LSTMs/
- Deep Learning, NLP, and Representations - http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/
- Attention and Augmented Recurrent Neural Networks - https://distill.pub/2016/augmented-rnns/
- Unfolding RNNs: Concepts and Architectures - http://suriyadeepan.github.io/2017-01-07-unfolding-rnn/
- Doc2Vec Tutorial - https://rare-technologies.com/doc2vec-tutorial/
- Distributed representations of sentences and documents – Le & Mikolov
- Doc2vec model Example - https://amsterdam.luminis.eu/2016/11/15/machine-learning-example/
- Building Skip-Thought Vectors for Document Understanding - https://www.intelnervana.com/building-skip-thought-vectors-document-understanding/
- Tweet2Vec: Character-Based Distributed Representations for Social Media - Dhingra et al. - http://www.cs.cmu.edu/~wcohen/postscript/acl-2016-bd.pdf
- Recent work combining attention mechanisms in LSTM recurrent neural networks with external writable memory has led to interesting systems that can understand, store, and retrieve information in a question-answering style. This research area got its start in Dr. Yann LeCun's Facebook AI lab at NYU.
- The original paper is on arXiv: Memory Networks. Many research variants, datasets, and benchmarks have stemmed from this work, for example MetaMind's Dynamic Memory Networks for Natural Language Processing.
- Skip-Thought Vectors - https://arxiv.org/pdf/1506.06726.pdf
- Character-Aware Neural Language Models - https://people.csail.mit.edu/dsontag/papers/kim_etal_AAAI16_slides.pdf
- Ref: http://colah.github.io/
- Recurrent Neural Networks for Collaborative Filtering - Erik Bernhardsson
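Most of the RNN readings above center on the same simple recurrence; as a quick companion, here is a vanilla RNN forward step in NumPy (sizes and names are illustrative, not taken from any of the linked posts):

```python
import numpy as np

def rnn_step(x, h, Wxh, Whh, bh):
    """One step of a vanilla RNN: h_t = tanh(Wxh @ x_t + Whh @ h_{t-1} + b)."""
    return np.tanh(Wxh @ x + Whh @ h + bh)

rng = np.random.default_rng(0)
D, H = 3, 4                        # input and hidden dimensions
Wxh = rng.normal(0, 0.1, (H, D))   # input-to-hidden weights
Whh = rng.normal(0, 0.1, (H, H))   # hidden-to-hidden (recurrent) weights
bh = np.zeros(H)

h = np.zeros(H)                    # initial hidden state
for x in rng.normal(size=(5, D)):  # a length-5 input sequence
    h = rnn_step(x, h, Wxh, Whh, bh)
```

LSTMs and GRUs replace this single tanh update with gated updates, which is exactly what the Olah post walks through.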

- Deep Boltzmann Machines - Ruslan et al.
- A Better Way to Pretrain Deep Boltzmann Machines - Ruslan et al.
- An Efficient Learning Procedure for Deep Boltzmann Machines - Ruslan et al.
- A Beginner's Tutorial for Restricted Boltzmann Machines - Deeplearning4j.org
- RBM for Collaborative Filtering - Ruslan et al.
- RBM and recommender systems - Xavier Chapuis
- A Practical Guide to Training Restricted Boltzmann Machines - Geoffrey Hinton
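As a rough companion to Hinton's practical guide listed above, here is a single contrastive-divergence (CD-1) weight update for a binary RBM in NumPy (biases are omitted for brevity; shapes, batch size, and names are my own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(v0, W, rng, lr=0.1):
    """One CD-1 step: positive phase on data, negative phase on a one-step reconstruction."""
    h0_prob = sigmoid(v0 @ W)                          # P(h=1 | v0)
    h0 = (rng.random(h0_prob.shape) < h0_prob) * 1.0   # sample binary hidden states
    v1_prob = sigmoid(h0 @ W.T)                        # reconstruct visibles
    h1_prob = sigmoid(v1_prob @ W)                     # hidden probs given reconstruction
    # Positive-phase statistics minus negative-phase statistics, averaged over the batch.
    grad = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / len(v0)
    return W + lr * grad

rng = np.random.default_rng(0)
W = rng.normal(0, 0.01, (6, 3))            # 6 visible units, 3 hidden units
batch = (rng.random((8, 6)) < 0.5) * 1.0   # toy binary training batch
W = cd1_update(batch, W, rng)
```

A full training loop repeats this over many mini-batches and adds visible/hidden biases, momentum, and weight decay, all of which the guide discusses.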

- Introduction to Deep Learning and Self Driving Cars [MIT 6.S094]
- 1. Introduction to Deep Learning and Self-Driving Cars (1h30m)
- 2. Deep Reinforcement Learning for Motion Planning (1h30m)
- 3. Convolutional Neural Networks for End-to-End Learning of the Driving Task (1h20m)
- 4. Recurrent Neural Networks for Steering Through Time (1h15m)
- 5. Deep Learning for Human-Centered Semi-Autonomous Vehicles (35min)
- Ref: http://selfdrivingcars.mit.edu/
- David Silver’s (Google Deepmind) Video Lectures on RL
- Book: Reinforcement Learning: An Introduction - Richard S. Sutton
- Andrew Ng: CS229 - Reinforcement Learning and Control (15 pages)

- Machine Learning: A Probabilistic Perspective - Kevin P Murphy
- Pattern Recognition and Machine Learning - Christopher M. Bishop
- Machine Learning - Tom Mitchell
- The Elements of Statistical Learning - Hastie, Tibshirani, Friedman
- Deep Learning - http://www.deeplearningbook.org
- Learning Deep Architectures for AI (2009) provides a good but academic introduction to the field. http://goo.gl/MkUt6B
- Deep Learning in Neural Networks: An Overview (2014), another excellent but academic introduction to the field. http://arxiv.org/abs/1404.7828
- Deep Learning Tutorial - LISA lab, University of Montreal
- Mining of Massive Datasets - Anand Rajaraman, Jeffrey David Ullman
- Reinforcement Learning: An Introduction - Richard S. Sutton

- Python Machine Learning - Sebastian Raschka
- Building Machine Learning Systems with Python - Luis Pedro Coelho and Willi Richert
- Programming Collective Intelligence - Toby Segaran
- Hands-On Machine Learning with Scikit-Learn and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems

- Book: Bayesian Reasoning and Machine Learning - David Barber
- Video: "Linear Algebra for Machine Learning" - Patrick van der
- Course: Coding the Matrix: Linear Algebra through Computer Science Applications - Philip Klein
- Book: Linear algebra and its applications - Gilbert Strang

- Python tutorial: http://docs.python.org/tutorial/
- Beginners: http://www.greenteapress.com/thinkpython/
- Intermediate: http://www.diveintopython.net/
- Google Python style guide , 2013
- Video: Google Python Class Day 1 Part 2
- Video: Google Python Class Day 1 Part 3
- Video: Google Python Class Day 2 Part 2
- Video: Google Python Class Day 2 Part 3
- Video: Google Python Class Day 2 Part 4
- http://code.google.com/edu/languages/google-python-class
- Book: Introduction to Computation and Programming Using Python - John Guttag
- A Complete Tutorial to Learn Data Science with Python from Scratch - Analytics Vidhya
- Python Numpy http://cs231n.github.io/python-numpy-tutorial/
- pandas 0.18.1 documentation: Cookbook
- https://scipy.org/
- http://scikit-learn.org
- Video: SciKit - Pycon (3 hrs)
- Scikit Learn Machine Learning Tutorial
- Advanced SciKit Learn Tutorial
- Deep Learning in Python - Analytics Vidhya

- Tensorflow: https://www.tensorflow.org/
- Tensorflow for Deep Learning Research - http://web.stanford.edu/class/cs20si/
- Theano: http://deeplearning.net/software/theano/tutorial/
- Keras: https://keras.io/
- Torch: http://torch.ch/
- Caffe: http://caffe.berkeleyvision.org/
- Lasagne: https://lasagne.readthedocs.io/en/latest/

- Simple and Scalable Response Prediction for Display Advertising - Olivier Chapelle (Criteo) et al.
- An Empirical Evaluation of Thompson Sampling - Olivier Chapelle (Yahoo) et al.
- Improving Ad Relevance in Sponsored Search, Yahoo
- Click Modeling in Search Advertising- Challenges & Solutions, Yahoo
- Multi-armed Bandit, Cameron Davidson
- Field-aware Factorization Machines for CTR Prediction
- Ad Click Prediction- a View from the Trenches, Google
- A Logistic Regression Approach to Ad Click Prediction
- Online Advertising and Large Scale model fitting
- Sequential Click Prediction for Sponsored Search with Recurrent Neural Networks
- Training Large-scale Ad Ranking Models in Spark
- Delivering Guaranteed Display Ads under Reach and Frequency Requirements
- Real-Time Bidding based Display Advertising: Mechanisms and Algorithms
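Several of the papers above boil down to online logistic regression over hashed categorical features; here is a toy sketch of that pattern (the feature strings, bucket count, and learning rate are invented for illustration):

```python
import numpy as np

def hash_features(raw, n_buckets=2**18):
    """Hashing trick: map arbitrary categorical features into a fixed-size index space."""
    return [hash(f) % n_buckets for f in raw]

def predict(w, idx):
    """Predicted click probability for an impression with active feature indices idx."""
    return 1.0 / (1.0 + np.exp(-sum(w[i] for i in idx)))

def sgd_update(w, idx, y, lr=0.1):
    """Online logistic-regression update on one impression (click label y in {0, 1})."""
    p = predict(w, idx)
    for i in idx:
        w[i] -= lr * (p - y)

w = np.zeros(2**18)
# Toy impression stream: (active features, clicked?)
data = [(["site=a", "ad=1"], 1), (["site=b", "ad=2"], 0)] * 50
for feats, y in data:
    sgd_update(w, hash_features(feats), y)
```

Production systems (e.g. the Google "trenches" paper above) add per-coordinate learning rates (FTRL), regularization, and feature crosses, but the hashed-sparse-update core is the same.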

- Data Mining Methods for Recommender Systems - Xavier Amatriain et al.
- Netflix The Recommender Problem Revisited
- Deep Neural Networks for YouTube Recommendations, Google
- Deep Learning for Recommender System, Telefonica Research
- Repeat buyer prediction for eCommerce Training
- Collaborative Deep Learning for Recommendation
- Wide & Deep Learning for Recommender Systems, Google
- A Multi-View Deep Learning Approach for Cross Domain User Modeling in Recommendation Systems, Microsoft
- A Survey and Critique of Deep Learning on Recommender Systems
- Amazon Food Review Classification using Deep Learning and Recommender System
- Applying Deep Learning to Collaborative Filtering, Hulu
- Collaborative Deep Learning for Recommender Systems, Hong Kong
- Collaborative Filtering and Deep Learning Based Recommendation System For Cold Start Items, UK
- Comparative Deep Learning of Hybrid Representations for Image Recommendations
- Deep content-based music recommendation
- Factorization Meets the Neighborhood- a Multifaceted Collaborative Filtering Model
- Improving Scalability of Personalized Recommendation Systems for Enterprise Knowledge Workers
- Relational Stacked Denoising Autoencoder for Tag Recommendation
- Restricted Boltzmann Machines for Collaborative Filtering
- Session-based Recommendations with Recurrent Neural Networks
- The application of Deep Learning in Collaborative Filtering
- The WellDressed Recommendation Engine
- Toward Fashion-Brand Recommendation Systems Using Deep-Learning: Preliminary Analysis
- Enhanced Deep Convolutional Neural Network for Move Recommendation in Go
- Explainable Restricted Boltzmann Machines for Collaborative Filtering
- Recommender Systems for Large-scale E-Commerce
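Many of the collaborative-filtering papers listed above build on plain matrix factorization; here is a minimal FunkSVD-style SGD sketch (the toy ratings and hyperparameters are my own):

```python
import numpy as np

def train_mf(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02, epochs=200, seed=0):
    """Factorize the rating matrix as P @ Q.T with plain SGD and L2 regularization."""
    rng = np.random.default_rng(seed)
    P = rng.normal(0, 0.1, (n_users, k))   # user latent factors
    Q = rng.normal(0, 0.1, (n_items, k))   # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - P[u] @ Q[i]
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

# Toy observed ratings: (user, item, rating on a 1-5 scale).
ratings = [(0, 0, 5), (0, 1, 1), (1, 0, 1), (1, 1, 5), (2, 0, 5)]
P, Q = train_mf(ratings, n_users=3, n_items=2)
# P[u] @ Q[i] now predicts ratings, including for unseen (user, item) pairs.
```

Adding user/item bias terms, as in the Netflix Prize solutions listed later, usually improves accuracy substantially.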

- Reliable Effective Terascale Linear Learning System
- ImageNet Classification with Deep Convolutional Neural Networks
- Visualizing and Understanding Deep Neural Networks
- A Practical Guide to Training Restricted Boltzmann Machines
- Deep learning using genetic algorithms
- Genetic Algorithms in Search, Optimization, and Machine Learning
- XGBoost: A Scalable Tree Boosting System - Tianqi Chen

- How to Rank 10% in Your First Kaggle Competition
- Beating Kaggle the easy way
- How to win Machine Learning competitions
- Want to Win Competitions Pay Attention to Your Ensembles
- 4 Idiots' Approach for Click-through Rate Prediction
- Secret Sauce Behind 9 Kaggle Winning Ideas
- Winning the KDD Cup Orange Challenge with Ensemble Selection
- Ensemble of Collaborative Filtering and Feature Engineered Models for Click Through Rate Prediction
- 3 Idiots' Approach for Display Advertising
- Beat the Benchmark with Vowpal Wabbit - Display Advertising Challenge | Kaggle
- Feature Engineering and Classifier Ensemble for KDD Cup 2010
- The BellKor Solution to the Netflix Grand Prize
- The BigChaos Solution to the Netflix Grand Prize
- Netflix Prize Tribute: Recommendation Algorithm in Python
- Large-scale Parallel Collaborative Filtering for the Netflix Prize
- Netflix Prize and SVD
- Netflix Tech Blog: Netflix Recommendations - Beyond the 5 Stars
- What tools do Kaggle winners use

Copyright © 2015-2019 DeepThinking.AI