Sunday, January 7, 2024

Exercise 1.1 On Linear Equations for Regression - From Pattern Recognition and ML - By Christopher Bishop

Pre-read: [image]
Question: [image]
Solution from web: [image]
My Explanation: [image]
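
Since the statement is available only as an image above, a brief recap of what Exercise 1.1 asks (standard statement from the book): show that the coefficients w = {w_i} minimizing the sum-of-squares error (1.2),

E(w) = (1/2) Σ_{n=1..N} ( y(x_n, w) − t_n )^2,  where  y(x, w) = Σ_{j=0..M} w_j x^j,

are given by the solution of the coupled linear equations (1.122):

Σ_{j=0..M} A_ij w_j = T_i,  with  A_ij = Σ_n (x_n)^(i+j)  and  T_i = Σ_n (x_n)^i t_n.

The key step is to set ∂E/∂w_i = Σ_n ( Σ_j w_j (x_n)^j − t_n ) (x_n)^i = 0 and rearrange the sums.
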
Tags: Technology, Machine Learning, Mathematical Foundations for Data Science

Exercise 1.2 On Regularization in Linear Regression - Pattern Recognition and ML - By Christopher Bishop

Pre-read for exercise 1.1: [image]
Pre-read for exercise 1.2: [image]
Problem 1.2: [image]
Solution for problem 1.2: [images, pages 1 to 4]
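
For completeness (the solution pages above are images), the result being derived: minimizing the regularized sum-of-squares error (1.4),

E~(w) = (1/2) Σ_n ( y(x_n, w) − t_n )^2 + (λ/2) ||w||^2,

by the same differentiation as in Exercise 1.1 leads to the modified linear equations

Σ_j ( A_ij + λ δ_ij ) w_j = T_i,

i.e. the equations of Exercise 1.1 with λ added along the diagonal of the matrix A (here δ_ij = 1 if i = j and 0 otherwise).
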

Exercise 1.3 on 'sum rule and product rule of probability' (From the book Pattern Recognition and ML by Christopher Bishop)

Let's quiz you on the basic concepts of probability theory by considering a simple example.

Imagine we have two boxes, one red and one blue, and in the red box we have 2 apples and 6 oranges, and in the blue box we have 3 apples and 1 orange.

This is illustrated in Figure 1.9. Now suppose we randomly pick one of the boxes and from that box we randomly select an item of fruit, and having observed which sort of fruit it is we replace it in the box from which it came. We could imagine repeating this process many times. 

Let us suppose that in so doing we pick the red box 40% of the time and we pick the blue box 60% of the time, and that when we remove an item of fruit from a box we are equally likely to select any of the pieces of fruit in the box.

In this example, the identity of the box that will be chosen is a random variable, which we shall denote by B. This random variable can take one of two possible values, namely r (corresponding to the red box) or b (corresponding to the blue box). Similarly, the identity of the fruit is also a random variable and will be denoted by F. It can take either of the values a (for apple) or o (for orange).

To begin with, we shall define the probability of an event to be the fraction of times that event occurs out of the total number of trials, in the limit that the total number of trials goes to infinity. Thus the probability of selecting the red box is 4/10 and the probability of selecting the blue box is 6/10. We write these probabilities as: 
p(B = r) = 4/10 and p(B = b) = 6/10. 

Note that, by definition, probabilities must lie in the interval [0, 1]. Also, if the events are mutually exclusive and if they include all possible outcomes (for instance, in this example the box must be either red or blue), then we see that the probabilities for those events must sum to one.

We can now ask questions such as: 
“what is the overall probability that the selection procedure will pick an apple?”, 

or 

“given that we have chosen an orange, what's the probability that the box we chose was the blue one?”

We can answer questions such as these, and indeed much more complex questions associated with
problems in pattern recognition, once we have equipped ourselves with the two elementary rules of probability, known as the sum rule and the product rule. 

Sum Rule and Product Rule

The two elementary rules of probability can be written as:

p(X) = Σ_Y p(X, Y)    (sum rule)
p(X, Y) = p(Y|X) p(X)    (product rule)

Here p(X, Y) is a joint probability and is verbalized as “the probability of X and Y”. Similarly, the quantity p(Y|X) is a conditional probability and is verbalized as “the probability of Y given X”, whereas the quantity p(X) is a marginal probability and is simply “the probability of X”.

Let us now return to our example involving boxes of fruit. For the moment, we shall once again be explicit about distinguishing between the random variables and their instantiations. We have seen that the probabilities of selecting either the red or the blue box are given by

p(B = r) = 4/10
p(B = b) = 6/10

respectively. Note that these satisfy p(B = r) + p(B = b) = 1.

Now suppose that we pick a box at random, and it turns out to be the blue box. Then the probability of selecting an apple is just the fraction of apples in the blue box, which is 3/4, and so p(F = a|B = b) = 3/4. In fact, we can write out all four conditional probabilities for the type of fruit, given the selected box:

p(F = a|B = r) = 1/4
p(F = o|B = r) = 3/4
p(F = a|B = b) = 3/4
p(F = o|B = b) = 1/4
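
As a quick numerical check, here is a minimal Python sketch (not from the original post; the dictionary names p_B and p_F_given_B are just illustrative) that answers the two questions posed above using the sum rule, the product rule, and Bayes' theorem:

    # Prior probabilities of picking each box, p(B)
    p_B = {'r': 4/10, 'b': 6/10}

    # Conditional probabilities of the fruit given the box, p(F|B)
    p_F_given_B = {('a', 'r'): 1/4, ('o', 'r'): 3/4,
                   ('a', 'b'): 3/4, ('o', 'b'): 1/4}

    # Sum rule over the product rule: p(F=a) = sum over B of p(F=a|B) p(B)
    p_apple = sum(p_F_given_B[('a', box)] * p_B[box] for box in p_B)
    print(p_apple)   # 0.55, i.e. 11/20

    # Bayes' theorem: p(B=b|F=o) = p(F=o|B=b) p(B=b) / p(F=o)
    p_orange = sum(p_F_given_B[('o', box)] * p_B[box] for box in p_B)
    print(p_F_given_B[('o', 'b')] * p_B['b'] / p_orange)   # 1/3, so the red box is twice as likely
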

Practice Exercise

Suppose that we have three coloured boxes r (red), b (blue), and g (green). Box r contains 3 apples, 4 oranges, and 3 limes, box b contains 1 apple, 1 orange, and 0 limes, and box g contains 3 apples, 3 oranges, and 4 limes. If a box is chosen at random with probabilities p(r) = 0.2, p(b) = 0.2, p(g) = 0.6, and a piece of fruit is removed from the box (with equal probability of selecting any of the items in the box), then what is the probability of selecting an apple? If we observe that the selected fruit is in fact an orange, what is the probability that it came from the green box?

Solution

Ref: ChatGPT
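
The referenced solution is not reproduced in the post, so here is the worked computation. The fruit counts give p(a|r) = 3/10, p(a|b) = 1/2, p(a|g) = 3/10 and p(o|r) = 4/10, p(o|b) = 1/2, p(o|g) = 3/10. By the sum and product rules:

p(apple) = (3/10)(0.2) + (1/2)(0.2) + (3/10)(0.6) = 0.06 + 0.10 + 0.18 = 0.34

p(orange) = (4/10)(0.2) + (1/2)(0.2) + (3/10)(0.6) = 0.08 + 0.10 + 0.18 = 0.36

By Bayes' theorem:

p(g | orange) = p(orange | g) p(g) / p(orange) = (3/10)(0.6) / 0.36 = 0.18 / 0.36 = 0.5

So the probability of selecting an apple is 0.34, and given that an orange was selected, the probability that it came from the green box is 0.5.
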

Exercise 1.4 On Probability Densities - Pattern Recognition and ML - By Christopher Bishop

Pre-read

[Pre-read images: Fig 1 to Fig 4]
If we have several continuous variables x1, ..., xD, denoted collectively by the vector x, then we can define a joint probability density p(x) = p(x1, ..., xD) such that the probability of x falling in an infinitesimal volume δx containing the point x is given by p(x) δx.

Question

[The exercise statement is shown as an image (Fig 5).]

Solution
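
The worked solution is shown as images in the original post; for reference, the identity it rests on is the change-of-variables rule (1.27): under a nonlinear change of variable x = g(y), a density transforms as

p_y(y) = p_x( g(y) ) |g'(y)|.

Exercise 1.4 asks us to differentiate this and show that, because of the extra |g'(y)| factor, the maximum ŷ of p_y(y) is in general not related to the maximum x̂ of p_x(x) by x̂ = g(ŷ); the modes correspond in that simple way only when the change of variable is linear.
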

Exercise 1.5 On Concept of Variance From Statistics - Pattern Recognition and ML - By Christopher Bishop

Pre-read

1.5: Using the definition (1.38) show that var[f(x)] satisfies (1.39).

Solution
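
The solution images are not reproduced here, but since (1.38) and (1.39) are the standard definitions, the derivation is short. Definition (1.38):

var[f] = E[ ( f(x) − E[f(x)] )^2 ]

Expanding the square and using linearity of expectation (E[f(x)] is a constant):

var[f] = E[ f(x)^2 − 2 f(x) E[f(x)] + E[f(x)]^2 ]
       = E[f(x)^2] − 2 E[f(x)] E[f(x)] + E[f(x)]^2
       = E[f(x)^2] − E[f(x)]^2

which is exactly (1.39).
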

Exercise 1.6 On Covariance - Pattern Recognition and ML - By Christopher Bishop

Pre-read

[Pre-read images: Fig 1 and Fig 2]

1.6: Show that if two variables x and y are independent, then their covariance is zero.

Solution

[Solution images: Fig 3 and Fig 4]
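
In outline (using the book's definition (1.41) of covariance): since

cov[x, y] = E_{x,y}[ x y ] − E[x] E[y],

and independence means p(x, y) = p(x) p(y), the joint expectation factorizes:

E_{x,y}[ x y ] = Σ_x Σ_y x y p(x) p(y) = ( Σ_x x p(x) ) ( Σ_y y p(y) ) = E[x] E[y]

(with integrals replacing the sums for continuous variables), and therefore cov[x, y] = E[x]E[y] − E[x]E[y] = 0.
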

Exercise 1.7 - Identifying normalization constant for Gaussian Distribution and Proving that its integral equals 1 - From Pattern Recognition and ML - By Christopher Bishop

Pre-read

[Pre-read images: Fig 1 and Fig 2]

Question

[The exercise statement is shown as an image (Fig 3).]

Solution

[Solution images: Fig 4 through Fig 11]
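
In outline, the classical argument (the one the exercise hints at) runs as follows. Let

I = ∫ exp( −x^2 / (2σ^2) ) dx,  taken over the whole real line.

Squaring and switching to polar coordinates (x = r cos θ, y = r sin θ, dx dy = r dr dθ):

I^2 = ∫∫ exp( −(x^2 + y^2) / (2σ^2) ) dx dy
    = ∫_0^{2π} dθ ∫_0^∞ exp( −r^2 / (2σ^2) ) r dr
    = 2π σ^2,

so I = (2π σ^2)^{1/2}. Substituting y = x − μ then gives

∫ N(x | μ, σ^2) dx = (2π σ^2)^{−1/2} ∫ exp( −(x − μ)^2 / (2σ^2) ) dx = 1,

which identifies (2π σ^2)^{−1/2} as the normalization constant of the Gaussian.
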

Index of Machine Learning


THEORY

What is Machine Learning?

Logistic Regression

Decision Tree

Deep Learning

Integration

Vector Calculus

Exercises from the book "Pattern Recognition and Machine Learning" by Christopher Bishop

Statistics


PRACTICALS


ML1: Category Encoding

  1. Installing 'Category Encoders' Python Package Using Pip And Conda
  2. Category Encoders Analysis (in Python)
  3. One Hot Encoding Using Pandas' get_dummies() Method on Titanic Dataset
  4. Do we need all the one hot features?
  5. One Hot Encoding from PySpark, Pandas, Category Encoders and skLearn
  6. Comparing StringIndexer (PySpark), LabelEncoder (skLearn), OrdinalEncoder (skLearn), OrdinalEncoder (category_encoders) [Tags: Machine Learning, Spark]
  7. Effect of PySpark's StringIndexer on clustering of data [Tags: Machine Learning, Spark]
  8. Two ways to get Frequency Based Order of Categorical Data (Python)

ML2: Data Preprocessing

  1. Binning (of a column or 1D Numerical Data)
  2. Feature Scaling in Machine Learning (when to use which among MinMaxScaler and StandardScaler)
  3. Similarity and Dissimilarity for Numerical (interval-scaled) variables, Asymmetric binary variables, Categorical variables, For text
  4. Correlation between continuous-numeric columns, and between categorical columns
  5. Importance of posing right question for machine learning, data analysis and data preprocessing [Tags: Machine Learning, Spark]
  6. Working with skLearn's MinMax scaler and defining our own
  7. Data Preprocessing Using Python package Pandas (Use case: Creating a Nifty50 SIP Simulator)
  8. Loading data from Pandas to PostgreSQL [Tags: Databases, Machine Learning]
  9. Exploring skLearn's CountVectorizer
  10. Using Snorkel to create test data and classifying using Scikit-Learn
  11. Pandas DataFrame Filtering Using eval()
  12. Three Types of Input Data Format For Apriori Algorithm (Association Analysis)
  13. Bagging in overcoming variance of a classifier, clustering algorithm or regressor

ML3: Data visualization

ML3.1: Misc

  1. Data Visualization's Basic Theory
  2. Box Plot and Anomaly Detection in 1D [Tags: Data Visualization, Anomaly Detection]
  3. Social Analysis (SOAN using Python 3) Report [Tags: Data Visualization, NLP]
  4. Plotting Correlation Matrix in Three Ways Using Pandas, Matplotlib and Seaborn
  5. Creating Heatmap from Pandas DataFrame correlation matrix
  6. Binomial Probability Distribution (visualization using Seaborn) [Tags: Machine Learning, NumPy]
  7. Google Analytics for Beginners (Assessments Dump, Oct 2020)
  8. survival8 Audience Around The World (Jun 2021)
  9. Topic modeling using Latent Dirichlet Allocation from sklearn and visualization using pyLDAvis
    [Tags: Data Visualization, Natural Language Processing]

ML3.2: Using PowerBI

  1. PowerBI's HTML Content Visualization [Tags: Data Visualization, PowerBI]
  2. Sorting an 'HTML Content' Visual in PowerBI [Tags: Data Visualization, PowerBI]
  3. Concatenate Two Tables using R in PowerBI [Tags: Data Visualization, PowerBI, R Language]
  4. Timeline View Using HTML Content Visual in PowerBI [Tags: Data Visualization, PowerBI, Python]

ML3.3: Line Chart

  1. Creating and Editing Line Chart in LibreOffice Calc [Tags: Data Visualization, FOSS]
  2. Line plot with multiple lines for page views for survival8 (Sep 2022)

ML3.4: Pie Plot

  1. Stratified sampling and fixed size sampling plus visualization using pie plot (Nov 2022)
  2. Plotting changes in Nifty50's top-5 sectors after last three market crashes Using Pie Plot, Bar Chart and Grouped Bar Chart

ML3.5: Choropleth and Cartography

Choropleth: A choropleth map is a type of statistical thematic map that uses pseudocolor, i.e., color corresponding to an aggregate summary of a geographic characteristic within spatial enumeration units, such as population density or per-capita income.
Cartography: The science or practice of drawing maps. (A minimal cartopy sketch follows the list below.)

  1. Drawing a world heat map using 'cartopy'
  2. Ownership of Bicycle, 2-wheeler and car in India by percentages (2022)
  3. Global Footprint of Survival8 (Nov 2022)
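
As a small taste of item 1 above, a minimal sketch using cartopy (assuming the cartopy and matplotlib packages are installed; this only draws the base world map onto which a heat map would then be layered):

    import matplotlib.pyplot as plt
    import cartopy.crs as ccrs

    # Create axes using the Plate Carree (equirectangular) map projection
    ax = plt.axes(projection=ccrs.PlateCarree())
    ax.stock_img()    # shaded-relief background image
    ax.coastlines()   # coastline outlines on top
    plt.show()
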

ML3.6: Histogram

  1. Differences between 'bar graph' and histogram
  2. Histogram report and binning on Sales data
  3. An exercise in visualization (plotting line plot and multicolored histogram with -ve and +ve values) using Pandas and Matplotlib

ML4: Outlier Detection / Anomaly Detection

  1. Box Plot and Anomaly Detection in 1D [Tags: Data Visualization, Anomaly Detection]
  2. DCSO: Dynamic Combination of Detector Scores for Outlier Ensembles (Research paper, 2018)
  3. Unsupervised Outlier Detection Using PyOD
  4. Isolation based anomaly detection using iForest (2133360.2133363 / Research Paper)
  5. Density-based algorithm for anomaly detection (Adeel Hashmi / Research Paper)
  6. Isolation Forest Implementation using skLearn, PyOD, and spark-iForest
  7. Anomaly Detection using Scikit-Learn and "eif" PyPI package (for Extended Isolation Forest)
  8. Distributed Deep Learning Using Python Packages Elephas, Keras, Tensorflow and PySpark [Tags: Anomaly Detection, Deep Learning, Spark]
  9. Anomalies in 'survival8' Viewers' Stats (Mar 2022)
  10. Estimating the Contamination Factor For Unsupervised Anomaly Detection

ML5: Classification

ML5.1: Decision Tree

  1. Decision Tree Learning
  2. Interpretation of Decision Tree J48 Classifier output in Weka [Tags: FOSS, Machine Learning, Weka]
  3. Calculations for Info Gain and Gini Coefficient for Building Decision Tree

ML5.2: Miscellaneous of Classification

  1. Creating ML model, saving it, and creating Flask API [Tags: Flask, Machine Learning, Classification]
  2. kNN classification parallelized using MapReduce [Tags: Hadoop / Spark, Machine Learning]
  3. Elbow Method for identifying k in kMeans (clustering) and kNN (classification)
  4. Snorkel's Analysis Package Overview (v0.9.6, Sep 2020). This discusses how to interpret classification results
  5. Improving a Classifier (ML) Using Snorkel's Slicing Technique
  6. Multi-label Classification using Python
  7. Naïve Bayes Classifier for Spam Filtering
  8. Weka classification experiment on Iris dataset [Tags: FOSS, Machine Learning, Weka]

ML6: Clustering

  1. Elbow Method for identifying k in kMeans (clustering) and kNN (classification)
  2. Weka clustering experiment on Iris dataset [Tags: FOSS, Machine Learning, Weka]
  3. 'Supervised classification-oriented measures' for Cluster Analysis
  4. Similarity-oriented measures for cluster analysis

ML7: Regression

  1. Linear Regression (Theory)
  2. Improvements over OLS (Forward Stepwise, Ridge, Lasso and LARS forms of Regression)
  3. Descriptive Statistics and Linear Regression Using 'statistics' module and 'statsmodels' module
  4. Hands-on 5 Regression Algorithms Using Scikit-Learn
  5. Demo of Linear Regression on Boston Housing Data Using Weka [Tags: FOSS, Machine Learning]
  6. Saving Model, Loading Model and Making Predictions for Linear Regression (in Weka)
  7. Linear Regression Using Java Code And Weka JAR [Tags: FOSS, Java, Machine Learning, Weka]

ML8: Association Mining Between Attributes

  1. Three Types of Input Data Format For Apriori Algorithm (Association Analysis)
  2. The Concept of Lift in Association Rules Mining
  3. Apriori Algorithm For Association Mining Using Weka's Supermarket Dataset
  4. Running Weka's Apriori on 9_TXN_5_ITEMS Dataset
  5. Interpretation of output from Weka for Apriori Algorithm

ML9: Weka Tool

  1. Demo of Linear Regression on Boston Housing Data Using Weka [Tags: FOSS, Machine Learning, Weka]
  2. Interpretation of Decision Tree J48 Classifier output in Weka [Tags: FOSS, Machine Learning, Weka]
  3. Weka classification experiment on Iris dataset [Tags: FOSS, Machine Learning, Weka]
  4. Weka clustering experiment on Iris dataset [Tags: FOSS, Machine Learning, Weka]
  5. Saving Model, Loading Model and Making Predictions for Linear Regression (in Weka)
  6. Linear Regression Using Java Code And Weka JAR [Tags: FOSS, Java, Machine Learning, Weka]
  7. Apriori Algorithm For Association Mining Using Weka's Supermarket Dataset
  8. Running Weka's Apriori on 9_TXN_5_ITEMS Dataset
  9. Interpretation of output from Weka for Apriori Algorithm
  10. Machine Learning and Weka Interview (5 Questions) [Tags: Machine Learning Q&A, Weka Tool]

ML10: Traffic Prediction on my Blog

  1. Traffic Prediction on my Blog (Oct 2023)
  2. When not to use Poisson Distribution for prediction?
  3. Time Series Analysis and Forecasting Using Exponential Moving Average (A use case of traffic prediction on my blog)

ML11: Questions and Answers

  1. Machine Learning dose with ten Q&A (Set 1)
  2. Machine Learning Q&A (Set 2)
  3. Machine Learning Q&A (Set 3)
  4. LinkedIn Machine Learning Assessment Dump (Aug 2021)
  5. Machine Learning and Weka Interview (5 Questions) [Tags: Machine Learning Q&A, Weka Tool]
  6. ARIMA forecast for timeseries is one step ahead. Why? (Solved Interview Problem)

ML12: Miscellaneous

  1. Simple demonstration of how important data is for machine learning
  2. Reading a JSON file from the Google Drive in the Google Colab
  3. A case of cyclic dependencies between PyPI packages [Tags: Machine Learning, Python]
  4. Extracting Information From Search Engines [Tags: Machine Learning, Python]
  5. Digging deeper into your toolbox (Viewing LDiA code of sklearn)
    [Tags: FOSS, Machine Learning, Natural Language Processing]

ML13: Articles

  1. Machine Learning Resources (Dec 2019)
  2. Machine Learning Evolution (Jan 2020)
  3. Data Science Timeline (Aug 2020)

ML14: Our 'Machine Learning' Videos on YouTube

  1. Session 1 - Linear Regression (OLS Method and Theory) - 20210716
  2. Session 2 - Improvements over Linear Regression method of OLS (Forward Stepwise, Ridge, Lasso, LARS)
  3. Linear Regression Theory (2022-02-15)
  4. Pandas and Linear Regression in Code (Dated: 2022-Feb-16)
  5. Naive Bayes Classifier / Application: Spam Filtering / Dated: 2022 Feb 17
  6. Decision Trees Learning (2022 Feb 22)
  7. Perceptron in Machine Learning (24 Apr 2022)
Tags: Machine Learning, Mathematical Foundations for Data Science, Technology, Index