In this project, we’ll work with historical data on the price of the S&P 500 index and use it to make predictions about future prices. Predicting whether the index will go up or down helps us forecast how the stock market as a whole will perform, and since stock prices tend to track the health of the broader economy, it can also inform economic forecasts.

We’ll be working with a CSV file containing index prices. Each row in the file contains a daily record of the price of the…
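As a sketch of what loading such a file might look like, using only the standard library; the column names (“Date”, “Close”) and the three sample rows below are assumptions for illustration, not taken from the article:

```python
import csv
import io

# Hypothetical three-row sample; the real CSV has one row per trading day.
# The column names "Date" and "Close" are assumptions for illustration.
csv_text = """Date,Close
2015-01-02,2058.20
2015-01-05,2020.58
2015-01-06,2002.61
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))
closes = [float(r["Close"]) for r in rows]

# Daily return: today's close relative to the previous day's close.
returns = [today / prev - 1 for prev, today in zip(closes, closes[1:])]
print(returns)
```

From returns like these you can build the up/down labels the project predicts.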

This article walks through a simple example of using the Gaussian (normal) distribution in a real-world scenario.

Imagine you work at a company named XYZ, perhaps as a data analyst, a data scientist, or even an HR manager, and the company decides to make its own branded t-shirt, carrying the company logo, for all employees. Let’s say the company has about 100k employees, and the t-shirts come in multiple sizes (S, M, L, XL).

And here the question you want to answer is: “How many of…
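One way to sketch the answer: assume (hypothetically, since the article gives no numbers) that employees’ chest sizes follow a normal distribution with mean 100 cm and standard deviation 8 cm, pick invented size boundaries, and count the probability mass in each interval.

```python
from statistics import NormalDist

# Assumed, not from the article: chest size ~ Normal(mean=100 cm, sd=8 cm).
chest = NormalDist(mu=100, sigma=8)
employees = 100_000

# Hypothetical size boundaries in centimetres.
bins = {"S": (0, 94), "M": (94, 102), "L": (102, 110), "XL": (110, 200)}

# Employees per size = total * probability mass in that interval.
counts = {size: round(employees * (chest.cdf(hi) - chest.cdf(lo)))
          for size, (lo, hi) in bins.items()}
print(counts)
```

Because the distribution peaks at the mean, the sizes straddling 100 cm get the most shirts.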

You have already studied linear algebra in high school and college, and it is fundamental to almost every field of mathematics. As a reminder, linear algebra is the branch of mathematics concerned with linear equations and linear maps, and with their representations by matrices and in vector spaces. In machine learning, linear algebra plays a vital role.
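For instance, a linear map can be applied by multiplying a matrix with a vector; the numbers below are arbitrary, chosen only to illustrate:

```python
# A linear map represented by a 2x2 matrix, applied with plain Python lists.
A = [[2, 0],
     [1, 3]]

def matvec(A, x):
    """Multiply matrix A by vector x: the matrix form of a linear map."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

x = [4, 5]
print(matvec(A, x))  # [2*4 + 0*5, 1*4 + 3*5] = [8, 19]

# Linearity: mapping a sum equals the sum of the mappings.
u, v = [1, 2], [3, 3]
lhs = matvec(A, [a + b for a, b in zip(u, v)])
rhs = [a + b for a, b in zip(matvec(A, u), matvec(A, v))]
print(lhs == rhs)  # True
```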

Linear algebra is a must-have prerequisite for data science and machine learning. If you want to build a strong data science career, linear algebra is a vital part of its foundation. You may wonder why you need to learn the…

There are different metrics for assessing a classifier’s performance; they are called evaluation metrics. They can be categorized as satisficing metrics and optimizing metrics. It is necessary to remember that these evaluation metrics must be evaluated on a training set, a development set, or a test set.

Example: Cat vs Non-cat

Let’s say you’ve decided that you care about the classification accuracy of your cat classifier (this could be an F1 score or some other measure of accuracy), but let’s say you also care about running time in addition to accuracy.
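A minimal sketch of how the two kinds of metric interact, with invented classifier names, accuracies, and runtimes: treat running time as a satisficing metric (it only has to be good enough) and accuracy as the optimizing metric (maximize it among the survivors).

```python
# Hypothetical results; the names, accuracies, and runtimes are invented.
classifiers = [
    {"name": "A", "accuracy": 0.90, "runtime_ms": 80},
    {"name": "B", "accuracy": 0.92, "runtime_ms": 95},
    {"name": "C", "accuracy": 0.95, "runtime_ms": 1500},
]

MAX_RUNTIME_MS = 100  # satisficing threshold: anything slower is ruled out

feasible = [c for c in classifiers if c["runtime_ms"] <= MAX_RUNTIME_MS]
best = max(feasible, key=lambda c: c["accuracy"])
print(best["name"])  # "B": most accurate among the fast-enough classifiers
```

Note that C has the best accuracy overall but fails the satisficing constraint.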

If you are tuning hyperparameters, trying out various ideas for learning algorithms, or exploring different options for building your learning system, you will notice that development goes much faster when you have a single number evaluation metric that quickly tells you whether your latest idea works better or worse than the last one. So when a team starts a machine learning project, set the problem up with a single real-number evaluation metric. Let us look at one case.
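A classic single number metric for a classifier is the F1 score, the harmonic mean of precision and recall; the dev-set numbers below for two ideas are hypothetical.

```python
def f1(precision, recall):
    # Harmonic mean: a single number combining precision and recall.
    return 2 * precision * recall / (precision + recall)

# Hypothetical results for two ideas; one score each makes comparison instant.
idea_1 = f1(precision=0.95, recall=0.90)
idea_2 = f1(precision=0.98, recall=0.85)
print(idea_1 > idea_2)  # True: idea 1 wins on the single-number metric
```

With two separate numbers (precision and recall) per idea, the winner would be ambiguous; collapsing them to one F1 score makes the decision immediate.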

We often get an idea, code it, run the experiment to see how…

Orthogonalization is a system design property that ensures that modifying one instruction or component of an algorithm does not create or propagate side effects to other components of the system. Orthogonalization makes it easier to verify the algorithms independently, thus reducing the time required for testing and development.

One of the problems with developing machine learning systems is that there are so many things you might try to change, for instance, so many hyperparameters you might tune. One of the things I have noticed is that most people who are learning machine learning are really consistent about what to…

Today deep learning is applied in many different areas, and intuitions about hyperparameter settings from one application area may or may not transfer to another. There is a lot of mixing between different application domains, and one pleasant development in deep learning is that people from different application domains increasingly read research papers from other domains to look for cross-fertilization and inspiration.

Finally, in terms of how people search for hyperparameters, I see perhaps two major schools of thought. One approach is to babysit a single model. And typically you do this because…

In the last article you saw how random sampling over the range of hyperparameters can enable you to search the space of hyperparameters more efficiently. Yet it turns out that random sampling doesn’t mean sampling uniformly at random over the range of valid values. Instead, it’s important to pick the appropriate scale on which to explore the hyperparameters.
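The efficiency point from the last article can be sketched as follows (the seed, ranges, and trial count are arbitrary): random search draws a fresh value of every hyperparameter per trial, so 25 trials test 25 distinct learning rates, whereas a 5×5 grid would test only 5.

```python
import random

random.seed(0)

# Random search: every trial draws a fresh value for each hyperparameter,
# so 25 trials try 25 distinct learning rates; a 5x5 grid would try only 5.
trials = [{"lr": random.uniform(0.0001, 1.0),
           "hidden_units": random.randint(50, 100)}
          for _ in range(25)]

distinct_lrs = {t["lr"] for t in trials}
print(len(distinct_lrs))  # 25
```

The uniform draw for `lr` here is the naive version; as the paragraph notes, a uniform scale is not the right choice for a learning rate.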

Let’s look at one case. Suppose you are searching for the hyperparameter alpha, the learning rate. And let’s say you suspect that 0.0001 may be on the low end, or perhaps it might be as high as 1. Now if you draw…
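For a range like this, the usual fix, sketched here following the standard log-scale recipe, is to sample the exponent uniformly and set alpha = 10**r:

```python
import random

random.seed(1)

def sample_alpha():
    # Draw the exponent uniformly from [-4, 0], so alpha spans 1e-4 to 1
    # with equal search effort spent on each decade.
    r = random.uniform(-4, 0)
    return 10 ** r

alphas = [sample_alpha() for _ in range(10_000)]
in_lowest_decade = sum(1 for a in alphas if a < 0.001)   # [1e-4, 1e-3)
in_highest_decade = sum(1 for a in alphas if a >= 0.1)   # [0.1, 1]
print(in_lowest_decade, in_highest_decade)  # each roughly a quarter of 10,000
```

Sampling alpha uniformly on [0.0001, 1] instead would spend about 90% of the trials on values above 0.1.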

Optimization, or tuning, of hyperparameters is the problem of choosing an appropriate set of hyperparameters for a learning algorithm. A hyperparameter is a parameter whose value controls the learning process; the values of other parameters (usually node weights) are learned, by contrast.

One of the frustrating things about training deep neural networks is the sheer number of hyperparameters you have to deal with, ranging from the learning rate alpha to the momentum term beta (if you are using momentum), or the beta-one, beta-two, and epsilon hyperparameters of the Adam optimization algorithm. You may need to pick the number of…

The Adam optimization algorithm is one of the few algorithms that has truly stood up and proven effective across a wide variety of deep learning models. Adam essentially combines Momentum and RMSprop. Adam stands for Adaptive Moment Estimation.

How does it work?

- First, it computes an exponentially weighted average of past gradients and stores it in the VdW & Vdb variables (before bias correction) and VdWcorrected & Vdbcorrected (with bias correction).
- It then computes an exponentially weighted average of the squares of past gradients and stores it in SdW & Sdb (before bias correction) and SdWcorrected…
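The steps above can be sketched for a single scalar weight as follows; the learning rate and betas are the commonly used defaults, and the toy objective f(w) = w² is invented for illustration.

```python
import math

def adam_step(w, grad, v, s, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar weight w at time step t (t starts at 1)."""
    v = beta1 * v + (1 - beta1) * grad          # momentum term (VdW / Vdb)
    s = beta2 * s + (1 - beta2) * grad ** 2     # RMSprop term (SdW / Sdb)
    v_corrected = v / (1 - beta1 ** t)          # bias correction
    s_corrected = s / (1 - beta2 ** t)
    w -= lr * v_corrected / (math.sqrt(s_corrected) + eps)
    return w, v, s

# Toy run: minimize f(w) = w**2, whose gradient is 2*w, starting at w = 1.
w, v, s = 1.0, 0.0, 0.0
for t in range(1, 501):
    w, v, s = adam_step(w, grad=2 * w, v=v, s=s, t=t)
print(w)  # has moved from 1.0 toward the minimum at 0
```

The bias correction matters early on: at t = 1, v is only (1 - beta1) of the gradient, and dividing by (1 - beta1**t) rescales it to an unbiased estimate.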

I post articles on Data Science | Machine Learning | Deep Learning. Connect with me on LinkedIn: https://www.linkedin.com/in/bibek-shah-shankhar/