How to tune hyperparameters using Random Search in Python?

This recipe shows how to tune hyperparameters using Random Search in Python.

Recipe Objective

Often, while working on a dataset with a machine learning model, we don't know which set of hyperparameters will give us the best result. Passing every set of hyperparameters through the model manually and checking the results is tedious and may not even be feasible.

To find a good set of hyperparameters we can use Random Search. Random Search passes random combinations of hyperparameters into the model one by one and checks the results. Finally, it returns the set of hyperparameters that gave the best result.
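As a minimal standalone sketch of the idea (independent of the recipe below), random search repeatedly draws hyperparameter values from distributions or lists instead of enumerating every grid point:

```python
import random
from scipy.stats import uniform

random.seed(0)
C_dist = uniform(loc=0, scale=4)   # continuous distribution: C sampled from [0, 4]
penalties = ["l1", "l2"]           # discrete choices are sampled uniformly

# Draw 5 random hyperparameter combinations rather than trying every grid point.
combos = [
    {"C": float(C_dist.rvs(random_state=i)), "penalty": random.choice(penalties)}
    for i in range(5)
]
for combo in combos:
    print(combo)
```

Each combination would then be fitted and scored, and the best-scoring one kept.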

So this recipe is a short example of how to tune hyperparameters using Random Search in Python.


Step 1 - Import the library - RandomizedSearchCV

from scipy.stats import uniform
from sklearn import linear_model, datasets
from sklearn.model_selection import RandomizedSearchCV

Here we have imported various modules: datasets, uniform, linear_model, and RandomizedSearchCV from different libraries. We will see how each is used later in the code snippets; for now, just have a look at these imports.

Step 2 - Setup the Data

Here we have used datasets to load the built-in iris dataset, and we have created objects X and y to store the data and the target values respectively.

iris = datasets.load_iris()
X = iris.data
y = iris.target

Step 3 - Using the Model

Here, we are using Logistic Regression as the machine learning model to tune with RandomizedSearchCV, so we have created an object logistic.

logistic = linear_model.LogisticRegression()

Step 4 - Parameters to be optimized

Logistic Regression has two hyperparameters, "C" and "penalty", to be optimized by RandomizedSearchCV. So we have set "C" as a continuous distribution and "penalty" as a list of values, from which RandomizedSearchCV will sample candidate values.

C = uniform(loc=0, scale=4)
penalty = ["l1", "l2"]
hyperparameters = dict(C=C, penalty=penalty)

Step 5 - Using RandomizedSearchCV and Printing Results

Before using RandomizedSearchCV, let's have a look at its important parameters.

  • estimator: the model (or pipeline) on which we want to run RandomizedSearchCV.
  • param_distributions: a dictionary (or list of dictionaries) mapping parameter names to distributions or lists of values, from which RandomizedSearchCV samples the candidates to try.
  • scoring: the evaluation metric used to decide the best hyperparameters; if not specified, the estimator's default score method is used.

Making an object clf for RandomizedSearchCV and fitting the dataset, i.e. X and y:

clf = RandomizedSearchCV(logistic, hyperparameters, random_state=1, n_iter=100, cv=5, verbose=0, n_jobs=-1)
best_model = clf.fit(X, y)

Now we are using print statements to print the results. They give the values of the best hyperparameters as output.

print("Best Penalty:", best_model.best_estimator_.get_params()["penalty"])
print("Best C:", best_model.best_estimator_.get_params()["C"])

As an output we get:

Best Penalty: l1
Best C: 1.668088018810296
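Putting the steps above together, here is one self-contained version of the recipe. Note one assumption not in the original code: in recent scikit-learn releases the default "lbfgs" solver does not support the "l1" penalty, so this sketch sets solver="liblinear" (which supports both penalties) and passes an explicit scoring metric.

```python
from scipy.stats import uniform
from sklearn import datasets, linear_model
from sklearn.model_selection import RandomizedSearchCV

# Step 2 - load the built-in iris dataset.
iris = datasets.load_iris()
X, y = iris.data, iris.target

# Step 3 - the model. "liblinear" supports both "l1" and "l2" penalties;
# the default solver in recent scikit-learn versions does not accept "l1".
logistic = linear_model.LogisticRegression(solver="liblinear")

# Step 4 - distributions/lists to sample hyperparameters from.
hyperparameters = dict(C=uniform(loc=0, scale=4), penalty=["l1", "l2"])

# Step 5 - run the randomized search with 5-fold cross-validation.
clf = RandomizedSearchCV(logistic, hyperparameters, n_iter=100, cv=5,
                         scoring="accuracy", random_state=1, n_jobs=-1)
best_model = clf.fit(X, y)

print("Best Penalty:", best_model.best_estimator_.get_params()["penalty"])
print("Best C:", best_model.best_estimator_.get_params()["C"])
print("Best CV accuracy:", best_model.best_score_)
```

The exact best values may differ slightly from the output above depending on the scikit-learn version and solver.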

