
Exploratory Data Analysis (EDA) in Python!

Guest Blog Last Updated : 30 Jul, 2021
6 min read

Introduction

Exploratory data analysis is one of the best practices used in data science today. When starting a career in data science, people often don't know the difference between data analysis and exploratory data analysis. The difference between the two is not large, but each serves a different purpose.

Exploratory Data Analysis

Exploratory Data Analysis (EDA): Exploratory data analysis is a complement to inferential statistics, which tends to be fairly rigid, with rules and formulas. At an advanced level, EDA involves looking at and describing the data set from different angles and then summarizing it.

Data Analysis: Data analysis uses statistics and probability to figure out trends in the data set. It is used to examine historical data with analytics tools, and it helps drill down into the information to turn metrics, facts, and figures into initiatives for improvement.

Exploratory Data Analysis(EDA)

We will explore a data set and perform exploratory data analysis in Python. You can refer to our Python course online to get on board with Python.

The major topics to be covered are below:

– Handling missing values
– Removing duplicates
– Outlier treatment
– Normalizing and scaling (numerical variables)
– Encoding categorical variables (dummy variables)
– Bivariate analysis

Importing Libraries

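The original article shows the imports as a screenshot; a typical stack for this walkthrough (the exact aliases are an assumption) is:

```python
# Core EDA stack: data handling, numerics, and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
```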

Loading the Data Set

We will load the EDA cars Excel file using pandas. For this, we will use the read_excel function.
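The load step's screenshot is not recoverable. A self-contained sketch is below; the file name in the comment and the stand-in columns are assumptions, not the article's actual data:

```python
import pandas as pd

# In the article the data comes from an Excel file, roughly:
#   df = pd.read_excel("EDA_cars.xlsx")   # file name is an assumption
# For a runnable sketch, a tiny stand-in frame with similar columns:
df = pd.DataFrame({
    "INCOME": [45000.0, 52000.0, None, 61000.0],
    "TRAVEL TIME": [25.0, 30.0, 45.0, None],
    "AGE": [34, 41, 29, 55],
    "SEX": ["M", "F", "F", "M"],
})
print(df.shape)  # the real data set has 303 rows and 13 columns
```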


Basic Data Exploration

In this step, we will perform the operations below to check what the data set comprises. We will check:

– head of the dataset
– the shape of the dataset
– info of the dataset
– summary of the dataset

  1. The head function shows the top records in the data set. By default, pandas displays only the top 5 records.
  2. The shape attribute tells us the number of observations and variables in the data set. It is used to check the dimensions of the data. The cars data set has 303 observations and 13 variables.


  3. info() is used to check the Information about the data and the datatypes of each respective attribute.


    Looking at the data in the head function and in info, we can see that the variables Income and Travel time should be of float data type but are stored as object, so we will convert them to float. Also, there are some invalid values like '@@' and '*' in the data, which we will treat as missing values.


  4. The describe method helps us see how the data is spread for the numerical values. We can clearly see the minimum value, mean, different percentile values, and maximum value.

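The exploration steps above can be sketched as follows. The invalid markers mirror the ones the article mentions, but the frame itself is a stand-in:

```python
import pandas as pd

# Stand-in frame with the invalid markers the article describes
df = pd.DataFrame({
    "INCOME": ["45000", "@@", "61000"],
    "TRAVEL TIME": ["25", "*", "45"],
    "SEX": ["M", "F", "M"],
})

print(df.head())       # top records (5 by default)
print(df.shape)        # (rows, columns)
df.info()              # dtypes and non-null counts
print(df.describe())   # summary statistics (numeric spread once columns are numeric)

# Treat invalid markers like '@@' and '*' as missing, then cast to float
for col in ["INCOME", "TRAVEL TIME"]:
    df[col] = pd.to_numeric(df[col], errors="coerce")
print(df.dtypes)
```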

Handling Missing Values


We can see that we have missing values in several columns. There are various ways of treating missing values in a data set, and which technique to use depends on the type of data you are dealing with.

  • Drop the missing values: In this case, we drop the rows with missing values from those variables. If there are very few missing values, you can drop them.
  • Impute with the mean value: For a numerical column, you can replace the missing values with the mean. Before doing so, it is advisable to check that the variable doesn't have extreme values, i.e. outliers.
  • Impute with the median value: For a numerical column, you can also replace the missing values with the median. If you have extreme values such as outliers, the median approach is advisable.
  • Impute with the mode value: For a categorical column, you can replace the missing values with the mode, i.e. the most frequent value.

In this exercise, we will impute the numerical columns with their median values, and for categorical columns we will drop the rows with missing values.
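The screenshot of this step isn't recoverable; a minimal sketch of the same strategy on a stand-in frame:

```python
import pandas as pd

df = pd.DataFrame({
    "INCOME": [45000.0, None, 61000.0, 52000.0],
    "AGE": [34.0, 41.0, None, 29.0],
    "SEX": ["M", None, "F", "M"],
})

print(df.isnull().sum())  # missing count per column

# Impute numerical columns with the median (robust to outliers)
for col in ["INCOME", "AGE"]:
    df[col] = df[col].fillna(df[col].median())

# Drop rows that still have missing categorical values
df = df.dropna(subset=["SEX"])
print(df.isnull().sum())  # all zeros after treatment
```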


Handling Duplicate Records


Since we have 14 duplicate records in the data, we will remove them so that only distinct records remain. After removing the duplicates, we will check whether they are gone from the data set.
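A sketch of the duplicate check and removal on a small stand-in frame (the real data has 14 duplicates):

```python
import pandas as pd

df = pd.DataFrame({"AGE": [34, 41, 34, 29], "SEX": ["M", "F", "M", "F"]})

print(df.duplicated().sum())   # number of fully duplicated rows
df = df.drop_duplicates()
print(df.duplicated().sum())   # 0 after removal
```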


Handling Outliers

Outliers, being the most extreme observations, may include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations.

We generally identify outliers with the help of a box plot; here the box plot shows some data points outside the range of the rest of the data.

Box plot before removing outliers

Looking at the box plot, it seems that the variable INCOME has outliers. These outlier values need to be treated, and there are several ways of treating them:

  • Drop the outlier value
  • Replace the outlier value using the IQR
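The IQR-based treatment can be sketched as below; the sample values are illustrative, and capping at the whisker boundaries is one common variant of "replace using the IQR":

```python
import pandas as pd

income = pd.Series([45000, 52000, 48000, 51000, 47000, 250000])  # 250000 is an outlier

# Interquartile range and the box-plot whisker boundaries
q1, q3 = income.quantile(0.25), income.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

# Cap values outside the whiskers at the boundary
treated = income.clip(lower=lower, upper=upper)
print(treated.max())  # no value exceeds the upper whisker anymore
```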


Box plot after removing outliers

Bivariate Analysis

Bivariate analysis means analyzing two variables. Since we have both numerical and categorical variables, there is a way of analyzing each combination, as shown below:

  1. Numerical vs. Numerical

    1. Scatterplot
    2. Line plot
    3. Heatmap for correlation
    4. Joint plot

  2. Categorical vs. Numerical

    1. Bar chart
    2. Violin plot
    3. Categorical box plot
    4. Swarm plot

  3. Two Categorical Variables

    1. Bar chart
    2. Grouped bar chart
    3. Point plot

If we need to find the correlation between the numerical variables:


Correlation between all the variables
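The correlation heatmap itself is lost to a screenshot; a sketch of computing the underlying matrix on stand-in columns (a seaborn heatmap call is noted in a comment to keep the block plotting-free):

```python
import pandas as pd

df = pd.DataFrame({
    "INCOME": [45000, 52000, 48000, 61000],
    "AGE": [29, 34, 31, 41],
    "TRAVEL TIME": [40, 25, 35, 20],
})

corr = df.corr()      # pairwise Pearson correlation of numerical columns
print(corr.round(2))
# To visualize, with seaborn imported: sns.heatmap(corr, annot=True)
```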

Normalizing and Scaling

Often the variables of a data set are on different scales, i.e. one variable is in millions and another only in hundreds. For example, in our data set Income has values in thousands while Age has just two digits. Since these variables are on different scales, they are hard to compare.

Feature scaling (also known as data normalization) is the method used to standardize the range of features of data. Since the range of values of data may vary widely, it becomes a necessary step in data preprocessing while using machine learning algorithms.

In this method, we convert variables with different scales of measurement onto a single scale. StandardScaler standardizes the data using the formula (x - mean) / standard deviation. We will do this only for the numerical variables.
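A sketch of that formula applied with plain pandas; sklearn's StandardScaler computes the same thing, dividing by the population standard deviation (ddof=0), which we mirror here:

```python
import pandas as pd

df = pd.DataFrame({"INCOME": [45000.0, 52000.0, 48000.0, 61000.0],
                   "AGE": [29.0, 34.0, 31.0, 41.0]})

# Standardize each numerical column: (x - mean) / std
scaled = (df - df.mean()) / df.std(ddof=0)
print(scaled.mean().round(6))  # roughly 0 for every column after scaling
```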


Encoding

One-hot encoding is used to create dummy variables that replace the categories in a categorical variable with one feature per category, represented as 1 or 0 based on the presence or absence of that categorical value in the record.

This is required because machine learning algorithms work only on numerical data. That is why we need to convert the categorical columns into numerical ones.

get_dummies is the pandas method that creates dummy variables for each categorical variable.
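A minimal get_dummies sketch on a stand-in frame:

```python
import pandas as pd

df = pd.DataFrame({"SEX": ["M", "F", "M"], "AGE": [34, 29, 41]})

# One 0/1 indicator column per category; numerical columns pass through
encoded = pd.get_dummies(df, columns=["SEX"])
print(encoded.columns.tolist())  # ['AGE', 'SEX_F', 'SEX_M']
```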


About the Author

Ritika Singh – Data Scientist

I am a Data scientist by profession and a Blogger by passion. I have been working on machine learning projects for more than 2 years. Here you will find articles on “Machine Learning, Statistics, Deep Learning, NLP and Artificial Intelligence”.


Responses From Readers

Abdallah

Why did you treat postal code as a numerical variable? It is not meaningful to represent it that way, since a numerical value for postal code will be misinterpreted by any machine learning algorithm. For example, the postal code "90049" will be matched with a label based on the correlation and the postal code "300" will be matched to the other label since it has a lower value, which is incorrect. It would be better represented as a categorical variable, even if there are many unique observations.

Bala

Hi Ritika, can you please help me with the csv file that you used for this tutorial? I would like to use the file to learn the steps taught here.

rohith gaddam

Cool and clear, it's easy to understand. Thanks for the explanation, I fell in love with your blog.
