
Unit 03: Advanced Statistics

Data Transformation
Data transformation in data mining refers to the process of converting raw data into
a format that is suitable for analysis and modeling. The goal of data transformation
is to prepare the data for data mining so that it can be used to extract useful insights
and knowledge. Data transformation typically involves several steps, including:
1. Data cleaning: Removing or correcting errors, inconsistencies, and
missing values in the data.
2. Data integration: Combining data from multiple sources, such as
databases and spreadsheets, into a single format.
3. Data normalization: Scaling the data to a common range of values, such
as between 0 and 1, to facilitate comparison and analysis.
4. Data reduction: Reducing the dimensionality of the data by selecting a
subset of relevant features or attributes.
5. Data discretization: Converting continuous data into discrete categories
or bins.
6. Data aggregation: Combining data at different levels of granularity, such
as by summing or averaging, to create new features or attributes.
Data transformation is an important step in the data mining process: it helps to
ensure that the data is in a format suitable for analysis and modeling, and that it is
free of errors and inconsistencies. Data transformation can also improve the
performance of data mining algorithms by reducing the dimensionality of the data
and by scaling it to a common range of values.
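As a quick illustration, here is a minimal pandas sketch (the column names and values are invented for the example) that touches three of the steps above: cleaning a missing value, normalizing to [0, 1], and aggregating by group.

import pandas as pd

# Hypothetical raw sales data with one missing value
df = pd.DataFrame({
    "region": ["North", "North", "South", "South"],
    "sales": [100.0, None, 250.0, 300.0],
})

# Data cleaning: fill the missing value with the column mean
df["sales"] = df["sales"].fillna(df["sales"].mean())

# Data normalization: min-max scale sales into the [0, 1] range
s = df["sales"]
df["sales_scaled"] = (s - s.min()) / (s.max() - s.min())

# Data aggregation: total sales per region
print(df.groupby("region")["sales"].sum())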
In more detail, the data are transformed in ways that make them ideal for mining.
The individual transformation steps are:
1. Smoothing: Smoothing is a process used to remove noise from a dataset with the
help of algorithms. It highlights the important features present in the dataset and
helps in predicting patterns. When collecting data, the data can be manipulated to
eliminate or reduce variance or other forms of noise. The idea behind data
smoothing is that it can identify simple changes and thereby help predict trends and
patterns. This helps analysts or traders who need to look at a lot of data, which can
otherwise be difficult to digest, to find patterns they would not see otherwise.
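For instance, a moving average is one of the simplest smoothing algorithms. Below is a minimal sketch, with invented prices, using the rolling-mean facility in pandas:

import pandas as pd

# Hypothetical noisy daily prices
prices = pd.Series([10, 12, 11, 15, 14, 18, 17, 21, 20, 24])

# Smoothing: a 3-point moving average dampens the noise,
# making the underlying upward trend easier to see
smoothed = prices.rolling(window=3).mean()
print(smoothed)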
2. Aggregation: Data aggregation is the method of storing and presenting data in a
summary format. The data may be obtained from multiple data sources and
integrated into a single data analysis description. This is a crucial step, since the
accuracy of data analysis insights depends heavily on the quantity and quality of the
data used. Gathering accurate data of sufficient quality and quantity is necessary to
produce relevant results. Aggregated data is useful for everything from decisions
about financing or business strategy to pricing, operations, and marketing. For
example, sales data may be aggregated to compute monthly and annual totals.
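A minimal sketch of that sales example, with invented records, using pandas group-by aggregation:

import pandas as pd

# Hypothetical individual sales records
sales = pd.DataFrame({
    "date": pd.to_datetime(["2023-01-05", "2023-01-20",
                            "2023-02-10", "2024-03-15"]),
    "amount": [200, 150, 300, 250],
})

# Aggregation: roll the records up into monthly and annual totals
monthly = sales.groupby(sales["date"].dt.to_period("M"))["amount"].sum()
annual = sales.groupby(sales["date"].dt.year)["amount"].sum()
print(monthly)
print(annual)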
3. Discretization: Discretization is the process of transforming continuous data into a
set of small intervals. Most real-world data mining activities involve continuous
attributes, yet many existing data mining frameworks are unable to handle them.
Even when a data mining task can manage a continuous attribute, its efficiency can
be significantly improved by replacing the continuous attribute with its discrete
values. For example, values can be grouped into ranges (1-10, 11-20), or age can be
mapped to the categories young, middle age, and senior.
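A minimal sketch of the age example, with invented values and bin edges, using pandas:

import pandas as pd

# Hypothetical continuous ages
ages = pd.Series([18, 25, 37, 45, 52, 68, 71])

# Discretization: cut the continuous values into labelled intervals
categories = pd.cut(ages, bins=[0, 30, 60, 100],
                    labels=["young", "middle age", "senior"])
print(categories)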
4. Attribute Construction: New attributes are constructed from the given set of
attributes and applied to assist the mining process. This simplifies the original data
and makes mining more efficient.
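A minimal sketch, with invented attributes, constructing a new attribute from two existing ones:

import pandas as pd

# Given attributes: quantity and unit price
orders = pd.DataFrame({"quantity": [2, 5, 1],
                       "unit_price": [9.99, 3.50, 120.00]})

# Attribute construction: derive total_price from the given attributes
orders["total_price"] = orders["quantity"] * orders["unit_price"]
print(orders)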
5. Generalization: Generalization converts low-level data attributes into high-level
data attributes using a concept hierarchy. For example, age in numerical form (22,
25) may be converted into a categorical value (young, old), and categorical attributes
such as house addresses may be generalized to higher-level concepts such as town
or country.
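A minimal sketch of the address example, with an invented city-to-country concept hierarchy:

import pandas as pd

# Hypothetical concept hierarchy mapping low-level cities to countries
hierarchy = {"Mumbai": "India", "Pune": "India", "Paris": "France"}

cities = pd.Series(["Mumbai", "Paris", "Pune"])

# Generalization: replace low-level values with higher-level concepts
countries = cities.map(hierarchy)
print(countries)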
6. Normalization: Data normalization involves converting all data variables into a
given range. Techniques that are used for normalization are:
 Min-Max Normalization:
 This transforms the original data linearly.
 Suppose min_A is the minimum and max_A is the maximum value of an
attribute A, and the new range is [new_min_A, new_max_A].
 A value v of attribute A is normalized to v' by computing
v' = ((v - min_A) / (max_A - min_A)) * (new_max_A - new_min_A) + new_min_A
 where v is the value you want to map into the new range and v' is the
new value you get after normalizing the old value.
 Z-Score Normalization:
 In z-score normalization (or zero-mean normalization) the values of an
attribute A are normalized based on the mean of A and its standard
deviation.
 A value v of attribute A is normalized to v' by computing
v' = (v - mean_A) / std_A
 Decimal Scaling:
 It normalizes the values of an attribute by moving their decimal points.
 The number of places the decimal point moves is determined by the
maximum absolute value of attribute A.
 A value v of attribute A is normalized to v' by computing
v' = v / 10^j
 where j is the smallest integer such that Max(|v'|) < 1.
 Suppose the values of an attribute A vary from -99 to 99. The maximum
absolute value is 99, so we divide each value by 100 (i.e., j = 2, the
number of digits in the largest absolute value) and the values come out
as 0.99, 0.98, 0.97, and so on.
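The three techniques can be written directly from the formulas above. A minimal NumPy sketch, with invented values of an attribute ranging from -99 to 99:

import numpy as np

v = np.array([-99.0, -50.0, 0.0, 45.0, 99.0])

# Min-max normalization into the new range [0, 1]
v_minmax = (v - v.min()) / (v.max() - v.min())

# Z-score normalization: subtract the mean, divide by the std deviation
v_zscore = (v - v.mean()) / v.std()

# Decimal scaling: max(|v|) = 99, so j = 2 and we divide by 10^2
v_decimal = v / 10**2

print(v_minmax)
print(v_zscore)
print(v_decimal)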

ADVANTAGES AND DISADVANTAGES:

Advantages of Data Transformation in Data Mining:


1. Improves Data Quality: Data transformation helps to improve the quality of
data by removing errors, inconsistencies, and missing values.
2. Facilitates Data Integration: Data transformation enables the integration of
data from multiple sources, which can improve the accuracy and
completeness of the data.
3. Improves Data Analysis: Data transformation helps to prepare the data for
analysis and modeling by normalizing, reducing dimensionality, and
discretizing the data.
4. Increases Data Security: Data transformation can be used to mask sensitive
data, or to remove sensitive information from the data, which can help to
increase data security.
5. Enhances Data Mining Algorithm Performance: Data transformation can
improve the performance of data mining algorithms by reducing the
dimensionality of the data and scaling the data to a common range of
values.

Disadvantages of Data Transformation in Data Mining:

1. Time-consuming: Data transformation can be a time-consuming process,
especially when dealing with large datasets.
2. Complexity: Data transformation can be a complex process, requiring
specialized skills and knowledge to implement and interpret the results.
3. Data Loss: Data transformation can result in data loss, such as when
discretizing continuous data, or when removing attributes or features from
the data.
4. Biased transformation: Data transformation can result in bias, if the data is
not properly understood or used.
5. High cost: Data transformation can be an expensive process, requiring
significant investments in hardware, software, and personnel.
6. Overfitting: Data transformation can lead to overfitting, a common problem in
machine learning where a model learns the detail and noise in the training data to
the extent that it negatively impacts its performance on new, unseen data.

Data Standardization

1. What is a data standardization example?

Data standardization is converting data into a standard format that computers can easily
understand and use. For example, when you standardize data, you might convert all
measurements into the metric system or all dates into a single format (such as YYYY-MM-
DD).
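A minimal sketch of the date example, with invented input strings, using pandas to coerce mixed formats into YYYY-MM-DD:

import pandas as pd

# Hypothetical dates recorded in three different formats
raw = pd.Series(["03/14/2023", "2023-03-15", "16 Mar 2023"])

# Standardization: parse each string and re-emit it as YYYY-MM-DD
standardized = raw.map(lambda s: pd.to_datetime(s).strftime("%Y-%m-%d"))
print(standardized)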

2. What is the purpose of data standardization?


The primary purpose of data standardization is to improve data quality, reduce costs, and
support better decision-making. Data standardization can also help to enhance
communication and collaboration between different teams and departments.

3. Why is data standardization important in healthcare?


Data standardization is critical in healthcare because it helps improve patient care, increase
operational efficiency, and reduce costs. When data is standardized, it is easier to exchange
and use, which leads to better decision-making and improved patient outcomes.

4. What are data standardization and normalization?


Data standardization converts data into a standard format, often to improve the
compatibility of data between different systems. Data normalization, by contrast, ensures
that data is consistent and free of errors; this is usually done by ensuring that information is
stored in a standard format and adheres to predefined rules.
Conclusion
Data standardization is the method of organizing data so that it can be easily accessed and
used by businesses. This process is essential because it allows companies to make better
decisions, improve efficiency, and save money. There are a few different ways to standardize
data; the most effective method will vary depending on the type and amount of data.
However, all businesses can benefit from data standardization, which is a crucial part of data
management.
Normalization vs Standardization



Feature scaling is one of the most important data preprocessing steps in machine learning.
Algorithms that compute distances between features are biased towards numerically larger
values if the data is not scaled.
Tree-based algorithms, by contrast, are fairly insensitive to the scale of the features. Feature
scaling also helps machine learning and deep learning algorithms train and converge faster.
Normalization and Standardization are the most popular feature scaling techniques, and at
the same time the most often confused.
Normalization or Min-Max Scaling is used to transform features to be on a similar scale.
The new point is calculated as:
X_new = (X - X_min)/(X_max - X_min)
This scales the range to [0, 1] or sometimes [-1, 1]. Geometrically speaking, the
transformation squishes the n-dimensional data into an n-dimensional unit hypercube.
Normalization is useful when there are no outliers, as it cannot cope with them. For
example, we would usually scale age rather than income, because only a few people have
very high incomes while ages are close to uniformly distributed.
Standardization or Z-Score Normalization is the transformation of features by
subtracting the mean and dividing by the standard deviation. The result is often called the
Z-score.
X_new = (X - mean)/Std
Standardization can be helpful in cases where the data follows a Gaussian distribution,
though this does not have to be true. Geometrically speaking, it translates the mean vector
of the original data to the origin and squishes or expands the points so that the standard
deviation is 1. Since we are only changing the mean and standard deviation, a normal
distribution remains normal: the shape of the distribution is not affected.
Standardization is much less affected by outliers than normalization, because there is no
predefined range for the transformed features.
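A minimal Scikit-Learn sketch, with an invented feature column containing one outlier, showing both transformers side by side:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Hypothetical feature column; 95.0 is an outlier
X = np.array([[1.0], [2.0], [3.0], [4.0], [95.0]])

# Normalization: squeezes every value into [0, 1]
X_minmax = MinMaxScaler().fit_transform(X)

# Standardization: zero mean, unit standard deviation, unbounded range
X_standard = StandardScaler().fit_transform(X)

print(X_minmax.ravel())
print(X_standard.ravel())

Note how the outlier compresses the normalized values of the remaining points toward 0, while the standardized values keep their relative spread.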
Difference between Normalization and Standardization
1. Normalization uses the minimum and maximum values of the features for scaling;
standardization uses the mean and standard deviation.
2. Normalization is used when features are on different scales; standardization is used
when we want zero mean and unit standard deviation.
3. Normalization scales values to [0, 1] or [-1, 1]; standardization is not bounded to any
range.
4. Normalization is strongly affected by outliers; standardization is much less affected
by outliers.
5. Scikit-Learn provides the MinMaxScaler transformer for normalization and the
StandardScaler transformer for standardization.
6. Normalization squishes the n-dimensional data into an n-dimensional unit
hypercube; standardization translates the mean of the data to the origin and
squishes or expands the points.
7. Normalization is useful when we do not know the distribution of the data;
standardization is useful when the feature distribution is normal (Gaussian).
8. Normalization is also called Min-Max Scaling; standardization is also called Z-Score
Normalization.
