
Week 3 - Data Preprocessing, Extraction and Preparation
AI/ML in Cybersecurity
Dr. Abdul Shahid
Outline
• Recap:
• Cyber Threat Taxonomy
• ML Model Development
• Datasets
• Data and Data Types
• Graphic Displays of Basic Statistical Descriptions
• Why Preprocess the Data?
• Data cleaning
• Data integration
• Data reduction
• Data Transformation and Data Discretization

• Preprocessing Tasks
• Data cleaning
• Data integration
• Data reduction
• Data Transformation and Data Discretization
Major Tasks in Data Preprocessing
• Data cleaning
• Fill in missing values, smooth noisy data, identify or remove outliers,
and resolve inconsistencies
• Data integration
• Integration of multiple databases, data types, or files
• Data reduction
• Dimensionality reduction
• Data Transformation and Data Discretization
• A process where the entire set of values of a given attribute is mapped to a new set of replacement values such that each old value can be identified with one of the new values
Summary - Preprocessing Tasks
• Data cleaning
• Data integration
• Data reduction
• Data Transformation and Data
Discretization
Data cleaning
• Missing and Incomplete Data Handling in Cybersecurity Applications
Marek Pawlicki, Michał Choraś, Rafał Kozik, and Witold Hołubowicz
https://link.springer.com/chapter/10.1007/978-3-030-73280-6_33
• https://www.unb.ca/cic/datasets/ids-2017.html (the CICIDS2017 dataset, publicly available for researchers)
• A few of the techniques (a short code sketch follows this list):
• Complete Case Analysis (CCA)
The method is also known under the name of ‘Listwise Deletion’
• Single Imputation
Single imputation refers to a process where the missing variables are estimated using the observed values. Example (‘last
observation carried forward’ )
• Central Tendency Measures Substitution
The three measures of central tendency - mean, median and mode - are used for imputation almost as frequently as listwise
deletion
• Hot-Deck Imputation
The term hot-deck is a reference back to the times of punch cards, where the currently processed deck would literally be ‘hot’.
The donor samples are found through the auxiliary values via algorithms like the K-Nearest Neighbours.
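A minimal sketch of a few of these techniques in Python, assuming pandas and scikit-learn are available; the column names and values below are invented for illustration:

```python
# Sketch of listwise deletion, mean imputation, and hot-deck-style (KNN)
# imputation on a toy DataFrame with hypothetical flow features.
import numpy as np
import pandas as pd
from sklearn.impute import KNNImputer, SimpleImputer

df = pd.DataFrame({
    "flow_duration": [1.2, np.nan, 0.7, 3.4, np.nan, 2.1],
    "total_fwd_packets": [10, 4, np.nan, 25, 7, 12],
})

# Complete Case Analysis (listwise deletion): drop every row with a missing value
cca = df.dropna()

# Central tendency substitution: replace NaNs with the column mean (or median/mode)
mean_imputed = pd.DataFrame(
    SimpleImputer(strategy="mean").fit_transform(df), columns=df.columns
)

# Hot-deck-style imputation: borrow values from the K nearest donor rows
knn_imputed = pd.DataFrame(
    KNNImputer(n_neighbors=2).fit_transform(df), columns=df.columns
)
```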
A slight tangent from the topic- Datasets
• https://www.mdpi.com/2306-5729/7/2/22
• CIC-IDS2017
• UNSW-NB15
• DS2OS
• BoT-IoT
• KDD Cup 1999
• NSL-KDD
Data Cleaning
• Data in the Real World Is Dirty: Lots of potentially incorrect data, e.g.,
instrument faulty, human or computer error, transmission error
• incomplete: lacking attribute values, lacking certain attributes of
interest, or containing only aggregate data
• e.g., Occupation=“ ” (missing data)
• noisy: containing noise, errors, or outliers
• e.g., Salary=“−10” (an error)
• inconsistent: containing discrepancies in codes or names, e.g.,
• Age=“42”, Birthday=“03/07/2010”
• Was rating “1, 2, 3”, now rating “A, B, C”
• discrepancy between duplicate records
• Intentional (e.g., disguised missing data)
• Jan. 1 as everyone’s birthday?
https://www.dataentryoutsourced.com/blog/cxos-guide-to-marketing-and-sales-data-cleansing-and-enrichment/
Incomplete (Missing) Data
• Data is not always available
• e.g., many data objects have no recorded value for several attributes, such as customer income in sales data
• Missingness mechanisms: Missing Completely At Random (MCAR), Missing At Random (MAR), Missing Not At Random (MNAR)
• Missing data may be due to
• equipment malfunction
• inconsistent with other recorded data and thus deleted
• data not entered due to misunderstanding
• certain data may not be considered important at the time
of entry
• not register history or changes of the data
• Missing data may need to be inferred
https://www.analyticsvidhya.com/blog/2021/10/handling-missing-value/
How to Handle Missing Data?
• Ignore the record: usually done when the class label is missing (when doing classification)—not effective when the % of missing values per attribute varies considerably
• Fill in the missing value manually: tedious + infeasible?
• Fill it in automatically with
• a global constant: e.g., “unknown”, a new class?!
• the attribute mean
• the attribute mean for all samples belonging to the
same class: smarter
• the most probable value: inference-based such as
Bayesian formula or decision tree
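A rough pandas illustration of “fill with the attribute mean for all samples belonging to the same class” from the list above; the column names are hypothetical:

```python
# Global mean fill vs. class-conditional mean fill with pandas.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "label": ["benign", "benign", "attack", "attack", "attack"],
    "pkt_rate": [100.0, np.nan, 900.0, np.nan, 1100.0],
})

# Attribute mean over the whole column
df["pkt_rate_global"] = df["pkt_rate"].fillna(df["pkt_rate"].mean())

# Attribute mean within the same class label (the "smarter" variant)
df["pkt_rate_by_class"] = df["pkt_rate"].fillna(
    df.groupby("label")["pkt_rate"].transform("mean")
)
```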
Noisy Data
• Noise: random error or variance in a measured variable
• Incorrect attribute values may be due to
• faulty data collection instruments
• data entry problems
• data transmission problems
• technology limitation
• inconsistency in naming convention
• Other data problems which require data cleaning
• duplicate records
• incomplete data
• inconsistent data
How to Handle Noisy Data?
• Binning
• first sort data and partition into (equal-
frequency) bins
• then one can smooth by bin means,
smooth by bin median, smooth by bin
boundaries, etc.
• Regression
• smooth by fitting the data into regression
functions
• Clustering
• detect and remove outliers
• Combined computer and human inspection
• detect suspicious values and check by
human (e.g., deal with possible outliers)
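One possible way to realize the clustering idea above is to fit k-means and flag points that lie unusually far from their assigned centroid; the data and the cut-off below are purely illustrative:

```python
# Clustering-based outlier detection: points far from their k-means centroid
# are treated as suspicious and can be removed or inspected by a human.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)), [[8.0, 8.0]]])  # one obvious outlier

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
dist_to_centroid = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)

threshold = 3 * np.median(dist_to_centroid)   # arbitrary cut-off for the sketch
outliers = X[dist_to_centroid > threshold]
```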
Data integration
• https://www.frontiersin.org/articles/10.3389/fdata.2020.521132/full
• Merging Datasets of CyberSecurity Incidents for Fun and Insight
• Giovanni Abbiati, Silvio Ranise, Antonio Schizzerotto, and Alberto Siena

• Quality of data is the base requirement for quality results, and the method used to produce data is an important factor in improving data quality. The authors organize their methodology in three phases: identification and collection of the datasets to be merged; mapping, merging, and selection of the original datasets into one possibly containing duplicates (Section 2.2); and redundancy elimination to filter out redundant entries in the merged dataset (Section 2.3).
Handling Redundancy in Data Integration
• Redundant data often occur when integrating multiple databases
• Object identification: The same attribute or object
may have different names in different databases
• Derivable data: One attribute may be a “derived”
attribute in another table, e.g., annual revenue
• Redundant attributes may be able to be detected by
correlation analysis and covariance analysis
• Careful integration of the data from multiple sources may
help reduce/avoid redundancies and inconsistencies and
improve mining speed and quality
Correlation Analysis (Nominal Data)
• χ² (chi-square) test:
χ² = Σ (Observed − Expected)² / Expected
• The larger the χ² value, the more likely the variables are related
• The cells that contribute the most to the χ² value are those whose actual count is very different from the expected count
• Correlation does not imply causality
• # of hospitals and # of car-theft in a city are correlated
• Both are causally linked to the third variable: population
Chi-Square Calculation: An Example

                          Male        Female       Sum (row)
Like science fiction      250 (90)    200 (360)      450
Not like science fiction   50 (210)  1000 (840)     1050
Sum (col.)                300        1200           1500

• χ² (chi-square) calculation (numbers in parentheses are expected counts, calculated based on the data distribution in the two categories):

χ² = (250 − 90)²/90 + (50 − 210)²/210 + (200 − 360)²/360 + (1000 − 840)²/840 = 507.93
• It shows that like_science_fiction and gender are correlated in the group
Chi-Square Calculation
• For this 2 × 2 table, the degrees of freedom are (2 − 1) × (2 − 1) = 1.
• For 1 degree of freedom, the χ² value needed to reject the hypothesis at the 0.001 significance level is 10.828.
• Since our computed value (507.93) is above this threshold, we can reject
the hypothesis that gender and
preferred reading are independent
and conclude that the two attributes
are (strongly) correlated for the given
group of people.

https://faculty.washington.edu/heagerty/Books/Biostatistics/TABLES/ChiSquare/index.html
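The worked example above can be reproduced with SciPy's chi2_contingency; Yates' continuity correction is disabled so the result matches the hand calculation:

```python
# Chi-square test of independence on the gender vs. science-fiction table.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([[250, 200],     # like science fiction: male, female
                     [50, 1000]])    # not like science fiction: male, female

chi2, p_value, dof, expected = chi2_contingency(observed, correction=False)
print(chi2)      # ~507.93
print(dof)       # 1
print(expected)  # [[ 90. 360.] [210. 840.]]
```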
Correlation Analysis (Numeric Data)

• Correlation coefficient (also called Pearson’s product moment coefficient)

 
r(A,B) = Σ (aᵢ − Ā)(bᵢ − B̄) / [(n − 1) σA σB] = (Σ aᵢbᵢ − n Ā B̄) / [(n − 1) σA σB]

where n is the number of tuples, Ā and B̄ are the respective means of A and B, σA and σB are the respective standard deviations of A and B, and Σ aᵢbᵢ is the sum of the AB cross-products (the sums run over i = 1, …, n).

• If r(A,B) > 0, A and B are positively correlated (A’s values increase as B’s do); the higher the value, the stronger the correlation.
• r(A,B) = 0: A and B are uncorrelated (no linear relationship); r(A,B) < 0: negatively correlated
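A small sketch computing Pearson's r both directly from the formula above and with library helpers; the values are made up:

```python
# Pearson's product moment correlation coefficient, two ways.
import numpy as np
import pandas as pd

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
b = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

# Directly from the definition: sum((a - mean_a)(b - mean_b)) / ((n - 1) sd_a sd_b)
n = len(a)
r_manual = ((a - a.mean()) * (b - b.mean())).sum() / (
    (n - 1) * a.std(ddof=1) * b.std(ddof=1)
)

# Library helpers give the same value
r_numpy = np.corrcoef(a, b)[0, 1]
r_pandas = pd.Series(a).corr(pd.Series(b))
```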
Visually Evaluating Correlation

[Figure: scatter plots showing the similarity (correlation) from −1 to 1]
Data reduction
• Comparative analysis of dimensionality reduction
techniques for cybersecurity in the SWaT dataset
• Mehmet Bozdal, Kadir Ileri & Ali Ozkahraman
• https://link.springer.com/article/10.1007/s11227-023-05511-w
• … Additionally, the paper explores dimensionality reduction methods, including Autoencoders, Generalized
Eigenvalue Decomposition (GED), and Principal Component Analysis (PCA). The research findings highlight the
importance of balancing dimensionality reduction with the need for accurate intrusion detection. It is found that
PCA provided better performance compared to the other techniques, as reducing the input dimension by 90.2%
resulted in only a 2.8% and 2.6% decrease in the accuracy and F1 score, respectively.
Data Reduction 1: Dimensionality Reduction
• Curse of dimensionality
• When dimensionality increases, data becomes increasingly
sparse
• Density and distance between points, which are critical to clustering and outlier analysis, become less meaningful
• The possible combinations of subspaces will grow
exponentially
• Dimensionality reduction
• Avoid the curse of dimensionality
• Help eliminate irrelevant features and reduce noise
• Reduce time and space required in data mining
• Allow easier visualization
• Dimensionality reduction techniques
• Principal Component Analysis
• Supervised and nonlinear techniques (e.g., feature selection)
Principal Component Analysis (PCA)

• Find a projection that captures the largest amount of variation in data


• The original data are projected onto a much smaller space, resulting in dimensionality
reduction. We find the eigenvectors of the covariance matrix, and these eigenvectors
define the new space

[Figure: data in the (x1, x2) plane projected onto its principal component directions]
Principal Component Analysis (Steps)

• Given N data vectors from n dimensions, find k ≤ n orthogonal vectors (principal components) that can best be used to represent the data
• Normalize input data: Each attribute falls within the same range
• Compute k orthonormal (unit) vectors, i.e., principal components
• Each input data (vector) is a linear combination of the k principal component vectors
• The principal components are sorted in order of decreasing “significance” or strength
• Since the components are sorted, the size of the data can be reduced by eliminating
the weak components, i.e., those with low variance (i.e., using the strongest principal
components, it is possible to reconstruct a good approximation of the original data)
• Works for numeric data only
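A minimal PCA sketch with scikit-learn, assuming numeric data that is standardized first; the data here is random and purely for illustration:

```python
# Normalize, project onto the k strongest principal components, and inspect
# how much variance each component captures.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))                 # 200 samples, 10 numeric features

X_scaled = StandardScaler().fit_transform(X)   # each attribute on the same scale
pca = PCA(n_components=3)                      # keep the 3 strongest components
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                         # (200, 3)
print(pca.explained_variance_ratio_)           # variance captured per component
```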
Attribute Subset Selection
• Another way to reduce dimensionality of data
• Redundant attributes
• Duplicate much or all of the information contained in one or more other
attributes
• E.g., purchase price of a product and the amount of sales tax paid
• Irrelevant attributes
• Contain no information that is useful for the data mining task at hand
• E.g., students' ID is often irrelevant to the task of predicting students' GPA
Heuristic Search in Attribute Selection
• There are 2^d possible attribute combinations of d attributes
• Typical heuristic attribute selection methods:
• Best single attribute under the attribute
independence assumption: choose by significance
tests
• Best step-wise feature selection:
• The best single-attribute is picked first
• Then the next best attribute conditioned on the first, ...
• Step-wise attribute elimination:
• Repeatedly eliminate the worst attribute
• Best combined attribute selection and elimination
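A hedged sketch of best step-wise (forward) selection using scikit-learn's SequentialFeatureSelector on a toy classification task; setting direction="backward" gives step-wise attribute elimination instead:

```python
# Greedy step-wise feature selection with a simple wrapped classifier.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                           random_state=0)

selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=3,
    direction="forward",          # pick the best attribute, then the next best, ...
)
selector.fit(X, y)
print(selector.get_support())     # boolean mask of the selected attributes
```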
Data Reduction: Regression and Log-Linear Models

• Linear regression
• Data modeled to fit a straight line
• Often uses the least-square method to fit the line
• Multiple regression
• Allows a response variable Y to be modeled as a linear function of
multidimensional feature vector
• Log-linear model
• Approximates discrete multidimensional probability distributions
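For the linear regression item above, a least-squares line fit on synthetic data; numpy.polyfit performs the least-squares solve:

```python
# Fit y ≈ slope * x + intercept by least squares and use the fitted line
# as a compact model of the data.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.2, 4.1, 5.9, 8.2, 9.8])

slope, intercept = np.polyfit(x, y, deg=1)   # least-squares coefficients
y_fitted = slope * x + intercept             # values predicted by the line
```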
Clustering – sample data selection
• Partition data set into clusters based on similarity, and store cluster
representation (e.g., centroid and diameter) only
• Can have hierarchical clustering and be stored in multi-dimensional index
tree structures
• There are many choices of clustering definitions and clustering
algorithms
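A sketch of clustering as data reduction: keep only each cluster's centroid (plus its size) as a compact representation of the full data set; the data is synthetic:

```python
# Replace 1,000 rows by 20 cluster representatives.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))                 # original data set

km = KMeans(n_clusters=20, n_init=10, random_state=1).fit(X)
centroids = km.cluster_centers_                # 20 stored representatives
cluster_sizes = np.bincount(km.labels_)        # how many points each one stands for
```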
Data Transformation and Data Discretization
• https://ieeexplore.ieee.org/abstract/document/8947945

• Evaluation of Cybersecurity Data Set Characteristics for Their Applicability to Neural


Networks Algorithms Detecting Cybersecurity Anomalies
• Xavier A. Larriva-Novo; Mario Vega-Barbas; Víctor A. Villagrá; Mario Sanz Rodrigo
• Abstract: “….. this research focuses on the evaluation of characteristics for different well-
established Machine Leaning algorithms commonly applied to IDS scenarios. To do this, a
categorization for cybersecurity data sets that groups its records into several groups is first
considered. Making use of this division, this work seeks to determine which neural network
model (multilayer or recurrent), activation function, and learning algorithm yield higher
accuracy values, depending on the group of data…….”
• “Certain features, such as protocol, service or flag, are not presented numerically, which is why one-
hot coding was used [47]. In addition, some of the characteristics, such as duration or src bytes
(sbytes), present data with widely dispersed values over a wide numerical range, so they are
normalized by both the min-max function”
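A sketch of the preprocessing described in the quote: one-hot encode the categorical fields and min-max scale the widely dispersed numeric ones. The column names follow the quote; the values are invented:

```python
# One-hot coding for categorical features plus min-max scaling for numeric ones.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

df = pd.DataFrame({
    "protocol": ["tcp", "udp", "tcp", "icmp"],
    "service": ["http", "dns", "ftp", "eco_i"],
    "duration": [0, 2, 5057, 12],
    "sbytes": [181, 146, 2032384, 520],
})

encoded = pd.get_dummies(df, columns=["protocol", "service"])   # one-hot coding
encoded[["duration", "sbytes"]] = MinMaxScaler().fit_transform(
    encoded[["duration", "sbytes"]]
)
```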
Data Transformation
• A function that maps the entire set of values of a given attribute to a new set of
replacement values s.t. each old value can be identified with one of the new values
• Methods
• Smoothing: Remove noise from data
• Attribute/feature construction
• New attributes constructed from the given ones
• Aggregation: Summarization
• Normalization: Scaled to fall within a smaller, specified range
• min-max normalization
• z-score normalization
• normalization by decimal scaling
• Discretization
Normalization
• Min-max normalization: to [new_minA, new_maxA]

v' = (v − minA) / (maxA − minA) × (new_maxA − new_minA) + new_minA

• Ex. Let income range $12,000 to $98,000 be normalized to [0.0, 1.0]. Then $73,600 is mapped to
(73,600 − 12,000) / (98,000 − 12,000) × (1.0 − 0) + 0 = 0.716

• Z-score normalization (μ: mean, σ: standard deviation):

v' = (v − μA) / σA

• Ex. Let μ = 54,000, σ = 16,000. Then (73,600 − 54,000) / 16,000 = 1.225

• Normalization by decimal scaling:

v' = v / 10^j, where j is the smallest integer such that Max(|v'|) < 1
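The three schemes applied to the income example in code:

```python
# Min-max, z-score, and decimal-scaling normalization of the value $73,600.
import numpy as np

v = 73_600.0

# Min-max normalization to [0.0, 1.0] for the range $12,000-$98,000
min_a, max_a = 12_000.0, 98_000.0
v_minmax = (v - min_a) / (max_a - min_a) * (1.0 - 0.0) + 0.0   # 0.716

# Z-score normalization with mean 54,000 and standard deviation 16,000
mu, sigma = 54_000.0, 16_000.0
v_zscore = (v - mu) / sigma                                    # 1.225

# Decimal scaling: divide by 10^j with the smallest j such that max(|v'|) < 1
j = int(np.ceil(np.log10(abs(v))))
v_decimal = v / 10**j                                          # 0.736
```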
Discretization
• Three types of attributes
• Nominal—values from an unordered set, e.g., color, profession
• Ordinal—values from an ordered set, e.g., military or academic rank
• Numeric—quantitative values, e.g., integers or real numbers
• Discretization: Divide the range of a continuous attribute into intervals
• Interval labels can then be used to replace actual data values
• Reduce data size by discretization
• Supervised vs. unsupervised
• Split (top-down) vs. merge (bottom-up)
• Discretization can be performed recursively on an attribute
• Prepare for further analysis, e.g., classification
Data Discretization Methods
• Typical methods: All the methods can be applied recursively
• Binning
• Top-down split, unsupervised
• Histogram analysis
• Top-down split, unsupervised
• Clustering analysis (unsupervised, top-down split or bottom-up merge)
• Decision-tree analysis (supervised, top-down split)
• Correlation (e.g., χ²) analysis (unsupervised, bottom-up merge)
Simple Discretization: Binning

• Equal-width (distance) partitioning


• Divides the range into N intervals of equal size: uniform grid
• if A and B are the lowest and highest values of the attribute, the width of intervals will be: W = (B − A)/N
• The most straightforward, but outliers may dominate presentation
• Skewed data is not handled well

• Equal-depth (frequency) partitioning


• Divides the range into N intervals, each containing approximately same number of
samples
• Good data scaling
• Managing categorical attributes can be tricky
Binning Methods for Data Smoothing
❑Sorted data for price (in dollars): 4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34
* Partition into equal-frequency (equi-depth) bins:
- Bin 1: 4, 8, 9, 15
- Bin 2: 21, 21, 24, 25
- Bin 3: 26, 28, 29, 34
* Smoothing by bin means:
- Bin 1: 9, 9, 9, 9
- Bin 2: 23, 23, 23, 23
- Bin 3: 29, 29, 29, 29
* Smoothing by bin boundaries:
- Bin 1: 4, 4, 4, 15
- Bin 2: 21, 21, 25, 25
- Bin 3: 26, 26, 26, 34
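The same example reproduced with pandas: equal-frequency bins via qcut, then smoothing by bin means and by bin boundaries (the slide rounds the bin means to integers):

```python
# Equal-depth binning and two smoothing strategies on the price data.
import numpy as np
import pandas as pd

price = pd.Series([4, 8, 9, 15, 21, 21, 24, 25, 26, 28, 29, 34])

bins = pd.qcut(price, q=3, labels=False)           # equal-frequency partitioning

# Smoothing by bin means: replace each value with the mean of its bin
by_means = price.groupby(bins).transform("mean")   # 9.0, ..., 22.75, ..., 29.25

# Smoothing by bin boundaries: snap each value to the nearest bin edge
lo = price.groupby(bins).transform("min")
hi = price.groupby(bins).transform("max")
by_boundaries = np.where((price - lo) <= (hi - price), lo, hi)
```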
Discretization Without Using Class Labels (Binning vs. Clustering)

[Figure: the same data discretized by equal interval width (binning), equal frequency (binning), and K-means clustering; K-means clustering leads to better results]
