Week 3 - Data Preprocessing, Extraction and Preparation
AI/ML in Cybersecurity
Dr. Abdul Shahid
Outline
• Recap:
• Cyber Threat Taxonomy
• ML Model Development
• Datasets
• Data and Data Types
• Graphic Displays of Basic Statistical Descriptions
• Why Preprocess the Data?
• Data cleaning
• Data integration
• Data reduction
• Data Transformation and Data Discretization
• Preprocessing Tasks
• Data cleaning
• Data integration
• Data reduction
• Data Transformation and Data Discretization
Major Tasks in Data Preprocessing
• Data cleaning
• Fill in missing values, smooth noisy data, identify or remove outliers,
and resolve inconsistencies
• Data integration
• Integration of multiple databases, data types, or files
• Data reduction
• Dimensionality reduction
• Data Transformation and Data Discretization
• A process where the entire set of values of a given attribute is mapped to a new set
of replacement values such that each old value can be identified with one of
the new values
Summary - Preprocessing Tasks
• Data cleaning
• Data integration
• Data reduction
• Data Transformation and Data Discretization
Data cleaning
• Missing and Incomplete Data Handling in Cybersecurity Applications
Marek Pawlicki, Michał Choraś, Rafał Kozik, and Witold Hołubowicz
https://link.springer.com/chapter/10.1007/978-3-030-73280-6_33
• https://www.unb.ca/cic/datasets/ids-2017.html
• A few of the techniques (a minimal code sketch follows this list):
• Complete Case Analysis (CCA)
Also known as ‘Listwise Deletion’: any row containing a missing value is discarded entirely.
• Single Imputation
The missing values are estimated from the observed values, e.g., ‘last observation carried forward’.
• Central Tendency Measures Substitution
The three measures of central tendency - mean, median, and mode - are used for imputation almost as frequently as listwise deletion.
• Hot-Deck Imputation
The term hot-deck is a reference back to the times of punch cards, where the currently processed deck would literally be ‘hot’.
The donor samples are found through the auxiliary values via algorithms such as K-Nearest Neighbours.
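To make these techniques concrete, here is a minimal Python sketch (not taken from the paper above) of listwise deletion, median substitution, and a KNN-based hot-deck-style imputation using pandas and scikit-learn; the column names and values are purely illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer, KNNImputer

# Hypothetical flow-record frame with missing values (columns are illustrative)
df = pd.DataFrame({
    "flow_duration": [1.2, np.nan, 0.8, 3.4, np.nan],
    "total_fwd_packets": [10, 25, np.nan, 7, 12],
    "label": ["BENIGN", "DDoS", "BENIGN", "DDoS", "BENIGN"],
})
features = df[["flow_duration", "total_fwd_packets"]]

# 1. Complete Case Analysis (listwise deletion): drop any row with a missing value
cca = df.dropna()

# 2. Central tendency substitution: replace missing values with the column median
median_imputed = pd.DataFrame(
    SimpleImputer(strategy="median").fit_transform(features),
    columns=features.columns,
)

# 3. Hot-deck-style imputation: borrow values from the k nearest "donor" rows
knn_imputed = pd.DataFrame(
    KNNImputer(n_neighbors=2).fit_transform(features),
    columns=features.columns,
)

print(cca.shape, median_imputed.isna().sum().sum(), knn_imputed.isna().sum().sum())
```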
A slight tangent from the topic: Datasets
• https://www.mdpi.com/2306-5729/7/2/22
• CIC-IDS2017
• UNSW-NB15
• DS2OS
• BoT-IoT
• KDD Cup 1999
• NSL-KDD
Data Cleaning
• Data in the real world is dirty: lots of potentially incorrect data, e.g.,
faulty instruments, human or computer error, transmission errors
• incomplete: lacking attribute values, lacking certain attributes of
interest, or containing only aggregate data
• e.g., Occupation=“ ” (missing data)
• noisy: containing noise, errors, or outliers
• e.g., Salary=“−10” (an error)
• inconsistent: containing discrepancies in codes or names, e.g.,
• Age=“42”, Birthday=“03/07/2010”
• Was rating “1, 2, 3”, now rating “A, B, C”
• discrepancy between duplicate records
• Intentional (e.g., disguised missing data)
• Jan. 1 as everyone’s birthday?
https://www.dataentryoutsourced.com/blog/cxos-guide-to-marketing-and-sales-data-cleansing-and-enrichment/
Incomplete (Missing) Data
• Data is not always available; values can be, for example, Missing Completely At Random (MCAR)
• Quality of data is the base requirement for quality results, and the method used to
produce data is an important factor in improving data quality. One methodology
organizes the work in three phases: identification and collection of the datasets to be
merged; mapping, merging, and selection of the original datasets into one that possibly
contains duplicates; and redundancy elimination to filter out redundant entries in the
merged dataset (a pandas sketch of the merge and de-duplication phases follows below).
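As a rough illustration of the merge and redundancy-elimination phases described above, the following pandas sketch concatenates two exports and drops exact duplicates; the file names and columns are assumptions, not taken from any specific source.

```python
import pandas as pd

# Phase 1 (assumed): two already-collected exports with a shared schema,
# e.g. df_a = pd.read_csv(...), df_b = pd.read_csv(...)
df_a = pd.DataFrame({"src_ip": ["10.0.0.1", "10.0.0.2"], "label": ["BENIGN", "DDoS"]})
df_b = pd.DataFrame({"src_ip": ["10.0.0.2", "10.0.0.3"], "label": ["DDoS", "BENIGN"]})

# Phase 2: map/merge the original datasets into one, possibly containing duplicates
merged = pd.concat([df_a, df_b], ignore_index=True)

# Phase 3: redundancy elimination - filter out exact duplicate entries
deduplicated = merged.drop_duplicates()
print(len(merged), len(deduplicated))
```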
Handling Redundancy in Data Integration
• Redundant data often occur when integrating multiple
databases
• Object identification: The same attribute or object
may have different names in different databases
• Derivable data: One attribute may be a “derived”
attribute in another table, e.g., annual revenue
• Redundant attributes can often be detected by
correlation analysis and covariance analysis
• Careful integration of the data from multiple sources may
help reduce/avoid redundancies and inconsistencies and
improve mining speed and quality
Correlation Analysis (Nominal Data)
• Χ² (chi-square) test:
$\chi^2 = \sum \frac{(\text{Observed} - \text{Expected})^2}{\text{Expected}}$
• The larger the Χ2 value, the more likely the variables are
related
• The cells that contribute the most to the Χ2 value are those
whose actual count is very different from the expected
count
• Correlation does not imply causality
• # of hospitals and # of car-theft in a city are correlated
• Both are causally linked to the third variable: population
Chi-Square Calculation: An Example
$\chi^2 = \frac{(250 - 90)^2}{90} + \frac{(50 - 210)^2}{210} + \frac{(200 - 360)^2}{360} + \frac{(1000 - 840)^2}{840} = 507.93$
• It shows that like_science_fiction and play_chess are correlated in the group
Chi-Square Calculation
• For this 2 × 2 table, the degrees of freedom are (2 − 1) × (2 − 1) = 1.
• For 1 degree of freedom, the χ² value needed to reject the hypothesis at the 0.001 significance level is 10.828.
• Since our computed value (507.93) is above this threshold, we can reject the hypothesis that like_science_fiction and play_chess are independent and conclude that the two attributes are (strongly) correlated for the given group of people.
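The arithmetic above can be checked in a few lines of Python from the observed and expected counts quoted on the slide; the 10.828 critical value is recovered from the chi-square distribution.

```python
import numpy as np
from scipy.stats import chi2

# Observed and expected counts as given on the slide
observed = np.array([250, 50, 200, 1000])
expected = np.array([90, 210, 360, 840])

chi_sq = ((observed - expected) ** 2 / expected).sum()
dof = (2 - 1) * (2 - 1)                # 2 x 2 table -> 1 degree of freedom
critical = chi2.ppf(1 - 0.001, dof)    # ~10.828 at the 0.001 significance level

print(round(chi_sq, 1), round(critical, 3), chi_sq > critical)
# roughly 507.9, 10.828, True -> reject the independence hypothesis
```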
• Correlation coefficient for numeric data (Pearson’s product-moment coefficient):
$r_{A,B} = \frac{\sum_{i=1}^{n}(a_i - \bar{A})(b_i - \bar{B})}{(n-1)\,\sigma_A \sigma_B} = \frac{\sum_{i=1}^{n} a_i b_i - n\bar{A}\bar{B}}{(n-1)\,\sigma_A \sigma_B}$
where n is the number of tuples, $\bar{A}$ and $\bar{B}$ are the respective means of A and B, $\sigma_A$ and $\sigma_B$ are the respective standard deviations of A and B, and $\sum a_i b_i$ is the sum of the AB cross-product.
• If rA,B > 0, A and B are positively correlated (A’s values increase as B’s do); the higher the value, the stronger the correlation.
• rA,B = 0: independent; rA,B < 0: negatively correlated.
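A small numpy sketch of the correlation coefficient formula above, on made-up values, compared against numpy's built-in corrcoef:

```python
import numpy as np

# Two illustrative numeric attributes (values are made up)
a = np.array([3.0, 5.0, 6.0, 8.0, 10.0])
b = np.array([30.0, 48.0, 55.0, 79.0, 95.0])

n = len(a)
# Sample correlation coefficient, as in the formula above
r = ((a - a.mean()) * (b - b.mean())).sum() / ((n - 1) * a.std(ddof=1) * b.std(ddof=1))

print(round(r, 4), round(np.corrcoef(a, b)[0, 1], 4))  # the two values should match
```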
Visually Evaluating Correlation
[Figure: scatter plots showing the similarity from −1 to 1]
Data reduction
• Comparative analysis of dimensionality reduction
techniques for cybersecurity in the SWaT dataset
• Mehmet Bozdal, Kadir Ileri & Ali Ozkahraman
• https://link.springer.com/article/10.1007/s11227-023-05511-w
• … Additionally, the paper explores dimensionality reduction methods, including Autoencoders, Generalized
Eigenvalue Decomposition (GED), and Principal Component Analysis (PCA). The research findings highlight the
importance of balancing dimensionality reduction with the need for accurate intrusion detection. It is found that
PCA provided better performance compared to the other techniques, as reducing the input dimension by 90.2%
resulted in only a 2.8% and 2.6% decrease in the accuracy and F1 score, respectively.
Data Reduction 1: Dimensionality Reduction
• Curse of dimensionality
• When dimensionality increases, data becomes increasingly
sparse
• Density and distance between points, which are critical to
clustering and outlier analysis, become less meaningful
• The possible combinations of subspaces will grow
exponentially
• Dimensionality reduction
• Avoid the curse of dimensionality
• Help eliminate irrelevant features and reduce noise
• Reduce time and space required in data mining
• Allow easier visualization
• Dimensionality reduction techniques
• Principal Component Analysis
• Supervised and nonlinear techniques (e.g., feature selection)
Principal Component Analysis (PCA)
[Figure: two-dimensional data plotted on axes x1 and x2, used to illustrate PCA]
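A scikit-learn sketch of PCA on synthetic two-dimensional data (a stand-in for the x1/x2 scatter on the slide): most of the variance ends up on the first principal component, so keeping only that component reduces the dimensionality with little information loss.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Synthetic correlated 2-D data (illustrative only)
x1 = rng.normal(size=500)
x2 = 0.8 * x1 + 0.2 * rng.normal(size=500)
X = np.column_stack([x1, x2])

pca = PCA(n_components=2)
X_proj = pca.fit_transform(X)

# Most of the variance lies along the first principal component,
# so keeping only that component reduces dimensionality with little loss.
print(pca.explained_variance_ratio_)
```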
Parametric Data Reduction: Regression and Log-Linear Models
• Linear regression
• Data modeled to fit a straight line
• Often uses the least-square method to fit the line
• Multiple regression
• Allows a response variable Y to be modeled as a linear function of a
multidimensional feature vector
• Log-linear model
• Approximates discrete multidimensional probability distributions
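As a minimal illustration of parametric reduction by regression (made-up data, not from the slides), fitting a least-squares line lets two parameters, the slope and intercept, stand in for the raw observations:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Made-up 1-D feature and response roughly on a line
X = np.arange(0, 10, 0.5).reshape(-1, 1)
y = 2.0 * X.ravel() + 1.0 + np.random.default_rng(1).normal(scale=0.3, size=len(X))

model = LinearRegression().fit(X, y)

# The slope and intercept (two numbers) now summarise the whole data set
print(model.coef_[0], model.intercept_)
```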
Clustering – sample data selection
• Partition data set into clusters based on similarity, and store cluster
representation (e.g., centroid and diameter) only
• Can have hierarchical clustering and be stored in multi-dimensional index
tree structures
• There are many choices of clustering definitions and clustering
algorithms
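A sketch of clustering-based reduction with scikit-learn's KMeans: only the cluster centroids and an approximate diameter per cluster are retained instead of the full set of points (the data here is synthetic).

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Synthetic 2-D points (stand-in for a larger numeric dataset)
X = np.vstack([rng.normal(loc=c, scale=0.3, size=(200, 2)) for c in [(0, 0), (3, 3), (0, 4)]])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Store only the cluster representation: centroid plus a rough diameter per cluster
centroids = kmeans.cluster_centers_
diameters = [
    2 * np.max(np.linalg.norm(X[kmeans.labels_ == k] - centroids[k], axis=1))
    for k in range(3)
]
print(centroids.shape, [round(d, 2) for d in diameters])
```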
Data Transformation and Data Discretization
• https://ieeexplore.ieee.org/abstract/document/8947945
• Ex. (z-score normalization) Let μ = 54,000 and σ = 16,000. Then a value of 73,600 is transformed to $\frac{73{,}600 - 54{,}000}{16{,}000} = 1.225$.
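The same z-score computation in Python, first with the slide's constants and then with statistics estimated from a made-up salary column via scikit-learn's StandardScaler:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Worked example from the slide: mu = 54,000, sigma = 16,000
mu, sigma = 54_000, 16_000
v = 73_600
print((v - mu) / sigma)  # 1.225

# In practice the statistics are estimated from the data itself
salaries = np.array([[30_000.0], [54_000.0], [73_600.0], [98_000.0]])
z = StandardScaler().fit_transform(salaries)
print(z.ravel())
```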