Ethics and AI Lab
SRI VENKATESWARAA COLLEGE OF TECHNOLOGY
Vadakal, Sriperumbudur – 602 105
RECORD NOTEBOOK
Student Name :
Register Number :
Semester / Branch :
Subject Code :
Subject Name :
SRI VENKATESWARAA COLLEGE OF TECHNOLOGY
Vadakal, Sriperumbudur – 602 105
Department of Artificial Intelligence & Data Science
BONAFIDE CERTIFICATE
This is to certify that this is a bonafide record of practical work done
by Register Number
of Semester B.Tech/M.Tech in the
laboratory during the
academic year .
3. Ethical AI in Defence
Aim:
Step 1:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import plotly.express as px
import warnings

warnings.filterwarnings('ignore')
plt.style.use('fivethirtyeight')
%matplotlib inline
pd.set_option('display.max_columns', 26)
Step 2:
df= pd.read_csv('kidney_disease.csv')
df.head()
Step 3:
# rename the columns to descriptive names; only the tail of the assignment survived in the source:
# df.columns = [..., 'aanemia', 'class']
Step 5:
df.info()
Step 6:
plt.figure(figsize=(20, 15))
plotnumber = 1
# loop over the numeric columns (the loop header is missing in the source; numeric columns assumed)
for column in df.select_dtypes(include='number').columns:
    if plotnumber <= 15:
        ax = plt.subplot(3, 5, plotnumber)
        sns.distplot(df[column])
        plt.xlabel(column)
    plotnumber += 1
plt.tight_layout()
plt.show()
Step 12:
plt.figure(figsize=(20, 15))
plotnumber = 1
# loop over the categorical columns (the loop header and plot call are missing in the source; a countplot is assumed)
for column in df.select_dtypes(include='object').columns:
    if plotnumber <= 12:
        ax = plt.subplot(3, 4, plotnumber)
        sns.countplot(x=df[column])
        plt.xlabel(column)
    plotnumber += 1
plt.tight_layout()
plt.show()
Step 13:
# helper plotting functions (missing bodies reconstructed; the target column is 'class')
def violin(col):
    fig = px.violin(df, y=col, x='class', color='class', box=True)
    return fig.show()

def kde(col):
    grid = sns.FacetGrid(df, hue='class', height=6, aspect=2)
    grid.map(sns.kdeplot, col)
    grid.add_legend()
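Step 14 below also calls a scatter() helper that does not appear in the source. A minimal sketch using plotly.express, assuming the target column is named 'class':
def scatter(col1, col2):
    fig = px.scatter(df, x=col1, y=col2, color='class')
    return fig.show()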
Step 14:
violin('red_blood_cell_count')
kde('blood_urea')
scatter('red_blood_cell_count', 'albumin')
Step 15:
ind_col = [col for col in df.columns if col != 'class']  # independent columns (this line is missing in the source)
dep_col = 'class'
X = df[ind_col]
y = df[dep_col]
Step 17:
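Step 17 has no body in the source; from the use of X_train and y_train in Step 18 it is presumably the train/test split. A minimal sketch, assuming a 70/30 split:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)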
Step 18:
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
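The source stops at fitting the classifier; a minimal evaluation sketch on the held-out test set, using the names defined above:
from sklearn.metrics import accuracy_score
y_pred = knn.predict(X_test)
print('KNN test accuracy:', accuracy_score(y_test, y_pred))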
Aim:
The aim of this project is to analyze public survey responses on ethical dilemmas of
autonomous vehicles and build predictive models to classify respondents based on their
responses.
Procedure:
Step 1:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
Step 2:
url = "dataset.csv"
df = pd.read_csv(url)
Step 3:
print(df.head())
Step 4:
print(df.info())
Step 5:
plt.figure(figsize=(15, 10))
# one countplot per survey question (the loop header is missing in the source; nine questions named Q1..Q9 assumed)
for i in range(1, 10):
    plt.subplot(3, 3, i)
    sns.countplot(data=df, x=f"Q{i}")
    plt.title(f"Question {i}")
plt.tight_layout()
plt.show()
Step 7:
sns.pairplot(df, hue='Target_Variable')
plt.show()
Step 8:
plt.figure(figsize=(10, 8))
sns.heatmap(df.corr(), annot=True, cmap='coolwarm')  # heatmap call missing in the source
plt.title('Correlation Heatmap')
plt.show()
Step 9:
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
y = df.iloc[:, 0]   # Target
X = df.iloc[:, 1:]  # features (this line is missing in the source)
rf_classifier = RandomForestClassifier(random_state=42)
lr_classifier = LogisticRegression(random_state=42)
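The source jumps straight to fitting on X_train and y_train, so a train/test split is presumably implied. A minimal sketch, assuming an 80/20 split:
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)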
Step 10:
rf_classifier.fit(X_train, y_train)
Step 11:
lr_classifier.fit(X_train, y_train)
Step 12:
rf_predictions = rf_classifier.predict(X_test)
lr_predictions = lr_classifier.predict(X_test)
Step 13:
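Step 13 has no body in the source; it presumably evaluates the two sets of predictions. A sketch using scikit-learn metrics:
from sklearn.metrics import accuracy_score, classification_report
print('Random Forest accuracy:', accuracy_score(y_test, rf_predictions))
print(classification_report(y_test, rf_predictions))
print('Logistic Regression accuracy:', accuracy_score(y_test, lr_predictions))
print(classification_report(y_test, lr_predictions))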
Aim:
To analyse a dataset of active military personnel, study its relationship with GDP per capita, and examine potential country-of-origin bias relevant to autonomous defence systems.
Step 1:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.read_csv('military.csv')
Step 3:
print(df.head())
print(df.info())
print(df.describe())
Step 4:
plt.figure(figsize=(10, 6))
sns.distplot(df['Active military'])
plt.ylabel('Frequency')
plt.show()
Step 5:
# Visualize the top 10 countries with the highest active military personnel
top10 = df.nlargest(10, 'Active military')  # a 'Country' name column is assumed alongside 'Active military'
plt.figure(figsize=(10, 6))
sns.barplot(data=top10, x='Country', y='Active military')
plt.xlabel('Country')
plt.ylabel('Active military personnel')
plt.xticks(rotation=45)
plt.show()
Step 6:
# Analyze the correlation between active military personnel and GDP per capita
df_gdp = pd.read_csv('gdp_per_capita.csv')  # assume you have a separate dataset with GDP per capita data
merged = df.merge(df_gdp, on='Country')  # both files are assumed to share a 'Country' column
corr_matrix = merged[['Active military', 'GDP per capita']].corr()  # column names assumed
print(corr_matrix)
Step 7:
# Visualize the correlation between active military personnel and GDP per capita
plt.figure(figsize=(8, 6))
sns.scatterplot(data=merged, x='GDP per capita', y='Active military')  # column names assumed
plt.show()
# For example, let's analyze the potential bias in autonomous systems based on country of origin
bias_analysis = df.groupby('Country')['Active military'].describe()  # the original computation is not shown; a per-country summary is assumed
print(bias_analysis)
Step 9:
plt.figure(figsize=(10, 6))
plt.xlabel('Country')
plt.show()
Output:
Exp No: 4
Multiple Linear Regression Model
Date:
Aim:
To build a multiple linear regression model that predicts house prices from the King County housing dataset and to select significant predictors using backward elimination.
Procedure:
Dataset Link: https://www.kaggle.com/code/divan0/multiple-linear-regression/input
Step 1:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

dataset = pd.read_csv('../input/kc_house_data.csv')
dataset.head()
Step 2:
print(dataset.dtypes)
Step 3:
Step 4:
with sns.plotting_context("notebook", font_scale=2.5):
    g = sns.pairplot(dataset[['sqft_lot', 'sqft_above', 'price', 'sqft_living', 'bedrooms']],
                     hue='bedrooms', palette='tab20', height=6)  # 'size' was renamed to 'height' in current seaborn
g.set(xticklabels=[]);
Step 5:
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression

X = dataset.iloc[:, 1:].values
y = dataset.iloc[:, 0].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)  # split not shown in the source
regressor = LinearRegression()
regressor.fit(X_train, y_train)
y_pred = regressor.predict(X_test)
Step 7:
import statsmodels.api as sm  # the source imports statsmodels.formula.api, but sm.OLS lives in statsmodels.api

# Backward elimination: repeatedly refit OLS and drop the predictor with the
# highest p-value above the significance level SL (the function wrapper and
# several lines are reconstructed around the surviving fragment).
def backwardElimination(x, SL):
    numVars = len(x[0])
    temp = np.zeros((21613, 19)).astype(int)
    for i in range(0, numVars):
        regressor_OLS = sm.OLS(y, x).fit()
        maxVar = max(regressor_OLS.pvalues).astype(float)
        adjR_before = regressor_OLS.rsquared_adj.astype(float)
        if maxVar > SL:
            for j in range(0, numVars - i):
                if (regressor_OLS.pvalues[j].astype(float) == maxVar):
                    temp[:, j] = x[:, j]
                    x = np.delete(x, j, 1)
                    tmp_regressor = sm.OLS(y, x).fit()
                    adjR_after = tmp_regressor.rsquared_adj.astype(float)
                    if (adjR_before >= adjR_after):
                        x_rollback = np.hstack((x, temp[:, [0, j]]))
                        x_rollback = np.delete(x_rollback, j, 1)
                        print(regressor_OLS.summary())
                        return x_rollback
                    else:
                        continue
    regressor_OLS.summary()
    return x

SL = 0.05
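The call to backwardElimination is not shown in the source; a typical usage, assuming x is the feature matrix X with a column of ones prepended for the intercept term:
x = np.append(arr=np.ones((len(X), 1)).astype(int), values=X, axis=1)
X_modeled = backwardElimination(x, SL)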
Aim:
The aim of this experiment is to understand and analyze the impact of bias and variance on
regression models. We will experiment with regression models both with and without bias to
evaluate their performance and characteristics.
Procedure:
Data Preparation
1. Model A: Train a regression model without introducing bias (Low Bias, High Variance).
2. Model B: Train a regression model with high bias (High Bias, Low Variance).
Model Evaluation
1. Evaluate both models using metrics such as Mean Squared Error (MSE), R-squared, and
other relevant metrics.
2. Visualize the bias-variance tradeoff using plots such as bias-variance decomposition plots or learning curves (a sketch follows this list).
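A hedged sketch of a learning curve using scikit-learn's learning_curve helper, assuming arrays X and y holding the inputs and targets prepared in the steps below:
from sklearn.model_selection import learning_curve
from sklearn.linear_model import LinearRegression
import numpy as np
import matplotlib.pyplot as plt

train_sizes, train_scores, test_scores = learning_curve(
    LinearRegression(), X, y, cv=5, scoring='neg_mean_squared_error',
    train_sizes=np.linspace(0.1, 1.0, 8))
plt.plot(train_sizes, -train_scores.mean(axis=1), label='Training MSE')
plt.plot(train_sizes, -test_scores.mean(axis=1), label='Validation MSE')
plt.xlabel('Training set size')
plt.ylabel('MSE')
plt.legend()
plt.show()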
Analysis
Step 1 :
import numpy as np
import matplotlib.pyplot as plt

# helper to plot a fitted model (the wrapper is reconstructed around the surviving plotting lines)
def plot_model(X, y, y_pred, title):
    plt.scatter(X, y, label='Data')
    plt.plot(X, y_pred, color='red', label='Model')
    plt.title(title)
    plt.xlabel('X')
    plt.ylabel('y')
    plt.legend()
    plt.show()
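Step 2 (data generation and splitting) is missing from the source; a minimal sketch using a noisy nonlinear function, with names chosen to match the later steps:
from sklearn.model_selection import train_test_split

np.random.seed(0)
X = np.sort(np.random.rand(200, 1) * 6 - 3, axis=0)  # 200 points in [-3, 3]
y = np.sin(X).ravel() + np.random.normal(scale=0.5, size=200)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)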
Step 3 :
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Model A
model_A = LinearRegression()
model_A.fit(X_train, y_train)
y_pred_A = model_A.predict(X_test)
# Plot Model A
plot_model(X_test, y_test, y_pred_A, 'Model A')

# Model B
degree = 2  # the polynomial degree is not given in the source; a value is assumed here
poly_features = PolynomialFeatures(degree=degree)
X_poly_train = poly_features.fit_transform(X_train)
X_poly_test = poly_features.transform(X_test)
model_B = LinearRegression()
model_B.fit(X_poly_train, y_train)
y_pred_B = model_B.predict(X_poly_test)
# Plot Model B
plot_model(X_test, y_test, y_pred_B, 'Model B')
# Model Evaluation
from sklearn.metrics import mean_squared_error, r2_score

def evaluate_model(y_true, y_pred):
    mse = mean_squared_error(y_true, y_pred)
    r2 = r2_score(y_true, y_pred)
    print(f'Mean Squared Error (MSE): {mse:.2f}')
    print(f'R-squared: {r2:.2f}')

print('Model A Evaluation:')
evaluate_model(y_test, y_pred_A)
print('\nModel B Evaluation:')
evaluate_model(y_test, y_pred_B)
Sample Output :
Model A Evaluation:
Mean Squared Error (MSE): 0.50
R-squared: 0.36
Model B Evaluation:
Mean Squared Error (MSE): 0.24
R-squared: 0.70
Exp No: 06
Classification using Perceptron with and without Bias.
Date:
Aim:
To classify a dataset from the UCI repository using a perceptron algorithm with and without
bias.
Procedure:
Data Preprocessing:
1. Initialize the weights w to small random values.
2. Initialize the bias b to zero (if using bias).
Training:
1. Calculate the accuracy, precision, recall, and F1-score of the perceptron classifier on
the test dataset.
2. Compare the performance of the perceptron with and without bias.
Program :
import numpy as np
# Perceptron with an explicit bias input (the missing lines are reconstructed around the fragment)
class Perceptron:
    def __init__(self, n_features, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.weights = np.zeros(n_features + 1)  # the last weight acts as the bias
        self.weights[-1] = 1
    def predict(self, x):
        x_with_bias = np.append(x, 1)
        activation = np.dot(self.weights, x_with_bias)
        return 1 if activation >= 0 else 0
    def train(self, X, y, epochs=10):
        for _ in range(epochs):
            for i in range(len(X)):
                x = X[i]
                x_with_bias = np.append(x, 1)
                predicted = self.predict(x)
                self.weights += self.learning_rate * (y[i] - predicted) * x_with_bias
# Second definition in the source; for the no-bias variant, drop the appended bias input and the bias weight
class Perceptron:
    def __init__(self, n_features, learning_rate=0.1):
        self.learning_rate = learning_rate
        self.weights = np.zeros(n_features + 1)
        self.weights[-1] = 1
    def predict(self, x):
        x_with_bias = np.append(x, 1)
        return 1 if np.dot(self.weights, x_with_bias) >= 0 else 0
    def train(self, X, y, epochs=10):
        for _ in range(epochs):
            for i in range(len(X)):
                x = X[i]
                x_with_bias = np.append(x, 1)
                predicted = self.predict(x)
                error = y[i] - predicted
                self.weights += self.learning_rate * error * x_with_bias
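The source does not show training or evaluation of the classes above. A minimal usage sketch on a two-class subset of the UCI Iris dataset, reporting the metrics named in the procedure; the dataset choice and parameter values are assumptions:
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

iris = load_iris()
mask = iris.target < 2  # keep two classes for a binary perceptron
X, y = iris.data[mask], iris.target[mask]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

model = Perceptron(n_features=X.shape[1], learning_rate=0.1)
model.train(X_train, y_train, epochs=20)
y_pred = [model.predict(x) for x in X_test]

print('Accuracy :', accuracy_score(y_test, y_pred))
print('Precision:', precision_score(y_test, y_pred))
print('Recall   :', recall_score(y_test, y_pred))
print('F1-score :', f1_score(y_test, y_pred))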
Output :
Result :
The results section is to be completed after performing the experiment. Report the performance metrics for each model and discuss any observed differences in their effectiveness on the chosen dataset. This investigation provides insight into the role of bias in a perceptron model and its impact on classification accuracy.
Exp No: 07
Case Study on Ontology with Ethical Implications.
Date:
Aim:
The aim of this experiment is to analyze a case study involving ontology where ethical
considerations are at stake.
Procedure :
Preliminary Review:
1. Access the provided GitHub repository containing the case study.
2. Familiarize yourself with the documentation, ontology files, and related materials.
Ontology Examination:
1. Examine the ontology structure, including classes, properties, and relationships.
2. Identify the main concepts, entities, and their interconnections within the ontology.
Ethical Implications Assessment:
1. Evaluate the ontology for potential ethical issues related to data privacy, accuracy, bias,
accessibility, and consent.
2. Document any ethical concerns or implications identified during the assessment.
Data Privacy and Security Review:
1. Evaluate the ontology's accessibility features for different user groups, including
patients, healthcare providers, and researchers.
2. Check for accessibility barriers and identify potential improvements.
Informed Consent and Ethics Review:
1. Assess the process of obtaining informed consent for using patient data in the ontology.
2. Review the ethical considerations and compliance with relevant regulations and
guidelines.
Documentation and Transparency Assessment:
1. Review the documentation provided with the ontology, including design methodologies,
data sources, and usage guidelines.
2. Evaluate the transparency of the ontology's development process and decision-making.
Recommendations and Suggestions:
1. Based on the assessment, provide recommendations to address identified ethical implications and improve the ontology's quality, security, and accessibility.
2. Document suggestions for further research, validation, and collaboration with stakeholders.
Conclusion and Summary:
1. Summarize the findings from the ontology examination and ethical implications assessment.
2. Conclude with a comprehensive overview of the case study's strengths, weaknesses, and areas for improvement.
Program :
# Build a small medical ontology with rdflib (missing statements reconstructed;
# class and individual names beyond those in the source are assumptions)
from rdflib import Graph, Namespace, RDF, RDFS
# Create a Graph
g = Graph()
# Define Namespaces
ex = Namespace("http://example.org/ontology#")
g.bind("ex", ex)
# Define Classes
g.add((ex.Disease, RDF.type, RDFS.Class))
g.add((ex.Treatment, RDF.type, RDFS.Class))
# Individuals
g.add((ex.Diabetes, RDF.type, ex.Disease))
g.add((ex.InsulinTherapy, RDF.type, ex.Treatment))
# Relationships
g.add((ex.Diabetes, ex.hasTreatment, ex.InsulinTherapy))
# serialize() returns a str in rdflib >= 6; the .decode("utf-8") in the source is only needed on older versions
print(g.serialize(format="turtle"))
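To support the ontology examination step, the triples in the graph can also be listed programmatically; a short sketch iterating over the graph g built above:
for subject, predicate, obj in g:
    print(subject, predicate, obj)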
Output :