Assignment4 Supritha
This project demonstrates the end-to-end process of building, training, and deploying a
machine learning model within a DevOps environment. The workflow consists of the
following steps:
• A simple machine learning model is created using Python and libraries like scikit-learn.
• The model is trained with sample data, serialized into a model.pkl file, and saved for
deployment.
• A Flask application is developed to serve the trained model, providing endpoints for
making predictions.
• The API processes input data, loads the serialized model, and returns predictions in real
time.
• A Docker image is created for the Flask application, bundling the app code,
dependencies, and the trained model.
• Docker commands are used to build, test, and run the container locally to ensure
functionality.
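The serialization step described above can be sketched as follows. This is an illustrative round-trip check (the dataset and classifier match those used in model.py below; the in-memory `pickle.dumps`/`loads` pair stands in for writing and reading model.pkl):

```python
import pickle

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a small model on the iris dataset
X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(random_state=42).fit(X, y)

# Serialize to bytes; in the project this is written to model.pkl
blob = pickle.dumps(clf)

# Deserializing yields a model with identical predictions
restored = pickle.loads(blob)
assert (restored.predict(X) == clf.predict(X)).all()
print("round-trip ok")
```

Because the restored model reproduces the original's predictions exactly, the Flask API can load model.pkl at startup instead of retraining.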
Directory structure:
ml-app/
├── app/
│   ├── model.py
│   ├── app.py
│   └── requirements.txt
├── Dockerfile
├── model.pkl
├── kubernetes/
│   ├── deployment.yaml
│   └── service.yaml
└── README.md
model.py:
import pickle

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

def train_model():
    # Train a classifier on the iris dataset and serialize it
    X, y = load_iris(return_X_y=True)
    model = RandomForestClassifier().fit(X, y)
    with open("model.pkl", "wb") as f:
        pickle.dump(model, f)

if __name__ == "__main__":
    train_model()
app.py:
import pickle

from flask import Flask, request, jsonify

app = Flask(__name__)

# Load the serialized model once at startup
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

@app.route("/")
def home():
    return "Welcome to the ML App!"

@app.route("/predict", methods=["POST"])
def predict():
    data = request.json
    prediction = model.predict([data["features"]])
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
requirements.txt:
Flask==2.3.3
scikit-learn==1.3.2
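The /predict endpoint can be smoke-tested with Flask's built-in test client, without starting a server or building the image. The sketch below is self-contained: it builds the model in memory as a stand-in for loading model.pkl, and defines the same endpoint as app.py:

```python
import pickle

from flask import Flask, request, jsonify
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Inline stand-in for model.pkl: train, serialize, and reload in memory
X, y = load_iris(return_X_y=True)
model = pickle.loads(pickle.dumps(RandomForestClassifier(random_state=0).fit(X, y)))

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    data = request.json
    prediction = model.predict([data["features"]])
    return jsonify({"prediction": prediction.tolist()})

# Flask's test client exercises the endpoint without a running server
with app.test_client() as client:
    resp = client.post("/predict", json={"features": [5.1, 3.5, 1.4, 0.2]})
    assert resp.status_code == 200
    print(resp.get_json())
```

The four feature values match the iris dataset's input shape; against the running container the same request can be sent to http://localhost:5000/predict.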
Dockerfile:
FROM python:3.9-slim
WORKDIR /app
# Copy the application and the model file into the Docker image
COPY app/ /app/
COPY model.pkl /app/
# Install the Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
EXPOSE 5000
CMD ["python", "app.py"]
kubernetes/deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ml-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ml-app
  template:
    metadata:
      labels:
        app: ml-app
    spec:
      containers:
        - name: ml-app
          image: abhishekkn/ml-app:latest
          ports:
            - containerPort: 5000
kubernetes/service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: ml-app-service
spec:
  selector:
    app: ml-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5000
  type: NodePort