Assignment 4

The document provides a step-by-step guide for deploying a machine learning model as a containerized application using Flask and Docker. It covers prerequisites, creating a Flask API, generating a requirements file, building a Docker image, running the container, testing the API, and optional steps for pushing the image to a registry and deploying to cloud platforms. The guide emphasizes the importance of containerization for managing ML models in production environments.

Deploying a machine learning (ML) model as a containerized application involves

packaging the model, its dependencies, and the serving logic into a Docker
container. This allows the model to be easily deployed, scaled, and managed in
various environments. Below is a step-by-step guide to containerize and deploy an
ML model using Flask for serving and Docker for containerization.

---

### **Step 1: Prerequisites**


1. **Install Docker**: Ensure Docker is installed on your system.
- Download and install Docker from [https://www.docker.com/](https://www.docker.com/).
2. **Python Installed**: Ensure Python is installed on your system.
3. **ML Model**: Have a trained ML model saved (e.g., using `joblib`, `pickle`, or
`TensorFlow SavedModel`).
4. **Flask Installed**: Install Flask using pip:
```bash
pip install Flask
```
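If you do not yet have a saved model, the sketch below produces a `model.pkl` matching what Step 2 expects. It assumes scikit-learn is installed; the iris dataset and `LogisticRegression` are placeholders for your own training pipeline:

```python
import joblib
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

# Toy training run: 4 numeric features per sample, 3 classes
X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Persist the fitted model to disk for the Flask app to load
joblib.dump(model, 'model.pkl')

# Sanity check: reload the file and predict on one sample
restored = joblib.load('model.pkl')
print(restored.predict([X[0]]))
```

The 4-element input in Step 7's test request corresponds to the 4 iris features used here.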

---

### **Step 2: Create a Simple Flask API to Serve the Model**


1. Create a project directory:
```bash
mkdir ml-app
cd ml-app
```
2. Save your trained ML model (e.g., `model.pkl`) in the project directory.
3. Create a Python file (`app.py`) for the Flask API:
```python
from flask import Flask, request, jsonify
import joblib

app = Flask(__name__)

# Load the trained model
model = joblib.load('model.pkl')

@app.route('/predict', methods=['POST'])
def predict():
    # Get input data from the request
    data = request.json
    # Perform prediction
    prediction = model.predict([data['input']])
    # Return the prediction as JSON
    return jsonify({'prediction': prediction.tolist()})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```

---

### **Step 3: Create a `requirements.txt` File**


1. Generate a `requirements.txt` file to list dependencies:
```bash
pip freeze > requirements.txt
```
Ensure it includes Flask, scikit-learn, and any other required libraries:
```
Flask==2.3.2
scikit-learn==1.0.2
joblib==1.2.0
```

---

### **Step 4: Create a Dockerfile**


1. Create a `Dockerfile` in the project directory:
```Dockerfile
# Use an official Python runtime as the base image
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Copy the requirements file into the container
COPY requirements.txt .

# Install the dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Expose the port the app runs on
EXPOSE 5000

# Command to run the application
CMD ["python", "app.py"]
```
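Optionally, a `.dockerignore` file in the same directory keeps the build context small by excluding files that `COPY . .` would otherwise pull into the image (the entries below are common suggestions, not requirements):

```
# .dockerignore (optional): exclude files from the build context
__pycache__/
*.pyc
.git/
venv/
```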

---

### **Step 5: Build the Docker Image**


1. Build the Docker image:
```bash
docker build -t ml-app .
```
- `ml-app` is the name of the Docker image.
- `.` refers to the current directory (where the Dockerfile is located).

---

### **Step 6: Run the Docker Container**


1. Run the Docker container:
```bash
docker run -d -p 5000:5000 ml-app
```
- `-d`: Run the container in detached mode (in the background).
- `-p 5000:5000`: Map port 5000 on your local machine to port 5000 in the
container.

---

### **Step 7: Test the API**


1. Send a POST request to the API using `curl` or Postman:
```bash
curl -X POST -H "Content-Type: application/json" \
  -d '{"input": [1, 2, 3, 4]}' \
  http://localhost:5000/predict
```
2. You should receive a JSON response with the prediction:
```json
{
  "prediction": [1]
}
```
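If you prefer testing from Python (for example, in a unit test), Flask's built-in test client can exercise the route without a running server or container. The sketch below is self-contained: `DummyModel` is a stand-in for the joblib-loaded model, not part of the original app:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

class DummyModel:
    # Stand-in for the trained model: always predicts class 1
    def predict(self, rows):
        return [1 for _ in rows]

model = DummyModel()

@app.route('/predict', methods=['POST'])
def predict():
    data = request.json
    prediction = model.predict([data['input']])
    return jsonify({'prediction': list(prediction)})

# Exercise the endpoint in-process, without starting a server
client = app.test_client()
resp = client.post('/predict', json={'input': [1, 2, 3, 4]})
print(resp.get_json())  # {'prediction': [1]}
```

The same `client.post(...)` call works against the real `app.py` once `model.pkl` is present, which makes it easy to add automated tests before building the image.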

---

### **Step 8: Push the Docker Image to a Registry (Optional)**


1. Tag the Docker image:
```bash
docker tag ml-app your-dockerhub-username/ml-app:latest
```
2. Log in to Docker Hub:
```bash
docker login
```
3. Push the image to Docker Hub:
```bash
docker push your-dockerhub-username/ml-app:latest
```

---

### **Step 9: Deploy the Container (Optional)**


1. Deploy the container to a cloud platform like AWS, Google Cloud, or Azure.
2. Use Kubernetes or Docker Swarm for orchestration if deploying multiple
containers.

---

### **Step 10: Clean Up**


1. Stop the running container:
```bash
docker stop <container-id>
```
2. Remove the container:
```bash
docker rm <container-id>
```
3. Remove the Docker image:
```bash
docker rmi ml-app
```

---

### **Summary**
- You created a Flask API to serve your ML model.
- You containerized the application using Docker.
- You built and ran the Docker container.
- You tested the API and optionally pushed the image to a Docker registry.

This assignment demonstrates how to containerize and deploy an ML model as a web service, a key skill for deploying ML models in production environments.
