## **Assignment 4: Deploying an ML Model with Flask and Docker**
Deploying a machine learning (ML) model typically means packaging the model, its
dependencies, and the serving logic into a Docker container. This allows the
model to be easily deployed, scaled, and managed in various environments. Below
is a step-by-step guide to containerizing and deploying an ML model, using Flask
for serving and Docker for containerization.
---
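The serving code below loads a serialized model from disk, so a model file has to exist first. As a minimal, hypothetical sketch (a real assignment would pickle a trained estimator, e.g. from scikit-learn; the `MeanModel` class and `model.pkl` filename here are illustrative stand-ins):

```python
# save_model.py -- illustrative stand-in for a real training script
import pickle


class MeanModel:
    """Toy 'model' that predicts the mean of each input row."""

    def predict(self, rows):
        return [sum(row) / len(row) for row in rows]


model = MeanModel()

# Serialize the model so the serving app can load it at startup
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)

# Quick sanity check: reload the file and run a prediction
with open('model.pkl', 'rb') as f:
    restored = pickle.load(f)
print(restored.predict([[1, 2, 3]]))  # -> [2.0]
```

Note that pickling a class defined in the training script only unpickles cleanly where that class is importable; a real project would define the model (or use a library estimator) in a shared module.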
```python
# app.py
import pickle

from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the trained model once at startup
# (assumes it was serialized to model.pkl beforehand)
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)


@app.route('/predict', methods=['POST'])
def predict():
    # Get input data from the request
    data = request.json
    # Perform prediction
    prediction = model.predict([data['input']])
    # Return the prediction as JSON
    return jsonify({'prediction': list(prediction)})


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
```
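The container will need the app's Python dependencies listed in a `requirements.txt`. A minimal version might look like this (the exact packages depend on how the model was trained; `scikit-learn` and `numpy` are common but illustrative):

```text
flask
scikit-learn
numpy
```

Pinning exact versions (e.g. `flask==3.0.3`) makes the image build reproducible.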
---
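A Dockerfile then packages the app. This is a common minimal pattern rather than the assignment's exact file; the filenames (`app.py`, `model.pkl`, `requirements.txt`) are the ones assumed in the steps above:

```dockerfile
# Use a slim Python base image
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached across code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the serving code and the serialized model
COPY app.py model.pkl ./

EXPOSE 5000
CMD ["python", "app.py"]
```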
---
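With the Dockerfile in place, build the image and run the container, publishing container port 5000 to the host (the image tag `ml-model:latest` is just an example name):

```shell
# Build the image from the Dockerfile in the current directory
docker build -t ml-model:latest .

# Run it in the background, mapping the Flask port to the host
docker run -d -p 5000:5000 --name ml-model ml-model:latest
```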
---
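Once the container is up, the `/predict` endpoint can be exercised with `curl`. The JSON body matches what `predict()` reads (`{"input": [...]}`); the sample values are arbitrary:

```shell
curl -X POST http://localhost:5000/predict \
     -H "Content-Type: application/json" \
     -d '{"input": [1.0, 2.0, 3.0]}'
```

A successful call returns a JSON object of the form `{"prediction": [...]}`.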
---
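Optionally, tag and push the image to a registry such as Docker Hub so it can be pulled on other machines (replace `your-username` with your actual account; `docker login` must have been run first):

```shell
docker tag ml-model:latest your-username/ml-model:latest
docker push your-username/ml-model:latest
```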
---
### **Summary**
- You created a Flask API to serve your ML model.
- You containerized the application using Docker.
- You built and ran the Docker container.
- You tested the API and optionally pushed the image to a Docker registry.