Microsoft Azure offers scalable cloud services for deploying machine learning models. The deployment workflow can be automated, and integrating Azure services with n8n lets you do so quickly.
Before you begin, make sure you have an Azure account and Node.js (with npm) installed.
1. Set Up n8n
Install n8n locally
To install n8n globally, run the following command:
npm install -g n8n
Then run:
n8n
n8n spins up a service on your local machine, accessible at http://localhost:5678 in any web browser.
Server Deployment (Docker Method)
If you want to host n8n on a server, you can use Docker:
docker run -it --rm \
-p 5678:5678 \
-v ~/.n8n:/home/node/.n8n \
n8nio/n8n
This makes n8n accessible at http://your-server-ip:5678.
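If you prefer Docker Compose, the same container can be described declaratively. This is a minimal sketch of a docker-compose.yml using the flags from the command above; the restart policy is an added assumption, chosen so n8n survives reboots:

```yaml
version: "3.7"
services:
  n8n:
    image: n8nio/n8n
    restart: unless-stopped   # keep n8n running across reboots
    ports:
      - "5678:5678"
    volumes:
      - ~/.n8n:/home/node/.n8n
```

Start it with docker compose up -d.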
2. Create a Container in Azure Blob Storage
Head over to the Azure Portal > Storage Accounts > select Create.
Once the storage account is ready, go to Containers > select New.
Name the container (for instance, ml-model-container).
Upload your model file (e.g., model.pkl, model.h5).
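The upload can also be scripted instead of done in the portal. Below is a hedged sketch using the azure-storage-blob Python package; upload_model and blob_name_for are illustrative helper names, and the connection string is a placeholder you must replace:

```python
import os

def blob_name_for(path):
    # Use the file's base name as the blob name (e.g., '/tmp/model.pkl' -> 'model.pkl')
    return os.path.basename(path)

def upload_model(connection_string, container_name, path):
    # Imported here so blob_name_for stays usable without the Azure SDK installed
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient.from_connection_string(connection_string)
    blob = service.get_blob_client(container=container_name, blob=blob_name_for(path))
    with open(path, 'rb') as f:
        blob.upload_blob(f, overwrite=True)

if __name__ == '__main__':
    # Placeholder values; replace with your storage connection string and model path
    upload_model('YOUR_AZURE_STORAGE_CONNECTION_STRING', 'ml-model-container', 'model.pkl')
```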
3. Set up the Machine Learning Model on Azure
Log into Azure Machine Learning Studio > open the Models tab > select Register Model.
Set the model's source to Azure Blob Storage.
Assign the model to an endpoint.
4. Deploy on an Azure Virtual Machine (Optional Final Deployment Step)
Log into Azure Portal > select Virtual Machines > select Create.
Select a suitable instance type (e.g., Standard_D2_v2 for small models or Standard_NC6 for GPU-based models).
Set up networking for HTTP access (allow port 5000 for Flask or port 8000 for FastAPI).
Establish an SSH connection to the VM and install the prerequisites:
sudo apt update && sudo apt install python3-pip -y
pip install flask tensorflow azure-storage-blob
5. Develop the Workflow
6. Save and Deploy the Workflow

For step 5, create app.py on the VM, a Flask app that downloads the model from Azure Blob Storage and serves predictions:
import pickle
from flask import Flask, request, jsonify
from azure.storage.blob import BlobServiceClient

app = Flask(__name__)

# Azure Blob Storage settings
connection_string = 'YOUR_AZURE_STORAGE_CONNECTION_STRING'
container_name = 'ml-model-container'
blob_name = 'model.pkl'

# Download the model from Blob Storage
blob_service_client = BlobServiceClient.from_connection_string(connection_string)
blob_client = blob_service_client.get_blob_client(container=container_name, blob=blob_name)
with open('model.pkl', 'wb') as f:
    f.write(blob_client.download_blob().readall())

# Load the model into memory
with open('model.pkl', 'rb') as f:
    model = pickle.load(f)

@app.route('/predict', methods=['POST'])
def predict():
    # Run the model on the posted feature vector
    data = request.json['features']
    prediction = model.predict([data])
    return jsonify({'prediction': prediction.tolist()})

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
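Before deploying, you can exercise the /predict route in-process with Flask's test client, swapping in a stub model. DummyModel below is a hypothetical stand-in for the pickled model, used only so the sketch runs without Azure credentials:

```python
from flask import Flask, request, jsonify

class DummyModel:
    # Stand-in for the real pickled model: predicts 1 for every row
    def predict(self, rows):
        return [1 for _ in rows]

app = Flask(__name__)
model = DummyModel()

@app.route('/predict', methods=['POST'])
def predict():
    data = request.json['features']
    prediction = model.predict([data])
    return jsonify({'prediction': list(prediction)})

# Exercise the route without starting a server
client = app.test_client()
resp = client.post('/predict', json={'features': [0.1, 0.2, 0.3]})
print(resp.get_json())  # {'prediction': [1]}
```

From another machine, the equivalent request against the deployed VM is an HTTP POST to http://your-server-ip:5000/predict with a JSON body like {"features": [0.1, 0.2, 0.3]}.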
Make sure the VM's network security group permits inbound traffic on port 5000 (for example, via az vm open-port --port 5000).
n8n simplifies model management and deployment on Azure.
n8n makes it easy to automate the whole workflow regardless of whether you are using Azure ML, Blob Storage, or Virtual Machines.
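As an illustration, a minimal n8n workflow that receives data on a webhook and forwards it to the model endpoint could look roughly like the JSON below. The node parameters are simplified and the URL is a placeholder; treat it as a sketch, not an importable workflow:

```json
{
  "nodes": [
    {
      "name": "Webhook",
      "type": "n8n-nodes-base.webhook",
      "parameters": { "path": "predict", "httpMethod": "POST" }
    },
    {
      "name": "Call Model API",
      "type": "n8n-nodes-base.httpRequest",
      "parameters": {
        "url": "http://your-server-ip:5000/predict",
        "method": "POST",
        "jsonParameters": true
      }
    }
  ],
  "connections": {
    "Webhook": { "main": [[ { "node": "Call Model API", "type": "main", "index": 0 } ]] }
  }
}
```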
Ready to optimize your AI infrastructure? Contact us today and leverage our AI/ML expertise!