- Nov 2024
-
python.plainenglish.io
-
Deploying Machine Learning Models with Flask and AWS Lambda: A Complete Guide
In essence, this article is about:
1) Training a sample model and uploading it to an S3 bucket:
```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
import joblib

# Load the Iris dataset
iris = load_iris()
X, y = iris.data, iris.target

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Train the logistic regression model
model = LogisticRegression(max_iter=200)
model.fit(X_train, y_train)

# Save the trained model to a file
joblib.dump(model, 'model.pkl')
```
- Creating a sample Zappa config. Because AWS Lambda doesn’t natively support Flask, we use Zappa, a tool that deploys WSGI applications (like Flask) to AWS Lambda:
```json
{
    "dev": {
        "app_function": "app.app",
        "exclude": ["boto3", "dateutil", "botocore", "s3transfer", "concurrent"],
        "profile_name": null,
        "project_name": "flask-test-app",
        "runtime": "python3.10",
        "s3_bucket": "zappa-31096o41b"
    },
    "production": {
        "app_function": "app.app",
        "exclude": ["boto3", "dateutil", "botocore", "s3transfer", "concurrent"],
        "profile_name": null,
        "project_name": "flask-test-app",
        "runtime": "python3.10",
        "s3_bucket": "zappa-31096o41b"
    }
}
```
- Writing a sample Flask app:
```python
import boto3
import joblib
import numpy as np
from flask import Flask, request, jsonify

# Initialize the Flask app
app = Flask(__name__)

# S3 client to download the model
s3 = boto3.client('s3')

# Download the model from S3 when the app starts
s3.download_file('your-s3-bucket-name', 'model.pkl', '/tmp/model.pkl')
model = joblib.load('/tmp/model.pkl')

@app.route('/predict', methods=['POST'])
def predict():
    # Get the data from the POST request
    data = request.get_json(force=True)
    # Convert the data into a numpy array
    input_data = np.array(data['input']).reshape(1, -1)
    # Make a prediction using the model
    prediction = model.predict(input_data)
    # Return the prediction as a JSON response
    return jsonify({'prediction': int(prediction[0])})

if __name__ == '__main__':
    app.run(debug=True)
```
- Deploying this app to production (to AWS):
```bash
zappa deploy production
```
and later updating it:
```bash
zappa update production
```
- We should get a URL like this:
```
https://xyz123.execute-api.us-east-1.amazonaws.com/production
```
which we can query:
```bash
curl -X POST -H "Content-Type: application/json" \
  -d '{"input": [5.1, 3.5, 1.4, 0.2]}' \
  https://xyz123.execute-api.us-east-1.amazonaws.com/production/predict
```
-
- Jun 2021
- May 2020
-
docs.aws.amazon.com
-
When CloudFront receives a request, you can use a Lambda function to generate an HTTP response that CloudFront returns directly to the viewer without forwarding the response to the origin. Generating HTTP responses reduces the load on the origin, and typically also reduces latency for the viewer.
This can be helpful when handling authentication at the edge.
-
-
aws.amazon.com
-
For this setup, do the following:
1. Create a custom AWS Identity and Access Management (IAM) policy and execution role for your Lambda function.
2. Create Lambda functions that stop and start your EC2 instances.
3. Create CloudWatch Events rules that trigger your functions on a schedule. For example, you could create a rule to stop your EC2 instances at night, and another to start them again in the morning.
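A sketch of step 2 above, for the "stop" function (the "start" variant is identical except it calls `start_instances`). The instance IDs and region are hypothetical placeholders; the boto3 client is injectable so the function can be exercised without AWS credentials.

```python
INSTANCE_IDS = ["i-0123456789abcdef0"]  # placeholder instance IDs

def lambda_handler(event, context, ec2=None):
    """Stop the configured EC2 instances; intended to run on a schedule."""
    if ec2 is None:
        import boto3  # deferred import so the function is testable without AWS
        ec2 = boto3.client("ec2", region_name="us-east-1")
    resp = ec2.stop_instances(InstanceIds=INSTANCE_IDS)
    # Report which instances entered the stopping state.
    return {"stopped": [i["InstanceId"] for i in resp["StoppingInstances"]]}
```

A CloudWatch Events (EventBridge) rule with a schedule expression such as `cron(0 22 * * ? *)` would then invoke this handler nightly.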
-
- Apr 2020
-
content.aws.training
-
Lambda authorizers – A Lambda authorizer is simply a Lambda function that you can write to perform any custom authorization that you need. There are two types of Lambda authorizers: token and request parameter.

When a client calls your API, API Gateway verifies whether a Lambda authorizer is configured for the API method. If it is, API Gateway calls the Lambda function. In this call, API Gateway supplies the authorization token (or the request parameters, based on the type of authorizer), and the Lambda function returns a policy that allows or denies the caller’s request.

API Gateway also supports an optional policy cache that you can configure for your Lambda authorizer. This feature increases performance by reducing the number of invocations of your Lambda authorizer for previously authorized tokens. And with this cache, you can configure a custom time to live (TTL).

To make it easy to get started with this method, you can choose the API Gateway Lambda authorizer blueprint when creating your authorizer function from the Lambda console.
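A minimal sketch of the token-type authorizer described above: API Gateway passes the token in `authorizationToken`, and the function returns an IAM policy allowing or denying `execute-api:Invoke`. The hard-coded token is a hypothetical stand-in; a real authorizer would validate a JWT or look the token up.

```python
VALID_TOKEN = "allow-me"  # placeholder; real authorizers verify a signed token

def _policy(principal_id, effect, resource):
    """Build the IAM policy document API Gateway expects back."""
    return {
        "principalId": principal_id,
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": resource,
            }],
        },
    }

def handler(event, context):
    token = event.get("authorizationToken", "")
    arn = event["methodArn"]
    if token == f"Bearer {VALID_TOKEN}":
        return _policy("user", "Allow", arn)
    return _policy("user", "Deny", arn)
```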
-
- Nov 2018
-
www.apsense.com
-
Develop a Lightweight Project With the Help of AWS, Lambda and Serverless
-
- Nov 2017
-
docs.aws.amazon.com
-
Lambda@Edge lets you run Lambda functions at AWS Regions and Amazon CloudFront edge locations in response to CloudFront events.
Extremely happy to see such an amazing opportunity, which I think will help create fine-grained APIs that are fast, can leverage caching strategies, and will be cheap.
-
- Oct 2017
-
docs.aws.amazon.com