Databricks-ML-professional-S03c-Real-Time
This Notebook adds information related to the following requirements:
Real-time:
- Describe the benefits of using real-time inference for a small number of records or when fast prediction computations are needed
- Identify JIT feature values as a need for real-time deployment
- Describe how model serving deploys an endpoint for every stage
- Identify how model serving uses one all-purpose cluster for a model deployment
- Query a Model Serving enabled model in the Production stage and Staging stage
- Identify how cloud-provided RESTful services in containers are the best solution for production-grade real-time deployments
Download this notebook in ipynb format here.
- For on-demand responses
- Generates predictions for a small number of records with fast results (e.g. results in milliseconds)
- Relies on REST API development: a REST endpoint needs to be created, for example an MLflow Model Serving endpoint (see the latency sketch after this list)
- Real-time or near real-time predictions
- Has the lowest latency but also the highest cost, because it requires a serving infrastructure which has a cost
- Users provide data to the model through a REST API, and the model predicts the target in real time
- 5-10% of use cases
- Examples of use cases: finance (fraud detection), mobile, ad tech
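To make the latency point concrete, below is a minimal sketch that times a single-record request against a Model Serving endpoint. The URL, token and feature values are placeholders, not values taken from this notebook.
import time
import requests
#
# placeholder values - replace with a real Model Serving endpoint URL and a valid token
url = "https://<databricks-instance>/model/<model_name>/invocations"
headers = {"Authorization": "Bearer <token>"}
#
# a single record in pandas "split" orientation (columns + data)
payload = {"columns": ["feature_1", "feature_2"], "data": [[0.5, 1.2]]}
#
# measure the round-trip time of one prediction request
start = time.perf_counter()
response = requests.post(url, headers=headers, json=payload)
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"Prediction: {response.json()} - returned in {elapsed_ms:.0f} ms")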
You can use a serving endpoint to serve models from the Databricks Model Registry or from Unity Catalog.
Endpoints expose the underlying models as scalable REST API endpoints using serverless compute. This means the endpoints and associated compute resources are fully managed by Databricks and will not appear in your cloud account.
A serving endpoint can consist of one or more MLflow models from the Databricks Model Registry, called served models.
A serving endpoint can have at most ten served models.
You can configure traffic settings to define how requests should be routed to your served models behind an endpoint.
Additionally, you can configure the scale of resources that should be applied to each served model.
For more information about how to create a model serving endpoint using MLflow, see this video.
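As an illustration of served models, versions and traffic settings, the sketch below creates a serving endpoint with two versions of the same registered model and a 90/10 traffic split through the /api/2.0/serving-endpoints REST API. The endpoint name, model versions, workload sizes and percentages are placeholder assumptions.
import mlflow
import requests
#
# temporary notebook-scoped token and workspace URL (for production, use a personal access token)
token = mlflow.utils.databricks_utils._get_command_context().apiToken().get()
api_url = mlflow.utils.databricks_utils.get_webapp_url()
headers = {"Authorization": f"Bearer {token}"}
#
# endpoint definition: two served models behind one endpoint, with a 90/10 traffic split
endpoint_config = {
    "name": "my-realtime-endpoint",  # placeholder endpoint name
    "config": {
        "served_models": [
            {"model_name": "<model_name>", "model_version": "1",
             "workload_size": "Small", "scale_to_zero_enabled": True},
            {"model_name": "<model_name>", "model_version": "2",
             "workload_size": "Small", "scale_to_zero_enabled": True},
        ],
        "traffic_config": {
            "routes": [
                {"served_model_name": "<model_name>-1", "traffic_percentage": 90},
                {"served_model_name": "<model_name>-2", "traffic_percentage": 10},
            ]
        },
    },
}
#
# create the serving endpoint
requests.post(f"{api_url}/api/2.0/serving-endpoints", headers=headers, json=endpoint_config)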
- A model needs to be logged and registered in MLflow before it can be linked to a serving endpoint
- The model(s) to be served are selected at endpoint creation by choosing the model name(s) and version(s)
- Up to 10 models can be served behind one endpoint, and the percentage of traffic routed to each of them is configurable:
- A newly created endpoint is disabled. It becomes active only after it has been enabled.
import mlflow
import requests
#
# get a temporary notebook-scoped token; for production use, create a personal access token in the Databricks UI instead
token = mlflow.utils.databricks_utils._get_command_context().apiToken().get()
#
# with the token, build the authorization header for the subsequent REST calls
headers = {"Authorization": f"Bearer {token}"}
#
# get the workspace URL at which to execute the request
api_url = mlflow.utils.databricks_utils.get_webapp_url()
#
# build the URL of the endpoint-enabling API
url = f"{api_url}/api/2.0/mlflow/endpoints/enable"
#
# send the request to enable serving for the registered model
requests.post(url, headers=headers, json={"registered_model_name": "<model_name>"})
- Users who want to create a Model Serving endpoint in MLflow need cluster creation permission.
The purpose of a served model is to provide predictions in real time. When a user or a service sends a request to the endpoint, predictions should be returned immediately, without waiting for a cluster to start. This is why serving endpoints use serverless compute. See this page
import mlflow
import requests
#
# get a temporary notebook-scoped token; for production use, create a personal access token in the Databricks UI instead
token = mlflow.utils.databricks_utils._get_command_context().apiToken().get()
#
# with the token, build the authorization header for the subsequent REST calls
headers = {"Authorization": f"Bearer {token}"}
#
# get the workspace URL at which to execute the request
api_url = mlflow.utils.databricks_utils.get_webapp_url()
#
# build the invocations URL of the served model
url = f"{api_url}/model/<model_name>/invocations"
#
# the data to score must be sent in the pandas "split" orientation; as an example, let's predict X_test
ds_dict = X_test.to_dict(orient="split")
#
# request predictions from the serving endpoint
response = requests.request(method="POST", headers=headers, url=url, json=ds_dict)
#
# read the predictions back as JSON
response.json()
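To clarify the expected payload, here is a small self-contained sketch of what X_test and its "split"-oriented dictionary could look like; the column names and values are made up for illustration.
import pandas as pd
#
# hypothetical feature set with two records to score
X_test = pd.DataFrame({"feature_1": [0.5, 1.3], "feature_2": [12, 7]})
#
# "split" orientation produces a JSON-serializable dict with columns, index and data
ds_dict = X_test.to_dict(orient="split")
print(ds_dict)
# {'index': [0, 1], 'columns': ['feature_1', 'feature_2'], 'data': [[0.5, 12], [1.3, 7]]}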
Alternatively, a sample URL and code snippets (curl/Python) to send a request and get predictions from a served model are provided in the Serving UI (source: this video):
Containers are suitable for real-time production deployments due to their ease of management, lightweight characteristics, and scalable capabilities facilitated by services like Kubernetes.
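As a rough illustration of that container route, the sketch below packages a registered MLflow model as a Docker image with the MLflow CLI and then queries the locally running container. The model URI, image name and port are placeholder assumptions, and the exact payload format expected at /invocations depends on the MLflow version.
import requests
#
# build and run a container that serves the model (shell commands, shown here as comments):
#   mlflow models build-docker --model-uri "models:/<model_name>/Production" --name my-model-image
#   docker run -p 5001:8080 my-model-image
#
# the container exposes the model as a REST service; recent MLflow versions expect the
# records wrapped in a "dataframe_split" key (older versions accept the split dict directly)
url = "http://localhost:5001/invocations"
payload = {"dataframe_split": {"columns": ["feature_1", "feature_2"], "data": [[0.5, 1.2]]}}
#
# request predictions from the containerized model
response = requests.post(url, json=payload)
print(response.json())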