Command-Line Interface

The MLflow command-line interface (CLI) provides a simple interface to various functionality in MLflow. You can use the CLI to run projects, start the tracking UI, create and list experiments, download run artifacts, serve MLflow Python Function and scikit-learn models, and serve models on Microsoft Azure Machine Learning and Amazon SageMaker.

Each individual command has a detailed help screen accessible via mlflow command_name --help.

mlflow

mlflow [OPTIONS] COMMAND [ARGS]...

Options

--version

Show the version and exit.

artifacts

Upload, list, and download artifacts from an MLflow artifact repository.

To manage artifacts for a run associated with a tracking server, set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.

mlflow artifacts [OPTIONS] COMMAND [ARGS]...

download

Download an artifact file or directory to a local directory. The output is the name of the file or directory on the local filesystem.

Either --artifact-uri or --run-id must be provided.

mlflow artifacts download [OPTIONS]

Options

-r, --run-id <run_id>

Run ID from which to download

-a, --artifact-path <artifact_path>

For use with Run ID: if specified, a path relative to the run’s root directory to download

-u, --artifact-uri <artifact_uri>

URI pointing to the artifact file or artifacts directory; use as an alternative to specifying --run-id and --artifact-path

-d, --dst-path <dst_path>

Path of the local filesystem destination directory to which to download the specified artifacts. If the directory does not exist, it is created. If unspecified, the artifacts are downloaded to a new uniquely named directory on the local filesystem, unless they already exist locally, in which case their local path is returned directly.
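For example, assuming a tracking server at localhost and a run on that server (the run ID below is a placeholder), a download might look like:

```shell
# Point the CLI at the tracking server that owns the run
export MLFLOW_TRACKING_URI="http://localhost:5000"

# Download the run's 'model' artifact directory into a local directory
mlflow artifacts download --run-id 0123456789abcdef --artifact-path model --dst-path ./model-download

# Equivalent download via an artifact URI instead of run ID + path
mlflow artifacts download --artifact-uri "runs:/0123456789abcdef/model" --dst-path ./model-download
```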

list

Return all the artifacts directly under the run's root artifact directory, or under a specified sub-directory. The output is a JSON-formatted list.

mlflow artifacts list [OPTIONS]

Options

-r, --run-id <run_id>

Required Run ID whose artifacts to list

-a, --artifact-path <artifact_path>

If specified, a path relative to the run’s root directory to list.
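A usage sketch (the run ID is a placeholder; set MLFLOW_TRACKING_URI first if the run lives on a tracking server):

```shell
# List all artifacts at the run's root artifact directory
mlflow artifacts list --run-id 0123456789abcdef

# List only the contents of the 'model' sub-directory
mlflow artifacts list --run-id 0123456789abcdef --artifact-path model
```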

log-artifact

Log a local file as an artifact of a run, optionally within a run-specific artifact path. Run artifacts can be organized into directories, so you can place the artifact in a directory this way.

mlflow artifacts log-artifact [OPTIONS]

Options

-l, --local-file <local_file>

Required Local path to artifact to log

-r, --run-id <run_id>

Required Run ID into which we should log the artifact.

-a, --artifact-path <artifact_path>

If specified, we will log the artifact into this subdirectory of the run’s artifact directory.

log-artifacts

Log the files within a local directory as an artifact of a run, optionally within a run-specific artifact path. Run artifacts can be organized into directories, so you can place the artifact in a directory this way.

mlflow artifacts log-artifacts [OPTIONS]

Options

-l, --local-dir <local_dir>

Required Directory of local artifacts to log

-r, --run-id <run_id>

Required Run ID into which we should log the artifact.

-a, --artifact-path <artifact_path>

If specified, we will log the artifact into this subdirectory of the run’s artifact directory.
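For example, to log a single file and then a whole directory under a run (run ID and local paths are placeholders):

```shell
# Log one local file under the run's 'plots' artifact sub-directory
mlflow artifacts log-artifact --local-file ./roc_curve.png --run-id 0123456789abcdef --artifact-path plots

# Log every file in a local directory under the run's 'data' sub-directory
mlflow artifacts log-artifacts --local-dir ./processed_data --run-id 0123456789abcdef --artifact-path data
```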

db

Commands for managing an MLflow tracking database.

mlflow db [OPTIONS] COMMAND [ARGS]...

upgrade

Upgrade the schema of an MLflow tracking database to the latest supported version.

IMPORTANT: Schema migrations can be slow and are not guaranteed to be transactional - always take a backup of your database before running migrations. The migrations README, which is located at https://github.com/mlflow/mlflow/blob/master/mlflow/store/db_migrations/README.md, describes large migrations and includes information about how to estimate their performance and recover from failures.

mlflow db upgrade [OPTIONS] URL

Arguments

URL

Required argument
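For example, to upgrade a local SQLite tracking database (taking a backup first, as recommended above):

```shell
# Back up the database before migrating
cp mlflow.db mlflow.db.bak

# Upgrade the schema in place
mlflow db upgrade sqlite:///mlflow.db
```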

deployments

Deploy MLflow models to custom targets. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions in https://mlflow.org/docs/latest/plugins.html#community-plugins

You can also write your own plugin for deployment to a custom target. For instructions on writing and distributing a plugin, see https://mlflow.org/docs/latest/plugins.html#writing-your-own-mlflow-plugins.

mlflow deployments [OPTIONS] COMMAND [ARGS]...

create

Deploy the model at model_uri to the specified target.

Additional plugin-specific arguments may also be passed to this command, via -C key=value

mlflow deployments create [OPTIONS]

Options

--endpoint <endpoint>

Name of the endpoint

-C, --config <NAME=VALUE>

Extra target-specific config for the model deployment, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.

--name <name>

Required Name of the deployment

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

-m, --model-uri <URI>

Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores

-f, --flavor <flavor>

Which flavor to be deployed. This will be auto inferred if it’s not given
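For example, a SageMaker deployment might look like the following; the deployment name, run ID, and the region_name config key are illustrative, so check mlflow deployments help --target-name sagemaker for the config options your target actually supports:

```shell
# Deploy a logged model to SageMaker under the name 'my-deployment'
mlflow deployments create --target sagemaker --name my-deployment \
    --model-uri "runs:/0123456789abcdef/model" \
    -C region_name=us-west-2
```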

create-endpoint

Create an endpoint with the specified name at the specified target.

Additional plugin-specific arguments may also be passed to this command, via -C key=value

mlflow deployments create-endpoint [OPTIONS]

Options

-C, --config <NAME=VALUE>

Extra target-specific config for the endpoint, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.

--endpoint <endpoint>

Required Name of the endpoint

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

delete

Delete the deployment with the name given by --name from the specified target.

mlflow deployments delete [OPTIONS]

Options

--endpoint <endpoint>

Name of the endpoint

-C, --config <NAME=VALUE>

Extra target-specific config for the model deployment, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.

--name <name>

Required Name of the deployment

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

delete-endpoint

Delete the specified endpoint at the specified target

mlflow deployments delete-endpoint [OPTIONS]

Options

--endpoint <endpoint>

Required Name of the endpoint

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

explain

Generate explanations of model predictions for the given input(s) using the deployed model. Explanation output formats vary by deployment target, and can include details like feature importance for understanding/debugging predictions. Run mlflow deployments help or consult the documentation for your plugin for details on the explanation format. For information about the input data formats accepted by this function, see the following documentation: https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools

mlflow deployments explain [OPTIONS]

Options

--name <name>

Name of the deployment. Exactly one of --name or --endpoint must be specified.

--endpoint <endpoint>

Name of the endpoint. Exactly one of --name or --endpoint must be specified.

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

-I, --input-path <input_path>

Required Path to the input prediction payload file. The file can be JSON (a Python dict) or CSV (a pandas DataFrame). If the file is a CSV, you must specify the --content-type csv option.

-O, --output-path <output_path>

File to output results to as a JSON file. If not provided, prints output to stdout.

get

Print a detailed description of the deployment with name given at --name in the specified target.

mlflow deployments get [OPTIONS]

Options

--endpoint <endpoint>

Name of the endpoint

--name <name>

Required Name of the deployment

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

get-endpoint

Get details for the specified endpoint at the specified target

mlflow deployments get-endpoint [OPTIONS]

Options

--endpoint <endpoint>

Required Name of the endpoint

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

help

Display additional help for a specific deployment target, e.g. info on target-specific config options and the target’s URI format.

mlflow deployments help [OPTIONS]

Options

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

list

List the names of all model deployments in the specified target. These names can be used with the delete, update, and get commands.

mlflow deployments list [OPTIONS]

Options

--endpoint <endpoint>

Name of the endpoint

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

list-endpoints

List all endpoints at the specified target

mlflow deployments list-endpoints [OPTIONS]

Options

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

predict

Predict the results for the deployed model for the given input(s)

mlflow deployments predict [OPTIONS]

Options

--name <name>

Name of the deployment. Exactly one of --name or --endpoint must be specified.

--endpoint <endpoint>

Name of the endpoint. Exactly one of --name or --endpoint must be specified.

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

-I, --input-path <input_path>

Required Path to the input prediction payload file. The file can be JSON (a Python dict) or CSV (a pandas DataFrame). If the file is a CSV, you must specify the --content-type csv option.

-O, --output-path <output_path>

File to output results to as a JSON file. If not provided, prints output to stdout.
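A sketch that writes a small JSON payload and sends it to an existing deployment (the target, deployment name, and file names are placeholders):

```shell
# Create an input payload in the pandas-split JSON format
cat > input.json <<'EOF'
{"dataframe_split": {"columns": ["a", "b"], "data": [[1, 2], [3, 4]]}}
EOF

# Score the payload against the deployment and write results to a file
mlflow deployments predict --target sagemaker --name my-deployment \
    --input-path input.json --output-path predictions.json
```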

run-local

Deploy the model locally. This command has a signature very similar to that of create.

mlflow deployments run-local [OPTIONS]

Options

-C, --config <NAME=VALUE>

Extra target-specific config for the model deployment, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.

--name <name>

Required Name of the deployment

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

-m, --model-uri <URI>

Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores

-f, --flavor <flavor>

Which flavor to be deployed. This will be auto inferred if it’s not given

start-server

Start the MLflow Deployments server

mlflow deployments start-server [OPTIONS]

Options

--config-path <config_path>

Required The path to the deployments configuration file.

--host <host>

The network address to listen on (default: 127.0.0.1).

--port <port>

The port to listen on (default: 5000).

--workers <workers>

The number of workers.

Environment variables

MLFLOW_DEPLOYMENTS_CONFIG

Provide a default for --config-path
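For example, assuming a deployments configuration file at ./config.yaml (its contents depend on the providers you configure):

```shell
# Start the Deployments server on all interfaces with two workers
mlflow deployments start-server --config-path ./config.yaml \
    --host 0.0.0.0 --port 7000 --workers 2
```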

update

Update the deployment with ID deployment_id in the specified target. You can update the URI of the model and/or the flavor of the deployed model (in which case the model URI must also be specified).

Additional plugin-specific arguments may also be passed to this command, via -C key=value.

mlflow deployments update [OPTIONS]

Options

--endpoint <endpoint>

Name of the endpoint

-C, --config <NAME=VALUE>

Extra target-specific config for the model deployment, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.

--name <name>

Required Name of the deployment

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

-m, --model-uri <URI>

URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores

-f, --flavor <flavor>

Which flavor to be deployed. This will be auto inferred if it’s not given

update-endpoint

Update the specified endpoint at the specified target.

Additional plugin-specific arguments may also be passed to this command, via -C key=value

mlflow deployments update-endpoint [OPTIONS]

Options

-C, --config <NAME=VALUE>

Extra target-specific config for the endpoint, of the form -C name=value. See documentation/help for your deployment target for a list of supported config options.

--endpoint <endpoint>

Required Name of the endpoint

-t, --target <target>

Required Deployment target URI. Run mlflow deployments help --target-name <target-name> for more details on the supported URI format and config options for a given target. Support is currently installed for deployment to: databricks, http, https, openai, sagemaker

See all supported deployment targets and installation instructions at https://mlflow.org/docs/latest/plugins.html#community-plugins

doctor

Prints out useful information for debugging issues with MLflow.

mlflow doctor [OPTIONS]

Options

--mask-envs

If set, mask the values of MLflow environment variables in the output (e.g. “MLFLOW_ENV_VAR”: “***”) to prevent leaking sensitive information. By default, values are not obfuscated.

experiments

Manage experiments. To manage experiments associated with a tracking server, set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.

mlflow experiments [OPTIONS] COMMAND [ARGS]...

create

Create an experiment.

All artifacts generated by runs related to this experiment will be stored under the artifact location, organized under run_id-specific sub-directories.

The implementation of the experiment and metadata store depends on the backend storage. FileStore creates a folder for each experiment ID and stores metadata in meta.yaml. Runs are stored as subfolders.

mlflow experiments create [OPTIONS]

Options

-n, --experiment-name <experiment_name>

Required

-l, --artifact-location <artifact_location>

Base location for runs to store artifact results. Artifacts will be stored at $artifact_location/$run_id/artifacts. See https://mlflow.org/docs/latest/tracking.html#where-runs-are-recorded for more info on the properties of artifact location. If no location is provided, the tracking server will pick a default.
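For example, to create an experiment whose artifacts are stored on S3 (the experiment name and bucket are placeholders):

```shell
mlflow experiments create --experiment-name nightly-training \
    --artifact-location s3://my-bucket/mlflow-artifacts
```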

csv

Generate a CSV file with all runs for an experiment

mlflow experiments csv [OPTIONS]

Options

-x, --experiment-id <experiment_id>

Required

-o, --filename <filename>

delete

Mark an active experiment for deletion. This also applies to the experiment's metadata, runs and associated data, and artifacts if they are stored in the default location. Use the list command to view the artifact location. The command throws an error if the experiment is not found or is already marked for deletion.

Experiments marked for deletion can be restored using restore command, unless they are permanently deleted.

The specific implementation of deletion depends on the backend store. FileStore moves experiments marked for deletion under a .trash folder under the main folder used to instantiate FileStore. Experiments marked for deletion can be permanently deleted by clearing the .trash folder. It is recommended to use a cron job or an alternative workflow mechanism to clear the .trash folder.

mlflow experiments delete [OPTIONS]

Options

-x, --experiment-id <experiment_id>

Required

rename

Rename an active experiment. The command returns an error if the experiment is inactive.

mlflow experiments rename [OPTIONS]

Options

-x, --experiment-id <experiment_id>

Required

--new-name <new_name>

Required

restore

Restore a deleted experiment. This also applies to the experiment's metadata, runs, and associated data. The command throws an error if the experiment is already active, cannot be found, or was permanently deleted.

mlflow experiments restore [OPTIONS]

Options

-x, --experiment-id <experiment_id>

Required

gateway

Manage the MLflow Gateway service

mlflow gateway [OPTIONS] COMMAND [ARGS]...

start

Start the MLflow Gateway service

mlflow gateway start [OPTIONS]

Options

--config-path <config_path>

Required The path to the gateway configuration file.

--host <host>

The network address to listen on (default: 127.0.0.1).

--port <port>

The port to listen on (default: 5000).

--workers <workers>

The number of workers.

Environment variables

MLFLOW_GATEWAY_CONFIG

Provide a default for --config-path

gc

Permanently delete runs in the deleted lifecycle stage from the specified backend store. This command deletes all artifacts and metadata associated with the specified runs. If the provided artifact URL is invalid, the artifact deletion will be bypassed, and the gc process will continue.

mlflow gc [OPTIONS]

Options

--older-than <older_than>

Optional. Remove run(s) older than the specified time limit. Specify a string in #d#h#m#s format. Float values are also supported. For example: --older-than 1d2h3m4s, --older-than 1.2d3h4m5s

--backend-store-uri <PATH>

URI of the backend store from which to delete runs. Acceptable URIs are SQLAlchemy-compatible database connection strings (e.g. ‘sqlite:///path/to/file.db’) or local filesystem URIs (e.g. ‘file:///absolute/path/to/directory’). By default, data will be deleted from the ./mlruns directory.

--artifacts-destination <URI>

The base artifact location from which to resolve artifact upload/download/list requests (e.g. ‘s3://my-bucket’). This option only applies when the tracking server is configured to stream artifacts and the experiment’s artifact root location is http or mlflow-artifacts URI. Otherwise, the default artifact location will be used.

--run-ids <run_ids>

Optional comma-separated list of runs to be permanently deleted. If run ids are not specified, data is removed for all runs in the deleted lifecycle stage.

--experiment-ids <experiment_ids>

Optional comma-separated list of experiments to be permanently deleted, including all of their associated runs. If experiment ids are not specified, data is removed for all experiments in the deleted lifecycle stage.

Environment variables

MLFLOW_ARTIFACTS_DESTINATION

Provide a default for --artifacts-destination
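For example, to permanently delete runs from a SQLite backend store that have been in the deleted lifecycle stage for more than 30 days:

```shell
mlflow gc --backend-store-uri sqlite:///mlflow.db --older-than 30d
```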

models

Deploy MLflow models locally.

To deploy a model associated with a run on a tracking server, set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.

mlflow models [OPTIONS] COMMAND [ARGS]...

build-docker

Builds a Docker image whose default entrypoint serves an MLflow model at port 8080, using the python_function flavor. The container serves the model referenced by --model-uri, if specified when build-docker is called. If --model-uri is not specified when build-docker is called, an MLflow Model directory must be mounted as a volume into the /opt/ml/model directory in the container.

Building a Docker image with --model-uri:

# Build a Docker image named 'my-image-name' that serves the model from run 'some-run-uuid'
# at run-relative artifact path 'my-model'
mlflow models build-docker --model-uri "runs:/some-run-uuid/my-model" --name "my-image-name"
# Serve the model
docker run -p 5001:8080 "my-image-name"

Building a Docker image without --model-uri:

# Build a generic Docker image named 'my-image-name'
mlflow models build-docker --name "my-image-name"
# Mount the model stored in '/local/path/to/artifacts/model' and serve it
docker run --rm -p 5001:8080 -v /local/path/to/artifacts/model:/opt/ml/model "my-image-name"

Important

Since MLflow 2.10.1, the Docker image built with --model-uri does not install Java for improved performance, unless the model flavor is one of ["johnsnowlabs", "h2o", "mleap", "spark"]. If you need to install Java for other flavors, e.g. custom Python model that uses SparkML, please specify the --install-java flag to enforce Java installation.

Warning

The image built without --model-uri doesn’t support serving models with RFunc / Java MLeap model server.

NB: by default, the container will start nginx and gunicorn processes. If you don’t need the nginx process to be started (for instance if you deploy your container to Google Cloud Run), you can disable it via the DISABLE_NGINX environment variable:

docker run -p 5001:8080 -e DISABLE_NGINX=true "my-image-name"

See https://www.mlflow.org/docs/latest/python_api/mlflow.pyfunc.html for more information on the ‘python_function’ flavor.

mlflow models build-docker [OPTIONS]

Options

-m, --model-uri <URI>

[Optional] URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores

-n, --name <name>

Name to use for built image

--env-manager <env_manager>

If specified, create an environment for MLmodel using the specified environment manager. The following values are supported:

- local: use the local environment
- virtualenv: use virtualenv (and pyenv for Python version management)
- conda: use conda

If unspecified, defaults to virtualenv.

--mlflow-home <PATH>

Path to local clone of MLflow project. Use for development only.

--install-java

Install Java in the image. Default is False in order to reduce both the image size and the build time. Model flavors requiring Java will enable this setting automatically, such as the Spark flavor.

--install-mlflow

If specified, and there is a conda or virtualenv environment to be activated, mlflow will be installed into the environment after it has been activated. The version of mlflow installed will be the same as the one used to invoke this command.

--enable-mlserver

Enable serving with MLServer through the v2 inference protocol. You can use environment variables to configure MLServer. (See https://mlserver.readthedocs.io/en/latest/reference/settings.html)

generate-dockerfile

Generates a directory containing a Dockerfile whose default entrypoint serves an MLflow model at port 8080 using the python_function flavor. The generated Dockerfile is written to the specified output directory, along with the model (if specified). This Dockerfile defines an image that is equivalent to the one produced by mlflow models build-docker.

mlflow models generate-dockerfile [OPTIONS]

Options

-m, --model-uri <URI>

[Optional] URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores

-d, --output-directory <output_directory>

Output directory where the generated Dockerfile is stored.

--env-manager <env_manager>

If specified, create an environment for MLmodel using the specified environment manager. The following values are supported:

- local: use the local environment
- virtualenv: use virtualenv (and pyenv for Python version management)
- conda: use conda

If unspecified, defaults to None, in which case MLflow automatically picks the environment manager based on the model's flavor configuration. If model-uri is specified: if a Python version is specified in the flavor configuration and no Java installation is required, the local environment is used; otherwise virtualenv is used. If no model-uri is provided, virtualenv is used.

--mlflow-home <PATH>

Path to local clone of MLflow project. Use for development only.

--install-java

Install Java in the image. Default is False in order to reduce both the image size and the build time. Model flavors requiring Java will enable this setting automatically, such as the Spark flavor.

--install-mlflow

If specified, and there is a conda or virtualenv environment to be activated, mlflow will be installed into the environment after it has been activated. The version of mlflow installed will be the same as the one used to invoke this command.

--enable-mlserver

Enable serving with MLServer through the v2 inference protocol. You can use environment variables to configure MLServer. (See https://mlserver.readthedocs.io/en/latest/reference/settings.html)
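For example, to generate a Dockerfile for a logged model and then build the image yourself (the run ID and image name are placeholders):

```shell
# Write a Dockerfile (and a copy of the model) to ./docker-out
mlflow models generate-dockerfile -m "runs:/0123456789abcdef/model" -d ./docker-out

# Build the image from the generated context
docker build -t my-image-name ./docker-out
```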

predict

Generate predictions in json format using a saved MLflow model. For information about the input data formats accepted by this function, see the following documentation: https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools.

mlflow models predict [OPTIONS]

Options

-m, --model-uri <URI>

Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores

-i, --input-path <input_path>

CSV containing pandas DataFrame to predict against.

-o, --output-path <output_path>

File to output results to as a JSON file. If not provided, prints output to stdout.

-t, --content-type <content_type>

Content type of the input file. Can be one of {‘json’, ‘csv’}.

--env-manager <env_manager>

If specified, create an environment for MLmodel using the specified environment manager. The following values are supported:

- local: use the local environment
- virtualenv: use virtualenv (and pyenv for Python version management)
- conda: use conda

If unspecified, defaults to virtualenv.

--install-mlflow

If specified, and there is a conda or virtualenv environment to be activated, mlflow will be installed into the environment after it has been activated. The version of mlflow installed will be the same as the one used to invoke this command.

-r, --pip-requirements-override <pip_requirements_override>

Specify packages and versions to override the dependencies defined in the model. Must be a comma-separated string like x==y,z==a.
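For example, to score a CSV of features with a saved model using the local environment (the run ID and file names are placeholders):

```shell
mlflow models predict -m "runs:/0123456789abcdef/model" \
    -i features.csv -t csv -o predictions.json --env-manager local
```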

prepare-env

Performs any preparation necessary to predict or serve the model, for example downloading dependencies or initializing a conda environment. After preparation, calling predict or serve should be fast.

mlflow models prepare-env [OPTIONS]

Options

-m, --model-uri <URI>

Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores

--env-manager <env_manager>

If specified, create an environment for MLmodel using the specified environment manager. The following values are supported:

- local: use the local environment
- virtualenv: use virtualenv (and pyenv for Python version management)
- conda: use conda

If unspecified, defaults to virtualenv.

--install-mlflow

If specified, and there is a conda or virtualenv environment to be activated, mlflow will be installed into the environment after it has been activated. The version of mlflow installed will be the same as the one used to invoke this command.

serve

Serve a model saved with MLflow by launching a webserver on the specified host and port. The command supports models with the python_function or crate (R Function) flavor. For information about the input data formats accepted by the webserver, see the following documentation: https://www.mlflow.org/docs/latest/models.html#built-in-deployment-tools.

Warning

Models built using MLflow 1.x require adjustments to the endpoint request payload when served in an environment with MLflow 2.x installed. In 1.x, a request payload had the format: {'columns': [str], 'data': [[...]]}. In 2.x, payloads must be wrapped in one of the structure-defining keys dataframe_split, dataframe_records, instances, or inputs. See the examples below for demonstrations of the changes to the invocations API endpoint in 2.0.

Note

Requests made in pandas DataFrame structures can be made in either split or records oriented formats. See https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_json.html for detailed information on orientation formats for converting a pandas DataFrame to json.

Example:

$ mlflow models serve -m runs:/my-run-id/model-path &

# records orientation input format for serializing a pandas DataFrame
$ curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "dataframe_records": [{"a":1, "b":2}, {"a":3, "b":4}, {"a":5, "b":6}]
}'

# split orientation input format for serializing a pandas DataFrame
$ curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "dataframe_split": {"columns": ["a", "b"],
                        "index": [0, 1, 2],
                        "data": [[1, 2], [3, 4], [5, 6]]}
}'

# inputs format for List submission of array, tensor, or DataFrame data
$ curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "inputs": [[1, 2], [3, 4], [5, 6]]
}'

# instances format for submission of Tensor data
$ curl http://127.0.0.1:5000/invocations -H 'Content-Type: application/json' -d '{
    "instances": [
        {"a": "t1", "b": [1, 2, 3]},
        {"a": "t2", "b": [4, 5, 6]},
        {"a": "t3", "b": [7, 8, 9]}
    ]
}'
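The JSON payloads above can also be built programmatically. A minimal standard-library sketch (the helper names are hypothetical; the server only cares about the JSON shape):

```python
import json

def to_dataframe_records(columns, rows):
    """Build a 'dataframe_records' payload: one {column: value} dict per row."""
    return {"dataframe_records": [dict(zip(columns, row)) for row in rows]}

def to_dataframe_split(columns, rows):
    """Build a 'dataframe_split' payload: columns listed once, rows as lists."""
    return {"dataframe_split": {"columns": list(columns),
                                "index": list(range(len(rows))),
                                "data": [list(row) for row in rows]}}

# Same table as in the curl examples above.
body = json.dumps(to_dataframe_split(["a", "b"], [[1, 2], [3, 4], [5, 6]]))
# POST `body` to /invocations with Content-Type: application/json
```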

mlflow models serve [OPTIONS]

Options

-m, --model-uri <URI>

Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores

-p, --port <port>

The port to listen on (default: 5000).

-h, --host <HOST>

The network address to listen on (default: 127.0.0.1). Use 0.0.0.0 to bind to all addresses if you want to access the tracking server from other machines.

-t, --timeout <timeout>

Timeout in seconds to serve a request (default: 60).

-w, --workers <workers>

Number of gunicorn worker processes to handle requests (default: 1).

--env-manager <env_manager>

If specified, create an environment for MLmodel using the specified environment manager. The following values are supported:

- local: use the local environment
- virtualenv: use virtualenv (and pyenv for Python version management)
- conda: use conda

If unspecified, defaults to virtualenv.

--no-conda

If specified, use the local environment.

--install-mlflow

If specified and there is a conda or virtualenv environment to be activated, MLflow will be installed into the environment after it has been activated. The version of MLflow installed will match the version used to invoke this command.

--enable-mlserver

Enable serving with MLServer through the v2 inference protocol. You can use environment variables to configure MLServer. (See https://mlserver.readthedocs.io/en/latest/reference/settings.html)
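For instance, MLServer reads its settings from MLSERVER_-prefixed environment variables. The variable names below are taken from the MLServer settings reference linked above; treat them as illustrative and verify against that page:

```shell
# Hypothetical example: configure MLServer, then launch the scoring server.
export MLSERVER_HTTP_PORT=8080        # HTTP port MLServer binds to
export MLSERVER_PARALLEL_WORKERS=2    # number of inference worker processes
mlflow models serve -m runs:/<run_id>/model --enable-mlserver --port 8080
```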

Environment variables

MLFLOW_PORT

Provide a default for --port

MLFLOW_HOST

Provide a default for --host

MLFLOW_SCORING_SERVER_REQUEST_TIMEOUT

Provide a default for --timeout

MLFLOW_WORKERS

Provide a default for --workers

update-pip-requirements

Add or remove requirements from a model’s conda.yaml and requirements.txt files. If using a remote tracking server, please make sure to set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.

REQUIREMENT_STRINGS is a list of pip requirements specifiers. See below for examples.

Sample usage:

# Add requirements using the model's "runs:/" URI

mlflow models update-pip-requirements -m runs:/<run_id>/<model_path> \
    add "pandas==1.0.0" "scikit-learn" "mlflow >= 2.8, != 2.9.0"

# Remove requirements from a local model

mlflow models update-pip-requirements -m /path/to/local/model \
    remove "torchvision" "pydantic"

Note that model registry URIs (i.e. URIs in the form models:/) are not supported, as artifacts in the model registry are intended to be read-only. Editing requirements in read-only artifact repositories is likewise not supported.

When adding requirements, the command overwrites any existing requirements that overlap (matched by package name) and appends the remaining new requirements to the existing list.

When removing requirements, the command ignores any version specifiers and removes all packages with the specified names. Requirements not found in the existing files are ignored.

mlflow models update-pip-requirements [OPTIONS] {add|remove}
                                      [REQUIREMENT_STRINGS]...

Options

-m, --model-uri <URI>

Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores

Arguments

OPERATION

Required argument

REQUIREMENT_STRINGS

Optional argument(s)

recipes

Run MLflow Recipes and inspect recipe results.

mlflow recipes [OPTIONS] COMMAND [ARGS]...

clean

Remove all recipe outputs from the cache, or remove the cached outputs of a particular recipe step if specified. After cached outputs are cleaned for a particular step, the step will be re-executed in its entirety the next time it is run.

mlflow recipes clean [OPTIONS]

Options

-s, --step <step>

The name of the recipe step for which to remove cached outputs.

-p, --profile <profile>

Required The name of the recipe profile to use. Profiles customize the configuration of one or more recipe steps, and recipe executions with different profiles often produce different results.

Environment variables

MLFLOW_RECIPES_PROFILE

Provide a default for --profile

get-artifact

Get the location of an artifact output from the recipe.

mlflow recipes get-artifact [OPTIONS]

Options

-a, --artifact <artifact>

Required The name of the artifact to retrieve.

-p, --profile <profile>

Required The name of the recipe profile to use. Profiles customize the configuration of one or more recipe steps, and recipe executions with different profiles often produce different results.

Environment variables

MLFLOW_RECIPES_PROFILE

Provide a default for --profile

inspect

Display a visual overview of the recipe graph, or display a summary of results from a particular recipe step if specified. If the specified step has not been executed, nothing is displayed.

mlflow recipes inspect [OPTIONS]

Options

-s, --step <step>

The name of the recipe step to inspect.

-p, --profile <profile>

Required The name of the recipe profile to use. Profiles customize the configuration of one or more recipe steps, and recipe executions with different profiles often produce different results.

Environment variables

MLFLOW_RECIPES_PROFILE

Provide a default for --profile

run

Run the full recipe, or run a particular recipe step if specified, producing outputs and displaying a summary of results upon completion.

mlflow recipes run [OPTIONS]

Options

-s, --step <step>

The name of the recipe step to run.

-p, --profile <profile>

Required The name of the recipe profile to use. Profiles customize the configuration of one or more recipe steps, and recipe executions with different profiles often produce different results.

Environment variables

MLFLOW_RECIPES_PROFILE

Provide a default for --profile

run

Run an MLflow project from the given URI.

For local runs, the run will block until it completes. Otherwise, the project will run asynchronously.

If running locally (the default), the URI can be either a Git repository URI or a local path. If running on Databricks, the URI must be a Git repository.

By default, Git projects run in a new working directory with the given parameters, while local projects run from the project’s root directory.

mlflow run [OPTIONS] URI

Options

-e, --entry-point <NAME>

Entry point within project. [default: main]. If the entry point is not found, MLflow attempts to run the project file with the specified name as a script, using ‘python’ to run .py files and the default shell (specified by the environment variable $SHELL) to run .sh files.
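The fallback described above amounts to roughly the following dispatch (a simplified sketch with a hypothetical helper, not MLflow’s actual resolution code):

```python
import os

def fallback_command(entry_point):
    """Pick an interpreter for a script that is not a declared entry point."""
    if entry_point.endswith(".py"):
        return ["python", entry_point]
    if entry_point.endswith(".sh"):
        return [os.environ.get("SHELL", "/bin/sh"), entry_point]
    raise ValueError(f"Could not find entry point or runnable script: {entry_point}")

fallback_command("train.py")  # ["python", "train.py"]
```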

-v, --version <VERSION>

Version of the project to run, as a Git commit reference for Git projects.

-P, --param-list <NAME=VALUE>

A parameter for the run, of the form -P name=value. Provided parameters that are not in the list of parameters for an entry point will be passed to the corresponding entry point as command-line arguments in the form --name value

-A, --docker-args <NAME=VALUE>

A docker run argument or flag, of the form -A name=value (e.g. -A gpus=all) or -A name (e.g. -A t). The argument will then be passed as docker run --name value or docker run --name, respectively.

--experiment-name <experiment_name>

Name of the experiment under which to launch the run. If not specified, the --experiment-id option is used to launch the run.

--experiment-id <experiment_id>

ID of the experiment under which to launch the run.

-b, --backend <BACKEND>

Execution backend to use for run. Supported values: ‘local’, ‘databricks’, ‘kubernetes’ (experimental). Defaults to ‘local’. If running against Databricks, will run against a Databricks workspace determined as follows: if a Databricks tracking URI of the form ‘databricks://profile’ has been set (e.g. by setting the MLFLOW_TRACKING_URI environment variable), will run against the workspace specified by <profile>. Otherwise, runs against the workspace specified by the default Databricks CLI profile. See https://github.com/databricks/databricks-cli for more info on configuring a Databricks CLI profile.

-c, --backend-config <FILE>

Path to JSON file (must end in ‘.json’) or JSON string which will be passed as config to the backend. The exact content which should be provided is different for each execution backend and is documented at https://www.mlflow.org/docs/latest/projects.html.

--env-manager <env_manager>

If specified, create an environment for MLproject using the specified environment manager. The following values are supported:

- local: use the local environment
- virtualenv: use virtualenv (and pyenv for Python version management)
- conda: use conda

If unspecified, the appropriate environment manager is automatically selected based on the project configuration. For example, if MLproject.yaml contains a python_env key, virtualenv is used.

--storage-dir <storage_dir>

Only valid when backend is local. MLflow downloads artifacts from distributed URIs passed to parameters of type ‘path’ to subdirectories of storage_dir.

--run-id <RUN_ID>

If specified, the given run ID will be used instead of creating a new run. Note: this argument is used internally by the MLflow project APIs and should not be specified.

--run-name <RUN_NAME>

The name to give the MLflow Run associated with the project execution. If not specified, the MLflow Run name is left unset.

--build-image

Only valid for Docker projects. If specified, build a new Docker image that’s based on the image specified by the image field in the MLproject file, and contains files in the project directory.

Default

False

Arguments

URI

Required argument

Environment variables

MLFLOW_EXPERIMENT_NAME

Provide a default for --experiment-name

MLFLOW_EXPERIMENT_ID

Provide a default for --experiment-id

MLFLOW_TMP_DIR

Provide a default for --storage-dir

runs

Manage runs. To manage runs of experiments associated with a tracking server, set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.

mlflow runs [OPTIONS] COMMAND [ARGS]...

delete

Mark a run for deletion. Return an error if the run does not exist or is already marked. You can restore a marked run with restore_run, or permanently delete a run in the backend store.

mlflow runs delete [OPTIONS]

Options

--run-id <run_id>

Required

describe

Print all details of the specified run to stdout as JSON.

mlflow runs describe [OPTIONS]

Options

--run-id <run_id>

Required

list

List all runs of the specified experiment in the configured tracking server.

mlflow runs list [OPTIONS]

Options

--experiment-id <experiment_id>

Required The ID of the experiment whose runs should be listed.

-v, --view <view>

Select the view type for listing runs. Valid view types are ‘active_only’ (default), ‘deleted_only’, and ‘all’.

Environment variables

MLFLOW_EXPERIMENT_ID

Provide a default for --experiment-id

restore

Restore a deleted run. Returns an error if the run is active or has been permanently deleted.

mlflow runs restore [OPTIONS]

Options

--run-id <run_id>

Required

sagemaker

Serve models on SageMaker.

To serve a model associated with a run on a tracking server, set the MLFLOW_TRACKING_URI environment variable to the URL of the desired server.

mlflow sagemaker [OPTIONS] COMMAND [ARGS]...

build-and-push-container

Build a new MLflow SageMaker image, assign it a name, and push it to ECR.

This command builds an MLflow Docker image locally, so Docker is required to run it. The image is pushed to ECR under the currently active AWS account and region.

mlflow sagemaker build-and-push-container [OPTIONS]

Options

--build, --no-build

Build the container if set.

--push, --no-push

Push the container to AWS ECR if set.

-c, --container <container>

image name

--env-manager <env_manager>

If specified, create an environment for MLmodel using the specified environment manager. The following values are supported:

- local: use the local environment
- virtualenv: use virtualenv (and pyenv for Python version management)
- conda: use conda

If unspecified, default to virtualenv.

--mlflow-home <PATH>

Path to local clone of MLflow project. Use for development only.

deploy-transform-job

Deploy a model on SageMaker as a batch transform job. The currently active AWS account must have the correct permissions set up.

By default, unless the --async flag is specified, this command will block until either the batch transform job completes (definitively succeeds or fails) or the specified timeout elapses.

mlflow sagemaker deploy-transform-job [OPTIONS]

Options

-n, --job-name <job_name>

Required Transform job name

-m, --model-uri <URI>

Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores

--input-data-type <input_data_type>

Required Input data type for the transform job

-u, --input-uri <input_uri>

Required S3 key name prefix or manifest of the input data

--content-type <content_type>

Required The Multipurpose Internet Mail Extensions (MIME) type of the input data

-o, --output-path <output_path>

Required The S3 path to store the output results of the Sagemaker transform job

--compression-type <compression_type>

The compression type of the transform data

-s, --split-type <split_type>

The method to split the transform job’s data files into smaller batches

-a, --accept <accept>

The Multipurpose Internet Mail Extensions (MIME) type of the output data

--assemble-with <assemble_with>

The method to assemble the results of the transform job as a single S3 object

--input-filter <input_filter>

A JSONPath expression used to select a portion of the input data for the transform job

--output-filter <output_filter>

A JSONPath expression used to select a portion of the output data from the transform job

-j, --join-resource <join_resource>

The source of the data to join with the transformed data

-e, --execution-role-arn <execution_role_arn>

SageMaker execution role

-b, --bucket <bucket>

S3 bucket to store model artifacts

-i, --image-url <image_url>

ECR URL for the Docker image

--region-name <region_name>

Name of the AWS region in which to deploy the transform job

-t, --instance-type <instance_type>

The type of SageMaker ML instance on which to perform the batch transform job. For a list of supported instance types, see https://aws.amazon.com/sagemaker/pricing/instance-types/.

-c, --instance-count <instance_count>

The number of SageMaker ML instances on which to perform the batch transform job

-v, --vpc-config <vpc_config>

Path to a file containing a JSON-formatted VPC configuration. This configuration will be used when creating the new SageMaker model associated with this application. For more information, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_VpcConfig.html

-f, --flavor <flavor>

The name of the flavor to use for deployment. Must be one of the following: [‘python_function’, ‘mleap’]. If unspecified, a flavor will be automatically selected from the model’s available flavors.

--archive

If specified, any SageMaker resources that become inactive after the finished batch transform job are preserved. These resources may include the associated SageMaker models and model artifacts. Otherwise, if --archive is unspecified, these resources are deleted. --archive must be specified when deploying asynchronously with --async.

--async

If specified, this command will return immediately after starting the deployment process. It will not wait for the deployment process to complete. The caller is responsible for monitoring the deployment process via native SageMaker APIs or the AWS console.

--timeout <timeout>

If the command is executed synchronously, the deployment process will return after the specified number of seconds if no definitive result (success or failure) is achieved. Once the function returns, the caller is responsible for monitoring the health and status of the pending deployment via native SageMaker APIs or the AWS console. If the command is executed asynchronously using the --async flag, this value is ignored.

push-model

Push an MLflow model to the SageMaker model registry. The currently active AWS account must have the correct permissions set up.

mlflow sagemaker push-model [OPTIONS]

Options

-n, --model-name <model_name>

Required Sagemaker model name

-m, --model-uri <URI>

Required URI to the model. A local path, a ‘runs:/’ URI, or a remote storage URI (e.g., an ‘s3://’ URI). For more information about supported remote URIs for model artifacts, see https://mlflow.org/docs/latest/tracking.html#artifact-stores

-e, --execution-role-arn <execution_role_arn>

SageMaker execution role

-b, --bucket <bucket>

S3 bucket to store model artifacts

-i, --image-url <image_url>

ECR URL for the Docker image

--region-name <region_name>

Name of the AWS region in which to push the Sagemaker model

-v, --vpc-config <vpc_config>

Path to a file containing a JSON-formatted VPC configuration. This configuration will be used when creating the new SageMaker model. For more information, see https://docs.aws.amazon.com/sagemaker/latest/dg/API_VpcConfig.html

-f, --flavor <flavor>

The name of the flavor to use for deployment. Must be one of the following: [‘python_function’, ‘mleap’]. If unspecified, a flavor will be automatically selected from the model’s available flavors.

terminate-transform-job

Terminate the specified SageMaker batch transform job. Unless --archive is specified, all SageMaker resources associated with the batch transform job are deleted as well.

By default, unless the --async flag is specified, this command will block until either the termination process completes (definitively succeeds or fails) or the specified timeout elapses.

mlflow sagemaker terminate-transform-job [OPTIONS]

Options

-n, --job-name <job_name>

Required Transform job name

-r, --region-name <region_name>

Name of the AWS region in which the transform job is deployed

--archive

If specified, resources associated with the application are preserved. These resources may include unused SageMaker models and model artifacts. Otherwise, if --archive is unspecified, these resources are deleted. --archive must be specified when deleting asynchronously with --async.

--async

If specified, this command will return immediately after starting the termination process. It will not wait for the termination process to complete. The caller is responsible for monitoring the termination process via native SageMaker APIs or the AWS console.

--timeout <timeout>

If the command is executed synchronously, the termination process will return after the specified number of seconds if no definitive result (success or failure) is achieved. Once the function returns, the caller is responsible for monitoring the health and status of the pending termination via native SageMaker APIs or the AWS console. If the command is executed asynchronously using the --async flag, this value is ignored.

server

Run the MLflow tracking server.

The server listens on http://localhost:5000 by default and only accepts connections from the local machine. To let the server accept connections from other machines, you will need to pass --host 0.0.0.0 to listen on all network interfaces (or a specific interface address).
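A typical invocation (hypothetical database path and bucket name) combining a SQL backend store with proxied artifact serving:

```shell
# Hypothetical example: SQLite backend store, artifacts proxied to S3,
# reachable from other machines on port 5000.
mlflow server \
  --backend-store-uri sqlite:///mlflow.db \
  --artifacts-destination s3://my-bucket/mlartifacts \
  --host 0.0.0.0 --port 5000
```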

mlflow server [OPTIONS]

Options

--backend-store-uri <PATH>

URI to which to persist experiment and run data. Acceptable URIs are SQLAlchemy-compatible database connection strings (e.g. ‘sqlite:///path/to/file.db’) or local filesystem URIs (e.g. ‘file:///absolute/path/to/directory’). By default, data will be logged to the ./mlruns directory.

--registry-store-uri <URI>

URI to which to persist registered models. Acceptable URIs are SQLAlchemy-compatible database connection strings (e.g. ‘sqlite:///path/to/file.db’). If not specified, backend-store-uri is used.

--default-artifact-root <URI>

Directory in which to store artifacts for any new experiments created. For tracking server backends that rely on SQL, this option is required in order to store artifacts. Note that this flag does not impact already-created experiments with any previous configuration of an MLflow server instance. By default, data will be logged to the mlflow-artifacts:/ URI proxy if the --serve-artifacts option is enabled. Otherwise, the default location will be ./mlruns.

--serve-artifacts, --no-serve-artifacts

Enables serving of artifact uploads, downloads, and list requests by routing these requests to the storage location that is specified by ‘--artifacts-destination’ directly through a proxy. The default location that these requests are served from is a local ‘./mlartifacts’ directory which can be overridden via the ‘--artifacts-destination’ argument. To disable artifact serving, specify --no-serve-artifacts. Default: True

--artifacts-only

If specified, configures the mlflow server to be used only for proxied artifact serving. With this mode enabled, functionality of the mlflow tracking service (e.g. run creation, metric logging, and parameter logging) is disabled. The server will only expose endpoints for uploading, downloading, and listing artifacts. Default: False

--artifacts-destination <URI>

The base artifact location from which to resolve artifact upload/download/list requests (e.g. ‘s3://my-bucket’). Defaults to a local ‘./mlartifacts’ directory. This option only applies when the tracking server is configured to stream artifacts and the experiment’s artifact root location is http or mlflow-artifacts URI.

-h, --host <HOST>

The network address to listen on (default: 127.0.0.1). Use 0.0.0.0 to bind to all addresses if you want to access the tracking server from other machines.

-p, --port <port>

The port to listen on (default: 5000).

-w, --workers <workers>

Number of gunicorn worker processes to handle requests (default: 1).

--static-prefix <static_prefix>

A prefix which will be prepended to the path of all static paths.

--gunicorn-opts <gunicorn_opts>

Additional command line options forwarded to gunicorn processes.

--waitress-opts <waitress_opts>

Additional command line options for waitress-serve.

--expose-prometheus <expose_prometheus>

Activates the Prometheus exporter to expose metrics on the /metrics endpoint. Metrics are stored in the specified directory, which is created if it does not exist.

--app-name <app_name>

Application name to be used for the tracking server. If not specified, ‘mlflow.server:app’ will be used.

Options

basic-auth

--dev

If enabled, run the server with debug logging and auto-reload. Should only be used for development purposes. Cannot be used with ‘--gunicorn-opts’. Unsupported on Windows.

Default

False

Environment variables

MLFLOW_BACKEND_STORE_URI

Provide a default for --backend-store-uri

MLFLOW_REGISTRY_STORE_URI

Provide a default for --registry-store-uri

MLFLOW_DEFAULT_ARTIFACT_ROOT

Provide a default for --default-artifact-root

MLFLOW_SERVE_ARTIFACTS

Provide a default for --serve-artifacts

MLFLOW_ARTIFACTS_ONLY

Provide a default for --artifacts-only

MLFLOW_ARTIFACTS_DESTINATION

Provide a default for --artifacts-destination

MLFLOW_HOST

Provide a default for --host

MLFLOW_PORT

Provide a default for --port

MLFLOW_WORKERS

Provide a default for --workers

MLFLOW_STATIC_PREFIX

Provide a default for --static-prefix

MLFLOW_GUNICORN_OPTS

Provide a default for --gunicorn-opts

MLFLOW_EXPOSE_PROMETHEUS

Provide a default for --expose-prometheus