LLM RAG Evaluation with MLflow using llama2-as-judge Example Notebook

In this notebook, we will demonstrate how to evaluate a RAG system with MLflow. We will use llama2-70b as the judge model, served via a Databricks serving endpoint.

Notebook compatibility

Because libraries such as langchain change rapidly, examples can become outdated quickly and stop working. For demonstration purposes, here are the critical dependencies recommended for running this notebook:

Package               Version
--------------------  -------
langchain             0.1.16
langchain-community   0.0.33
langchain-openai      0.0.8
openai                1.12.0
mlflow                2.12.1

The notebook may still work with other versions, but using the exact versions above is recommended to ensure that the code executes as shown.

Installing Requirements

Before proceeding with this tutorial, ensure that your versions of the installed packages meet the requirements listed above.

pip install langchain==0.1.16 langchain-community==0.0.33 langchain-openai==0.0.8 openai==1.12.0 mlflow==2.12.1

Configuration

We need to set our OpenAI API key, since the RAG system below uses OpenAI for both embeddings and response generation.

To set your private key safely, either export the key in your command-line terminal for the current session, or, to make it available in all future sessions, add the following entry to your preferred shell configuration file (e.g., .bashrc, .zshrc):

OPENAI_API_KEY=<your openai API key>

To run this notebook against a Databricks-hosted Llama 2 model, you will also need to provide your Databricks host and personal access token, either through the Databricks SDK or by setting the following environment variables:

DATABRICKS_HOST=<your Databricks workspace URI>

DATABRICKS_TOKEN=<your personal access token>
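If you prefer to set these values from within the notebook session itself, a minimal sketch using os.environ is shown below (placeholders only; substitute your own values and avoid committing real credentials):

import os

# Set credentials for the current Python process only.
# The strings below are placeholders, not real values.
os.environ["OPENAI_API_KEY"] = "<your openai API key>"
os.environ["DATABRICKS_HOST"] = "<your Databricks workspace URI>"
os.environ["DATABRICKS_TOKEN"] = "<your personal access token>"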

[1]:
import pandas as pd
from langchain.chains import RetrievalQA
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain_openai import OpenAI, OpenAIEmbeddings

import mlflow
from mlflow.deployments import set_deployments_target
from mlflow.metrics.genai import EvaluationExample, faithfulness, relevance

Set the deployment target to “databricks” for use with Databricks-served models.

[2]:
set_deployments_target("databricks")

Create a RAG system

Use Langchain and Chroma to create a RAG system that answers questions based on the MLflow documentation.

[3]:
loader = WebBaseLoader("https://mlflow.org/docs/latest/index.html")

documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)

embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)

qa = RetrievalQA.from_chain_type(
    llm=OpenAI(temperature=0),
    chain_type="stuff",
    retriever=docsearch.as_retriever(),
    return_source_documents=True,
)
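Optionally, as a quick sanity check before running the full evaluation, you can send a single question through the chain. With return_source_documents=True, the chain returns a dict containing both the generated "result" and the retrieved "source_documents" (a minimal sketch; the question shown is just an example):

[ ]:
# Optional sanity check: run one question through the RAG chain.
response = qa.invoke({"query": "What is MLflow?"})
print(response["result"])
print(len(response["source_documents"]), "source documents retrieved")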

Evaluate the RAG system using mlflow.evaluate()

Create a simple function that runs each input through the RAG chain.

[4]:
def model(input_df):
    answer = []
    for index, row in input_df.iterrows():
        # Each call returns a dict with the generated "result" and the retrieved
        # "source_documents" (because return_source_documents=True above).
        answer.append(qa(row["questions"]))

    return answer

Create an eval dataset

[5]:
eval_df = pd.DataFrame(
    {
        "questions": [
            "What is MLflow?",
            "How to run mlflow.evaluate()?",
            "How to log_table()?",
            "How to load_table()?",
        ],
    }
)

Create a faithfulness metric using databricks-llama-2-70b-chat as the judge

[6]:
# Create a good and bad example for faithfulness in the context of this problem
faithfulness_examples = [
    EvaluationExample(
        input="How do I disable MLflow autologging?",
        output="mlflow.autolog(disable=True) will disable autologging for all functions. In Databricks, autologging is enabled by default. ",
        score=2,
        justification="The output provides a working solution, using the mlflow.autolog() function that is provided in the context.",
        grading_context={
            "context": "mlflow.autolog(log_input_examples: bool = False, log_model_signatures: bool = True, log_models: bool = True, log_datasets: bool = True, disable: bool = False, exclusive: bool = False, disable_for_unsupported_versions: bool = False, silent: bool = False, extra_tags: Optional[Dict[str, str]] = None) → None[source] Enables (or disables) and configures autologging for all supported integrations. The parameters are passed to any autologging integrations that support them. See the tracking docs for a list of supported autologging integrations. Note that framework-specific configurations set at any point will take precedence over any configurations set by this function."
        },
    ),
    EvaluationExample(
        input="How do I disable MLflow autologging?",
        output="mlflow.autolog(disable=True) will disable autologging for all functions.",
        score=5,
        justification="The output provides a solution that is using the mlflow.autolog() function that is provided in the context.",
        grading_context={
            "context": "mlflow.autolog(log_input_examples: bool = False, log_model_signatures: bool = True, log_models: bool = True, log_datasets: bool = True, disable: bool = False, exclusive: bool = False, disable_for_unsupported_versions: bool = False, silent: bool = False, extra_tags: Optional[Dict[str, str]] = None) → None[source] Enables (or disables) and configures autologging for all supported integrations. The parameters are passed to any autologging integrations that support them. See the tracking docs for a list of supported autologging integrations. Note that framework-specific configurations set at any point will take precedence over any configurations set by this function."
        },
    ),
]

faithfulness_metric = faithfulness(
    model="endpoints:/databricks-llama-2-70b-chat", examples=faithfulness_examples
)
print(faithfulness_metric)
EvaluationMetric(name=faithfulness, greater_is_better=True, long_name=faithfulness, version=v1, metric_details=
Task:
You must return the following fields in your response in two lines, one below the other:
score: Your numerical score for the model's faithfulness based on the rubric
justification: Your reasoning about the model's faithfulness score

You are an impartial judge. You will be given an input that was sent to a machine
learning model, and you will be given an output that the model produced. You
may also be given additional information that was used by the model to generate the output.

Your task is to determine a numerical score called faithfulness based on the input and output.
A definition of faithfulness and a grading rubric are provided below.
You must use the grading rubric to determine your score. You must also justify your score.

Examples could be included below for reference. Make sure to use them as references and to
understand them before completing the task.

Input:
{input}

Output:
{output}

{grading_context_columns}

Metric definition:
Faithfulness is only evaluated with the provided output and provided context, please ignore the provided input entirely when scoring faithfulness. Faithfulness assesses how much of the provided output is factually consistent with the provided context. A higher score indicates that a higher proportion of claims present in the output can be derived from the provided context. Faithfulness does not consider how much extra information from the context is not present in the output.

Grading rubric:
Faithfulness: Below are the details for different scores:
- Score 1: None of the claims in the output can be inferred from the provided context.
- Score 2: Some of the claims in the output can be inferred from the provided context, but the majority of the output is missing from, inconsistent with, or contradictory to the provided context.
- Score 3: Half or more of the claims in the output can be inferred from the provided context.
- Score 4: Most of the claims in the output can be inferred from the provided context, with very little information that is not directly supported by the provided context.
- Score 5: All of the claims in the output are directly supported by the provided context, demonstrating high faithfulness to the provided context.

Examples:

Example Output:
mlflow.autolog(disable=True) will disable autologging for all functions. In Databricks, autologging is enabled by default.

Additional information used by the model:
key: context
value:
mlflow.autolog(log_input_examples: bool = False, log_model_signatures: bool = True, log_models: bool = True, log_datasets: bool = True, disable: bool = False, exclusive: bool = False, disable_for_unsupported_versions: bool = False, silent: bool = False, extra_tags: Optional[Dict[str, str]] = None) → None[source] Enables (or disables) and configures autologging for all supported integrations. The parameters are passed to any autologging integrations that support them. See the tracking docs for a list of supported autologging integrations. Note that framework-specific configurations set at any point will take precedence over any configurations set by this function.

Example score: 2
Example justification: The output provides a working solution, using the mlflow.autolog() function that is provided in the context.


Example Output:
mlflow.autolog(disable=True) will disable autologging for all functions.

Additional information used by the model:
key: context
value:
mlflow.autolog(log_input_examples: bool = False, log_model_signatures: bool = True, log_models: bool = True, log_datasets: bool = True, disable: bool = False, exclusive: bool = False, disable_for_unsupported_versions: bool = False, silent: bool = False, extra_tags: Optional[Dict[str, str]] = None) → None[source] Enables (or disables) and configures autologging for all supported integrations. The parameters are passed to any autologging integrations that support them. See the tracking docs for a list of supported autologging integrations. Note that framework-specific configurations set at any point will take precedence over any configurations set by this function.

Example score: 5
Example justification: The output provides a solution that is using the mlflow.autolog() function that is provided in the context.


You must return the following fields in your response in two lines, one below the other:
score: Your numerical score for the model's faithfulness based on the rubric
justification: Your reasoning about the model's faithfulness score

Do not add additional new lines. Do not add any other fields.
    )

Create a relevance metric using databricks-llama-2-70b-chat as the judge

[7]:
relevance_metric = relevance(model="endpoints:/databricks-llama-2-70b-chat")
print(relevance_metric)
EvaluationMetric(name=relevance, greater_is_better=True, long_name=relevance, version=v1, metric_details=
Task:
You must return the following fields in your response in two lines, one below the other:
score: Your numerical score for the model's relevance based on the rubric
justification: Your reasoning about the model's relevance score

You are an impartial judge. You will be given an input that was sent to a machine
learning model, and you will be given an output that the model produced. You
may also be given additional information that was used by the model to generate the output.

Your task is to determine a numerical score called relevance based on the input and output.
A definition of relevance and a grading rubric are provided below.
You must use the grading rubric to determine your score. You must also justify your score.

Examples could be included below for reference. Make sure to use them as references and to
understand them before completing the task.

Input:
{input}

Output:
{output}

{grading_context_columns}

Metric definition:
Relevance encompasses the appropriateness, significance, and applicability of the output with respect to both the input and context. Scores should reflect the extent to which the output directly addresses the question provided in the input, given the provided context.

Grading rubric:
Relevance: Below are the details for different scores:
- Score 1: The output doesn't mention anything about the question or is completely irrelevant to the provided context.
- Score 2: The output provides some relevance to the question and is somehow related to the provided context.
- Score 3: The output mostly answers the question and is largely consistent with the provided context.
- Score 4: The output answers the question and is consistent with the provided context.
- Score 5: The output answers the question comprehensively using the provided context.

Examples:

Example Input:
How is MLflow related to Databricks?

Example Output:
Databricks is a data engineering and analytics platform designed to help organizations process and analyze large amounts of data. Databricks is a company specializing in big data and machine learning solutions.

Additional information used by the model:
key: context
value:
MLflow is an open-source platform for managing the end-to-end machine learning (ML) lifecycle. It was developed by Databricks, a company that specializes in big data and machine learning solutions. MLflow is designed to address the challenges that data scientists and machine learning engineers face when developing, training, and deploying machine learning models.

Example score: 2
Example justification: The output provides relevant information about Databricks, mentioning it as a company specializing in big data and machine learning solutions. However, it doesn't directly address how MLflow is related to Databricks, which is the specific question asked in the input. Therefore, the output is only somewhat related to the provided context.


Example Input:
How is MLflow related to Databricks?

Example Output:
MLflow is a product created by Databricks to enhance the efficiency of machine learning processes.

Additional information used by the model:
key: context
value:
MLflow is an open-source platform for managing the end-to-end machine learning (ML) lifecycle. It was developed by Databricks, a company that specializes in big data and machine learning solutions. MLflow is designed to address the challenges that data scientists and machine learning engineers face when developing, training, and deploying machine learning models.

Example score: 4
Example justification: The output provides a relevant and accurate statement about the relationship between MLflow and Databricks. While it doesn't provide extensive detail, it still offers a substantial and meaningful response. To achieve a score of 5, the response could be further improved by providing additional context or details about how MLflow specifically functions within the Databricks ecosystem.


You must return the following fields in your response in two lines, one below the other:
score: Your numerical score for the model's relevance based on the rubric
justification: Your reasoning about the model's relevance score

Do not add additional new lines. Do not add any other fields.
    )
[8]:
results = mlflow.evaluate(
    model,
    eval_df,
    model_type="question-answering",
    evaluators="default",
    predictions="result",
    extra_metrics=[faithfulness_metric, relevance_metric, mlflow.metrics.latency()],
    evaluator_config={
        "col_mapping": {
            "inputs": "questions",
            "context": "source_documents",
        }
    },
)
print(results.metrics)
2024/04/23 14:24:36 INFO mlflow.models.evaluation.base: Evaluating the model with the default evaluator.
2024/04/23 14:24:36 INFO mlflow.models.evaluation.default_evaluator: Computing model predictions.
/Users/benjamin.wilson/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
  warn_deprecated(
2024/04/23 14:24:46 INFO mlflow.models.evaluation.default_evaluator: Testing metrics on first row...
2024/04/23 14:24:50 WARNING mlflow.metrics.metric_definitions: Failed to load 'toxicity' metric (error: RuntimeError("Failed to import transformers.pipelines because of the following error (look up to see its traceback):\ncannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' (/Users/benjamin.wilson/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/urllib3/util/ssl_.py)")), skipping metric logging.
2024/04/23 14:24:50 WARNING mlflow.models.evaluation.default_evaluator: Did not log builtin metric 'toxicity' because it returned None.
2024/04/23 14:24:50 WARNING mlflow.models.evaluation.default_evaluator: Did not log builtin metric 'exact_match' because it returned None.
/Users/benjamin.wilson/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3464: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
/Users/benjamin.wilson/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/numpy/core/_methods.py:192: RuntimeWarning: invalid value encountered in scalar divide
  ret = ret.dtype.type(ret / rcount)
/Users/benjamin.wilson/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/numpy/core/fromnumeric.py:3747: RuntimeWarning: Degrees of freedom <= 0 for slice
  return _methods._var(a, axis=axis, dtype=dtype, out=out, ddof=ddof,
/Users/benjamin.wilson/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/numpy/core/_methods.py:226: RuntimeWarning: invalid value encountered in divide
  arrmean = um.true_divide(arrmean, div, out=arrmean,
/Users/benjamin.wilson/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/numpy/core/_methods.py:261: RuntimeWarning: invalid value encountered in scalar divide
  ret = ret.dtype.type(ret / rcount)
2024/04/23 14:24:50 WARNING mlflow.metrics.metric_definitions: Failed to load 'toxicity' metric (error: RuntimeError("Failed to import transformers.pipelines because of the following error (look up to see its traceback):\ncannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' (/Users/benjamin.wilson/miniconda3/envs/mlflow-dev-env/lib/python3.8/site-packages/urllib3/util/ssl_.py)")), skipping metric logging.
2024/04/23 14:24:50 WARNING mlflow.models.evaluation.default_evaluator: Did not log builtin metric 'toxicity' because it returned None.
2024/04/23 14:24:50 WARNING mlflow.models.evaluation.default_evaluator: Did not log builtin metric 'exact_match' because it returned None.
{'latency/mean': 2.329627513885498, 'latency/variance': 6.333362589765358, 'latency/p90': 5.018124270439149, 'flesch_kincaid_grade_level/v1/mean': 3.7, 'flesch_kincaid_grade_level/v1/variance': 42.96, 'flesch_kincaid_grade_level/v1/p90': 10.9, 'ari_grade_level/v1/mean': 5.25, 'ari_grade_level/v1/variance': 71.20249999999999, 'ari_grade_level/v1/p90': 14.8, 'faithfulness/v1/mean': nan, 'faithfulness/v1/variance': nan, 'relevance/v1/mean': nan, 'relevance/v1/variance': nan}
[ ]:
results.tables["eval_results_table"]