Prompt flow is a development tool that streamlines the creation of LLM applications by simplifying prototyping, experimentation, iteration, and deployment.

Most notably, prompt flow lets you author flows that chain native tools and prompts together and visualize them as a graph. This makes it easy for you and your team to create and test AI-powered capabilities, both in Azure Machine Learning studio and locally with VS Code.

With Azure Machine Learning prompt flow, you can carry an LLM application through prototyping, experimentation, evaluation, and deployment in a single workspace.

Prompt flow can also be used together with the LangChain Python library, a framework for developing applications powered by LLMs, agents, and dependency tools. This document also shows how to bring your LangChain development into prompt flow.
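
For a first idea of what that integration looks like, here is a minimal sketch of a LangChain call wrapped in a prompt flow Python tool. It assumes the `promptflow` and `langchain-openai` packages are installed; the connection secret name, model name, and prompt are illustrative assumptions, not prescribed values.

```python
# Minimal sketch: wrapping a LangChain call inside a prompt flow Python tool.
# The connection secret name and model name below are illustrative assumptions.
from promptflow import tool
from promptflow.connections import CustomConnection
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate


@tool
def answer_with_langchain(question: str, connection: CustomConnection) -> str:
    # Build the LangChain model from the key stored in the prompt flow connection.
    llm = ChatOpenAI(
        openai_api_key=connection.secrets["api_key"],  # assumed secret name
        model="gpt-3.5-turbo",
        temperature=0,
    )
    prompt = ChatPromptTemplate.from_template(
        "Answer the following question concisely:\n{question}"
    )
    # Pipe the prompt template into the model and run it on the flow input.
    chain = prompt | llm
    return chain.invoke({"question": question}).content
```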

Ideal Prompt flow process

prompt_flow_01.png

The lifecycle consists of the following stages:

  1. Initialization: Identify the business use case, collect sample data, learn to build a basic prompt, and develop a flow that extends its capabilities.

  2. Experimentation: Run the flow against sample data, evaluate the prompt's performance, and iterate on the flow if necessary. Continuously experiment until satisfied with the results.

  3. Evaluation & Refinement: Assess the flow's performance by running it against a larger dataset, evaluate the prompt's effectiveness, and refine as needed. Proceed to the next stage if the results meet the desired criteria.

  4. Production: Optimize the flow for efficiency and effectiveness, deploy it, monitor performance in a production environment, and gather usage data and feedback. Use this information to improve the flow and contribute to earlier stages for further iterations.

Creating a simple app

1. Set up a connection ...

Prompt flow provides several prebuilt connection types, including Azure OpenAI, OpenAI, and Azure Content Safety. These connections enable seamless integration with the corresponding resources from the built-in tools, and they help securely store and manage the API keys and other sensitive credentials required to call LLMs (large language models) and other external services.

... with OpenAI

Create an OpenAI key
  1. Sign in to your OpenAI account and go to the API keys section
  2. Click Create a new secret key and save it for later
Create the connection
  1. Connect to https://ml.azure.com
  2. Create a workspace, select it and enter the Prompt flow section
  3. Select one of the provided connection types (OpenAI in this case)
  4. Provide a connection name and the API key you copied earlier, then click Save (a scripted alternative is sketched below)
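
If you prefer scripting to the studio UI (for example when authoring flows locally in VS Code), a connection can also be created with the prompt flow SDK. A minimal sketch, assuming the `promptflow` Python package; the connection name and key are placeholders.

```python
# Minimal sketch: creating an OpenAI connection with the local prompt flow SDK
# instead of the studio UI. The connection name and key are placeholders.
from promptflow import PFClient
from promptflow.entities import OpenAIConnection

pf = PFClient()

connection = OpenAIConnection(
    name="open_ai_connection",        # referenced later by your LLM nodes
    api_key="<your-openai-api-key>",
)
pf.connections.create_or_update(connection)

# Verify the connection was registered (the secret value stays scrubbed).
print(pf.connections.get("open_ai_connection"))
```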

... with Azure OpenAI

Create an Azure OpenAI service
  1. Sign in to the Azure portal and go to the Azure OpenAI service
  2. Click the Create Azure OpenAI button, fill in the required fields, and deploy it
Create an Azure OpenAI Chat instance
  1. Open the chat playground
  2. Click Create a new deployment
  3. Select a model, give the deployment a name, and create the resource
Create the connection
  1. Connect to https://ml.azure.com
  2. Create a workspace, select it and enter the Prompt flow section
  3. Select one of the provided connection types (Azure OpenAI in this case)
  4. A right-hand panel appears. Select the subscription and resource name, then provide the connection name, API key, API base, API type, and API version (open the chat playground and click View code on the selected chat instance to find these values) and click Save (you can sanity-check these values with the snippet below first)
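
A rough sketch for that sanity check, assuming the `openai` Python package (v1): the key, endpoint, API version, and deployment name are placeholders to replace with your own values.

```python
# Minimal sketch: verifying the Azure OpenAI key, endpoint, API version, and
# deployment name before entering them in the prompt flow connection form.
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="<your-azure-openai-key>",
    azure_endpoint="https://<your-resource-name>.openai.azure.com/",
    api_version="2024-02-01",  # assumed; use the version shown in "View code"
)

response = client.chat.completions.create(
    model="<your-chat-deployment-name>",  # the deployment created above
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.choices[0].message.content)
```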

2. Create and develop your prompt flow

  1. In the Flows tab of the prompt flow home page, select Create to create your first flow. You can either create a flow from a built-in template (by type) or clone one of the samples in the gallery (the approach we'll use here)
  2. Clone one of the provided samples (e.g. Web Classification); you'll then land on the flow authoring page
  3. Click Select runtime (top right), then Select with advanced settings to choose a compute instance type (you'll see its characteristics and price), or simply click Start to use the default instance settings

⚠️ Be careful: the runtime isn't free, and its cost comes on top of the LLM model calls (and the deployment endpoint, if any).

prompt_flow_02.png

The left side of the authoring page is the main working area, where you author the flow: add a new node, edit a prompt, select the flow input data, and so on.

The top right corner shows the folder structure of the flow. Each flow has a folder that contains a flow.dag.yaml file, source code files, and system folders. You can export or import a flow easily for testing, deployment, or collaborative purposes.

The bottom-right corner shows the graph view, which is for visualization only; you can zoom in, zoom out, auto-layout, and so on.

  1. Set up LLM nodes: for each LLM node, select a connection so the node can use your LLM API keys.
  2. Run single nodes: to test and debug a single node, select the Run icon on the node in the flattened view. The run status is shown at the top; once the run completes, check the output in the node's output section.
  3. Run the whole flow: to test and debug the whole flow, select the Run button at the top right (you can change the flow inputs to see how the flow behaves with different data). A local equivalent is sketched below.
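
While iterating, the same flow can also be tested outside the studio with the local SDK. A minimal sketch, assuming the `promptflow` package and a local copy of the cloned Web Classification flow folder; the path and input name are assumptions if your flow differs.

```python
# Minimal sketch: testing a flow locally with the prompt flow SDK. The flow
# path and input name mirror the Web Classification sample and are assumptions.
from promptflow import PFClient

pf = PFClient()

# Run the whole flow once with a single test input.
result = pf.test(
    flow="./web-classification",
    inputs={"url": "https://learn.microsoft.com/azure/machine-learning"},
)
print(result)

# A single node can be tested the same way by passing a `node="<node_name>"`
# argument together with the inputs that node expects.
```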

3. Test and evaluation

After the flow runs successfully with a single row of data, you might want to test whether it performs well on a larger dataset: run a bulk test, choose one or more evaluation methods, and then check the metrics.

Evaluate

  1. Select the Evaluate button next to the Run button; a right-hand panel pops up. It's a wizard that guides you through submitting a batch run and (optionally) selecting an evaluation method.
  2. Set a batch run name and description, select a runtime, then select Add new data to upload your test dataset (csv, tsv, and jsonl files are supported for now).
  3. Set up the input mapping if needed. You can map each flow input to a specific column in your dataset, so any column can serve as an input even if the column names don't match.
  4. Select one or more evaluation methods (e.g. Classification Accuracy Evaluation for a classification scenario). Evaluation methods are themselves flows that use Python, an LLM, or other tools to calculate metrics such as accuracy or a relevance score.
  5. Map the correct column of the evaluation dataset to the groundtruth parameter and the appropriate flow output to the prediction parameter.
  6. Select Review + submit to submit the batch run and the selected evaluation (the same steps can be scripted, as sketched below).
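
For reference, the batch run and evaluation submitted by this wizard can also be scripted with the local prompt flow SDK. In the sketch below, the flow paths, data file, and column names are placeholders standing in for your own assets.

```python
# Minimal sketch: submitting a batch run over a dataset and then an evaluation
# run against its outputs. Paths, column names, and the evaluation flow are
# placeholders for your own assets.
from promptflow import PFClient

pf = PFClient()

# Batch run: map each flow input to a column of the dataset.
base_run = pf.run(
    flow="./web-classification",
    data="./data/test_data.jsonl",
    column_mapping={"url": "${data.url}"},
)

# Evaluation run: the evaluation flow consumes the dataset's ground truth and
# the batch run's predictions to compute metrics such as accuracy.
eval_run = pf.run(
    flow="./eval-classification-accuracy",
    data="./data/test_data.jsonl",
    run=base_run,
    column_mapping={
        "groundtruth": "${data.answer}",          # assumed dataset column
        "prediction": "${run.outputs.category}",  # assumed flow output
    },
)
```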

Check results

  1. Select View batch run to navigate to the batch run list of this flow
  2. Click View latest batch run outputs (or View batch run if you want to see the number of tokens used)
    prompt_flow_03.png
  3. Click Metrics to see the results of the selected metrics
    prompt_flow_04.png
  4. Select Export to download the output table for further investigation (the same results can be retrieved from the SDK, as sketched below)
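
If you submitted the runs from code, the outputs and metrics can be fetched there as well. A short sketch, continuing from the `base_run` and `eval_run` objects created above:

```python
# Minimal sketch: inspecting a batch/evaluation run from the SDK, mirroring the
# "View outputs", "Metrics", and "Export" actions in the studio.
details = pf.get_details(base_run)   # pandas DataFrame of inputs and outputs
metrics = pf.get_metrics(eval_run)   # e.g. {"accuracy": ...}

print(metrics)
details.to_csv("batch_run_outputs.csv", index=False)
```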

4. Deployment

After you build a flow and test it properly, you might want to deploy it as an endpoint so that you can invoke the endpoint for real-time inference.

Configure the endpoint

  1. (Optional) To deploy a specific version, select its batch run link in the View batch runs list
  2. Select Deploy (top left). A wizard pops up to allow you to configure the endpoint.
  3. Specify an endpoint name and a deployment name, select a virtual machine, set the connections, and adjust the remaining settings (the defaults are fine)
  4. Select Review + create to start the deployment

Test the endpoint

It takes several minutes to deploy the endpoint.

  1. Click Assets / Endpoints and select the Real-time endpoints tab
  2. Click your endpoint name and check the endpoint attributes (left) and the deployment attributes (bottom right). Wait until both provisioning states are Succeeded
  3. Click the Test button that appeared next to the Details button
  4. Put the test inputs in the input box, and select Test
  5. Then you'll see the result predicted by your endpoint (to call the endpoint from code instead, see the sketch below)

prompt_flow_05.png
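
You can also invoke the endpoint from code rather than the Test tab. A minimal sketch over plain REST: the scoring URL and key come from the endpoint's Consume tab, the payload field mirrors the Web Classification flow's `url` input, and all values below are placeholders.

```python
# Minimal sketch: invoking the deployed real-time endpoint over REST.
import requests

SCORING_URL = "https://<your-endpoint-name>.<region>.inference.ml.azure.com/score"  # Consume tab
API_KEY = "<your-endpoint-key>"  # Consume tab

# The request body carries the flow inputs; Web Classification expects a `url`.
payload = {"url": "https://learn.microsoft.com/azure/machine-learning"}
headers = {"Authorization": f"Bearer {API_KEY}"}

response = requests.post(SCORING_URL, headers=headers, json=payload)
response.raise_for_status()
print(response.json())  # the flow outputs, e.g. the predicted category
```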

5. Clean up resources

Stop compute instance

If you're not going to use it now, stop the compute instance:

  1. In the studio, in the left navigation area, select Compute
  2. In the top tabs, select Compute instances
  3. Select the compute instance in the list
  4. On the top toolbar, select Stop

prompt_flow_07.png

Delete endpoint

If you don't need the endpoint anymore:

  1. In the studio, in the left navigation area, select Endpoints
  2. In the top tabs, select Details then Delete
  3. Confirm the deletion

prompt_flow_06.png

Delete all resources

If you don't plan to use any of the resources that you created, delete them so you don't incur any charges:

  1. In the Azure portal, select Resource groups
  2. From the list, select the resource group that you created.
  3. Select Delete resource group.

prompt_flow_08.png