Power BI AI Agent with Strands, MCP, and Streamlit

Imagine giving your team a simple chat interface where they can ask, “What were the sales trends for product X last month?” and get an instant, accurate answer pulled directly from your Power BI datasets.

In this guide, we will build exactly that. We will combine three powerful technologies:


  1. Strands Agents: An open-source, model-driven framework for building AI agents.
  2. Power BI MCP Server: A bridge that allows AI to “speak” to Power BI.
  3. Streamlit: A Python library to build the web interface.

Note on the MCP Server: While Microsoft has released a preview “Remote MCP Server” for VS Code, it is currently tightly coupled to VS Code’s authentication. For a custom application like this, we will use the robust Community Power BI MCP Server (by sulaiman013), which supports Service Principal authentication and is designed for programmatic access.

(Image: Power BI Agent)

Prerequisites

Before we start, ensure you have:

  • Python 3.10+ installed.
  • Docker (optional, but recommended for running the MCP server).
  • Azure Service Principal: You need a Client ID, Client Secret, and Tenant ID with access to your Power BI Workspace.
    • Tip: Grant your Service Principal Workspace.Read.All and Dataset.Read.All permissions in Azure and add it as a “Viewer” or “Member” in your Power BI Workspace.
  • OpenAI API Key: For the underlying LLM (or you can configure Strands to use Anthropic/Bedrock).

Step 1: Set Up the Power BI MCP Server

First, we need a running server that speaks “MCP” and connects to Power BI. We will run this locally (or on a server) so our agent can connect to it.

Option A: Using Docker (Recommended)

Create a .env file with your credentials:

Bash

DEFAULT_TENANT_ID=your_tenant_id
DEFAULT_CLIENT_ID=your_client_id
DEFAULT_CLIENT_SECRET=your_client_secret
OPENAI_API_KEY=your_openai_key

Run the server container:

Bash

docker run -it --rm \
  --env-file .env \
  -p 8000:8000 \
  sulaiman013/powerbi-mcp \
  python src/server.py --host 0.0.0.0 --port 8000

Option B: Running Locally (Python)

  1. Clone the repo: git clone https://github.com/sulaiman013/powerbi-mcp
  2. Install dependencies: pip install -r requirements.txt
  3. Run the server: python src/server.py (Defaults to port 8000)

Your MCP server is now listening at http://localhost:8000/sse.
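Before wiring up the agent, you can sanity-check that the endpoint is reachable with a few lines of plain Python. This probe is not part of the MCP server or the tutorial code; it just confirms the SSE endpoint answers with an HTTP 200:

```python
import urllib.request
import urllib.error

def check_mcp_server(url: str = "http://localhost:8000/sse", timeout: float = 3.0) -> bool:
    """Return True if the SSE endpoint responds, False on connection errors."""
    req = urllib.request.Request(url, headers={"Accept": "text/event-stream"})
    try:
        # urlopen returns as soon as response headers arrive, so this works
        # even though an SSE stream never "finishes".
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    print("MCP server reachable:", check_mcp_server())
```

If this prints False, fix the server before moving on; nothing in the later steps will work without it.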

Step 2: Create the Strands Agent

Now we will write the Python code for our agent. This agent will connect to the MCP server, discover the “Power BI” tools (like execute_dax or get_tables), and use them to answer questions.
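To make that concrete, here is the kind of DAX the agent might generate and hand to an execute_dax-style tool. The 'Sales' and 'Date' table and column names are hypothetical, not from a real model:

```python
# A hypothetical DAX query the agent might construct for:
# "What was total revenue by month last year?"
dax_query = """
EVALUATE
SUMMARIZECOLUMNS(
    'Date'[MonthName],
    FILTER('Date', 'Date'[Year] = 2024),
    "Total Revenue", SUM('Sales'[Revenue])
)
"""
print(dax_query)
```

The agent's job is exactly this translation: natural-language question in, valid DAX out, then a summary of the rows the tool returns.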

1. Install Libraries

Bash

pip install 'strands-agents[openai]' strands-agents-tools streamlit mcp

2. Create agent_backend.py

This file handles the agent logic.

Python

import os
from strands import Agent
from strands.tools.mcp import MCPClient
from mcp.client.sse import sse_client
# Using OpenAI for this example, but Strands supports Bedrock, Anthropic, etc.
from strands.models.openai import OpenAIModel 

# Initialize the Model
model = OpenAIModel(
    client_args={"api_key": os.getenv("OPENAI_API_KEY")},
    model_id="gpt-4-turbo",
)

async def create_powerbi_agent():
    """
    Creates and returns a Strands Agent connected to the Power BI MCP Server.
    """
    # Connect to the running MCP Server (SSE Transport)
    # The URL must match where you are running the Docker container/script
    mcp_client = MCPClient(
        lambda: sse_client(url="http://localhost:8000/sse")
    )

    # Start the client and keep the connection open for the app's lifetime.
    # (Returning the agent from inside a `with mcp_client:` block would close
    # the connection on exit, and every later tool call would fail.)
    mcp_client.start()

    # Get the list of tools available (e.g., 'run_dax_query', 'get_metadata').
    # MCPClient exposes a synchronous helper for tool discovery.
    tools = mcp_client.list_tools_sync()

    # Define the system prompt to guide the agent's behavior
    system_prompt = """
    You are an expert Data Analyst Assistant.
    You have access to a Power BI semantic model via tools.

    Your goal is to answer user questions by:
    1. Inspecting the model schema if you don't know table names.
    2. Constructing valid DAX queries.
    3. Executing them and summarizing the results.

    Always double-check your DAX syntax before executing.
    """

    # Initialize the Strands Agent
    agent = Agent(
        model=model,
        tools=tools,
        system_prompt=system_prompt
    )

    return agent

async def process_user_query(agent, user_input):
    """
    Sends the user input to the agent and returns the text response.
    """
    # invoke_async is the agent's asynchronous entry point; str() on the
    # result yields the final assistant message text.
    result = await agent.invoke_async(user_input)
    return str(result)

Step 3: Build the Streamlit Frontend

Now, let’s build the UI. Create a file named app.py. We need to handle the asynchronous nature of the agent within Streamlit.

Python

import streamlit as st
import asyncio
from agent_backend import create_powerbi_agent, process_user_query

# Page Config
st.set_page_config(page_title="Power BI AI Agent", page_icon="📊")

st.title("📊 Chat with your Power BI Data")
st.markdown("Ask questions about your sales, revenue, or inventory.")

# Initialize Session State for Chat History
if "messages" not in st.session_state:
    st.session_state.messages = []

# Initialize Agent (Cached to avoid recreating on every rerun)
if "agent" not in st.session_state:
    with st.spinner("Connecting to Power BI MCP Server..."):
        try:
            # We use a helper loop to run the async setup
            loop = asyncio.new_event_loop()
            asyncio.set_event_loop(loop)
            st.session_state.agent = loop.run_until_complete(create_powerbi_agent())
            st.success("Connected to Power BI!")
        except Exception as e:
            st.error(f"Failed to connect to MCP Server. Is it running on port 8000? Error: {e}")
            st.stop()  # Halt the script so the chat UI doesn't run without an agent

# Display Chat History
for message in st.session_state.messages:
    with st.chat_message(message["role"]):
        st.markdown(message["content"])

# Handle User Input
if prompt := st.chat_input("What would you like to know?"):
    # 1. Display User Message
    with st.chat_message("user"):
        st.markdown(prompt)
    st.session_state.messages.append({"role": "user", "content": prompt})

    # 2. Get Agent Response
    with st.chat_message("assistant"):
        with st.spinner("Analyzing data..."):
            try:
                # Run the async agent processing
                loop = asyncio.new_event_loop()
                asyncio.set_event_loop(loop)
                response = loop.run_until_complete(
                    process_user_query(st.session_state.agent, prompt)
                )
                
                st.markdown(response)
                st.session_state.messages.append({"role": "assistant", "content": response})
            except Exception as e:
                st.error(f"An error occurred: {e}")
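The event-loop boilerplate above appears twice. If you prefer, it can be factored into a small helper; this is an optional refactor, shown standalone here:

```python
import asyncio

def run_async(coro):
    """Run a coroutine to completion from Streamlit's synchronous script.

    Streamlit reruns the script top to bottom with no running event loop,
    so creating a fresh loop per call is a simple, safe pattern.
    """
    loop = asyncio.new_event_loop()
    try:
        asyncio.set_event_loop(loop)
        return loop.run_until_complete(coro)
    finally:
        loop.close()

async def _demo():
    return "ok"

print(run_async(_demo()))  # → ok
```

With this helper, both call sites collapse to `run_async(create_powerbi_agent())` and `run_async(process_user_query(...))`.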

Step 4: Run the Application

You are now ready to launch.

  1. Start the MCP Server: Ensure your Docker container or local script from Step 1 is running.
  2. Start the Streamlit App: streamlit run app.py
  3. Interact: Open your browser to the provided URL (usually http://localhost:8501).
    • Try asking: “Show me the tables in this dataset.”
    • Try asking: “What was the total revenue for last year?”

Troubleshooting Tips

  • Connection Refused: Ensure the url in agent_backend.py (http://localhost:8000/sse) matches exactly where your MCP server is running. If using Docker, ensure port 8000 is mapped (-p 8000:8000).
  • Permissions Error: If the agent says it can’t read data, check your Azure Service Principal permissions. It must have access to the specific Workspace ID referenced in your query or default configuration.
  • Context Window: If your database schema is massive, passing the entire schema to the LLM might exceed token limits. The MCP server handles this by providing tools to “search” metadata rather than dumping it all at once.
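To illustrate the context-window point, here is a minimal sketch (not part of the MCP server, and the table names are made up) of trimming a schema description to a character budget before handing it to an LLM:

```python
def trim_schema(tables: dict[str, list[str]], max_chars: int = 500) -> str:
    """Render table -> columns metadata, stopping at a character budget.

    A crude stand-in for the metadata-search tools a real MCP server exposes.
    """
    lines: list[str] = []
    used = 0
    for table, columns in tables.items():
        line = f"{table}: {', '.join(columns)}"
        if used + len(line) > max_chars:
            lines.append("... (schema truncated)")
            break
        lines.append(line)
        used += len(line)
    return "\n".join(lines)

schema = {
    "Sales": ["OrderID", "Revenue", "Quantity"],
    "Date": ["Date", "Year", "MonthName"],
}
# Prints the Sales line, then a truncation marker for the rest.
print(trim_schema(schema, max_chars=40))
```

A real server searches metadata on demand instead of truncating, but the goal is the same: never ship the whole model into the prompt.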

You have effectively built a private “Copilot” for your data. The Strands framework manages the reasoning loop, MCP provides the standardized tool interface, and Streamlit delivers the user experience. This architecture is modular: you can easily swap the Power BI MCP server for a SQL Server one, or add a “Web Search” MCP server to give your agent internet access alongside your data.
