Azure Databricks provides system.ai.python_exec, a built-in Unity Catalog function that lets AI agents dynamically run Python code written by the agent, provided by a user, or retrieved from a codebase. It is available by default and can be used directly in a SQL query:
SELECT system.ai.python_exec("""
import random
numbers = [random.random() for _ in range(10)]
print(numbers)
""")
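The function executes the snippet in a sandbox and returns what the snippet prints to stdout. A rough local sketch of that contract (the `run_snippet` helper is illustrative only and is not part of the Databricks API; unlike `python_exec`, plain `exec` provides no sandboxing):

```python
import contextlib
import io

def run_snippet(code: str) -> str:
    """Hypothetical stand-in for python_exec: run code, return captured stdout."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        exec(code, {})  # NOT sandboxed; python_exec runs in an isolated environment
    return buffer.getvalue()

output = run_snippet("""
fib = [0, 1]
for _ in range(8):
    fib.append(fib[-1] + fib[-2])
print(fib)
""")
print(output)  # [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```

This stdout-based contract is why the SQL example above wraps its result in `print()`: code that only computes a value without printing it returns an empty string.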
To learn more about agent tools, see AI agent tools.
Add the code interpreter to your agent
To add python_exec to your agent, connect to the managed MCP server for the system.ai Unity Catalog schema. The code interpreter is available as a pre-configured MCP tool at https://<workspace-hostname>/api/2.0/mcp/functions/system/ai/python_exec.
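The endpoint URL follows a fixed pattern: the workspace host followed by `/api/2.0/mcp/functions/<catalog>/<schema>/<function>`. A small sketch of assembling it (the `build_mcp_function_url` helper and the example hostname are illustrative, not part of any SDK):

```python
def build_mcp_function_url(host: str, catalog: str, schema: str, function_name: str) -> str:
    """Build the managed MCP endpoint URL for a Unity Catalog function."""
    return f"{host.rstrip('/')}/api/2.0/mcp/functions/{catalog}/{schema}/{function_name}"

url = build_mcp_function_url(
    "https://adb-1234567890123456.7.azuredatabricks.net",  # example hostname
    "system", "ai", "python_exec",
)
print(url)
```

The examples below either build this URL explicitly from `workspace_client.config.host` or let a helper such as `McpServer.from_uc_function` construct it from the catalog, schema, and function identifiers.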
OpenAI Agents SDK (Apps)
from agents import Agent, Runner
from databricks.sdk import WorkspaceClient
from databricks_openai.agents import McpServer
# WorkspaceClient picks up credentials from the environment (Databricks Apps, notebook, CLI)
workspace_client = WorkspaceClient()
# The context manager manages the MCP connection lifecycle and ensures cleanup on exit.
# from_uc_function constructs the endpoint URL from UC identifiers and wires in auth
# from workspace_client, avoiding hardcoded URLs and manual token handling.
async with McpServer.from_uc_function(
    catalog="system",
    schema="ai",
    function_name="python_exec",
    workspace_client=workspace_client,
    name="code-interpreter",
) as code_interpreter:
    agent = Agent(
        name="Coding agent",
        instructions="You are a helpful coding assistant. Use the python_exec tool to run code.",
        model="databricks-claude-sonnet-4-5",
        mcp_servers=[code_interpreter],
    )
    result = await Runner.run(agent, "Calculate the first 10 Fibonacci numbers")
    print(result.final_output)
Grant the app access to the function in databricks.yml:
resources:
  apps:
    my_agent_app:
      resources:
        - name: 'python_exec'
          uc_securable:
            securable_full_name: 'system.ai.python_exec'
            securable_type: 'FUNCTION'
            permission: 'EXECUTE'
LangGraph (Apps)
from databricks.sdk import WorkspaceClient
from databricks_langchain import ChatDatabricks, DatabricksMCPServer, DatabricksMultiServerMCPClient
from langgraph.prebuilt import create_react_agent
workspace_client = WorkspaceClient()
host = workspace_client.config.host
# DatabricksMultiServerMCPClient provides a unified get_tools() interface across
# multiple MCP servers, making it easy to add more servers later without refactoring.
mcp_client = DatabricksMultiServerMCPClient([
    DatabricksMCPServer(
        name="code-interpreter",
        url=f"{host}/api/2.0/mcp/functions/system/ai/python_exec",
        workspace_client=workspace_client,
    ),
])
async with mcp_client:
    tools = await mcp_client.get_tools()
    agent = create_react_agent(
        ChatDatabricks(endpoint="databricks-claude-sonnet-4-5"),
        tools=tools,
    )
    result = await agent.ainvoke(
        {"messages": [{"role": "user", "content": "Calculate the first 10 Fibonacci numbers"}]}
    )
    # LangGraph returns the full conversation history; the last message is the agent's final response
    print(result["messages"][-1].content)
Grant the app access to the function in databricks.yml:
resources:
  apps:
    my_agent_app:
      resources:
        - name: 'python_exec'
          uc_securable:
            securable_full_name: 'system.ai.python_exec'
            securable_type: 'FUNCTION'
            permission: 'EXECUTE'
Model Serving
from databricks.sdk import WorkspaceClient
from databricks_mcp import DatabricksMCPClient
import mlflow
workspace_client = WorkspaceClient()
host = workspace_client.config.host
mcp_client = DatabricksMCPClient(
    server_url=f"{host}/api/2.0/mcp/functions/system/ai/python_exec",
    workspace_client=workspace_client,
)
tools = mcp_client.list_tools()
# get_databricks_resources() extracts the UC permissions the agent needs at runtime.
# Passing these to log_model lets Model Serving grant access automatically at deployment,
# without requiring manual permission configuration.
# `my_agent` is your agent implementation (an mlflow.pyfunc-compatible model).
mlflow.pyfunc.log_model(
    "agent",
    python_model=my_agent,
    resources=mcp_client.get_databricks_resources(),
)
To deploy the agent, see Deploy an agent for generative AI applications (Model Serving). For details on logging agents with MCP resources, see Use Databricks managed MCP servers.
Next steps
- AI agent tools - Explore other tool types for your agent.
- Create AI agent tools using Unity Catalog functions - Create custom tools using Unity Catalog functions.
- Use Databricks managed MCP servers - Learn more about managed MCP servers on Azure Databricks.