Microsoft Fabric Maps provides geospatial visualization and analysis to deliver actionable insights from real-time and historical spatial data.
In this tutorial, an electric utility field dispatcher uses Microsoft Fabric Maps to create and manage repair work orders when outages or asset faults are reported. The scenario focuses on locating affected customers, visualizing active work orders in real time, and dispatching crews efficiently for service restoration.
This tutorial demonstrates how customer locations are mapped, live work orders appear on the map, and an optimal route is calculated using the Azure Maps Route Directions API. The tutorial concludes with an optimized route shown on the map.
Fabric Maps runs within Fabric Real‑Time Intelligence, ingesting streaming telemetry using Eventstream and Eventhouse for real‑time monitoring. Work order completions and operational outcomes are stored in OneLake, where they can be used for route optimization and analytics that are displayed on the map.
In this tutorial, you will:
- Create a lakehouse and upload sample work order data.
- Set up an eventstream to write work order data to an eventhouse.
- Create a Kusto function to extract customer coordinates from the imported work order data.
- Create a map and add the function as a map layer.
- Compute an optimal route using the Azure Maps Route Directions API.
- Add the optimized route to the map as a layer.
- Configure map and layer settings.
Prerequisites
Before starting this tutorial, it's helpful to review the Real-Time Intelligence tutorials to become familiar with the core concepts and workflows.
- If you don't have an Azure subscription, create a free account before you begin.
- An Azure Maps subscription key.
- A Fabric account. For more information on Microsoft Fabric, see What is Microsoft Fabric?.
- Permission to create Eventstream, Eventhouse (KQL database), Lakehouse, Notebook, and Map items. For more information, see About tenant settings.
- A workspace with a Microsoft Fabric-enabled capacity. For more information on creating a workspace, see Create a workspace.
- A basic understanding of Fabric Lakehouse, a data repository for storing, managing, and analyzing structured and unstructured data in a single location. For information on creating a lakehouse, see Create a lakehouse in Microsoft Fabric.
- A basic understanding of Fabric Eventhouse, used to ingest, process, and analyze data in near real-time. For information on creating an eventhouse, see Create an eventhouse.
- A basic understanding of Kusto user-defined functions.
- A basic understanding of How to use Microsoft Fabric notebooks.
Create a lakehouse and upload the sample work order data
To simulate a real-time streaming source, the notebook in the following steps uses sample data uploaded to a lakehouse. In production, this data would be streamed rather than static.
Create the work order data file
The work order data file contains sample work order records used in this tutorial to simulate a real‑time streaming source. After creating the file, you'll import it into a lakehouse in the next step.
Copy and paste the following content into a text file, then save it as WorkorderLocations.csv. You'll use this file in the next step.
```csv
WorkorderID,Latitude,Longitude
100,48.22610712,16.32977412
101,48.23519063,16.37364699
102,48.19785896,16.38669028
103,48.18125837,16.37068261
107,48.15151126,16.41766590
108,48.20290349,16.32492121
104,48.23400591,16.4563533
105,48.18145603,16.40506946
106,48.16366378,16.36001083
```
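As a quick sanity check before uploading, you can verify that every record in a file with this schema carries coordinates in valid ranges. The following is a minimal sketch using only the Python standard library; the `SAMPLE` string shows just the first two data rows for illustration:

```python
import csv
import io

# First rows of WorkorderLocations.csv, shown for illustration
SAMPLE = """WorkorderID,Latitude,Longitude
100,48.22610712,16.32977412
101,48.23519063,16.37364699
"""

def validate_workorders(text):
    """Parse the CSV text and check each row's coordinates; return the row count."""
    rows = list(csv.DictReader(io.StringIO(text)))
    for r in rows:
        lat, lon = float(r["Latitude"]), float(r["Longitude"])
        assert -90 <= lat <= 90 and -180 <= lon <= 180, r["WorkorderID"]
    return len(rows)

print(validate_workorders(SAMPLE))
```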
Create a lakehouse and import the work order data file
Create a new lakehouse for incoming work order data and import the previously created work order location file.
- From your workspace, select New item, enter lakehouse in the search box, then select Lakehouse.
- Enter WorkorderLocationsLakehouse as the name and select Create.
- In the new lakehouse, select Upload files and upload the WorkorderLocations.csv file created in the previous step.
- In the new lakehouse, select the Explorer pane on the left side of the screen.
- In the Files section of the Explorer, select WorkorderLocations.csv to view the file you uploaded.
- In the View settings, select First row as header.
- (Optional) In the view drop-down list, select Table view.
Create an eventstream and write data to an eventhouse
In this section, you design an eventstream flow using a custom endpoint and send data using a notebook to simulate real‑time streaming.
Microsoft Fabric Eventstream is a real-time data streaming service that enables users to ingest, process, and route event data within the Microsoft Fabric ecosystem. It provides a no-code experience for building event-driven workflows, allowing seamless integration of real-time data from various sources and routing it to multiple destinations. For more information on supported data sources or how to connect to a custom endpoint, see the Overview of Microsoft Fabric eventstreams.
By ingesting eventstream data into an eventhouse, streaming events become available for processing with Kusto, where they can be transformed and analyzed in real time using tables or functions. For more information, see Eventhouse overview.
Create an eventstream and eventhouse
From your workspace, select New item, and enter eventstream in the search box.
Select Eventstream.
In the New Eventstream dialog, enter WorkordersEventstream in the Name field, then select Create.
In the Design a flow to ingest, transform, and route streaming events screen, select Use custom endpoint.
In the custom endpoint Add source dialog, select Add.
The eventstream is now created. Next, add an Eventhouse as the destination.
In the WorkordersEventstream node of the eventstream designer, select Eventhouse from the Transform events or add destination drop-down list.
The Eventhouse destination configuration pane appears on the right side of the screen. Fill out the details requested as follows, then select Save:
- Data ingestion mode: Set to Event processing before ingestion.
- Destination name: Set to WorkordersEventhouse.
- Workspace: A dropdown showing the name of your workspace.
- Eventhouse: Select Create new and create an eventhouse named WorkordersEventhouse.
- KQL Database: Select WorkordersEventhouse.
- KQL Destination table: Select the Create new link and create a new table named Workorders.
- Input data format: Select Json.
- Activate ingestion after adding the data source: check the checkbox.
Once the eventhouse is added as a destination, select Publish to publish your new eventstream.
Get required SAS key authentication keys
Your notebook code needs the Event hub name and Connection string-primary key values from the SAS Key Authentication section.
Select the custom endpoint source tile you just added.
In the Details pane, select SAS Key Authentication.
Copy the following two values and save them for use in your notebook code:
- Event hub name: Used for the EVENT_HUB_NAME variable.
- Connection string-primary key: Used for the CONNECTION_STR variable.
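Event Hubs-compatible connection strings use a standard semicolon-delimited Key=Value format. If you want to double-check the values you copied, the following sketch splits a connection string into its parts; the example string here is a hypothetical placeholder, and when an EntityPath segment is present it typically matches the event hub name:

```python
def parse_eventhub_conn_str(conn_str):
    """Split an Event Hubs connection string into a dict of its Key=Value parts."""
    return dict(
        part.split("=", 1)
        for part in conn_str.strip().rstrip(";").split(";")
        if part
    )

# Hypothetical example; use the string copied from the SAS Key Authentication pane
example = ("Endpoint=sb://example.servicebus.windows.net/;"
           "SharedAccessKeyName=key_1;SharedAccessKey=abc123;EntityPath=es_hub")
parts = parse_eventhub_conn_str(example)
print(parts["EntityPath"])
```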
Simulate real‑time ingestion using a notebook
In this section, you create a notebook connected to the lakehouse you created earlier, then use the provided code to read the CSV data and send events to the eventstream. This simulates real‑time data ingestion; for demos, you can run the notebook manually or schedule it to run periodically.
Create a notebook in your Fabric workspace
Create a notebook with code to import the work order location file from your lakehouse into the eventstream you created in the previous section. This simulates a real-time streaming source, which in a production environment would be streamed rather than static.
From your workspace, select New item, and enter notebook in the search box.
Select Notebook.
In the New Notebook dialog, enter WorkorderLocations in the Name field, then select Create.
To connect your notebook to the lakehouse, select From OneLake catalog from the Add data items dropdown list.
Select WorkorderLocationsLakehouse from the OneLake catalog and select the Connect button. This is the lakehouse you created previously.
After creating the notebook and connecting it to your lakehouse, paste the following code into the first cell and run it to install the Azure Event Hub SDK:
```python
# Install the Azure Event Hubs SDK (only needed once per environment)
%pip install azure-eventhub
```

Select + Code to create a new cell in the notebook.
Select the new cell and enter the following code into it:
```python
from azure.eventhub import EventHubProducerClient, EventData
import json
import time

# Replace with your actual connection string and event hub name
CONNECTION_STR = ""   # Connection string-primary key
EVENT_HUB_NAME = ""   # Event hub name

producer = EventHubProducerClient.from_connection_string(
    conn_str=CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
)

# Read the uploaded CSV from the attached lakehouse
df = spark.read.csv("Files/WorkorderLocations.csv", header=True, inferSchema=True)
pdf = df.toPandas()

# Keep the producer open for the whole loop, then close it once
with producer:
    for _, row in pdf.iterrows():
        # Convert row to dictionary
        row_dict = row.to_dict()

        # Truncate coordinates to 5 decimal digits
        if 'Latitude' in row_dict:
            row_dict['Latitude'] = round(float(row_dict['Latitude']), 5)
        if 'Longitude' in row_dict:
            row_dict['Longitude'] = round(float(row_dict['Longitude']), 5)

        # Serialize to JSON and send to the event hub
        payload = json.dumps(row_dict)
        producer.send_batch([EventData(payload)])

        # Wait 100 ms between events to simulate streaming
        time.sleep(0.1)
```

Add the values for the CONNECTION_STR and EVENT_HUB_NAME variables obtained in the previous section, Get required SAS key authentication keys.
Run the notebook code. This creates the Workorders table in the KQL database in the WorkordersEventhouse eventhouse.
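For reference, each event the notebook sends is a single JSON object per work order. This standalone sketch, using one sample row from the CSV, shows the payload shape and the 5-digit coordinate rounding the notebook applies:

```python
import json

# Sample row matching the CSV schema (values taken from the sample data)
row = {"WorkorderID": 100, "Latitude": 48.22610712, "Longitude": 16.32977412}

# Round coordinates to 5 decimal digits, as the notebook does before sending
row["Latitude"] = round(row["Latitude"], 5)
row["Longitude"] = round(row["Longitude"], 5)

# This JSON string is the event body sent to the eventstream
payload = json.dumps(row)
print(payload)
```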
Create a Kusto function and add it as a map layer
In this section, you create a Kusto function that retrieves current work order location data from the Workorders table in your eventhouse, and then use that function as a data source for a Fabric Maps map. The function enables the map to display active work orders as a layer, providing a visual view of jobs that need to be planned and assigned to field crews.
Create a Kusto function
From your eventhouse (KQL database):
Open the KQL database associated with your eventhouse.
Select Functions then New function.
This opens a query template that, when run, creates a new function named WorkordersFunction.
Enter the following:
```kusto
.create-or-alter function WorkordersFunction() {
    Workorders
    | project Latitude, Longitude, WorkorderID
}
```

Run the query.
In the Functions folder, select WorkordersFunction, then Preview results to verify that it returns work order records with valid location fields.
This function serves as a reusable data source for a Fabric Maps map data layer, which is demonstrated in the next section.
Create a map and add the function as a layer
In this section, you create a Fabric Maps map and use the previously created KQL function as a data layer. The map is configured with a refresh interval so that streaming work order data updates automatically, providing a near real‑time spatial view of active work orders. You then rename the layer and adjust its settings to control how the data is displayed on the map. This live geospatial context helps dispatchers monitor field activity, assess demand across service areas, and make more informed routing and assignment decisions.
Create a new map
- From your workspace, select New item.
- In the New item panel, enter map into the search field, and select Map.
- In the New Map dialog, enter WorkordersMap in the Name field and select Create.
Add eventhouse to map
In the Explorer pane, select Fabric items then the Add button.
Select KQL database from the menu that appears when selecting the Add button.
From the OneLake catalog, select the eventhouse WorkordersEventhouse that you created previously, then select Add.
Tip
If you get an error such as The KQL database has a protected label that restricts access. Please contact your database owner for assistance, check the sensitivity label on your KQL database, as it might be restricting access. For more information, see Apply sensitivity labels to Fabric items.
Show function on map
In the Explorer pane in your new map, select the eventhouse WorkordersEventhouse that you added in the previous step.
Navigate to the KQL function WorkordersFunction, and select the ellipsis (...) to show the popup menu.
Select Show on map from the popup menu.
The View Eventhouse data on map dialog appears with Preview data selected. No changes are required. Verify the data looks correct, then select Next.
In the Set geometry and data refresh interval step, set the fields as follows, then select Next:
Data layer Name: WorkordersFunction
Geometry column location: Latitude and longitude data located in separate columns
Latitude column: Latitude
Longitude column: Longitude
Data refresh interval: 5 minutes
In the Review and add to map step, review settings and select Add to map.
The function results are now displayed in the updated map.
Generate an optimized multi‑stop route with the Azure Maps Route Directions API
In this section, you create a new notebook that retrieves work order coordinates from the KQL database and calls the Azure Maps Route Directions REST API. You enable the service's multi‑stop optimization capability to determine the most efficient order for visiting each location and return the route geometry in that optimized sequence. This output is used later to visualize a recommended technician route on the map.
To complete this section, you need an Azure account with an Azure Maps account and subscription key. If you don't have an Azure account, create a free account before you begin. For more information on creating an Azure Maps account, see Create an Azure Maps account. For more information on getting an Azure Maps subscription key, see Get the subscription key for your account in the Azure Maps quickstart.
Create a notebook in your Fabric workspace that retrieves the optimal route
From within your workspace, open the eventhouse WorkordersEventhouse you created previously.
In the left navigation panel under KQL databases, select WorkordersEventhouse.
The top menu bar should now display an option for Notebook. Select it to create a new notebook.
In the new notebook, save the value for the kustoUri variable. You use this value in the new notebook code you create in step 6.
Connect your notebook to WorkorderLocationsLakehouse by selecting From OneLake catalog from the Add data items dropdown list.
Note
When you create a Lakehouse in Microsoft Fabric, a SQL analytics endpoint with the same name is created automatically.
Both items appear in the workspace, but they serve different purposes:
- The Lakehouse is used for notebooks, Spark processing, and data ingestion.
- The SQL analytics endpoint is a read-only T-SQL query surface over the Lakehouse data.
When attaching or creating a notebook, make sure you select the Lakehouse, not the SQL analytics endpoint. If WorkorderLocationsLakehouse appears twice in the OneLake catalog, filter by Lakehouse.
Once your new notebook is created and connected to your lakehouse, enter the following code into the second cell of your notebook, replacing the default code, then add the variable values saved in the previous step:

```python
import os, json, requests
from pyspark.sql.functions import col

# ---- Configuration ----
AZMAPS_SUBSCRIPTION_KEY = os.environ.get(
    'AZMAPS_SUBSCRIPTION_KEY', '<Your Azure Maps subscription key>'
)
API_VERSION = '2025-01-01'
BASE_URL = 'https://atlas.microsoft.com'

# KQL query that invokes the stored function
kustoQuery = """WorkordersFunction()"""
# Your Kusto URI
kustoUri = ""
# Your KQL database name
database = "WorkordersEventhouse"
# The access credentials
accessToken = mssparkutils.credentials.getToken(kustoUri)

# Write transformed response to a new file so the raw output is preserved
OUTPUT_GEOJSON_PATH_TRANSFORMED = (
    'Files/OptimizedRoute.geojson'  # GeoJSON output file
)

# ---- Read work orders from the KQL database ----
stores_df = spark.read\
    .format("com.microsoft.kusto.spark.synapse.datasource")\
    .option("accessToken", accessToken)\
    .option("kustoCluster", kustoUri)\
    .option("kustoDatabase", database)\
    .option("kustoQuery", kustoQuery).load()\
    .select(
        col("WorkorderID").alias("workorder_id"),
        col("Latitude").alias("lat"),
        col("Longitude").alias("lon")
    )

# Ordered waypoints: origin first, then the rest by workorder_id
# (API will re-order when optimizeWaypointOrder=True)
stores_pd = stores_df.orderBy('workorder_id').toPandas()
waypoints_lonlat = [[float(r['lon']), float(r['lat'])]
                    for _, r in stores_pd.iterrows()]

# ---- Build Directions request body (GeoJSON) ----
features = []
for idx, (lon, lat) in enumerate(waypoints_lonlat):
    features.append({
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {"pointIndex": idx, "pointType": "waypoint"}
    })

dir_body = {
    "type": "FeatureCollection",
    "features": features,
    "optimizeRoute": "fastestWithTraffic",
    "routeOutputOptions": ["routePath"],  # ensures route path geometry in response
    "travelMode": "truck",
    "optimizeWaypointOrder": True
}

# ---- Call Azure Maps Directions (POST) ----
url = f"{BASE_URL}/route/directions"
params = {"api-version": API_VERSION}
headers = {
    "Accept": "application/geo+json",
    "Content-Type": "application/geo+json",
    "subscription-key": AZMAPS_SUBSCRIPTION_KEY
}
resp = requests.post(url, params=params, data=json.dumps(dir_body), headers=headers)
resp.raise_for_status()
resp_json = resp.json()  # exact payload as returned by the API

# ---- Transform: move order.optimizedIndex -> properties.optimizedIndex
# for all Waypoint features to use as a data label on the map ----
for feat in resp_json.get("features", []):
    props = feat.get("properties") or {}
    if props.get("type") == "Waypoint":
        order = props.get("order") or {}
        opt_idx = order.pop("optimizedIndex", None)
        if opt_idx is not None:
            props["optimizedIndex"] = opt_idx + 1
        # reassign possibly-updated order (still contains inputIndex if present)
        props["order"] = order
        feat["properties"] = props

# ---- Write transformed GeoJSON ----
from notebookutils import mssparkutils
mssparkutils.fs.put(OUTPUT_GEOJSON_PATH_TRANSFORMED, json.dumps(resp_json), True)
print(f"Transformed Directions GeoJSON written to {OUTPUT_GEOJSON_PATH_TRANSFORMED}")
```

Replace "<Your Azure Maps subscription key>" in the AZMAPS_SUBSCRIPTION_KEY variable with your Azure Maps subscription key.
Important
This example hardcodes the Azure Maps subscription key for simplicity. Do not hardcode secrets in production environments. Store and manage secrets securely by using Azure Key Vault and reference them at runtime. For more information, see Best practices for protecting secrets.
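One possible pattern, shown here as a sketch rather than a required part of the tutorial, is to resolve the key from an environment variable when available and otherwise fall back to Azure Key Vault from inside a Fabric notebook. The vault URI and secret name below are hypothetical placeholders, and the `getSecret` call is assumed to follow the notebookutils credentials API:

```python
import os

def resolve_azmaps_key(kv_uri, secret_name, env_var="AZMAPS_SUBSCRIPTION_KEY"):
    """Return the Azure Maps key from an env var if set,
    otherwise read it from Azure Key Vault (inside a Fabric notebook)."""
    key = os.environ.get(env_var)
    if key:
        return key
    # Only reachable inside a Fabric notebook; assumed signature:
    # getSecret(<Key Vault URI>, <secret name>)
    from notebookutils import mssparkutils
    return mssparkutils.credentials.getSecret(kv_uri, secret_name)

# Example exercising the env-var path (vault URI and secret name are placeholders)
os.environ.setdefault("AZMAPS_SUBSCRIPTION_KEY", "demo-key")
key = resolve_azmaps_key("https://contoso-kv.vault.azure.net/", "azmaps-subscription-key")
```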
Select the Save as button in the menu and save the notebook as OptimizeRoute.
Run the notebook to create the OptimizedRoute.geojson file in the Files directory of your lakehouse.
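To make the transform step in the notebook concrete, here's a self-contained sketch that applies the same optimizedIndex promotion to one hypothetical waypoint feature (not actual API output):

```python
# Hypothetical waypoint feature shaped like a Route Directions response feature
feat = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [16.32977, 48.22611]},
    "properties": {"type": "Waypoint", "order": {"inputIndex": 0, "optimizedIndex": 2}},
}

# Promote order.optimizedIndex to a top-level, 1-based property
# so it can drive a data label on the map
props = feat["properties"]
order = props.get("order") or {}
opt_idx = order.pop("optimizedIndex", None)
if opt_idx is not None:
    props["optimizedIndex"] = opt_idx + 1

print(props["optimizedIndex"])  # 3
```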
Add lakehouse to map
- In the Explorer pane, select Fabric items then the Add button.
- Select Lakehouse from the menu that appears when selecting the Add button.
- From the OneLake catalog, select the lakehouse WorkorderLocationsLakehouse that you created previously, then select Add.
Show the optimized route on the map
In the Explorer pane in your new map, select the lakehouse WorkorderLocationsLakehouse that you added in the previous step.
Navigate to OptimizedRoute.geojson in the Files directory of your lakehouse and select the ellipsis (...) to show the popup menu.
Select Show on map from the popup menu.
In the Data layers panel, toggle visibility off for the WorkordersFunction layer.
Once completed, the new map layer appears in your Fabric Maps map.
Map layer settings
Fabric Maps provides a range of layer settings that let you control how data is presented on the map. In this section, you customize the layer created from the route optimization process by renaming the layer, adjusting the symbol style, and configuring labels based on field values. These settings help improve readability and make it easier to interpret work order data at a glance.
Rename the layer
In the Data layers panel, open the OptimizedRoute options menu by selecting the ellipsis (...).
Once in the options menu, select Rename.
Remove labels at the map level
When you toggle Labels on or off at the map level, it affects basemap text labels. These labels come from the underlying map style and include:
- City and town names
- Country and region names
- Road and highway names
- Water feature names (rivers, lakes, oceans)
- Other administrative or geographic place names
When Labels aren't shown, the basemap appears "cleaner" and more minimal, with no place-name text rendered on the basemap.
To turn off basemap labels:
Open your map in Fabric Maps.
Select Map settings from the menu bar.
Locate the Labels checkbox and uncheck it.
For more information on Map settings in Fabric Maps, see Change Map settings.
Add data labels to the layer
Data labels are data‑driven annotations that come from one or more fields in the layer's dataset. They're tied directly to layer-level map features, such as the points on the map that represent work order locations. For more information on Fabric Maps data labels, see Data label settings.
Select Optimized Route in the Data layers panel. The Optimized Route settings panel appears on the right side of the screen.
In the Optimized Route settings panel, expand Data label settings.
Select the Enable data labels toggle to turn on data labels. This shows more data label settings.
Change the following data label settings:
- Data labels: optimizedIndex
- Text color: white
- Text size slider set to 20
- Text stroke color: Black
- Text stroke width slider set to 2
Summary
This tutorial demonstrated how to build an end-to-end, real-time work order routing scenario using Microsoft Fabric Real-Time Intelligence and Fabric Maps. Streaming work order data is ingested, transformed, and queried using KQL, then visualized on a map to create a dynamic, continuously updating view of work order locations. By integrating routing logic and optimal path calculations, the solution shows how real-time geospatial analytics can help dispatchers and field operations teams make faster, better-informed decisions.
This pattern can be extended to other location-based scenarios such as fleet tracking, asset monitoring, and incident response. By combining event-driven data, KQL-based analytics, and map-based visualization, Microsoft Fabric enables a transition from raw streaming data to actionable geographic insights in near real time.
Next steps
For more information on the Fabric Maps features covered in this tutorial, see the following articles: