How to override the default system prompt injection in the Teams AI Library

Conover Kathrina Jr 40 Reputation points
2026-04-05T22:57:32.3433333+00:00

Our Teams AI Library bot is triggering "Context length exceeded" errors because the Copilot middleware injects too much system metadata into each turn. This overhead truncates user messages and exhausts our token budget before the LLM can process the conversation history.

How can we prune these injected prompts without breaking the bot's action-mapping logic?

Microsoft Teams | Development

Building, integrating, or customizing apps and workflows within Microsoft Teams using developer tools and APIs

2 answers

Sort by: Most helpful
  1. Steven-N 23,700 Reputation points Microsoft External Staff Moderator
    2026-04-06T00:13:40.3766667+00:00

Hi Conover Kathrina Jr,

By default, the Copilot middleware injects the full JSON description of every action your bot exposes into the hidden system prompt on every turn. Once you have more than a handful of actions, that hidden prompt grows heavy enough to starve the LLM of tokens, leaving no room for the actual chat history.

To fix this without breaking your bot's action-mapping logic, you need to put that system prompt on a strict diet. Here are the two cleanest ways to handle it:

    • Dynamic action loading: Don't load your entire function registry globally. Run a quick pre-check (a regex or a basic intent classifier) on the user's message, then inject only the actions relevant to that flow into the ActionPlanner.
    • Minify your schemas: Your function.json files are token hogs. Strip verbose text from your descriptions, drop optional parameters, and make sure max_input_tokens is set in config.json so old chat history is safely truncated instead of the request failing.
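    The dynamic-loading idea can be sketched as a cheap pre-check that decides which action schemas get injected on a given turn. Note this is a hypothetical illustration: the action names, intent keywords, and the selectActions helper are all invented for the example and are not part of the Teams AI Library's actual API.

    ```typescript
    // Minimal action schema — keep descriptions terse, since every
    // character injected into the system prompt costs tokens.
    interface ActionSchema {
      name: string;
      description: string;
    }

    // Full registry, grouped by intent. Only a matching subset is
    // injected per turn instead of the whole registry.
    const actionRegistry: Record<string, ActionSchema[]> = {
      calendar: [
        { name: "createEvent", description: "Create a calendar event" },
        { name: "listEvents", description: "List upcoming events" },
      ],
      tasks: [
        { name: "createTask", description: "Create a task" },
        { name: "completeTask", description: "Mark a task done" },
      ],
    };

    // Cheap regex pre-check; a small intent classifier could replace this.
    const intentPatterns: Record<string, RegExp> = {
      calendar: /\b(meeting|event|schedule|calendar)\b/i,
      tasks: /\b(task|todo|to-do|remind)\b/i,
    };

    // Return only the action schemas whose intent matches the message.
    function selectActions(userMessage: string): ActionSchema[] {
      const selected: ActionSchema[] = [];
      for (const [intent, pattern] of Object.entries(intentPatterns)) {
        if (pattern.test(userMessage)) {
          selected.push(...actionRegistry[intent]);
        }
      }
      return selected;
    }
    ```

    On each turn you would feed only `selectActions(message)` to the planner rather than the whole registry, so the hidden prompt scales with the matched intent, not with the total number of actions your bot supports.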

    Hope this saves you some debugging time.


    If the answer is helpful, please click "Accept Answer" and kindly upvote it. If you have extra questions about this answer, please click "Comment".   

    Note: Please follow the steps in our documentation to enable e-mail notifications if you want to receive the related email notification for this thread.


  2. Q&A Assist
    2026-04-05T22:57:41.74+00:00
    AI generated content. This question contains an answer created with AI from Q&A Assist. Learn more

    The Teams AI Library is designed so that the agent maintains conversation context, injects system metadata, and uses that context for action mapping and safety. The injected system prompts and metadata are part of the built‑in safety and UX requirements and aren’t exposed in the provided documentation as something that can be selectively removed or overridden.

    The available guidance instead focuses on:

    • Ensuring the agent tracks and uses conversation history via the Teams SDK so it can understand user references and maintain context.
    • Using the Teams AI Library’s built‑in moderation and safety features and its conversation session history mechanism.

    Because the documentation here doesn’t describe any supported way to prune or customize the Copilot/Teams AI Library system prompt or metadata, there’s no documented mechanism in this context to:

    • Remove or trim the injected system metadata, or
    • Change how that metadata is constructed, while preserving the action‑mapping logic.

    Any such customization would need to follow guidance that is not present in the provided material.

