Applies to:
- Conversations: Dynamics 365 Contact center only
- Cases: Dynamics 365 Customer Service only
This article explains how to use, edit, and extend evaluation criteria, including best practices to create clear and actionable instructions. Learn how to use built-in criteria, customize evaluation plans, and optimize quality assessments for your organization.
Prerequisites
- Enable Quality Evaluation Agent.
- Assign the required roles and privileges.
- Set up Microsoft Copilot credits.
- Provide consent for potential data movement across regions.
Use the out-of-the-box evaluation criteria
As a Quality Evaluator, you can use or copy the out-of-the-box evaluation criteria, create new evaluation criteria, and edit published evaluation criteria. The out-of-the-box options are the Support quality and Closed Conversations Default Criteria.
Note
The out-of-the-box evaluation criteria are prefilled, published, and read-only.
To view evaluation criteria, complete the following steps:
In the site map of Copilot Service workspace, go to Evaluation criteria.
On the Evaluation criteria page, select the out-of-the-box evaluation criteria to view the details.
Create evaluation criteria
Refer to the best practices when you create evaluation criteria.
On the Evaluation criteria page, select New.
On the New evaluation criteria page, in the Criteria details section, provide the Criteria name and Description.
To enable scoring per criteria, switch the Criteria scoring toggle to on.
Select your language from the Language dropdown list. By default, all existing criteria are in English. You can’t modify the language after you save a criteria, even in the Draft state. Evaluation results are returned in the same language.
In the Add form level instructions section, provide instructions, if any.
In Section 1, enter the following details:
Section name: Provide a name.
Description: Provide a description.
Section weight (%): Assign a weight to the section. The total weight across all sections must equal 100% (see the validation sketch after these steps).
To add a question, select Add question.
For each question, provide the following details:
Select answer type: Select one of the options: Yes/No, Multiple choice, Choose from list, or Text selection.
Form question text: Enter the form question text.
Add question-level instructions: Provide instructions for the question, if any. Instructions help Quality Evaluation Agent generate answers and improve accuracy.
Select AI response enabled to allow AI to predict an answer for this question. If you don't select this option, AI doesn't process the input or return an answer. Evaluation creation fails if the criteria lacks AI-enabled questions for the selected mode in an evaluation plan, as follows:
- AI agent mode: All questions must be AI-enabled (manual editing isn't allowed).
- AI assisted mode: At least one question must be AI-enabled.
This applies to both on-demand evaluations and evaluation plans.
Depending on the answer type, select Scoring enabled to add scoring. Turn off the scoring toggle if you don't want to create a criteria with scoring.
Select Mark as critical question to designate the question as critical.
Answer options: The available answer options depend on the answer type you select. Provide answer-level instructions for your answers as required. If you marked the question as critical, you must select Mark as fail for at least one answer option to avoid errors.
You can delete or duplicate a section or question, as required.
Select Save, and then select Publish.
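The structural rules in these steps (section weights totaling 100%, critical questions needing a fail option, and the AI-enabled requirements per mode) can be summarized in code. The following is an illustrative sketch only; the class names and shapes are hypothetical and don't reflect the actual Dataverse schema behind Quality Evaluation Agent.

```python
# Illustrative sketch only: hypothetical shapes, not the real schema.
from dataclasses import dataclass, field

@dataclass
class AnswerOption:
    text: str
    mark_as_fail: bool = False        # the "Mark as fail" option
    instructions: str = ""            # answer-level instructions

@dataclass
class Question:
    text: str
    answer_type: str                  # Yes/No, Multiple choice, ...
    ai_response_enabled: bool = True  # "AI response enabled"
    critical: bool = False            # "Mark as critical question"
    options: list[AnswerOption] = field(default_factory=list)

@dataclass
class Section:
    name: str
    weight_pct: int                   # "Section weight (%)"
    questions: list[Question] = field(default_factory=list)

def validate(sections: list[Section], mode: str) -> None:
    # Rule 1: section weights must total 100%.
    if sum(s.weight_pct for s in sections) != 100:
        raise ValueError("Section weights must total 100%.")
    questions = [q for s in sections for q in s.questions]
    # Rule 2: every critical question needs at least one fail option.
    for q in questions:
        if q.critical and not any(o.mark_as_fail for o in q.options):
            raise ValueError(f"Critical question '{q.text}' has no fail option.")
    # Rule 3: AI-enabled question requirements depend on the mode.
    ai_enabled = [q for q in questions if q.ai_response_enabled]
    if mode == "AI agent" and len(ai_enabled) != len(questions):
        raise ValueError("AI agent mode requires every question to be AI-enabled.")
    if mode == "AI assisted" and not ai_enabled:
        raise ValueError("AI assisted mode requires at least one AI-enabled question.")
```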
Mark a question as critical
Turn on Mark as critical question to designate a question as critical within a criteria. Critical questions highlight mandatory requirements such as compliance, safety, or mandatory process steps that must not be missed.
If a critical question is answered with a fail option, the entire evaluation or simulation fails. You can mark multiple questions as critical within a criteria, but each critical question must have at least one fail option configured. If not, an error appears. Scoring doesn't change when a critical question causes an evaluation to fail.
During simulations and evaluations, the results indicate whether a critical question caused the failure. The critical question information is shown both at the Evaluation Summary level and at the individual question level in the side panel. The evaluations grid also includes a column that identifies evaluations that failed because of a critical question.
The following examples show questions where a failure should immediately fail the entire evaluation:
Did the agent select the correct action when prompted with a compliance checkpoint? (Fail if an incorrect action was chosen.)
Was customer consent obtained before recording the call? (Fail if consent wasn't explicitly confirmed.)
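Expressed as code, the fail behavior looks roughly like the following. This sketch reuses the hypothetical Question and AnswerOption shapes from the earlier sketch and is illustrative only.

```python
def evaluation_fails(questions: list, chosen: dict) -> bool:
    """chosen maps a question's text to the AnswerOption the evaluator
    selected. The evaluation fails outright if any critical question was
    answered with a fail-marked option; the numeric score is unaffected."""
    return any(
        q.critical and chosen[q.text].mark_as_fail
        for q in questions
        if q.text in chosen
    )
```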
Edit your published evaluation criteria
You can copy the out-of-the-box evaluation criteria or create new evaluation criteria and then make edits.
To edit your published evaluation criteria, complete the following steps:
- In site map of Copilot Service workspace, go to Evaluation criteria.
- On the Evaluation criteria page, select the required evaluation criteria.
- On the selected evaluation criteria page, select Edit.
- Save the changes. The criteria is saved as a draft. You can also revert to the published criteria at this stage.
- Publish the changes.
Note
- You can't change the scoring toggle at the criteria level after the criteria is published.
- Evaluation plans that are already running continue to use the existing criteria. After you publish the updated criteria, evaluation plans use the latest criteria in the next run.
Manage evaluation criteria versions
Each edit and publish action increments the evaluation criteria version, and the latest published version is always used for new evaluations. Supervisors can review prior versions, restore any version to make it the current one, or discard draft changes as needed.
- Select the source criteria and go to the Versioning History tab to view each version and its version number, along with the most recent changes to the criteria.
- Select Record to go to a specific version and view its details in read-only mode.
- Select Restore/Publish to republish the selected version as the latest. This discards the current draft and increments the version number of the published criteria.
You can also add the Version column to the evaluation grid to track versions. Learn more in Evaluations.
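The following is a minimal sketch of this publish-and-restore behavior, with illustrative names only; the actual storage model isn't documented here.

```python
class CriteriaVersionHistory:
    """Illustrative only: each publish increments the version, and
    restoring an older version republishes it as a new, higher version."""

    def __init__(self):
        self._published = []          # version history, oldest first
        self._draft = None            # unpublished edits, if any

    def publish(self, content) -> int:
        version = len(self._published) + 1
        self._published.append((version, content))
        self._draft = None            # publishing supersedes the draft
        return version                # new evaluations use this version

    def restore(self, version: int) -> int:
        # Restore/Publish: the current draft is discarded and the selected
        # version's content is republished under a new version number.
        _, content = self._published[version - 1]
        return self.publish(content)
```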
Create and run a simulation
You need to enable the QEA Simulation flow before you run a simulation. Learn more in Configure connection references.
Supervisors can select any criteria in Draft or Published state and run a simulation test with real case and conversation data to preview Quality Evaluation Agent prediction outcomes.
Note
You can’t include attachments for case data.
To create and run a simulation:
- Select the criteria, and then select Create Simulation.
- On the New Criteria Simulation page, on the General tab, in the Simulation overview section, provide the following information:
- Criteria Name: Provide a name.
- Record Type: Select Case or Conversation.
- Criteria Version: Select the version of the published criteria. Criteria in Draft state don't have a version assigned.
- Conditions: Select the conditions to run the simulation against. By default, the simulation runs on the 25 most recent records that match the selected conditions (see the query sketch at the end of this section).
- Save the changes.
- Select Run Simulation. Your criteria simulation record is created.
- Select the Simulation Results tab to view the simulation results.
You can also view the simulation details from the Simulation Run tab of your criteria. You can view details such as the Evaluation Criteria version, Record Type, Status, and Created On date.
- Select a simulation record that's in Completed state.
- From the simulation record page, select the Simulation Results tab and then select the required simulation result to view the prediction done by the Quality Evaluation Agent on the side pane.
Supervisors can view simulation results. The results don’t affect records or quality metrics, so you can validate and refine criteria before publishing. Each simulation consumes Microsoft Copilot credits.
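For reference, a "most recent 25 records matching the conditions" selection can be expressed against the standard Dataverse Web API as shown below. This is a sketch for intuition only: the Quality Evaluation Agent's internal query isn't documented here, and the org URL, token, and filter are placeholders.

```python
import requests

ORG_URL = "https://yourorg.api.crm.dynamics.com"  # placeholder org URL
TOKEN = "<bearer-token>"  # acquired through Microsoft Entra ID

response = requests.get(
    f"{ORG_URL}/api/data/v9.2/incidents",         # 'incidents' = Case table
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "OData-MaxVersion": "4.0",
        "OData-Version": "4.0",
        "Accept": "application/json",
    },
    params={
        "$select": "title,createdon",
        "$filter": "statecode eq 1",              # example condition: resolved cases
        "$orderby": "createdon desc",             # most recent first
        "$top": "25",                             # the simulation's default sample size
    },
    timeout=30,
)
response.raise_for_status()
records = response.json()["value"]
```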
Extend your evaluation criteria
After you create a source criteria for your business unit, you can extend the criteria to suit your organizational requirements. Updates to the source criteria automatically appear in all the extended criteria. Select any custom evaluation criteria in the Published state as source criteria to extend it further.
Select the criteria, and then select Extend criteria. The New extended evaluation criteria page appears.
In the Extended criteria details section, enter the following details:
Criteria name: Provide a name.
Description: Provide a description. Turn on the Use source instructions toggle to use the source instructions, or turn it off to provide your own.
Extended scoring weight: If the source criteria has scoring enabled, provide the extended scoring weight.
Source scoring weight: Calculated automatically based on the extended scoring weight (see the sketch after the note that follows).
Select Add question to add questions to the extended criteria.
Select Save, and then select Publish.
Note
- The Source criteria details section on the Extended criteria details page isn't editable. However, if you update the source criteria, the extended criteria is updated as well.
- The Criteria scoring reflects the selection you made in the source criteria.
- You can’t extend an out-of-the-box criteria, for example, Support quality.
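Assuming the source and extended scoring weights are complementary, which is how the auto-calculation reads above, the relationship amounts to the following sketch. The function name is hypothetical.

```python
def source_scoring_weight(extended_weight_pct: float) -> float:
    """Hypothetical helper: the doc states the source weight is
    auto-calculated from the extended weight; this sketch assumes the
    two are complementary percentages."""
    if not 0 <= extended_weight_pct <= 100:
        raise ValueError("Extended weight must be between 0 and 100.")
    return 100.0 - extended_weight_pct

# For example, an extended scoring weight of 30% would leave 70%
# for the source criteria.
```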
Copy the out-of-the-box evaluation criteria for cases and conversations
Select the checkbox for the out-of-the-box criteria, and then select Copy. A copy of the prefilled out-of-the-box criteria form is provided to you as the source. Make edits as needed.
Once you're done making edits, select Save.
Best practices to create evaluation criteria
Criteria-level instructions: Define instructions that apply to the entire evaluation criteria. Include comprehensive goals, expectations, and constraints to guide the behavior of the Quality Evaluation Agent across all questions and answers.
Question-level instructions: Provide specific instructions for each question to guide the Quality Evaluation Agent’s evaluation. These instructions are scoped to the question only, and don’t influence other parts of the evaluation.
Answer-level instructions: Include answer-specific instructions for each answer option to help the Quality Evaluation Agent understand the intent and context of the response.
Answer choices: Clearly define answer options, especially for multi-choice or list-type questions. Ensure that you include fallback options to handle ambiguous or unexpected responses.
Clarity drives accuracy: Use precise and detailed instructions to improve the accuracy of Quality Evaluation Agent evaluations. Avoid vague language and ensure that all instructions are explicit, contextual, and actionable.
Question text: Define each evaluation question to assess a single, well-defined objective to help ensure clarity and direct alignment with the answer options.
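For example, a vague question like "Was the agent helpful?" gives the Quality Evaluation Agent little to anchor on. A sharper alternative, "Did the agent confirm that the customer's issue was resolved before closing the conversation?", targets a single verifiable behavior, and a question-level instruction such as "Look for an explicit confirmation from the customer in the closing messages" tells the agent exactly what evidence to weigh.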
Related information
Manage Quality Evaluation Agent
Use evaluation plan
Use evaluations