An Azure service that provides a general-purpose, serverless container platform.
In my case, the issue was with the subnet associated with the ContainerApp Environment. I added a route table to the subnet that messed with the routing.
Getting either of the following errors each time we try to deploy via GitHub Actions:
ERROR: Failed to provision revision for container app 'case-manager-ai-dev-aca'. Error details: The following field(s) are either invalid or missing. Field 'template.containers.app.image' is invalid with details: 'Invalid value: "evernestacr-ced7cacpc7gyazdn.azurecr.io/case-manager-ai:dev-48": failed to resolve registry 'evernestacr-ced7cacpc7gyazdn.azurecr.io': lookup evernestacr-ced7cacpc7gyazdn.azurecr.io on 100.100.253.30:53: server misbehaving';..
ERROR: Failed to provision revision for container app 'case-manager-ai-dev-aca'. Error details: The following field(s) are either invalid or missing. Field 'template.containers.app.image' is invalid with details: 'Invalid value: "evernestacr-ced7cacpc7gyazdn.azurecr.io/case-manager-ai:dev-48": unable to pull image using Managed identity /subscriptions/***/resourceGroups/case-manager-ai-dev-rg/providers/Microsoft.ManagedIdentity/userAssignedIdentities/case-manager-ai-dev-identity for registry evernestacr-ced7cacpc7gyazdn.azurecr.io';..
Hi
This wasn't related to any configs or changes - nothing was changed or updated.
The container app environment was left in a broken state after the related outage - it looked fine on the surface, but the managed infrastructure resource group was empty (among other inconsistencies).
Deleting and recreating the CAE (as well as all related resources) fixed the problem.
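If anyone needs to do the same, the delete/recreate can be done with the Azure CLI. A rough sketch (the environment, resource group, and location names here are placeholders; note that deleting the environment removes every container app in it, so they all need to be redeployed afterwards):

```shell
# Delete the broken Container Apps environment (removes all apps in it!)
az containerapp env delete \
  --name my-cae \
  --resource-group my-rg

# Recreate the environment, then redeploy the container apps into it
az containerapp env create \
  --name my-cae \
  --resource-group my-rg \
  --location eastus
```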
Hi @JK
I see this recommendation on the backend:
This issue occurs when the target container registry cannot be resolved or reached by the Azure Container Apps environment during image pull. This can happen if the registry name is incorrect, the registry no longer exists, or there are DNS or networking-related issues preventing resolution of the registry FQDN. In some cases, even when the registry exists and is correctly configured, transient platform or node‑level issues can cause failures while resolving *.azurecr.io, leading to deployment or revision provisioning errors. These failures can appear intermittently and may surface as DNS resolution errors or image pull failures using Managed Identity.
Refer to the points below to resolve or work around this issue:
Validate that the registry exists and the name is correct
Please confirm that the registry name specified in the container app configuration is spelled correctly and that the registry still exists. If a registry was previously deleted or renamed, image pull will fail. If this is an Azure Container Registry, ensure it is still visible and available in your Azure subscription.
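One quick way to check this from the Azure CLI (the registry name myregistry is a placeholder for your own):

```shell
# Confirm the registry still exists and note its login server FQDN
az acr show --name myregistry --query loginServer --output tsv

# Run ACR's built-in health checks (DNS resolution, connectivity, credentials)
az acr check-health --name myregistry --ignore-errors
```

The login server printed by the first command should match the FQDN shown in the error message exactly.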
Test registry name resolution from a running container
If there is at least one Container App in the environment that is running successfully, you can use it to validate DNS resolution. Deploy a simple container (for example, a quickstart or helloworld image), open the Console blade, and install basic networking tools such as nslookup or dig.
Example test:
nslookup myregistry.azurecr.io
You can also test against a public DNS server:
nslookup myregistry.azurecr.io 8.8.8.8
This helps confirm whether the registry FQDN is resolvable from within the Container Apps environment.
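The lookup tools usually are not preinstalled in minimal container images, so you may need to install them from the Console blade first. A sketch for common base images (package names vary by distribution):

```shell
# Debian/Ubuntu-based images
apt-get update && apt-get install -y dnsutils

# Alpine-based images
apk add --no-cache bind-tools

# Then test resolution of the registry FQDN (replace with your registry)
nslookup myregistry.azurecr.io
```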
Validate custom DNS configuration (if VNET‑integrated)
If the Container Apps environment is integrated with a VNET and uses custom DNS servers, ensure these DNS servers can resolve Azure public endpoints. The custom DNS servers may need to be configured to forward unresolved queries to Azure DNS (168.63.129.16). Note that in peered VNET scenarios, custom DNS settings may originate from a peered VNET even if they are not visible directly on the current VNET.
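To see which custom DNS servers the integrated VNET uses (the VNET and resource group names are placeholders):

```shell
# List custom DNS servers configured on the VNET
# (an empty list means the Azure-provided DNS is in use)
az network vnet show \
  --name my-vnet \
  --resource-group my-rg \
  --query dhcpOptions.dnsServers
```

In peered scenarios, remember to check the peered VNET's DNS settings as well.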
Check Private Endpoint and Private DNS zone configuration
If the Azure Container Registry is accessed via a Private Endpoint, ensure the corresponding Private DNS zone is correctly linked and configured. A misconfigured or missing Private DNS zone for ACR can cause resolution failures for the registry endpoint.
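You can verify the ACR private DNS zone and its VNET link with the CLI (the resource group name is a placeholder; privatelink.azurecr.io is the standard zone name for ACR private endpoints):

```shell
# Confirm the privatelink zone for ACR exists
az network private-dns zone show \
  --resource-group my-rg \
  --name privatelink.azurecr.io

# Confirm the zone is linked to the environment's VNET
az network private-dns link vnet list \
  --resource-group my-rg \
  --zone-name privatelink.azurecr.io \
  --output table

# Check the A records registered for the registry
az network private-dns record-set a list \
  --resource-group my-rg \
  --zone-name privatelink.azurecr.io \
  --output table
```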
Review DNS responses from custom DNS servers
Custom DNS servers returning incorrect or invalid DNS records for the registry FQDN can also cause this issue. Use nslookup or dig from within the environment to verify the actual DNS records being returned.
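For example, with dig you can query the custom DNS server directly and compare its answer against a public resolver (the server IP and registry name are placeholders):

```shell
# Ask the custom DNS server directly
dig +short myregistry.azurecr.io @10.0.0.4

# Compare with what a public resolver returns
dig +short myregistry.azurecr.io @8.8.8.8
```

If the two answers differ unexpectedly (or the custom server returns nothing), the custom DNS configuration is the likely culprit.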
Restart the affected revision (non‑VNET environments only)
If the environment does not use a VNET, the registry exists, and DNS resolution appears correct, you can try restarting the affected revision from the Revisions blade in the Azure Portal. If this resolves the issue, it may indicate a transient node‑level problem. If the issue keeps recurring, further investigation by the support team may be required.
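The restart can also be done from the CLI instead of the portal (the app, resource group, and revision names are placeholders):

```shell
# List revisions to find the affected one
az containerapp revision list \
  --name my-containerapp \
  --resource-group my-rg \
  --output table

# Restart the affected revision
az containerapp revision restart \
  --name my-containerapp \
  --resource-group my-rg \
  --revision my-containerapp--rev1
```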