A 408 error in Microsoft Teams Phone becomes much easier to fix once you stop treating it as one single fault. Sometimes the timeout sits on the client path. Sometimes it sits between Microsoft and the Session Border Controller. Sometimes the downstream PSTN side answers too slowly. The detail that changes the fix is the response code paired with 408.
Start Here
- Capture the exact Microsoft response code shown with 408.
- Confirm whether the call uses Direct Routing.
- Open one failed call and one successful call from the same trunk or region.
- Check whether the pattern is tied to one destination, carrier, SBC, or site.
- Only after that, change timers, firewall rules, or SBC settings.
What the 408 Error Means in Teams Phone
On the web, a generic HTTP 408 means the server closed an idle connection instead of waiting any longer. That definition is useful because it explains the word timeout, but it does not explain most Teams Phone failures by itself. Engineers often compare the case with other documented Microsoft Teams Direct Routing error codes to see whether the timeout belongs to signaling, SBC behavior, or client establishment delays. [✅Source-1]
In Microsoft Teams Phone, the documented 408 cases are tied to Direct Routing and split into separate response-code combinations. Microsoft lists 1106, 500001, 560408, and 0, and each one points to a different part of the call path. Same SIP code, different fix. [✅Source-2]
Practical reading: if your tenant uses Direct Routing, a 408 fix usually starts with SIP signaling, SBC health, timers, TLS, FQDNs, firewall policy, and route behavior before you spend time on app reinstall steps.
Map the Code Before Changing Anything
Do not group every 408 into one bucket. Code mapping is the shortest path to the right fix.
| Microsoft Response Code | What It Usually Means | Where to Look First | Most Likely Fix Direction |
|---|---|---|---|
| 1106 | The callee answered, but the SBC did not acknowledge Microsoft’s SIP 200 OK. | SBC SIP stack, signaling trace, TCP/TLS session health. | Repair signaling acknowledgment flow, confirm stable TLS and proper SIP handling. |
| 500001 | The gateway failover timer expired before the SBC replied in time. | Failover timer, slow PSTN setup path, trunk response delay. | Measure setup delay, then review FailoverTimeSeconds and trunk behavior. |
| 560408 | The SBC reported that the user did not respond. | Call records, destination patterns, region or carrier clustering. | Look for repeated failure by number range, country, route, or downstream provider. |
| 0 | Establishment timeout on the client side. | Client network path, VPN, firewall, local connectivity to Teams. | Validate end-user network, packet path, DNS, and transport behavior. |
Microsoft’s own 408 matrix is unusually specific here, and that is why this error should be read as a typed timeout, not a vague timeout. [✅Source-3]
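Because the mapping is this mechanical, it can be folded into a small triage helper. A minimal sketch: the code-to-area mapping mirrors the table above, while the function name, return strings, and structure are illustrative, not part of any Microsoft tooling.

```python
# Map each Microsoft response code paired with SIP 408 to a first-look area.
# The mapping mirrors the triage table above; names and wording are illustrative.
TRIAGE = {
    1106: "SBC did not ACK Microsoft's 200 OK -> check SBC SIP stack and TLS session health",
    500001: "Gateway failover timer expired -> measure setup delay, review FailoverTimeSeconds",
    560408: "SBC reported user-no-response -> look for clustering by route, range, or carrier",
    0: "Client-side establishment timeout -> validate the end-user network path",
}

def triage_408(response_code: int) -> str:
    """Return the first-look area for a 408 paired with this response code."""
    return TRIAGE.get(
        response_code,
        "Unknown pairing -> capture a SIP trace before changing anything",
    )

if __name__ == "__main__":
    print(triage_408(500001))
```

Keeping the lookup explicit like this also makes the key discipline visible: no branch of the table starts with "reinstall the client."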
Fix the Problem in the Right Order
Confirm the Call Path First
Microsoft documents Direct Routing as a path that includes the SBC, Direct Routing components in the Microsoft cloud, and telecom trunks. If the failure is a Teams Phone 408 in this path, start there. Not at the desktop icon. [✅Source-4]
Run the Built-In Direct Routing Test
Microsoft provides a built-in diagnostic in the Microsoft 365 admin center that checks whether the user is correctly configured for Direct Routing. Use it early. It can surface tenant, user, or policy misconfiguration before you start editing live SBC settings. [✅Source-5]
Open SIP Call Flow for One Failed Call
- Go to Teams admin center.
- Open Analytics & reports.
- Generate the PSTN Usage report.
- Open the Direct Routing tab.
- Select the failed call and open SIP call flow.
- Check the message sequence, final SIP code, and where the delay starts.
SIP call flow data can take up to 30 minutes to appear, and records older than 30 days are not available there. That detail matters, because a missing trace often looks like a product issue when it is really just a reporting window issue. [✅Source-6]
What to compare in SIP call flow:
- Is the INVITE answered slowly on one route only?
- Does Microsoft send a 200 OK that the SBC never acknowledges?
- Does the failure happen before media setup or after answer?
- Does a working call on the same trunk show a cleaner message sequence?
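The comparisons above reduce to two measurements per call: how long the INVITE waited for its 200 OK, and whether an ACK followed. A minimal sketch over a simplified trace; the `(seconds_offset, message)` tuple format is invented for illustration and is not the Teams admin center export format.

```python
def analyze_trace(trace):
    """Summarize a simplified SIP trace of (seconds_offset, message) tuples.

    Returns the INVITE -> 200 OK setup delay and whether an ACK was seen.
    The trace format is illustrative, not a real SIP call flow export.
    """
    invite_at = ok_at = None
    acked = False
    for t, msg in trace:
        if msg == "INVITE" and invite_at is None:
            invite_at = t
        elif msg == "200 OK" and ok_at is None:
            ok_at = t
        elif msg == "ACK":
            acked = True
    delay = (ok_at - invite_at) if invite_at is not None and ok_at is not None else None
    return {"setup_delay_s": delay, "acked": acked}

# A failed 1106-style call: answered, but the 200 OK is never acknowledged.
failed = [(0.0, "INVITE"), (0.2, "100 Trying"), (4.1, "200 OK")]
# A healthy call on the same trunk for comparison.
good = [(0.0, "INVITE"), (0.1, "100 Trying"), (1.8, "200 OK"), (1.9, "ACK")]
print(analyze_trace(failed))
print(analyze_trace(good))
```

Comparing the two summaries side by side is exactly the failed-versus-successful-call discipline the checklist asks for, just made countable.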
Network and SBC Checks That Solve Many 408 Cases
TLS, FQDN, and SIP Options
- Use TLS 1.2.
- Make sure the SBC certificate is not self-signed.
- Check that the Record-Route or Contact header shows the correct FQDN, not a raw IP address.
- Confirm the FQDN is validated in Teams and resolves to the expected IP.
- Allow incoming signaling traffic from Microsoft’s published SIP proxy addresses.
- Watch whether the SBC receives and returns 200 OK for SIP options.
These checks matter because Microsoft marks the SBC active only after the TLS and SIP options path behaves as expected. A mismatch in FQDN, certificate, or signaling allowance can leave the path half-open and produce timeouts that look random when they are not. [✅Source-7]
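One check from the list, the self-signed certificate test, can be approximated offline: a self-signed certificate has an issuer identical to its subject. A hedged sketch using the nested-tuple dict shape that Python's `ssl.SSLSocket.getpeercert()` returns; the sample certificates are invented, and real certificates carry more fields, so treat this as a heuristic, not a validator.

```python
def is_self_signed(cert: dict) -> bool:
    """Heuristic: a certificate whose issuer equals its subject is self-signed.

    `cert` follows the dict shape returned by ssl.SSLSocket.getpeercert();
    the samples below are invented and simplified for illustration.
    """
    return cert.get("issuer") == cert.get("subject")

self_signed = {
    "subject": ((("commonName", "sbc.contoso.com"),),),
    "issuer": ((("commonName", "sbc.contoso.com"),),),
}
public_ca = {
    "subject": ((("commonName", "sbc.contoso.com"),),),
    "issuer": ((("commonName", "Example Public CA"),),),
}
print(is_self_signed(self_signed))  # True: Teams will not trust this SBC
print(is_self_signed(public_ca))    # False: issued by a separate CA
```

In production, prefer verifying the full chain against a trusted root store; the point here is only that issuer-equals-subject is the quick tell.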
Connection Points and Failover Timing
Direct Routing expects the SBC to try sip.pstnhub.microsoft.com first, then sip2.pstnhub.microsoft.com, then sip3.pstnhub.microsoft.com. Microsoft publishes the signaling subnets and states that SIP/TLS uses destination port 5061. [✅Source-8]
On the trunk side, FailoverTimeSeconds defaults to 10 seconds. Microsoft notes that slow networks and slow gateway responses can cause calls to drop unnecessarily at that value. When the 408 pattern is specifically 500001, and your traces show long setup time rather than broken signaling, a move to 20 seconds can be justified. Fix the path first, then the timer. [✅Source-9]
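"Fix the path first, then the timer" can be made concrete: only consider raising the timer when completed calls show setup delays sitting close to the current failover window, which indicates a slow but working path rather than a broken one. The threshold, percentile choice, and function name below are my own illustrative heuristic, not a Microsoft recommendation.

```python
def should_raise_failover_timer(setup_delays_s, current_timer_s=10.0):
    """Suggest raising FailoverTimeSeconds only when calls complete but slowly.

    setup_delays_s: observed INVITE->answer delays for *completed* calls.
    Illustrative heuristic: flag the trunk when roughly the slowest decile
    of calls lands within 80% of the current failover window.
    """
    if not setup_delays_s:
        return False
    slow = sorted(setup_delays_s)[int(len(setup_delays_s) * 0.9)]  # ~p90
    return slow >= 0.8 * current_timer_s

# Trunk where the PSTN side is simply slow: a 20 s timer may be justified.
print(should_raise_failover_timer([6.0, 7.5, 8.2, 9.1, 9.6]))   # True
# Healthy trunk: leave the 10 s default alone.
print(should_raise_failover_timer([1.2, 1.5, 2.0, 2.4, 3.1]))   # False
```

If the function returns `False` but 408/500001 still appears, the trunk is not slow, it is failing, and the timer is the wrong knob.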
Firewall, NAT, UDP, and VPN Behavior
- Keep external DNS resolution working for Teams endpoints.
- Do not let the firewall rewrite mapped UDP NAT addresses or ports.
- Check NAT pool size so port exhaustion does not appear under load.
- Use split-tunnel VPN where possible for real-time traffic.
- Make sure Teams media connectivity rules are actually present, not assumed.
Microsoft’s network guidance calls out DNS reachability, UDP session persistence, NAT pool sizing, and split-tunnel VPN design as real-time media requirements. This is where many “intermittent” 408 cases are born. [✅Source-10]
For Teams endpoints more broadly, Microsoft publishes media and service connectivity ranges such as UDP 3478–3481 and TCP/UDP 443 for the listed Teams categories. If the network team says “Teams is already allowed,” verify against the current published ranges instead of relying on memory. [✅Source-11]
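That verification can be done against an exported rule set instead of memory. A minimal sketch: the `(protocol, low_port, high_port)` rule representation is invented, while the required ranges, UDP 3478–3481 and TCP/UDP 443, come from the text above.

```python
# Required Teams media/service ranges quoted above; the rule format is invented.
REQUIRED = [("udp", 3478, 3481), ("tcp", 443, 443), ("udp", 443, 443)]

def missing_rules(allowed):
    """Return required (proto, lo, hi) ranges not fully covered by any allow rule.

    allowed: iterable of (proto, lo, hi) tuples exported from the firewall.
    """
    gaps = []
    for proto, lo, hi in REQUIRED:
        if not any(p == proto and a <= lo and hi <= b for p, a, b in allowed):
            gaps.append((proto, lo, hi))
    return gaps

# A rule set that "already allows Teams" -- except for UDP 443.
exported = [("udp", 3478, 3481), ("tcp", 443, 443)]
print(missing_rules(exported))  # [('udp', 443, 443)]
```

Always diff against Microsoft's currently published ranges when building the `REQUIRED` list; the endpoint set changes over time.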
Use Admin Data to Separate Noise From the Root Cause
Once the immediate call trace is open, switch to pattern analysis. One failed call can mislead you. Ten failed calls on the same subnet, trunk, or route do not.
| Metric | What It Tells You | Useful Starting Target |
|---|---|---|
| Setup Failure Rate | Media path could not be established at call start. | 1.0% overall, 0.5% internal |
| Drop Failure Rate | A successfully established stream terminated unexpectedly. | 3.0% overall, 2.0% internal |
| Poor Stream Rate | Audio quality degraded during the session. | 3.0% overall, 2.0% internal |
| Rate My Call | User feedback that can expose trouble missed by pure signaling data. | Teams prompts after 1 in 10 calls |
Microsoft defines Setup Failure Rate and Drop Failure Rate clearly in CQD and even publishes starting targets. That makes CQD useful for more than trend watching; it gives you a measurable way to prove whether a 408 case belongs to a wider service-health problem or a narrow route issue. [✅Source-12]
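Those starting targets turn CQD numbers into a simple pass/fail check. A short sketch: the thresholds are copied from the table above, while the metric keys and function name are my own.

```python
# Starting targets from the table above, as (overall, internal) maxima in percent.
TARGETS = {
    "setup_failure_rate": (1.0, 0.5),
    "drop_failure_rate": (3.0, 2.0),
    "poor_stream_rate": (3.0, 2.0),
}

def breaches(measured, internal=False):
    """Return the metrics whose measured percentage exceeds its starting target.

    measured: dict of metric name -> percentage; key names are illustrative.
    """
    idx = 1 if internal else 0
    return {m: v for m, v in measured.items() if v > TARGETS[m][idx]}

week = {"setup_failure_rate": 2.4, "drop_failure_rate": 1.1, "poor_stream_rate": 3.5}
print(breaches(week))  # setup and poor-stream rates both exceed overall targets
```

A breached setup-failure target alongside a clean drop-failure rate, for example, points at call establishment, which is exactly where a 408 lives.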
Call Analytics is the per-user microscope. Teams admins can review user calls from the last 30 days, then open session details and inspect networking, devices, and quality markers. Red indicators usually point to the main trouble area. For live meeting troubleshooting, Microsoft keeps Real-Time Analytics data available for 72 hours after the meeting ends. [✅Source-13]
Health Dashboard for Direct Routing helps when the failure smells like route or trunk health rather than one user’s device. Microsoft defines the monitored network parameters there as jitter, packet loss, and latency, measured between the Direct Routing interface and the SBC. [✅Source-14]
A useful pattern to remember: if 408 appears mostly on one destination range, one country route, one carrier handoff, or one SBC, the fault is rarely random. Clustering is a signal. Follow it.
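Clustering can be detected mechanically: group the failures by a candidate key such as destination prefix, trunk, or SBC, and check whether one bucket dominates. A minimal sketch; the record field names and the 50% dominance threshold are invented for illustration.

```python
from collections import Counter

def dominant_cluster(failed_calls, key, threshold=0.5):
    """Return (value, share) if one value of `key` owns more than `threshold`
    of the failures, else None. Record field names are illustrative."""
    counts = Counter(call[key] for call in failed_calls)
    if not counts:
        return None
    value, n = counts.most_common(1)[0]
    share = n / sum(counts.values())
    return (value, share) if share > threshold else None

failures = [
    {"dest_prefix": "+4420", "trunk": "sbc-eu-1"},
    {"dest_prefix": "+4420", "trunk": "sbc-eu-1"},
    {"dest_prefix": "+4420", "trunk": "sbc-eu-2"},
    {"dest_prefix": "+1212", "trunk": "sbc-us-1"},
]
print(dominant_cluster(failures, "dest_prefix"))  # ('+4420', 0.75): follow this route
```

Running the same check across several keys (destination, trunk, site) quickly shows which dimension the fault actually clusters on.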
Common Mistakes That Keep 408 Coming Back
- Changing the desktop client first when the response code clearly points to signaling or SBC timing.
- Increasing failover timers too early before proving that the path is simply slow rather than broken.
- Testing only one failure instead of comparing a failed call with a successful call from the same route.
- Using raw IP addresses where Microsoft expects FQDN-based routing and SIP options behavior.
- Assuming firewall allow rules are current without checking Microsoft’s latest published ranges.
- Ignoring VPN and NAT design even though real-time media reacts badly to unstable UDP handling.
- Looking only at one user when CQD or Health Dashboard can show a route-wide pattern.
FAQ
Is Microsoft Teams Phone Error Code 408 always a client internet problem?
No. In Teams Phone, a 408 can point to client establishment timeout, SBC acknowledgment failure, gateway timer expiry, or a downstream user-no-response indication. The response code paired with 408 tells you where to start.
What does 1106 with 408 usually point to?
It usually means the callee answered, but the SBC did not acknowledge Microsoft’s SIP 200 OK in time. Look at SBC signaling, SIP handling, TCP/TLS stability, and any path issue that can interrupt the acknowledgment.
Should I raise FailoverTimeSeconds on every trunk?
No. Raise it only after traces show that setup is genuinely slow and not broken. If the real cause is FQDN mismatch, TLS trouble, firewall filtering, or SIP options failure, a longer timer only hides the fault for a while.
Where can I inspect the failed Teams Phone call in the admin tools?
Use Teams admin center, generate the PSTN Usage report, open the Direct Routing tab, and review SIP call flow for the failed call. Then move to CQD, Call Analytics, and Health Dashboard if the pattern affects more than one user or route.
What should I collect before escalating to the SBC vendor or Microsoft?
Collect the exact 408 pair code, time of failure, called and calling numbers, affected route or region, one failed and one successful SIP call flow, SBC logs, any recent certificate or firewall change, and whether the issue clusters by destination or trunk.