Your multi-agent system is humming along in production when suddenly one of your third-party LLM providers starts returning garbled partial outputs. Within seconds, an orchestrator agent retries the call, a downstream summarization agent stalls waiting for a response, a vector search step times out, and your entire pipeline grinds to a halt.