Summary
On March 2, 2026, Dixa experienced platform-wide degraded performance lasting approximately 3 hours and 45 minutes. No data was lost, and there were no security issues at any point.
Impact
Customers experienced slow or failed conversation loading, timeouts when sending email, and failures in conversation transfers, assignments, and flow processing.
Root Cause
The incident was caused by a significant and atypical spike in inbound conversations that fell well outside expected operational parameters. This unexpected volume put pressure on a central database component, which began throttling requests. The throttling then cascaded to other parts of the system, resulting in platform-wide slowness and temporary inaccessibility.
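To illustrate the cascade mechanism described above, here is a minimal, hypothetical simulation (not Dixa's actual architecture): a small bounded connection pool sits in front of a throttled, slow-to-respond database. Once every connection is held by a slow query, new requests fail with acquire timeouts even though the application code itself is healthy, which is how database throttling can surface as platform-wide errors.

```python
import queue
import threading
import time

class ConnectionPool:
    """Toy bounded connection pool; callers block until a connection frees up."""
    def __init__(self, size, acquire_timeout):
        self._conns = queue.Queue()
        for i in range(size):
            self._conns.put(f"conn-{i}")
        self._timeout = acquire_timeout

    def acquire(self):
        try:
            return self._conns.get(timeout=self._timeout)
        except queue.Empty:
            raise TimeoutError("pool saturated: no connection available")

    def release(self, conn):
        self._conns.put(conn)

def handle_request(pool, db_latency, results):
    try:
        conn = pool.acquire()
        try:
            time.sleep(db_latency)  # throttled database responds slowly
            results.append("ok")
        finally:
            pool.release(conn)
    except TimeoutError:
        results.append("timeout")

# Hypothetical numbers: 2 connections, 0.1 s acquire timeout,
# and a database throttled to 0.5 s per query.
pool = ConnectionPool(size=2, acquire_timeout=0.1)
results = []
threads = [threading.Thread(target=handle_request, args=(pool, 0.5, results))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# The two in-flight requests succeed slowly; the rest time out waiting
# for a connection, so the failure spreads beyond the database itself.
print(f"{results.count('ok')} ok, {results.count('timeout')} timeout")
```

Running this, the first two requests grab the only connections and the remaining eight time out, mirroring how pressure on one shared component can fan out as errors across otherwise-healthy services.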
Timeline (CET)
Mar 1, 02:14 Significant and atypical spike in inbound conversations begins
Mar 2, ~17:45 Database throttling begins; connection pools saturate
Mar 2, 19:08 Incident declared; investigation begins
Mar 2, 20:06 Mitigation measures applied; investigation ongoing
Mar 2, 20:18 Root cause identified
Mar 2, 21:09 Fix deployed; error rates drop
Mar 2, 21:17 Resolution confirmed
Mar 2, 21:21 Incident resolved
Resolution
Once we identified and addressed the source of the abnormal conversation volume, error rates dropped immediately and the platform recovered.
What We're Doing to Prevent Recurrence
We have identified several systemic improvements to better absorb abnormal traffic spikes, and we are actively working on them.
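One common safeguard against this class of failure is admission control at the ingress: shedding traffic that exceeds a sustained rate before it can saturate shared components such as the database. The sketch below shows a generic token-bucket limiter; it is an illustrative pattern, not a description of Dixa's actual changes, and all rates shown are hypothetical.

```python
import time

class TokenBucket:
    """Token-bucket admission control: allow bursts up to `burst`,
    then shed load that exceeds `rate_per_sec` sustained."""
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# Hypothetical limits: 100 requests/s sustained, bursts of 5.
bucket = TokenBucket(rate_per_sec=100, burst=5)
# A sudden spike of 50 requests arriving at once: only the burst
# allowance is admitted; the rest are shed instead of queued.
admitted = sum(bucket.allow() for _ in range(50))
print(admitted)
```

Shedding excess load early keeps downstream components within their operating limits, so a traffic anomaly degrades only the excess requests rather than the whole platform.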
Closing Note
We sincerely apologize for the disruption this caused. We take platform reliability seriously and are committed to the systemic improvements outlined above. If you have any questions, please reach out to friends@dixa.com.