NTA - Notice history

All systems operational

Inbound Calls - Operational

98% uptime
Dec 2025 · 98.61% · Jan 2026 · 95.59% · Feb 2026 · 99.82%

Outbound Calls - Operational

98% uptime
Dec 2025 · 98.61% · Jan 2026 · 95.59% · Feb 2026 · 99.82%

Web Interface - Operational

98% uptime
Dec 2025 · 98.61% · Jan 2026 · 95.59% · Feb 2026 · 99.82%

Busy Lamps - Operational

97% uptime
Dec 2025 · 98.61% · Jan 2026 · 95.59% · Feb 2026 · 97.39%

API - Operational

98% uptime
Dec 2025 · 98.61% · Jan 2026 · 95.59% · Feb 2026 · 99.82%

No Service Impact Expected - Operational

98% uptime
Dec 2025 · 98.61% · Jan 2026 · 95.59% · Feb 2026 · 99.82%

Notice history

Feb 2026

Network Issues
  • Postmortem

    At 15:42, our upstream connectivity supplier experienced a service outage, causing interface instability (“flapping”) and subsequent service disruption within our network environment.

    At 15:50, investigations confirmed that the primary customer impact was related to call completion. The instability affected SIP signalling, causing a core service to become unresponsive and preventing calls from completing successfully. It also prevented access to disable the affected routes and allow disaster recovery to take over.

    At 16:21, NTA engineers managed to disable the affected interfaces to stabilise the platform and initiate established disaster recovery procedures.

    At 16:55, service stability was progressively restored following supplier recovery actions, and all services were fully operational and performing within normal parameters.

    NTA has already scheduled a programme of infrastructure improvements for April, including the migration from the current connectivity supplier to an alternative provider that will deliver enhanced redundancy and improved service resilience. This programme also includes increased rack capacity, deployment of new server infrastructure, and a major platform upgrade to support modern technologies, including AI capabilities and future business growth.

  • Resolved

    This incident has been resolved. If your handset is still offline, please power cycle it. A reason for the outage will be issued after a thorough investigation into the cause has been conducted. Sorry for any inconvenience this may have caused you.

  • Update

    Whilst services are fully restored, we are waiting for an update from our suppliers as to the reason for the outage. Once we have that, a reason for the outage will be issued.

  • Monitoring

    We now have a restricted service back up and running, but services such as BLF will not be fully operational for a while as we concentrate on getting things back to normal.

  • Update

    Whilst we do have failover mechanisms in place, these only apply when the primary connection is hard down.
    As it is currently flapping, the connection is not hard down, which prevents the system from moving into failover.
    Unfortunately, this has put the system in a state where we will have to manually disconnect the link in order to utilise the other links. Our engineers are on site at the datacentre and working on the servers.

  • Update

    Engineers have been dispatched to one of the datacentres to work on the issue.

  • Identified
    We are continuing to work on a fix for this incident.
  • Investigating

    We are currently investigating this incident at our datacentres, which is affecting our network and the ability to make and receive calls.

Jan 2026

No notices reported this month

Dec 2025

Outage
  • Postmortem

    RFO – Wednesday 31st December

    At 13:39, our primary service provider experienced extremely high latency on the connections supplying our network. While this caused service degradation, the connections did not go fully offline, which prevented automatic disaster recovery from initiating.

    At 13:51, we manually shut down the affected interfaces, allowing disaster recovery procedures to begin. Due to the nature of the fault, services on the impacted connections then had to be manually terminated in order to fully restore service.

    At 14:21, we identified that the primary impact was on call completion. One service was becoming stuck as SIP signalling was adversely affected by the high latency, preventing calls from completing successfully.

    By 14:35, all services were fully restored and operating normally.

  • Update
    This incident has been resolved.
  • Resolved

    This incident has been resolved. The system is now up and working. A reason for the outage will be issued after a thorough investigation.

  • Monitoring
    We implemented a fix and are currently monitoring the result.
  • Identified

    While the service is now working, we are continuing to work on a fix for this incident.

  • Update
    We are currently investigating this incident.
  • Investigating
    We are currently investigating this incident.
  • Identified
    We are continuing to work on a fix for this incident.
  • Investigating

    We have become aware of an outage on one of the upstream providers. We are currently investigating this incident.
