EGroupware Cloud Status
Incident reports
EGroupware Cloud, Mail & Rocket.Chat: operational
Past Incidents
EGroupware mail services: Wednesday 27.11.2024 15:00 (CET):
IONOS reports network issues in the Frankfurt datacenter. This leads to connection errors in Mail while reading or sending emails. IONOS is still working on a solution and we are monitoring our services closely. In general there is not much load on the mail servers; the problem lies in the network connection in between, which matches what IONOS reported.
IONOS reports that the network issue was resolved at around 5:30 p.m. Shortly before that, however, the network problem had also affected the connection to the NFS servers. A first restart of the nodes only helped temporarily; afterwards the load went up again. It was therefore necessary to rebuild all nodes so that they could reconnect to the NFS server.
EGroupware cloud services: Friday 23.08.2024 12:55 – 14:15 (CEST):
Problems with the NFS server / network connection: the NFS server was no longer accessible from EGroupware, which caused high load on the database cluster. NFS was moved to another pod and the database nodes were restarted. The root cause is not yet clear.
14:00 CEST: The same problem occurred again; we are restarting another node and moving NFS there. The node NFS had previously been moved to had not been restarted beforehand.
All systems are up and running.
EGroupware maintenance window: Saturday night 03.08.2024 22:00 – 01:00 (CEST)
Update of the Kubernetes version with multiple restarts of all systems; during this window, downtimes of up to 5 minutes are to be expected.
EGroupware cloud services: Friday 02.08.2024 10:30 – 11:30 (CEST):
EGroupware cluster failure – probably a network problem with IONOS. The Kubernetes nodes can no longer reach each other, which has an impact on the databases and their availability.
After analysing the problem, the nodes were rebuilt and the services are up and running again.
EGroupware Mail services: 01.07.2024 12:00 (CEST)
Problem with the incoming mail queues, which hold the mails and do not deliver them into the users' inboxes.
The examination revealed that a “historical (much too small) value” was used for the number of connections that can be active at the same time. This value has now been increased and the queues are working normally again.
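For illustration only – the report does not name the mail software or the exact parameter involved, so the following Postfix settings are merely a hypothetical sketch of such a delivery-concurrency limit:
# global default for parallel deliveries per destination (Postfix ships with 20)
default_destination_concurrency_limit = 20
# hypothetical higher limit for the LMTP transport that delivers mails into the inboxes
lmtp_destination_concurrency_limit = 50
Raising such a limit allows more queued mails to be delivered at the same time.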
Maintenance work EGroupware Cloud & Mail services: 03.02.2024 22:00 – 24:00 (CET)
Changes to basic configuration parameters and a restart of the systems were completed successfully. The services remained available during that time.
EGroupware cloud services: Tuesday 16.01.2024 16:20 – 17:10 (CET):
EGroupware cluster failure – there was again a problem with the storage system at IONOS. EGroupware Mail was available throughout; EGroupware Cloud & Rocket.Chat have been available again since 17:10. We are still clarifying the exact cause of the outage with IONOS and will inform you by email.
EGroupware cloud services: Tuesday 09.01.2024 13:20 – 13.40 (CET):
EGroupware cluster failure due to a problem on the NFS server (Storage from IONOS). After restarting the systems, the services are all available again.
EGroupware Cloud services: Wednesday 04.10.2023 10:00 – 10:50 (CEST):
EGroupware cluster hangs due to an issue with the NFS server. We are restarting the system. This will probably take 30-45min. We regret the outage and will report as soon as it is fixed.
Services are up and running again. If you still get errors, please clear your browser cache, as the browser may have cached that the instance was unavailable for a certain amount of time.
EGroupware Cloud services: Thursday 28.09.2023 19:05 – 18:50 (CEST):
One of the Kubernetes nodes crashed and was restarted. It took around 45 minutes in total until everything was up and running on that node again. EGroupware instances on that node may have experienced problems during that time, but all data remained fully safe. Unfortunately, syncing all services and resources takes some time, especially when there is still load on the system during working hours.
We regret and apologize for the disturbance.
EGroupware Cloud & Mail services: Wednesday 27.09.2023 13:59 – 14.03 (CEST):
IONOS had a brief network disruption in the Frankfurt data centers. The Kubernetes control plane and all services were affected by this incident.
The connection was restored after a short time.
EGroupware Cloud & Mail services: Saturday 19.08.2023 08:00 – 08:35 (CEST):
After an incident at IONOS on Friday afternoon, in which a storage system failed, all systems kept running, although two important components were affected by the failure:
the primary mail gateway in Frankfurt and the pfSense (firewall) in Frankfurt. The redundancy took effect and all data and services ran via the secondary mail gateway in Karlsruhe and the pfSense there.
Around midnight, the tunnel between the two locations went down; the cause is still under investigation. As a result, no new emails had been delivered since midnight, including the alert emails. The EGroupware Cloud and Mail services kept running for the time being.
Around 8 o’clock, while trying to switch off the hanging pfSense in Frankfurt, the connection broke down completely and the EGroupware Mail & Cloud services became unavailable.
As we cannot access the console of the machines ourselves, IONOS stopped the pfSense and restarted it successfully. This means that all EGroupware & Mail services are available again, and the emails outstanding since midnight have been delivered.
The primary mail gateway is not yet up and running again, and we are checking why new emails are currently delayed. The primary mail gateway will also be restarted in consultation with IONOS; we do not expect any impact or outages.
EGroupware Cloud & Mail services: Wednesday 24.05.2023 0:30 – 02:00 (CEST):
An update of the firewall systems in Frankfurt caused unexpected downtime of the services. The system is rebooting and being restarted. All services have been up and running again since 01:10. Further update steps may cause some short outages when the systems are restarted.
EGroupware Cloud: Thursday 11.05.2023 9:20 – 10:15 (CEST):
The load balancer (Route53) detects disconnections in Frankfurt and then temporarily switches all instances to Karlsruhe. This requires a new login to the system. Currently the connection seems to be OK again. In some cases there is also packet loss on the connection, which leads to slower connections and longer loading times. We are in touch with IONOS and waiting for feedback. If the situation does not improve, we will switch the systems to Karlsruhe permanently for the rest of the day. We will post updates here.
EGroupware Cloud: DNS problem with our DNS provider Core-Networks
26 April 2023 13:00 – 15:00 (CEST):
All services are running, but the DNS provider is failing, which results in the browser error “ERR_NAME_NOT_RESOLVED”. We are checking what options there are to make the instances accessible for customers again.
As a workaround, you can enter the IP address 157.97.107.123 for your EGroupware instance in your router or local DNS server (see the example below).
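For illustration only – the hostname below is a placeholder, not a real instance address: on a single workstation the same workaround can be applied with an entry in the local hosts file (/etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows):
157.97.107.123   my-instance.example.org
Please remove the entry again once normal DNS resolution works, otherwise later IP changes will not be picked up.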
DNS services are back, and EGroupware and Mail are working again without any workarounds.
EGroupware Cloud & Mail in Frankfurt and Karlsruhe maintenance work
2023 Jan 27 22:00 – 02:00 (CET):
Connection problems to the pfSense in FRA forced a restart of the systems. EGroupware and Mail availability was disrupted for several minutes. The backend in Frankfurt became available again right away, and therefore everything was switched to FRA. The backend in Karlsruhe was restarted.
High load on the pfSense in FRA again, therefore another reboot. Due to a crash during the boot process, the pfSense had to be restored from a backup. All systems needed a complete reboot, including synchronization of the Galera cluster. EGroupware and Mail services were unavailable for about 2 hours. All systems are running correctly again in both availability zones.
EGroupware Cloud & Mail problem caused by IONOS provisioning failure
2023 January 5th 13:30 – 14:00 (CET):
EGroupware and Mail problems for some minutes. The FRA backend is up again and everything has been switched to FRA. The KA backend needs a restart, which is happening now. After that, everything will be switched back to use both availability zones.
EGroupware Cloud in Frankfurt and Karlsruhe maintenance work:
2022 October 12th 22:00 – 13th 0:50 (CEST):
Maintenance work finished successfully; Cloud and Mail services were temporarily unavailable during this window.
EGroupware Cloud in Frankfurt and Karlsruhe affected:
2022 October 12th 14:38 (CEST):
Our infrastructure provider IONOS had an incident / outage (see also https://status.ionos.cloud):
We are currently investigating an issue with the IONOS Cloud Kubernetes API.
EGroupware wasn’t affected in the first phase.
2022 October 12th 15:20 (CEST):
The EGroupware infrastructure is now affected, and we are trying to get more details from IONOS on when the issue will be resolved.
2022 October 12th 15:30 (CEST):
Nodes in KA are back, but FRA still has problems, and after 10 minutes KA became unavailable again.
We are still working on the situation and apologise for the outage.
2022 October 12th 16:30 (CEST):
According to IONOS, they have fixed something in the provisioning, but we still don’t have access via the Kubernetes API. We are still working on the issue.
2022 October 12th 17:00 (CEST):
Currently all EGroupware services are back up, but we still can’t use the Kubernetes API. Therefore, we can only wait and see if it is completely fixed.
We will inform further as soon as there is additional information.
EGroupware Cloud in Frankfurt and Karlsruhe affected:
2022 September 27th 14:10 – 14:19 and 16:12 – 16:15 (CEST):
Our infrastructure provider IONOS had an incident / outage for some minutes. This also affected our infrastructure, and the services were offline for about 10 minutes. According to the incident report from IONOS, this problem will be fixed with the next Kubernetes stability upgrade on October 4th.
We apologize for the disturbance during working hours!
EGroupware Cloud in Frankfurt – one DB node affected: 2022 September 9th 08:00 – 08:30 (CEST):
Our infrastructure provider IONOS had an outage on a host in Frankfurt that affected a database node. Due to a change in PHP 8.1, the automatic switch to another DB node did not take place. As a result, instances on that database node were temporarily unavailable. The error was fixed before the primary node was restored, so this should not happen again.
We apologize for the outage.
EGroupware Rocket.Chat temporarily unavailable, maintenance work on the EGroupware cluster: 2022 May 26th 10:30 – 12:30 (CEST):
After a scheduled Kubernetes update in the night from May 25 to May 26, Rocket.Chat no longer starts.
Therefore, further maintenance work has to be carried out today during the day, which may result in short outages of the EGroupware Cloud. Any outage will last a few minutes at most. EGroupware Mail is not affected and will be available the whole time. We regret the inconvenience without prior notice, but the maintenance work is necessary now, and it is a public holiday, so outside the core working hours of our customers.
==> Maintenance successfully finished, no failure of EGroupware Cloud, Rocket.Chat operational again
EGroupware Mail & Cloud services temporarily unavailable: 2022 May 5th 16:10 – 16:25 (CEST):
Network issues at IONOS in KA and FRA.
After reporting the issue to IONOS, the network was restored and all services are available again.
EGroupware Mail & Cloud services are up and running: 2022 April 15th 17:00 (CEST):
EGroupware services are now also up and running, but not all Kubernetes & database nodes are back yet. We’re still working on it, but we don’t expect any more service downtime.
EGroupware Mail service up and running, while EGroupware is still affected: 2022 April 15th 16:00 (CEST):
Power supply in the datacenter is back, IONOS is still working on recovering all services.
EGroupware Mail is available; EGroupware itself is still down, as the underlying filesystem is not yet available. We are still working on the issue.
EGroupware Cloud services FRA & KA: 2022 April 15th 15:00 (CEST):
Power failure in the data center in Frankfurt – see also https://status.ionos.cloud/
Apparently Karlsruhe is also unreachable at the moment; more detailed information is not yet available. We will post an update as soon as there is news.
EGroupware Cloud services FRA & KA: 2021 October 28th 7:30 (CEST):
EGroupware Cloud services are available again; Mail & Rocket.Chat have been working the whole time.
EGroupware Cloud services FRA & KA: 2021 October 28th 7:15 (CEST):
Renewed outage of the EGroupware Cloud services in the early morning. Problem in the database cluster: no more write accesses could be executed. The databases have already been stopped, and the second database node is syncing. The sync should be complete around 07:30.
The monitoring measures taken since the last incident have worked: we were alerted this morning at 6:17 and could therefore initiate the restart earlier. The investigation into why the problem appeared again this morning still has to take place.
EGroupware Cloud services FRA & KA: 2021 October 20th 9:00 (CEST):
EGroupware Cloud, Rocket.Chat and Mail services are available again. You may experience slower services at the moment. The remaining database nodes will be synchronized after working hours today.
EGroupware Cloud services FRA & KA: 2021 October 20th 8:30 (CEST):
Two database nodes are available and the third is syncing. EGroupware Mail and Rocket.Chat are online again. EGroupware Cloud will take about 15 more minutes to be up.
EGroupware Cloud services FRA & KA: 2021 October 20th:
Outage of the EGroupware Cloud services in the early morning hours. Problem in the database cluster: no more write accesses could be executed. The databases are being stopped and then restarted. The second and third database nodes need to rejoin the first one, which can take up to 30 minutes each.
EGroupware Cloud and mail services are expected to be available again at 9am (CEST)
EGroupware Cloud services FRA & KA: 2021 September 29th 11:30 (CEST):
EGroupware cluster: another failure of a database node. We need to shut down systems temporarily to get back to normal operation with at least 3 database nodes. Services have been fully available again since 12:10 (CEST).
EGroupware Cloud services FRA & KA: 2021 September 29th 08:30 (CEST):
EGroupware Cloud is up and running on two database cluster nodes; the rest will be synced during the evening.
EGroupware Cloud services FRA & KA: 2021 September 29th:
Outage of the EGroupware Cloud services during the night. Problem in the database cluster: no more write accesses could be executed. The databases are being stopped and then restarted. The second database node needs to rejoin the first one, which can take up to 20 minutes. EGroupware Cloud will therefore be offline until 08:00 (CEST).
The outage may also affect mail services.
EGroupware Email services: 2021 July 6th:
A scrub (filesystem check) is running to check everything in detail. This will take a few days in any case. Until it is finished, we have moved half of the instances to Karlsruhe so as not to slow down the filesystem check.
Affected mailboxes have been restored from KA and are working properly again in FRA.
EGroupware Email services: 2021 July 5th 10:30 (CEST)
The storage system in the Frankfurt datacenter shows checksum errors and mailboxes are not available.
- Opened a ticket with the datacenter provider IONOS and are waiting for their response
- Switched the mail backend in Frankfurt off temporarily, so the redundancy could take over; all Mail services are now running in the Karlsruhe datacenter.
Your data is safe, but performance will be a bit slower until Frankfurt can be switched on again.
We will inform here as soon as we have any news on that topic.
EGroupware Cloud maintenance window: 2021 June 2nd from 8:00 to 9:30 pm (CEST)
Our guess regarding yesterday’s problem is that a “broken request” from a client on a single domain then causes “Traefik” to respond with a “500 Internal Server Error” to more clients than just that one for some time.
We will re-enable “Traefik” tonight and try to find out which request, domain and IP is causing the problem.
The problem has most likely been identified and everything has been reset to normal operation.
Internal Server Error in Frankfurt: 01.06.2021 21:00 – 23:59 (CEST)
There was a problem in the EGroupware Cloud availability zone in Frankfurt from around 9 pm, so that “500 Internal Server Error” responses occurred there again and again. The availability zone in Karlsruhe was not affected by the problem, or only for a short time, after we had switched everything to Karlsruhe as a workaround. Further investigation suggests that there is NO direct connection to the update to 21.1, but rather a problem with “Traefik” as proxy / Kubernetes ingress controller, which only comes into play under very specific conditions.
As a first step, we updated the version of “Traefik”, which reduced the problem but did not eliminate it. A search in the Traefik GitHub forums turned up a similar error description in a post there. In order to provide a usable EGroupware Cloud today, we removed “Traefik” and are talking directly to Nginx, after which there were no more “Internal Server Errors”.
Failure of all EGroupware and Mail services: 06.04.2021: 17:45 – 19:20 (CEST)
A network problem at IONOS caused the outage of the EGroupware and Mail services.
Colleagues are working as quickly as possible to clean up and restore connections.
06.04.2021 18:30: The IONOS network is back up, but it will take some time until EGroupware and Mail are available again.
06.04.2021 19:20: The nodes in Karlsruhe and then in Frankfurt are available again, so all EGroupware and Mail services are running.
Service failure EGroupware node Karlsruhe & Frankfurt 24.08.2020 15:40 (CEST)
We are in the process of determining where the problem lies.
Currently, both nodes seem to be affected.
Initial analysis shows a connection problem on the load balancers, so there is no connection from outside.
18:00 (CEST): All systems (including the database cluster) were shut down.
The first database node was successfully restarted.
Currently the second database node is starting and synchronizing with the first.
As soon as this is completed, we will also restart the remaining systems.
18:30 (CEST): EGroupware and Mail services are up again.