Signal messenger bad gateway 502

We are attempting to set up a NiFi (v1.5.0) cluster on Amazon ECS, with a load balancer and a DNS entry pointing to the UIs on every node. This works from a UI perspective, and we are able to get site-to-site working between cluster nodes using a Remote Process Group.

However, when the cluster is coming up fresh, we run into this issue: when we start NiFi, it attempts to refresh the status of the RPG, which we reach via the load balancer. This fails with a 502 error, because the UI is not yet up behind the load balancer. Instead of continuing to bring the UI up, the service then receives a SHUTDOWN signal from bootstrap and restarts, which causes it to loop through its startup process indefinitely. However, sometimes, eventually, the UI does magically start.

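For context, the RPG URL points at the load balancer's DNS name rather than at any individual node, so during a cold start every status refresh has to go through the LB. Below is a minimal sketch of the kind of probe that shows what the RPG sees until some node's UI is registered as healthy. The hostname is a placeholder and this script is only an illustration, not part of our deployment:

```python
import time
import urllib.error
import urllib.request

# Placeholder URL: the DNS name in front of the cluster plus the NiFi UI path.
NIFI_UI_URL = "https://nifi.example.com/nifi"

def wait_for_ui(url: str, timeout_s: int = 600, interval_s: int = 10) -> bool:
    """Poll the UI behind the load balancer until it answers with HTTP 200.

    While no node has its UI up, the load balancer has no healthy targets
    and answers 502 Bad Gateway, which is what the RPG status refresh
    runs into during a cold start.
    """
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    print("UI is up behind the load balancer")
                    return True
        except urllib.error.HTTPError as exc:
            # Typically 502 until at least one node's UI is healthy.
            print(f"Load balancer returned {exc.code}, UI not ready yet")
        except urllib.error.URLError as exc:
            print(f"Connection problem: {exc.reason}")
        time.sleep(interval_s)
    return False

if __name__ == "__main__":
    wait_for_ui(NIFI_UI_URL)
```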
We believe there is a race condition between the thread that starts the UI and the thread that starts the site-to-site components. When the UI thread finishes first, NiFi goes live; otherwise the restart loop continues, and it can go on anywhere from 10 minutes to several hours.

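To make the suspected race concrete, here is a toy model of the two startup paths. This is only an illustration of the hypothesis, not NiFi's actual startup code, and every name in it is made up:

```python
import threading
import time

ui_up = threading.Event()

def start_ui(startup_seconds: float) -> None:
    """Stand-in for the thread that brings up the NiFi UI."""
    time.sleep(startup_seconds)
    ui_up.set()

def refresh_remote_status(check_after_seconds: float) -> str:
    """Stand-in for the thread that refreshes the Remote Process Group status.

    The refresh goes through the load balancer, so it can only succeed once
    some node's UI is up; otherwise it sees a 502, and in our cluster the
    node then gets a SHUTDOWN from bootstrap and restarts.
    """
    time.sleep(check_after_seconds)
    if ui_up.is_set():
        return "RPG status refresh OK, node goes live"
    return "502 from load balancer -> bootstrap SHUTDOWN -> restart loop"

if __name__ == "__main__":
    # Whichever thread "wins" decides whether the node comes up cleanly.
    ui_thread = threading.Thread(target=start_ui, args=(2.0,))
    ui_thread.start()
    print(refresh_remote_status(check_after_seconds=1.0))  # loses the race -> 502
    ui_thread.join()
    print(refresh_remote_status(check_after_seconds=0.0))  # UI already up -> OK
```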
Here is a stack trace from the nifi-app.log file during a failed start. As you can see, it throws the 502 error and then begins restarting the whole service, without any indication of exactly why this is happening.

My question is: is this a bug or intended behavior? Are there any known workarounds? I have been unable to find documentation or forum posts on this issue.