Backend won't start: 503

I am trying out browsertrix-cloud, following the local deployment guide (using k3s). Everything seems to work except browsertrix-backend.

I set it up as recommended in the docs:

helm upgrade --install btrix \
https://github.com/webrecorder/browsertrix-cloud/releases/download/v1.8.0/browsertrix-cloud-v1.8.0.tgz

And then I check the deployment:
kubectl wait --for=condition=ready pod --all --timeout=300s

But it times out because browsertrix-backend is not running.

So I checked the pod with kubectl logs and got this:

Defaulted container "api" out of: api, op
INFO:     Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
INFO:     Started parent process [7]
INFO:     Started server process [10]
INFO:     Waiting for application startup.
INFO:     Started server process [9]
INFO:     Waiting for application startup.
Waiting DB
Waiting DB
INFO:     Application startup complete.
INFO:     Application startup complete.
INFO:     10.42.0.1:33168 - "GET /healthz HTTP/1.1" 503 Service Unavailable
INFO:     10.42.0.1:33184 - "GET /healthz HTTP/1.1" 503 Service Unavailable

(and many more lines with the same error.)

Everything else works, including the frontend, which displays a “Please wait while Browsertrix Cloud is initializing” message.

Any idea what the problem could be?

I figured it out: the local-mongo service wasn’t getting an IP address. I edited mongo.yaml to comment out “clusterIP: None” and now it works. Must be a problem with my setup (k3s).
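
For anyone curious, the edit amounted to something like this in the Service spec of mongo.yaml (a rough sketch; the name matches the service above, but the selector and port layout are assumptions about how the chart template is written):

apiVersion: v1
kind: Service
metadata:
  name: local-mongo
spec:
  # clusterIP: None    # commented out so the Service gets a regular cluster IP
  selector:
    app: local-mongo   # assumed label; keep whatever selector the chart already uses
  ports:
    - port: 27017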


Where is this mongo.yaml file located?

Hello everyone, I actually have the same problem. I want to install Browsertrix, and the backend won’t start, for the same reason I guess…

My machine: Fedora 41
Procedure:

curl -sfL https://get.k3s.io | sh -
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
export KUBECONFIG=~/.kube/config
mkdir ~/.kube 2> /dev/null
sudo k3s kubectl config view --raw > "$KUBECONFIG"
chmod 600 "$KUBECONFIG"
helm repo add browsertrix https://docs.browsertrix.com/helm-repo/
helm upgrade --install btrix browsertrix/browsertrix --version v1.12.2

RESULT:
kubectl get pods
NAME                                          READY   STATUS    RESTARTS     AGE
browsertrix-cloud-backend-86b64f754c-khc6z    1/2     Running   1 (2s ago)   5m32s
browsertrix-cloud-frontend-5fbdd7f84b-758mn   1/1     Running   0            5m32s
btrix-metacontroller-helm-0                   1/1     Running   0            5m32s
local-minio-75f5875874-7k57q                  1/1     Running   0            5m32s
local-mongo-0                                 1/1     Running   0            5m32s

kubectl logs browsertrix-cloud-backend-86b64f754c-khc6z
Defaulted container “api” out of: api, op
[2025-02-27 17:56:44 +0000] [1] [INFO] Starting gunicorn 23.0.0
[2025-02-27 17:56:44 +0000] [1] [INFO] Listening at: http://0.0.0.0:8000 (1)
[2025-02-27 17:56:44 +0000] [1] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2025-02-27 17:56:44 +0000] [7] [INFO] Booting worker with pid: 7
[2025-02-27 17:56:47 +0000] [7] [INFO] Started server process [7]
[2025-02-27 17:56:47 +0000] [7] [INFO] Waiting for application startup.
Waiting DB
[2025-02-27 17:56:48 +0000] [7] [INFO] Application startup complete.

kubectl describe pod browsertrix-cloud-backend-86b64f754c-khc6z
Events:
Type Reason Age From Message
---- ------ --- ---- -------
Normal Scheduled 6m50s default-scheduler Successfully assigned default/browsertrix-cloud-backend-86b64f754c-khc6z to fedora
Normal Pulled 6m49s kubelet Successfully pulled image “dockerxxxxxxxxxxxxxxxxxx” in 1.075s (1.075s including waiting). Image size: 154888869 bytes.
Normal Created 6m49s kubelet Created container api
Normal Started 6m49s kubelet Started container api
Normal Pulling 6m49s kubelet Pulling image “dockerxxxxxxxxxxxxxxxxxxxxxx”
Normal Pulled 6m48s kubelet Successfully pulled image “dockerxxxxxxxxxxxxxxxxxxxxx” in 887ms (887ms including waiting). Image size: 154888869 bytes.
Normal Created 6m48s kubelet Created container op
Normal Started 6m48s kubelet Started container op
Warning Unhealthy 5m25s (x17 over 6m44s) kubelet Startup probe failed: HTTP probe failed with statuscode: 503
Normal Pulling 80s (x2 over 6m50s) kubelet Pulling image “dockerxxxxxxxxxxxxxxxx”

kubectl get svc
NAME                         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)             AGE
browsertrix-cloud-backend    ClusterIP   10.43.89.186   <none>        8000/TCP,8756/TCP   7m59s
browsertrix-cloud-frontend   NodePort    10.43.88.43    <none>        80:30870/TCP        7m59s
kubernetes                   ClusterIP   10.43.0.1      <none>        443/TCP             7h33m
local-minio                  ClusterIP   10.43.86.98    <none>        9000/TCP            7m59s
local-mongo                  ClusterIP   None           <none>        27017/TCP           7m59s

I’ve been stuck on this problem for a few days now…

Sorry you’re having issues - the backend will return a 503 if it can’t reach MongoDB, since it can’t really run without it. It sounds like there’s some connectivity issue reaching mongo.
Is this also on k3s? We’ll see if we can reproduce this.
If you have more details about your setup, that would also help.
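
One quick check that may help narrow it down (a sketch; the deployment and container names are taken from the output above, and it assumes python is on the PATH in the api image, which it should be since the backend is a Python app): see whether the backend pod can resolve the local-mongo service name at all:

kubectl exec deploy/browsertrix-cloud-backend -c api -- \
  python -c "import socket; print(socket.gethostbyname('local-mongo'))"

If that fails, it points at a DNS/CNI issue in the cluster rather than at the chart itself.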

I got the same problem
k3s version: v1.30.10+k3s1
helm chart version: 1.14.3

MongoDB logs (lines truncated at the terminal width):

{"t":{"$date":"2025-03-08T11:19:03.771+00:00"},"s":"I",  "c":"-",        "id":20883,   "ctx":"conn284","msg":"Interrupted operation as its client disconne
{"t":{"$date":"2025-03-08T11:19:03.771+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn287","msg":"Connection ended","attr":{"remote":"127.0.0.
{"t":{"$date":"2025-03-08T11:19:03.772+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn286","msg":"Connection ended","attr":{"remote":"127.0.0.
{"t":{"$date":"2025-03-08T11:19:03.774+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn284","msg":"Connection ended","attr":{"remote":"127.0.0.
{"t":{"$date":"2025-03-08T11:19:42.588+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.
{"t":{"$date":"2025-03-08T11:19:42.594+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn288","msg":"client metadata","attr":{"remote":"127.0.0.1
{"t":{"$date":"2025-03-08T11:19:42.614+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.
{"t":{"$date":"2025-03-08T11:19:42.614+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.
{"t":{"$date":"2025-03-08T11:19:42.616+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn289","msg":"client metadata","attr":{"remote":"127.0.0.1
{"t":{"$date":"2025-03-08T11:19:42.618+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn290","msg":"client metadata","attr":{"remote":"127.0.0.1
{"t":{"$date":"2025-03-08T11:19:42.621+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"127.
{"t":{"$date":"2025-03-08T11:19:42.628+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn291","msg":"client metadata","attr":{"remote":"127.0.0.1
{"t":{"$date":"2025-03-08T11:19:43.726+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn290","msg":"Connection ended","attr":{"remote":"127.0.0.
{"t":{"$date":"2025-03-08T11:19:43.726+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn291","msg":"Connection ended","attr":{"remote":"127.0.0.
{"t":{"$date":"2025-03-08T11:19:43.726+00:00"},"s":"I",  "c":"NETWORK",  "id":22944,   "ctx":"conn289","msg":"Connection ended","attr":{"remote":"127.0.0.

and the backend just shows:

Waiting DB
kubectl get svc -n browsertrix 
NAME                         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE
browsertrix-cloud-backend    ClusterIP   10.43.119.109   <none>        8000/TCP,8756/TCP   48m
browsertrix-cloud-frontend   ClusterIP   10.43.100.54    <none>        80/TCP              48m
cm-acme-http-solver-gmhsr    NodePort    10.43.180.44    <none>        8089:30429/TCP      31m
local-mongo                  ClusterIP   None            <none>        27017/TCP           48m

Fixed by setting:
mongo_host: local-mongo
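
i.e. a small override applied on top of the chart defaults, roughly like this (a sketch; adjust the release name and namespace to your install):

cat > btrix-overrides.yaml <<EOF
# hypothetical override file; mongo_host is the chart value discussed above
mongo_host: "local-mongo"
EOF

helm upgrade --install btrix browsertrix/browsertrix -n browsertrix -f btrix-overrides.yaml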

Hm, interesting, thanks for sharing the solution. I assume you’re still running in the default namespace; local-mongo.default should be the same as local-mongo. Perhaps we can just remove the namespace TLD in case it doesn’t resolve on some systems…


Same issue here. kubectl get svc -n browsertrix returns the same output as above with no IP being given to Mongo.

@reo Is the mongo_host setting you’re editing in the chart values? I tried

helm upgrade --set mongo_host=local-mongo --install btrix browsertrix/browsertrix

but the issue persists. I’m assuming I’m trying to configure the wrong thing. I don’t see that option in the config yamls. Would you mind pointing me in the right direction?

I’ve tried this in Docker Desktop (with Kubernetes enabled) and Ubuntu Server with MicroK8s, same outcome. This local-mongo issue keeps being a problem everywhere for me.

It looks like they did update the chart so the default is just local-mongo. However, the service still gets no IP.

All containers show as ready without errors, but if I kick off a crawl, everything falls apart like a house of cards, with the backend unable to sync the crawl status.

Sorry that you’re having issues deploying Browsertrix.

The lack of an IP for this service is by design: the mongo service is a headless service and does not get a cluster IP (that’s part of how StatefulSets work).

It looks like you’re using a custom namespace, can you try deploying with the default namespace to see if that works?

You can also look at the logs for the mongo container with kubectl logs local-mongo-0.
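
To confirm the headless service is actually backed by the mongo pod, you can also check its endpoints; a headless service publishes the pod IPs there instead of getting a cluster IP:

kubectl get endpoints local-mongo

If that lists an address with port 27017, DNS for local-mongo should resolve directly to the mongo pod.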

Quick note: I can confirm that, currently, running with the default settings in a custom namespace, e.g. -n browsertrix, won’t work unless the MinIO endpoint URL is changed, because it currently includes the default namespace:

storages:
  - name: "default"
    type: "s3"
    access_key: "ADMIN"
    secret_key: "PASSW0RD"
    bucket_name: *local_bucket_name

    endpoint_url: "http://local-minio.default:9000/"
    access_endpoint_url: "/data/"

This would need to be overridden in a custom values file, for example endpoint_url: "http://local-minio.browsertrix:9000/". Since values files can’t contain templates, there isn’t an easy way to automatically use the current namespace, unfortunately.
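
For example, a custom values file for a deployment in the browsertrix namespace could look roughly like this (a sketch; the credentials are just the defaults from above, and "btrix-data" is a placeholder for whatever local_bucket_name is set to in your values):

storages:
  - name: "default"
    type: "s3"
    access_key: "ADMIN"
    secret_key: "PASSW0RD"
    bucket_name: "btrix-data"   # placeholder; use the value of local_bucket_name
    endpoint_url: "http://local-minio.browsertrix:9000/"
    access_endpoint_url: "/data/"

passed with helm upgrade ... -f custom-values.yaml. Note that Helm replaces list values wholesale, so the whole storages entry needs to be repeated, not just endpoint_url.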

FYI: the 1.16.2 release now makes it easier to run in any namespace without modification.
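
For example, something along these lines (a sketch; check helm search repo browsertrix for the exact version string, and pick your own namespace):

helm upgrade --install btrix browsertrix/browsertrix --version 1.16.2 -n browsertrix --create-namespace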