Set Up AnythingLLM on AlmaLinux 10 with Docker
AnythingLLM is a self-hosted AI workspace that gives you a browser-based chat interface, document ingestion, multi-user support, and connectors for local or hosted language models. It is a good fit when you want a full AI application stack on your own server instead of only exposing a raw model API.
In this guide, we provision a fresh AlmaLinux 10.1 server on Shape.Host, confirm the current AnythingLLM stable release from the official project, install Docker Engine and Docker Compose from Docker’s official EL repository, deploy AnythingLLM v1.11.2 in a localhost-only container, and validate the web interface safely over an SSH tunnel.
| Application | AnythingLLM |
|---|---|
| Application version | v1.11.2 |
| Operating system | AlmaLinux 10.1 |
| Container runtime | Docker Engine 29.3.1 with Docker Compose 5.1.1 |
| Supporting tools | Git 2.47.3 and OpenSSL 3.5.1 |
| Access pattern | AnythingLLM bound to 127.0.0.1:3001 with browser access through an SSH tunnel |
| Validated on | Live Shape.Host AlmaLinux 10.1 server |
Why Use AnythingLLM on AlmaLinux 10?
- AnythingLLM gives you a polished self-hosted AI workspace without forcing you to build a frontend around a model backend.
- AlmaLinux 10.1 is a current enterprise Linux base that works well for long-lived Docker deployments.
- Docker keeps the application easy to update and isolates the runtime from the host system.
- A localhost-only bind keeps the panel off the public network until you intentionally add a reverse proxy later.
Before You Begin
Make sure you have the following before you start:
- A fresh AlmaLinux 10 server
- Root or sudo access
- An SSH key that can log in to the server
- A local workstation where you can open an SSH tunnel to the server
1. Verify the AlmaLinux 10 Release
Start by confirming that the rebuilt server is really running AlmaLinux 10.1.
cat /etc/os-release
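If you prefer to script this check, /etc/os-release is a shell-compatible key=value file and can be sourced directly. A minimal sketch (the warning branch is an illustration, not part of the upstream docs):

```shell
# Source /etc/os-release in a subshell so its variables don't leak into
# the current session, then print the distribution ID and version.
(
  . /etc/os-release
  echo "Running: ${NAME} ${VERSION_ID}"
  # Warn when the host is not the AlmaLinux 10 base this guide targets.
  if [ "${ID}" != "almalinux" ] || [ "${VERSION_ID%%.*}" != "10" ]; then
    echo "Warning: expected AlmaLinux 10, found ${ID} ${VERSION_ID}" >&2
  fi
)
```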

2. Install Docker Engine, Docker Compose, and Base Tools
The official AnythingLLM self-hosted guidance uses Docker. On AlmaLinux 10, the cleanest path is Docker’s official Enterprise Linux (EL) repository rather than the older container tooling shipped with the distribution.
dnf -y install ca-certificates curl git openssl dnf-plugins-core
dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
systemctl enable --now docker
docker --version
docker compose version
git --version
getenforce
On the validated AlmaLinux 10.1 server, these commands installed Docker Engine 29.3.1, Docker Compose 5.1.1, and Git 2.47.3; getenforce reported SELinux as Disabled on this template.

3. Create the AnythingLLM Configuration
AnythingLLM needs a persistent storage directory and an environment file. The storage path must be writable by the container user, so set the directory ownership up front before starting the container.
mkdir -p /opt/anythingllm/storage
chown -R 1000:1000 /opt/anythingllm/storage
chmod 775 /opt/anythingllm/storage
cd /opt/anythingllm
SIG_KEY="$(openssl rand -hex 32)"
SIG_SALT="$(openssl rand -hex 32)"
cat > .env <<EOF
SERVER_PORT=3001
STORAGE_DIR="/app/server/storage"
SIG_KEY="${SIG_KEY}"
SIG_SALT="${SIG_SALT}"
DISABLE_TELEMETRY="true"
EOF
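SIG_KEY and SIG_SALT are simply random secrets: openssl rand -hex 32 emits 32 random bytes encoded as 64 lowercase hexadecimal characters. A quick sanity check before the container first reads the values:

```shell
# Each secret should be exactly 64 hex characters (32 random bytes, hex-encoded).
SIG_KEY="$(openssl rand -hex 32)"
echo "${#SIG_KEY}"   # prints 64
case "$SIG_KEY" in
  *[!0-9a-f]*) echo "unexpected non-hex character" >&2 ;;
  *)           echo "key is valid lowercase hex" ;;
esac
```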
cat > compose.yaml <<'EOF'
services:
  anythingllm:
    image: mintplexlabs/anythingllm:1.11.2
    container_name: anythingllm
    restart: unless-stopped
    cap_add:
      - SYS_ADMIN
    ports:
      - "127.0.0.1:3001:3001"
    env_file:
      - .env
    volumes:
      - ./storage:/app/server/storage
      - ./.env:/app/server/.env
    extra_hosts:
      - "host.docker.internal:host-gateway"
EOF
grep '^SERVER_PORT=' .env
grep '^DISABLE_TELEMETRY=' .env
grep 'image:' compose.yaml
grep '127.0.0.1:3001:3001' compose.yaml
docker compose config --services
This configuration pins the current AnythingLLM release, keeps the web app on 127.0.0.1:3001, disables telemetry, and includes SYS_ADMIN so the instance can support the upstream web-scraping capability when needed.

4. Start AnythingLLM
With the configuration in place, pull the image and start the application container.
4.1 Launch the Container
cd /opt/anythingllm
docker compose pull
docker compose up -d
sleep 30
docker compose ps
On the validated server, Docker pulled mintplexlabs/anythingllm:1.11.2, created the container, and brought the service up successfully.
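The fixed sleep 30 works, but a readiness poll is more robust if the first start takes longer, for example while the image unpacks. A minimal sketch assuming only curl is available; wait_for_http is a helper name introduced here, not part of AnythingLLM or Docker:

```shell
# Poll a URL until it returns HTTP 200, instead of sleeping a fixed time.
wait_for_http() {
  url=$1
  tries=${2:-30}   # number of 1-second attempts (default 30)
  n=0
  while [ "$n" -lt "$tries" ]; do
    # curl prints the status code; "|| true" keeps the loop alive on
    # connection refused, where curl exits nonzero and the code is 000.
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" || true)
    [ "$code" = "200" ] && return 0
    n=$((n + 1))
    sleep 1
  done
  return 1
}

# Usage after `docker compose up -d`:
# wait_for_http http://127.0.0.1:3001 && docker compose ps
```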

4.2 Validate the Local HTTP Response
docker compose ps
docker compose images
curl -I http://127.0.0.1:3001
The live deployment reached a healthy container state and the local endpoint returned HTTP/1.1 200 OK, which is the expected result before opening a browser session through a tunnel.

5. Open the AnythingLLM Web Interface Through an SSH Tunnel
This installation keeps AnythingLLM on localhost instead of exposing it directly to the public network. Create an SSH tunnel from your local machine, then open the forwarded port in your browser.
ssh -L 13001:127.0.0.1:3001 root@YOUR_SERVER_IP
After the tunnel is open, browse to:
http://127.0.0.1:13001
On the live server, the AnythingLLM welcome page loaded correctly through the tunnel and was ready for the initial setup flow.

6. Run Final Server-Side Checks
Before you stop, confirm that the container is still healthy, the port is bound only to localhost, and the local HTTP response remains available.
cd /opt/anythingllm
docker compose ps
docker compose images
echo "SELinux status: $(getenforce || true)"
ss -lntp | grep ':3001'
curl -I http://127.0.0.1:3001
On the validated deployment, the final check showed the container in the healthy state, port 3001 listening only on 127.0.0.1, and the local AnythingLLM page still returning 200 OK.
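The ss check above can also be wrapped in a small pass/fail helper for scripted audits. loopback_only is a name introduced for this sketch, not a standard tool, and it treats "not listening at all" as a failure as well:

```shell
# Return 0 when the given TCP port is listening and every listener is on
# 127.0.0.1; return 1 otherwise (not listening, or exposed beyond loopback).
loopback_only() {
  port=$1
  # Column 4 of `ss -lnt` is the local address:port of each listener.
  addrs=$(ss -lnt | awk -v p=":$port" '$4 ~ (p "$") {print $4}')
  [ -n "$addrs" ] || return 1                       # nothing listening
  echo "$addrs" | grep -qv '^127\.0\.0\.1:' && return 1
  return 0
}

# Usage:
# loopback_only 3001 && echo "OK: 3001 is loopback-only" || echo "FAIL"
```

Note that this simple check counts an IPv6 [::1] listener as non-loopback; adjust the grep pattern if you enable IPv6 binds.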

Conclusion
You now have AnythingLLM on AlmaLinux 10 running in Docker with persistent storage, a pinned stable image, and a safer localhost-only access pattern. From here, you can complete the in-app onboarding flow, connect your preferred LLM provider, and only add a public reverse proxy later if you decide you need browser access without SSH tunneling.