Set Up Ollama on AlmaLinux 10 with Nginx and ZeroSSL
Ollama gives you a fast way to run large language models behind a simple local HTTP API, which makes it useful for private AI testing, internal assistants, automation workflows, and development environments that should not depend on a hosted model provider.
In this guide, we restore a fresh AlmaLinux 10.1 server on Shape.Host, verify the current stable Ollama release from the official project, install that exact version, bind the service to localhost only, publish it through Nginx on tutorials.shape.host, secure the endpoint with a trusted ZeroSSL certificate, and validate the finished deployment from both the local and public API endpoints.
| Application | Ollama |
|---|---|
| Application version | 0.18.3 |
| Operating system | AlmaLinux 10.1 |
| Reverse proxy | Nginx 1.26.3 |
| Public hostname | tutorials.shape.host |
| TLS issuer | ZeroSSL ECC Domain Secure Site CA |
| Validated on | Live Shape.Host AlmaLinux 10.1 server |
Why Run Ollama on AlmaLinux 10?
- AlmaLinux 10 gives you a modern enterprise-style base with a long support window.
- Ollama provides a clean local API without forcing you into a heavy web application stack.
- Nginx makes it easy to publish the local API over HTTPS while keeping Ollama bound to localhost.
- The Shape.Host workflow used here keeps the same public tutorial hostname for live certificate and endpoint validation.
Before You Begin
Make sure the following are ready before you start:
- A fresh AlmaLinux 10 server
- Root or sudo access
- A DNS record pointing tutorials.shape.host to your server IP
- Ports 80 and 443 open to the internet
- Your ZeroSSL EAB key ID and EAB HMAC key for ACME registration
1. Verify the AlmaLinux 10 Release
Start by confirming that the rebuilt server is actually running AlmaLinux 10.1 before you install anything.
cat /etc/os-release
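The same check can be scripted so later automation fails fast on the wrong template. This is a sketch; the helper name check_os_release is our own, and it simply reads the standard ID and VERSION_ID fields from an os-release(5) style file:

```shell
# Hypothetical helper: confirm the expected distribution and major release
# by reading ID and VERSION_ID from an os-release(5) style file.
check_os_release() {
    file="${1:-/etc/os-release}"; want_id="$2"; want_major="$3"
    id=$(. "$file"; printf '%s' "$ID")
    ver=$(. "$file"; printf '%s' "$VERSION_ID")
    if [ "$id" = "$want_id" ] && [ "${ver%%.*}" = "$want_major" ]; then
        echo "OK: $id $ver"
    else
        echo "Unexpected OS: $id $ver (wanted $want_id $want_major)" >&2
        return 1
    fi
}

# Usage on the server:
#   check_os_release /etc/os-release almalinux 10
```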

2. Install Nginx, Firewalld, SELinux Helpers, and Ollama Dependencies
On this AlmaLinux 10.1 template, the Ollama installer needed both zstd and tar because the upstream release is distributed as a compressed .tar.zst archive. Installing those packages up front avoids an otherwise confusing installer failure.
dnf makecache
dnf install -y curl nginx firewalld ca-certificates policycoreutils-python-utils zstd tar
systemctl enable --now firewalld
systemctl enable --now nginx
firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload
curl --version | head -n 1
nginx -v
firewall-cmd --list-services
sestatus || true
On the validated server, this installed curl 8.12.1 and Nginx 1.26.3, enabled firewalld with HTTP and HTTPS open, and reported that SELinux was disabled on this template.
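Before piping the installer to sh in the next step, a small preflight can confirm the archive tools it relies on are actually on PATH. This is a sketch; the function name require_cmds is our own:

```shell
# Hypothetical preflight: verify that every command the Ollama installer
# relies on is available on PATH before running it.
require_cmds() {
    missing=0
    for cmd in "$@"; do
        if ! command -v "$cmd" >/dev/null 2>&1; then
            echo "missing: $cmd" >&2
            missing=1
        fi
    done
    return "$missing"
}

# Usage on the server:
#   require_cmds curl tar zstd && echo "installer prerequisites present"
```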

3. Install Ollama 0.18.3 and Bind It to Localhost Only
The official Ollama README uses the Linux install script. To keep the guide reproducible, pin the current stable release with OLLAMA_VERSION, then add a systemd override so the service listens on 127.0.0.1:11434 instead of exposing the native port publicly.
curl -fsSL https://ollama.com/install.sh | OLLAMA_VERSION=0.18.3 sh
mkdir -p /etc/systemd/system/ollama.service.d
cat > /etc/systemd/system/ollama.service.d/override.conf <<'EOF'
[Service]
Environment="OLLAMA_HOST=127.0.0.1:11434"
EOF
systemctl daemon-reload
systemctl enable --now ollama
systemctl restart ollama
sleep 5
systemctl is-active ollama
ollama -v
cat /etc/systemd/system/ollama.service.d/override.conf
systemctl cat ollama | sed -n '1,80p'
On the live AlmaLinux server, the official installer created the ollama user and service automatically, detected that no GPU was present on this test machine, and finished with ollama version is 0.18.3.
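Because the service can take a moment to start listening after a restart, a small polling helper avoids racing the API checks in the next step. This is a sketch; wait_for_ollama is our own name, and it only retries the root endpoint:

```shell
# Hypothetical readiness poll: retry the local root endpoint until Ollama
# answers, or give up after the requested number of one-second attempts.
wait_for_ollama() {
    url="${1:-http://127.0.0.1:11434/}"; attempts="${2:-30}"
    i=0
    while [ "$i" -lt "$attempts" ]; do
        if curl -fsS --max-time 2 "$url" >/dev/null 2>&1; then
            echo "ollama is answering on $url"
            return 0
        fi
        i=$((i + 1))
        sleep 1
    done
    echo "ollama did not answer on $url after $attempts attempts" >&2
    return 1
}

# Usage on the server:
#   systemctl restart ollama && wait_for_ollama http://127.0.0.1:11434/ 30
```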

4. Validate the Local Ollama API
Before introducing the public reverse proxy, confirm that the service is healthy, listening only on localhost, and answering both the root endpoint and the version endpoint.
systemctl status --no-pager ollama | sed -n '1,15p'
ss -lntp | grep ':11434'
ollama -v
curl -fsS http://127.0.0.1:11434/
printf '\n'
curl -fsS http://127.0.0.1:11434/api/version
printf '\n'
On the validated deployment, Ollama listened on 127.0.0.1:11434, the root endpoint returned Ollama is running, and the version endpoint returned {"version":"0.18.3"}.
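The same listener serves the full API, so once a model has been pulled you can exercise /api/generate as well. As a sketch, this helper builds the JSON body for a non-streaming request; the helper name is our own, and the llama3.2 model tag is an assumption (pull whichever model you actually use):

```shell
# Hypothetical helper: build a non-streaming /api/generate request body.
# Note: this naive printf does not escape quotes inside the prompt.
ollama_generate_body() {
    printf '{"model":"%s","prompt":"%s","stream":false}' "$1" "$2"
}

# Usage on the server (after e.g. `ollama pull llama3.2`):
#   curl -fsS http://127.0.0.1:11434/api/generate \
#     -d "$(ollama_generate_body llama3.2 'Say hello in one word.')"
```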

5. Configure Nginx and ZeroSSL for the Public Ollama Endpoint
The Ollama FAQ recommends proxying to localhost:11434 and forwarding the upstream Host header as localhost:11434. That detail matters, because forwarding the public hostname can produce a 403 response even when the local service is healthy.
mkdir -p /var/www/_letsencrypt /etc/nginx/ssl/tutorials.shape.host
cat > /etc/nginx/conf.d/tutorials.shape.host.conf <<'EOF'
server {
    listen 80;
    server_name tutorials.shape.host;

    location /.well-known/acme-challenge/ {
        root /var/www/_letsencrypt;
        default_type "text/plain";
    }

    client_max_body_size 0;
    proxy_buffering off;
    proxy_request_buffering off;
    proxy_read_timeout 3600;
    proxy_send_timeout 3600;

    location / {
        proxy_pass http://127.0.0.1:11434;
        proxy_http_version 1.1;
        proxy_set_header Host localhost:11434;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
if command -v getsebool >/dev/null 2>&1; then
setsebool -P httpd_can_network_connect 1 || true
fi
nginx -t
systemctl reload nginx
curl -fsSL https://get.acme.sh | sh -s email=contact@shape.host
/root/.acme.sh/acme.sh --set-default-ca --server zerossl
/root/.acme.sh/acme.sh --register-account --server zerossl --eab-kid YOUR_ZEROSSL_EAB_KID --eab-hmac-key YOUR_ZEROSSL_EAB_HMAC_KEY
/root/.acme.sh/acme.sh --issue --server zerossl --webroot /var/www/_letsencrypt -d tutorials.shape.host --keylength ec-256
/root/.acme.sh/acme.sh --install-cert -d tutorials.shape.host --ecc \
--fullchain-file /etc/nginx/ssl/tutorials.shape.host/fullchain.cer \
--key-file /etc/nginx/ssl/tutorials.shape.host/tutorials.shape.host.key \
--reloadcmd "systemctl reload nginx"
cat > /etc/nginx/conf.d/tutorials.shape.host.conf <<'EOF'
server {
    listen 80;
    server_name tutorials.shape.host;

    location /.well-known/acme-challenge/ {
        root /var/www/_letsencrypt;
        default_type "text/plain";
    }

    location / {
        return 301 https://$host$request_uri;
    }
}
server {
    listen 443 ssl;
    http2 on;
    server_name tutorials.shape.host;

    ssl_certificate /etc/nginx/ssl/tutorials.shape.host/fullchain.cer;
    ssl_certificate_key /etc/nginx/ssl/tutorials.shape.host/tutorials.shape.host.key;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    client_max_body_size 0;
    proxy_buffering off;
    proxy_request_buffering off;
    proxy_read_timeout 3600;
    proxy_send_timeout 3600;

    location / {
        proxy_pass http://127.0.0.1:11434;
        proxy_http_version 1.1;
        proxy_set_header Host localhost:11434;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
EOF
nginx -t
systemctl reload nginx
On the validated AlmaLinux 10.1 run, Nginx accepted the final configuration, ZeroSSL issued a trusted ECC certificate, and the public endpoint started returning HTTP/2 200 with the corrected upstream Host localhost:11434 header.
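acme.sh normally installs a cron entry that renews the certificate on its own, but it is still worth being able to read the remaining lifetime at a glance. This sketch converts the certificate's notAfter field into days; the helper name days_until_expiry is our own, and GNU date's -d parsing (standard on AlmaLinux) is assumed:

```shell
# Hypothetical helper: days remaining before a certificate expires
# (relies on GNU date's -d parsing of the notAfter timestamp).
days_until_expiry() {
    not_after="$1"                      # e.g. "Jun  1 12:00:00 2030 GMT"
    exp=$(date -d "$not_after" +%s)
    now=$(date +%s)
    echo $(( (exp - now) / 86400 ))
}

# Feed it the live certificate:
#   days_until_expiry "$(openssl x509 -in /etc/nginx/ssl/tutorials.shape.host/fullchain.cer \
#     -noout -enddate | cut -d= -f2)"
```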

6. Validate the Public HTTPS Ollama Endpoint
Ollama does not ship a browser dashboard by default, so validate the finished deployment with live HTTPS and API checks rather than a screenshot of the nearly blank root page.
systemctl status --no-pager ollama | sed -n '1,12p'
ss -lntp | grep -E ':80|:443|:11434'
firewall-cmd --list-services
ollama -v
curl -fsS http://127.0.0.1:11434/
printf '\n'
curl -fsS http://127.0.0.1:11434/api/version
printf '\n'
curl -I --resolve tutorials.shape.host:443:51.89.69.216 https://tutorials.shape.host/
curl --resolve tutorials.shape.host:443:51.89.69.216 -fsS https://tutorials.shape.host/api/version
printf '\n'
openssl x509 -in /etc/nginx/ssl/tutorials.shape.host/fullchain.cer -noout -issuer -subject
On the live server, the public HTTPS root returned HTTP/2 200, the public API returned {"version":"0.18.3"}, and the installed certificate issuer resolved to ZeroSSL ECC Domain Secure Site CA.

Hardening Notes
- Ollama does not include built-in public authentication, so place an additional access control layer in front of Nginx if the endpoint will be exposed beyond a trusted environment.
- Keep the native Ollama listener on 127.0.0.1:11434 unless you intentionally need direct network access.
- If you later enable SELinux on this server, keep httpd_can_network_connect enabled so Nginx can continue proxying to the local Ollama port.
- The default Ollama model storage can grow quickly, so check disk usage before pulling larger models.
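For the first hardening point, Nginx's stock auth_basic module is one low-effort access-control layer. The fragment below is a sketch, not a prescribed setup: the credentials file path, realm name, and apiuser account are our own choices. It replaces the existing location / block of the HTTPS server:

```nginx
# Hypothetical basic-auth guard for the proxied API location.
# Create the credentials file first, e.g.:
#   dnf install -y httpd-tools
#   htpasswd -c /etc/nginx/.ollama_htpasswd apiuser
location / {
    auth_basic           "Ollama API";
    auth_basic_user_file /etc/nginx/.ollama_htpasswd;

    proxy_pass http://127.0.0.1:11434;
    proxy_http_version 1.1;
    proxy_set_header Host localhost:11434;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```

After editing, run nginx -t and systemctl reload nginx as in the earlier steps; clients then authenticate with curl -u apiuser:PASSWORD.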
Conclusion
You now have Ollama 0.18.3 running on AlmaLinux 10.1, bound to localhost, published through Nginx, and secured with a trusted ZeroSSL certificate on tutorials.shape.host. The final live checks confirm that both the local service and the public HTTPS API are working on the validated Shape.Host server.