How to self-host Logseq DB graph sync
Before we get into the how, here’s a bit more about the why and the what.
Logseq is a local-first, Markdown-based note-taking application. I love it. The code is open source, and the plugin ecosystem is incredibly helpful.
For most of its history, Logseq has used a file-based database. You own your Markdown files and images, and you sync them through your drive folder across devices.
But file-based systems are slow and heavy. Last year, Logseq began moving toward a graph-based database to better support vector indexing and enable more advanced AI features.
While the new capabilities are exciting, adding another $10–15 to my monthly subscription stack isn’t. Thankfully, the founder Tienson Qin and contributors added support for self-hosting the sync server.
As of early 2026, this feature is in beta.
Table of Contents
- Part 1: Prerequisites & Setup
- Part 2: AWS Cognito Setup
- Part 3: Build & Deploy the Server
- Part 4: Build the Client
- Security & Maintenance
Part 1: Prerequisites & Setup
Development Machine Requirements
Build on your local machine — not on the VPS. The compile step is resource-intensive.
Required:
- Java 21
- Node LTS + yarn
- Clojure CLI tools
- Git
macOS:
brew install openjdk@21 clojure/tools/clojure
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
nvm install --lts && nvm use --lts
npm install -g yarn
Ubuntu/Debian:
sudo apt update && sudo apt install -y openjdk-21-jdk
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
nvm install --lts && nvm use --lts
npm install -g yarn
curl -O https://download.clojure.org/install/linux-install-1.11.1.1435.sh
chmod +x linux-install-1.11.1.1435.sh && sudo ./linux-install-1.11.1.1435.sh
Clone the Repository
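Before cloning, a quick sanity check that the prerequisites are on PATH. This only reports presence, not versions — a minimal sketch:

```shell
# Report which prerequisites are installed; prints "ok:" or "missing:" per tool.
for tool in java node yarn clojure git; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```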
git clone https://github.com/logseq/logseq.git
cd logseq
yarn install
Part 2: AWS Cognito Setup
Logseq uses AWS Cognito for user auth (JWT tokens). You need your own Cognito pool so that your server can validate the tokens your client sends — without this, every request returns 401.
Cognito’s free tier covers 50,000 MAUs, which is more than enough for personal use.
Create a User Pool
- Go to AWS Cognito Console
- Click Create user pool
- Application type: Single-page application (SPA) (no client secret needed)
- Sign-in identifiers: Email
- Enable self-registration
- After creation, go to App integration → Domain and create an OAuth domain
Collect Your Credentials
From your User Pool, gather:
| Value | Where to find it |
|---|---|
| REGION | AWS Console URL, e.g. ap-south-1 |
| USER_POOL_ID | User pool overview tab, format: {region}_{string} |
| CLIENT_ID | App integration → App clients |
| OAUTH_DOMAIN | App integration → Domain, format: {prefix}.auth.{region}.amazoncognito.com |
Derive the rest:
COGNITO_ISSUER="https://cognito-idp.{REGION}.amazonaws.com/{USER_POOL_ID}"
COGNITO_JWKS_URL="${COGNITO_ISSUER}/.well-known/jwks.json"
Keep these — you’ll need them for both the server and client config.
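The derivation runs as-is in a shell if you want to double-check it. The values below are made-up examples, not real credentials:

```shell
# Example values only; substitute your own REGION and USER_POOL_ID.
REGION="ap-south-1"
USER_POOL_ID="ap-south-1_AbC123"
COGNITO_ISSUER="https://cognito-idp.${REGION}.amazonaws.com/${USER_POOL_ID}"
COGNITO_JWKS_URL="${COGNITO_ISSUER}/.well-known/jwks.json"
echo "$COGNITO_JWKS_URL"
# Once your real pool exists, `curl -s "$COGNITO_JWKS_URL"` should return
# a JSON object with a "keys" array; anything else means a typo above.
```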
Part 3: Build & Deploy the Server
Step 1: Apply Required Code Fixes
The upstream Logseq repo has several bugs that prevent self-hosted sync from working. Apply the patch from the gist below — all fixes are included, and all are required.
Gist: sync fixes + config template
# Download the sync fixes patch (file 1 in the gist)
curl -L "https://gist.github.com/4shutosh/33af33932c3d5e776368bc30b59d0aa6/raw/self_host_sync_fixes.patch" -o self_host_sync_fixes.patch
git apply self_host_sync_fixes.patch
Alternatively, cherry-pick directly from my fork:
git remote add 4shutosh https://github.com/4shutosh/logseq.git
git fetch 4shutosh self_host/21_march_2026
git cherry-pick 477061ffb
What the patch changes (expand if curious)
- malli_schema.cljs — the outliner-op schema changed from [:maybe :keyword] to :any. The client sends keywords as strings in JSON; the server’s malli coercer was rejecting them.
- handler/sync.cljs — adds :skip-validate-db? true when applying client txs on the server. The server inherits client-side malli validation that requires full graph context — without this it incorrectly rejects valid transactions.
- db.cljs — guards throw-if-page-has-block-parent! and makes validation non-fatal for incoming remote transactions.
- pipeline.cljs — guards ensure-journal-page-protected-attrs-not-updated! so it only fires on local writes, not incoming remote txs.
- apply_txs.cljs — calls replace-string-block-tempids-with-lookups before applying remote txs (prevents the “Tempids used only as value” Datascript error); converts outliner-op keywords to strings in the outgoing payload.
- handle_message.cljs — adds a handler for {"type":"error"} server responses. Without it, the inflight atom gets permanently stuck and the client silently stops pushing all future changes.
- sync.cljs — adds a .catch to the WebSocket onmessage handler so async errors surface instead of disappearing.
Step 2: Configure Cognito in start.sh
Edit deps/db-sync/start.sh with your values from Part 2:
: "${COGNITO_ISSUER:=https://cognito-idp.YOUR_REGION.amazonaws.com/YOUR_USER_POOL_ID}"
: "${COGNITO_CLIENT_ID:=YOUR_CLIENT_ID}"
: "${COGNITO_JWKS_URL:=https://cognito-idp.YOUR_REGION.amazonaws.com/YOUR_USER_POOL_ID/.well-known/jwks.json}"
: "${DB_SYNC_LOG_LEVEL:=info}"
Step 3: Build the Server
cd deps/db-sync
yarn install
yarn build:node-adapter
# Output: worker/dist/node-adapter.js
Step 4: Package and Deploy
Create a minimal deployment package:
# From deps/db-sync/
tar czf node-adapter-deploy.tar.gz \
  worker/dist/node-adapter.js \
  package.json \
  yarn.lock \
  start.sh
Transfer to your VPS:
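Before shipping the archive, `tar tzf` confirms its layout matches what the VPS expects. The demo below recreates the same four-file structure from empty dummy files, so it runs anywhere:

```shell
# Build a dummy tarball with the expected layout, then list its contents.
# Run the same `tar tzf` check against the real archive from deps/db-sync/.
mkdir -p /tmp/deploy-demo/worker/dist && cd /tmp/deploy-demo
touch worker/dist/node-adapter.js package.json yarn.lock start.sh
tar czf node-adapter-deploy.tar.gz worker/dist/node-adapter.js package.json yarn.lock start.sh
tar tzf node-adapter-deploy.tar.gz
```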
scp deps/db-sync/node-adapter-deploy.tar.gz root@YOUR_VPS_IP:/root/
Step 5: Set Up the VPS
ssh root@YOUR_VPS_IP
# Install Node
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.0/install.sh | bash
source ~/.bashrc
nvm install --lts && nvm use --lts
npm install -g pm2
# Install nginx
apt install -y nginx
Extract the package — the directory structure must match what PM2 expects:
mkdir -p /root/logseq_server
cd /root/logseq_server
# Extract in place — the files must land at exactly these paths
tar xzf /root/node-adapter-deploy.tar.gz
# Creates: worker/dist/node-adapter.js, package.json, yarn.lock, start.sh
yarn install
Deploying updates: Always scp the single compiled file directly to its exact path. Do not use rsync with directory arguments — trailing slash behaviour causes silent misplacement.
scp deps/db-sync/worker/dist/node-adapter.js root@YOUR_VPS_IP:/root/logseq_server/worker/dist/node-adapter.js
pm2 restart node-ada
Step 6: Update start.sh to use PM2
The default start.sh runs node directly. Change the last line so it starts via PM2, which handles restarts and log management:
nano /root/logseq_server/start.sh
Change the last line from:
node worker/dist/node-adapter.js
to:
pm2 start worker/dist/node-adapter.js --name node-ada
The env vars exported above it are inherited by PM2, so no other changes are needed.
Step 7: Start the Server
cd /root/logseq_server
chmod +x start.sh
./start.sh
# Verify PM2 picked it up
pm2 status
# Should show node-ada as "online"
# Persist across reboots
pm2 save
pm2 startup
# Run the command it prints
# Check logs
pm2 logs node-ada --lines 20
# Health check
curl http://localhost:8787/worker/health
# {"ok":true}
Step 8: Configure Nginx
nano /etc/nginx/sites-available/logseq-sync
server {
listen 80;
server_name YOUR_VPS_IP;
client_max_body_size 100M;
location / {
proxy_pass http://localhost:8787;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_read_timeout 86400s;
proxy_send_timeout 86400s;
}
}
ln -s /etc/nginx/sites-available/logseq-sync /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
# Verify
curl http://YOUR_VPS_IP/worker/health
Part 4: Build the Client
You must rebuild the Logseq desktop app with your server’s address and Cognito pool baked in.
Step 1: Apply Configuration Changes
Download the config template (file 2 in the same gist), fill in your values, then save it as your personal patch outside the repo so you can reapply it on any branch.
# Download the config template
curl -L "https://gist.github.com/4shutosh/33af33932c3d5e776368bc30b59d0aa6/raw/self_host_config_sample.patch" -o ~/self_host_config.patch
# Open it and replace every YOUR_* placeholder with your actual values:
# YOUR_REGION → e.g. ap-south-1
# YOUR_USER_POOL_ID → e.g. ap-south-1_AbC123
# YOUR_CLIENT_ID → from Cognito App clients tab
# YOUR_OAUTH_DOMAIN → e.g. my-prefix.auth.ap-south-1.amazoncognito.com
# YOUR_VPS_IP → your server IP or domain
nano ~/self_host_config.patch
# Apply it
git apply ~/self_host_config.patch
To reapply on a fresh branch or after a rebase:
git apply ~/self_host_config.patch
Step 2: Build the Desktop App
# From the repository root
ENABLE_DB_SYNC_LOCAL=true yarn release-electron
# Built apps in static/:
# macOS: Logseq-darwin-arm64/ or Logseq-darwin-x64/
# Linux: Logseq-linux-x64/
Step 3: Install and Test
- macOS: Copy Logseq.app to /Applications/. If you see “app is damaged”, run: xattr -dr com.apple.quarantine Logseq.app
- Linux: Run the AppImage or use the provided installer.
Verify sync is working:
- Launch your custom-built app
- Sign in via Settings → Sync (redirects to your Cognito login page)
- Create a new DB graph
- Enable sync in Settings → Sync
- Make an edit — check pm2 logs node-ada on the server for WebSocket activity
Security & Maintenance
Firewall
ufw allow 22/tcp && ufw allow 80/tcp && ufw allow 443/tcp && ufw enable
HTTPS (recommended)
apt install -y certbot python3-certbot-nginx
certbot --nginx -d your-domain.com
Update your client config to use wss:// and https:// URLs, then rebuild.
Deploying Server Updates
Follow these steps in order every time you update the server.
Step 1 — Pull and reapply patches (on your local machine)
cd /path/to/logseq
git pull origin master
# Reapply the sync fixes
git cherry-pick 477061ffb
# Reapply your personal Cognito + IP config
git apply ~/self_host_config.patch
Step 2 — Rebuild the server binary
cd deps/db-sync
yarn build:node-adapter
# Output: worker/dist/node-adapter.js
Step 3 — Copy the new binary to the VPS
Always copy the single file directly to its exact path. Do not use rsync with directory arguments — the trailing slash behaviour causes silent misplacement and the server will fail to start.
scp worker/dist/node-adapter.js root@YOUR_VPS_IP:/root/logseq_server/worker/dist/node-adapter.js
Step 4 — Restart on the VPS
ssh root@YOUR_VPS_IP
pm2 restart node-ada
# Confirm it came back up
pm2 status
# node-ada should show "online"
# Check for errors in the first few lines
pm2 logs node-ada --lines 30
# Sanity check
curl http://localhost:8787/worker/health
# {"ok":true}
Backup
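The manual tar command below can also run on a schedule. A hypothetical crontab entry (added via crontab -e) for a nightly 03:00 backup; note that % is special in crontab and must be escaped:

```shell
# Hypothetical crontab line: nightly backup at 03:00.
# % must be written as \% inside crontab entries.
0 3 * * * tar czf /root/backup-$(date +\%Y\%m\%d).tar.gz /root/logseq_server/data/
```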
tar czf backup-$(date +%Y%m%d).tar.gz /root/logseq_server/data/
Quick Reference
pm2 status
pm2 logs node-ada
pm2 restart node-ada
curl http://localhost:8787/worker/health
This post is AI-assisted and might have mistakes. Last verified: 22 March 2026.