Escaping the Vercel Tax: How I Saved a Startup by Moving Workers to Hetzner!

Once upon a serverless time, I was vibing hard with Vercel while building Social Lane.

Deploying our Next.js frontend? One click.
Shipping API routes? Instant.
Paying for every little background task that breathed? ...Also instant.

And that, my friends, is when the fun stopped.

☠ Vercel and the Silent Taxation of Background Jobs

So here's what happened. I was building Social Lane—a social media scheduling platform that lets users post content to all their channels from one dashboard.

Our setup looked like this:

Frontend → API Route (Vercel) → Redis → Worker → API Route (Vercel again)

The idea was simple:

  1. User clicks "Post"
  2. Vercel backend enqueues the job to Redis
  3. Worker picks it up, does the magic
  4. Sends status back to Vercel backend

Now multiply that by hundreds of users, with dozens of posts each day...

Boom — Function execution bills through the roof.
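For a sense of scale, here's the back-of-envelope math that made me wince. The volumes below are illustrative round numbers, not Social Lane's actual traffic:

```javascript
// Illustrative traffic, not real billing data
const users = 300;
const postsPerUserPerDay = 10;
const invocationsPerPost = 3; // enqueue + worker callback + status update

const invocationsPerMonth = users * postsPerUserPerDay * invocationsPerPost * 30;
console.log(invocationsPerMonth); // 270000

// The per-invocation fee is pocket change; the compute time is not.
// Each function mostly sits waiting on a slow social-media API:
const avgSeconds = 3;
const gbSeconds = invocationsPerMonth * avgSeconds; // at 1 GB of memory
console.log(gbSeconds); // 810000 GB-seconds of billable compute per month
```

The killer isn't the invocation price; it's paying for hundreds of thousands of seconds of functions doing nothing but awaiting third-party APIs.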

Serverless Isn't Great for Workers

Let's face it — serverless is great for bursty traffic and fast-booting APIs. But for long-running background jobs?

  • You pay for cold starts
  • You pay for execution time
  • You pay even for logs
  • And you definitely pay when your job queues start behaving like a beehive

Every background Redis worker pinging Vercel was costing me real money, and I wasn't even profitable yet.

Time to do what every scrappy indie dev does best: move fast, save money, don't break stuff.

The Plan: Migrate Redis Workers to Hetzner Like a Grown-Up

Here's how I rearchitected this beautiful chaos:

Step 1: Host Backend on Hetzner

I moved the Node.js backend off Vercel and onto a Hetzner VPS. It's like getting a Ferrari for the price of a scooter. €4.49/month VPS? Count me in.

Now, the backend lives rent-free (almost) on a full VM with zero cold starts.

// Before: Vercel Function
export default async function handler(req, res) {
  // Cold start delay: 500-2000ms
  // Execution time limit (default): 10s (Hobby), 15s (Pro)
  // Cost: $0.20 per million invocations + compute time
  
  const job = await enqueueJob(req.body);
  res.json({ jobId: job.id });
}

// After: Hetzner VPS Express Server
app.post('/api/enqueue', async (req, res) => {
  // Cold start: 0ms (always warm)
  // Execution time: unlimited
  // Cost: €4.49/month total
  
  const job = await enqueueJob(req.body);
  res.json({ jobId: job.id });
});

Step 2: Redis Lives There Too

I could've used Upstash, but I wanted full control.

So I spun up Redis on the same Hetzner VPS, local and blazing fast. No limits, no token counting, no Redis whispering "you've reached your free command quota" at midnight.

# Redis setup on Hetzner
sudo apt update
sudo apt install redis-server

# Configure for production
sudo nano /etc/redis/redis.conf

# Set max memory and eviction policy
maxmemory 1gb
maxmemory-policy allkeys-lru

# Enable persistence
save 900 1
save 300 10
save 60 10000

# Start and enable
sudo systemctl start redis
sudo systemctl enable redis

# Test it
redis-cli ping
# PONG (the sound of freedom)

Step 3: Worker + PM2 = Chef's Kiss

I deployed the BullMQ Worker to a second VPS. Why? To isolate the workload. Scaling = easy.

Used PM2 with max mode:

// worker.js - The Hero We Deserved
const { Worker } = require('bullmq');
const { postToSocialMedia } = require('./services/socialService');
const { updateJobStatus } = require('./services/statusService'); // helper shown later in this post; path is illustrative

const worker = new Worker('social-posts', async (job) => {
  const { platform, content, mediaUrls, userId } = job.data;
  
  try {
    console.log(`Processing ${platform} post for user ${userId}`);
    
    // No cold starts, no timeouts, just pure processing power
    const result = await postToSocialMedia({
      platform,
      content,
      mediaUrls,
      userId
    });
    
    // Update status in real-time
    await updateJobStatus(job.id, 'completed', result);
    
    return result;
  } catch (error) {
    console.error(`Failed to post to ${platform}:`, error);
    await updateJobStatus(job.id, 'failed', { error: error.message });
    throw error;
  }
}, {
  connection: {
    host: process.env.REDIS_HOST, // Redis lives on the backend VPS; reach it over Hetzner's private network
    port: 6379
  },
  concurrency: 5 // Process 5 jobs simultaneously
});

# The magic PM2 deployment (one clustered process per core)
pm2 start worker.js -i max --name "social-worker"

Now the worker chews through jobs like a gorilla on espresso—maxing out every core on the VPS.

How It Works Now

Here's the glorious new flow:

Frontend (Vercel)
      ↓
Backend (Hetzner) → Redis → Worker (Hetzner) → DB + Status Updates

✔ No Vercel functions getting hammered
✔ No extra costs for retries, timeouts, or log explosions
✔ No more crying during billing week

Bonus Tweaks That Saved Us Even More

MongoDB Connection Caching:

// Smart connection reuse pattern
const { MongoClient } = require('mongodb');

let cachedDb = null;

async function connectToDatabase() {
  if (cachedDb) {
    return cachedDb;
  }
  
  // Only create a new connection if needed
  // (useUnifiedTopology was removed in driver v4+, so it's dropped here)
  const client = await MongoClient.connect(process.env.MONGODB_URI, {
    maxPoolSize: 10, // reuse up to 10 pooled connections
  });
  
  cachedDb = client.db();
  return cachedDb;
}
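One subtlety with this pattern: because the cache is only set after the `await`, two requests arriving at the same moment can each open a connection. Caching the promise itself closes that gap. Here's a tiny self-contained demo of the idea, using a stand-in for `MongoClient.connect` so it runs without a database:

```javascript
// Stand-in for MongoClient.connect so the demo needs no live database
let connectCalls = 0;
async function fakeConnect() {
  connectCalls += 1;
  return { db: () => ({ name: 'social-lane' }) };
}

let cachedPromise = null;
async function connectOnce() {
  // Cache the promise, not the resolved value, so concurrent callers share it
  if (!cachedPromise) cachedPromise = fakeConnect();
  return cachedPromise;
}

(async () => {
  await Promise.all([connectOnce(), connectOnce(), connectOnce()]);
  console.log(connectCalls); // 1 - concurrent callers shared one connection
})();
```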

Direct Status Updates:

// Worker updates the backend directly, not through Vercel.
// BACKEND_URL points at the backend VPS over the private network
// (e.g. http://10.0.0.2:3001; the worker runs on a separate VPS, so localhost won't reach it)
const updateJobStatus = async (jobId, status, data) => {
  await fetch(`${process.env.BACKEND_URL}/api/jobs/status`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ jobId, status, data })
  });
  
  // No Vercel cold starts in the entire pipeline!
};

The Complete Architecture

The final setup became a thing of beauty:

// Frontend (Vercel) - Still perfect for this
Next.js App → Static files + API routes for auth

// Backend (Hetzner VPS #1) - €4.49/month
Express.js API → Redis → MongoDB Atlas
- Handles job queuing
- Manages user authentication
- Serves real-time status updates

// Worker (Hetzner VPS #2) - €4.49/month  
BullMQ Workers with PM2 → Social Media APIs
- Processes scheduled posts
- Handles media uploads
- Manages retries and failures

// Redis (On Backend VPS)
- Job queue storage
- Real-time caching
- Session management
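That "manages retries and failures" line deserves a concrete shape. Here's a sketch of the retry policy you'd attach to the queue; the option names follow BullMQ's documented job options, and the numbers are illustrative:

```javascript
// Passed as defaultJobOptions when creating the BullMQ queue
const defaultJobOptions = {
  attempts: 5,                                   // give up after five tries
  backoff: { type: 'exponential', delay: 1000 }, // 1s, 2s, 4s, 8s, 16s between tries
  removeOnComplete: 1000,                        // keep only the last 1000 finished jobs
};

// Exponential backoff doubles the wait before each retry:
const retryDelay = (attempt, baseMs) => baseMs * 2 ** (attempt - 1);
const delays = [1, 2, 3, 4, 5].map((n) => retryDelay(n, 1000));
console.log(delays); // [ 1000, 2000, 4000, 8000, 16000 ]
```

On the VPS those five attempts cost nothing extra; on Vercel, every retry was another billed invocation.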

The Migration Day Chronicles

Migrating a live system is like performing heart surgery while the patient is running a marathon. Here's how I survived:

8:00 AM: "This will be easy, just move some code around"
10:30 AM: First deployment to Hetzner successful
11:45 AM: Redis refusing connections (forgot to update security groups)
1:20 PM: Workers connecting but jobs timing out
3:15 PM: Discovered MongoDB connection limit issues
5:45 PM: First successful end-to-end test post
8:30 PM: Full migration complete, all systems green
9:00 PM: Celebratory beer and promise to never do this on a Friday again

The Numbers: What We Actually Saved

Let's talk cold, hard cash:

Aspect | Before (Vercel) | After (Hetzner)
Monthly cost | $85-$200+ (unpredictable) | €9/month (~$10), fixed
Cold starts | Every. Single. Job. | Workers always hot
Job execution time | 15s limit (then timeout) | Unlimited background power
Retry logic | Extra function calls = $$$ | Free retries all day
Monitoring | Limited, costs extra | Full PM2 monitoring included

Performance Improvements We Didn't Expect

Job Processing Speed:

// Before: Vercel Serverless
Average job completion: 3-8 seconds
Cold start penalty: 500-2000ms per job
Failed jobs due to timeout: ~5%

// After: Hetzner VPS
Average job completion: 800ms-2s  
Cold start penalty: 0ms (always warm)
Failed jobs due to timeout: 0%

Redis Connection Performance:

// Upstash (Redis as a Service): 50-200ms latency
// Local Redis on same VPS: 0.1-0.5ms latency

// The difference? Like switching from dial-up to fiber
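To see what that gap costs per job, some rough arithmetic. The round-trip count is my estimate, and the latencies are midpoints of the ranges above:

```javascript
const tripsPerJob = 4; // e.g. enqueue, job fetch, progress write, result write
const hostedMs = 100;  // midpoint of the 50-200ms hosted-Redis range
const localMs = 0.3;   // midpoint of the 0.1-0.5ms local range

const overheadHosted = tripsPerJob * hostedMs; // 400ms of pure waiting per job
const overheadLocal = tripsPerJob * localMs;   // about 1.2ms
console.log(overheadHosted, overheadLocal);
```

Nearly half a second of dead air per job, gone.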

The PM2 Magic

PM2 turned our single-threaded worker into a multi-core monster:

# Deploy worker across all CPU cores
pm2 start worker.js -i max --name "social-worker"

# Monitor everything
pm2 monit

# Zero-downtime deployments
pm2 reload social-worker

# Automatic restarts on crashes
pm2 startup
pm2 save

# Check the beautiful stats
pm2 show social-worker

Result? Instead of a single worker process, PM2 runs one per core, and with each process handling 5 concurrent jobs, the 4-core VPS now chews through up to 20 jobs at once.

Lessons Learned (The Hard Way)

  1. Serverless isn't always cheaper - Calculate your actual usage patterns
  2. VPS management isn't scary - Modern tools make it surprisingly easy
  3. Co-locating services reduces latency - Redis + Worker on same machine = magic
  4. PM2 is a game-changer - Process management made simple
  5. Fixed costs enable growth - No more scaling anxiety
  6. Migration planning is everything - Test the full flow before going live

Security Considerations (Because We're Adults)

Moving from Vercel's managed security to self-hosted required some grown-up decisions:

# Firewall configuration
ufw allow 22    # SSH
ufw allow 80    # HTTP  
ufw allow 443   # HTTPS
ufw deny 6379   # Redis (internal only)
ufw enable

# Redis security
# Bind to localhost only
bind 127.0.0.1
# Require auth
requirepass your-super-secret-redis-password

# Regular updates
sudo apt update && sudo apt upgrade
sudo systemctl restart redis
pm2 reload all

Scaling Strategy for the Future

The new architecture scales beautifully:

Horizontal Scaling:

// Need more workers? Spin up another VPS
// Need more Redis? Redis Cluster
// Need more backend? Load balancer + multiple VPS instances
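Concretely, "spin up another VPS" works because every worker box runs identical code and only the environment differs. The variable names and the fallback IP here are my own convention, not anything BullMQ mandates:

```javascript
// Same worker code on every box; new boxes just point at the shared Redis via env
const workerConfig = {
  connection: {
    host: process.env.REDIS_HOST || '10.0.0.2', // backend VPS private IP (example)
    port: Number(process.env.REDIS_PORT || 6379),
  },
  concurrency: Number(process.env.WORKER_CONCURRENCY || 5),
};

console.log(workerConfig.connection.port, workerConfig.concurrency);
```

Clone the VPS, set the env vars, `pm2 start`, and the new box starts pulling jobs from the same queue.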

Vertical Scaling:

// Hetzner makes it stupid easy
// Click, upgrade, reboot
// €4.49 → €8.99 for 2x performance

The Plot Twist

Here's the kicker: I didn't completely ditch Vercel. I split the work by what each platform does best.

Kept on Vercel:

  • Frontend hosting - Next.js deployment is still magical
  • Edge functions for auth - Perfect for JWT verification
  • API routes for simple operations - User preferences, quick data fetches

Moved to Hetzner:

  • Background job processing
  • Long-running operations

Best of both worlds: Vercel for what it excels at, VPS for what it doesn't.

Final Thoughts

If you're building anything that needs background job handling and you're still using Vercel functions for it...

Stop. Immediately.

You're paying a luxury tax for a service that wasn't made for workers.

Hetzner is cheap. PM2 is magic. Redis is happier when it's not throttled. And your startup budget? It'll finally stop crying at night.

"The best architecture decision I made wasn't choosing the newest, shiniest technology. It was choosing the right tool for each specific job. Vercel for frontend magic, Hetzner for background muscle."
Me, looking at my €9/month server bill

P.S. If you made it this far, congrats. You've unlocked background job enlightenment. Now go forth and Redis responsibly.

P.P.S. Social Lane is live and serving thousands of scheduled posts daily on my €9/month infrastructure. Sometimes the old ways are the best ways. Sometimes you just need a server that stays awake.