I have a Node.js/Express app that works on localhost and on Render.com, but not on Kinsta.com

Piscina worker pool fails on Kinsta.com / Cloud Run (503 error), but works locally and on Render

I have a Node.js/Express application that uses the piscina library to manage a worker pool for sending push notifications. The application works perfectly on my local machine (Windows) and when deployed to Render.

However, when I deploy the exact same application to Kinsta, it fails to launch. The only response I get is a 503 Service Temporarily Unavailable error from Nginx, and no application logs are produced. This indicates the app is crashing immediately on startup.


The Problem Code (worker-pool.js)

The issue seems to be centered around how I initialize the Piscina pool. My project structure has a server/ directory which contains worker-pool.js, the worker file SenderWorker.js, and my main server.js file.

Here is the content of server/worker-pool.js:

```javascript
import Piscina from 'piscina';
import 'dotenv/config';
import { fileURLToPath } from 'url';

const workerFilePath = fileURLToPath(new URL('./SenderWorker.js', import.meta.url));

let SERVER_URL;
if (process.env.NODE_ENV === 'production') {
  SERVER_URL = 'urltomyapp.onrender.com';
} else if (process.env.NODE_ENV === 'productionKinsta') {
  SERVER_URL = 'urltomyapp-h3dey.kinsta.app';
} else {
  SERVER_URL = 'localhost:5500';
}

const pool = new Piscina({
  filename: workerFilePath, // This is the line causing the issue on Kinsta
  workerData: {
    MONGO_URI: process.env.MONGO_URI,
    VAPID_PUBLIC_KEY: process.env.VAPID_PUBLIC_KEY,
    VAPID_PRIVATE_KEY: process.env.VAPID_PRIVATE_KEY,
    MAIL_SENDER: process.env.MAIL_SENDER,
    SERVER_URL: SERVER_URL,
    PUSH_RETRIES: process.env.PUSH_RETRIES
  }
});

export default pool;
```
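
For context, Piscina invokes the default export of the worker file once per pool.run() call. A minimal sketch of the shape SenderWorker.js follows (the function name, delay, and return value here are placeholders, not my real worker — the real one reads workerData via `import { workerData } from 'piscina'` and calls web-push):

```javascript
// Minimal sketch of the shape Piscina expects from server/SenderWorker.js.
// This placeholder just echoes the task; a real worker would perform the
// actual push delivery using the workerData values configured above.
import { setTimeout as delay } from 'node:timers/promises';

export default async function sendPush(task) {
  await delay(1); // stand-in for the actual push delivery
  return { ok: true, endpoint: task.endpoint };
}
```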

Debugging Steps and Clues

  1. The .href Hack: If I change the filename line to filename: workerFilePath.href, the application successfully launches on Kinsta. However, the moment I try to run a task with pool.run(), it fails on all environments (localhost, Render, and Kinsta) with an error like Error: filename must be provided to run() or in options object. This tells me the initial file path resolution is the specific part that crashes the app on Kinsta.

  2. Kinsta Logs: I added console.log statements to worker-pool.js to see what’s happening on Kinsta before the crash. The logs show that the file path is resolved correctly and the Piscina constructor appears to complete without error, yet the app still crashes immediately after.

```
--- [worker-pool.js] INITIALIZING ---
[worker-pool.js] Resolved worker file path: /server/SenderWorker.js
[worker-pool.js] Current NODE_ENV: productionKinsta
[worker-pool.js] Server URL set to: urltomyapp.kinsta.app
[worker-pool.js] Attempting to create Piscina pool...
[worker-pool.js] Piscina pool created successfully.
[worker-pool.js] Exporting pool instance.
```

(After these logs, the app dies and Nginx returns 503.)
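
A note on why the .href hack behaves the way it does (my understanding, worth verifying): fileURLToPath() returns a plain string, so workerFilePath.href evaluates to undefined. Piscina is then constructed with filename: undefined, which is legal but defers the error to pool.run() — exactly the message I see. A quick stdlib-only check (the path is illustrative):

```javascript
import { fileURLToPath } from 'node:url';

// A file URL like the one built in worker-pool.js (path is illustrative).
const fileUrl = new URL('file:///server/SenderWorker.js');

// fileURLToPath() returns a plain string, not a URL object...
const workerFilePath = fileURLToPath(fileUrl);
console.log(typeof workerFilePath); // 'string'
console.log(workerFilePath.href);   // undefined -- strings have no .href

// ...whereas the URL object itself does carry .href:
console.log(fileUrl.href);          // 'file:///server/SenderWorker.js'
```

So the .href variant "launches" only because the pool is created without any filename, and every later run() call fails for the same reason on every platform.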


Environment and Dockerfile

I am using a Dockerfile for deployment on Kinsta. The Dockerfile is located in the server/ directory.

```dockerfile
FROM node:22-alpine

WORKDIR /server
COPY server/package*.json ./
RUN npm install

WORKDIR /
COPY . .

EXPOSE 3000
CMD ["node", "server/server.js"]
```

Honestly, I’m at a loss for what to try next. I’m tempted to abandon Kinsta altogether, but I’m concerned this isn’t a platform-specific problem and that this same error might appear on other container-based cloud providers as well.
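
If the root cause turns out to be resource limits, one mitigation I'm considering is capping the pool size, since Piscina's default thread count scales with the host's CPU count and may be too aggressive for a small container. A sketch of how I might derive the options (maxThreads is a real Piscina constructor option as I understand its docs; the cap of 2 is my own arbitrary choice, untested on Kinsta):

```javascript
import os from 'node:os';

// Derive a conservative thread cap for a memory-constrained pod instead
// of letting Piscina default to a CPU-scaled number of workers.
const maxThreads = Math.max(1, Math.min(2, os.availableParallelism()));

// These options would be spread into the Piscina constructor alongside
// `filename` and `workerData`, e.g. new Piscina({ filename, maxThreads, ... }).
const poolOptions = { maxThreads };
console.log(poolOptions.maxThreads); // 1 or 2, depending on the host
```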

Hi @Pawel

Welcome to the Kinsta community!

We can take a closer look. Could you please send me your app’s default Kinsta domain through private message?

Thanks!

Thank you for your patience!

We are still reviewing this with our system engineers.

I’ll update you once we have news.

Kind regards!

Hello @Pawel,

It seems that the smallest pod size won’t be enough, as the application appears to run out of memory on start.

I’d advise trying a pod with more resources. Incrementally try the next size up, which is S1, and see if that resolves the issue.

Kind regards!

It’s working on S1, thanks a lot!

You’re most welcome, I’m happy to hear that!

Let us know if you need anything else, or feel free to open a support chat.