If you've been or are interested in self-hosting Next.js using Docker, you may have encountered the following warning:
SecretsUsedInArgOrEnv: Do not use ARG or ENV instructions for sensitive data (ENV "SUPER_SECRET_API_TOKEN")
I certainly have, and initially it left me a bit confused. At the time of writing, neither the official Next.js self-hosting guide nor the guide on environment variables mentions handling secrets at all. Since Next.js doesn't provide guidance on this topic, we need to look at how Docker itself recommends handling secrets securely. The Docker documentation for the aforementioned warning points to secret mounts — let's explore how they support us in making our containers more secure!
ARG and ENV?
First, let's get on the same page about why ARG and ENV instructions may prove insecure. When you add them to your Dockerfile, you do so because you want to add dynamic data to your build process or container runtime.
For simplicity's sake, let's consider the following minimal Dockerfile.
FROM alpine:latest
ARG PASSWORD_ARG
ENV PASSWORD_ENV=$PASSWORD_ARG
RUN echo $PASSWORD_ENV
While this certainly isn't a Dockerfile you would use in the real world, it mimics what we'd do in a regular Next.js application: reading in a secret and exposing it via an environment variable to our application at both build and runtime, e.g. to connect to a third-party service and cache the initial data it provides. It lets us inspect the potential issues ARG and ENV may produce for secrets.
By running docker build --tag 'secrets-app' --build-arg PASSWORD_ARG=foo --progress=plain . you might already spot some issues.
#4 [1/2] FROM docker.io/library/alpine:latest@sha256:4bcff63911fcb4448bd4fdacec207030997caf25e9bea4045fa6c8c44de311d1
#4 DONE 0.0s
#5 [2/2] RUN echo foo
#5 CACHED
#6 exporting to image
#6 exporting layers done
#6 writing image sha256:b9d497d574302e8b2378caf5c605c65557fc5d945dd5f2794cb6d0d97e7c7cd7 done
#6 naming to docker.io/library/secrets-app done
#6 DONE 0.0s
First of all, the build argument and the environment variable are simply baked into the build output. We can plainly see what the environment variable's value is. This can also be inspected retroactively, e.g. by using docker image history secrets-app --no-trunc or by manually exploring the layers using docker save secrets-app -o layers.tar and sifting through the resulting archive.
At runtime, similar problems affect the environment variable itself. Since we bake it into the resulting image, anyone with access to it or the running container can easily retrieve the secret, e.g. by invoking the env command.
Yikes…
This is where Docker secret mounts come into play. During build time they come in two flavors, allowing you to mount the secret either as a file or as an environment variable. The easiest way to handle secrets is by using Docker Compose.
Docker secret mounts require Docker BuildKit.
Docker BuildKit is enabled by default in Docker Engine 23.0 and newer.
services:
  app:
    build:
      context: .
      secrets:
        - password-secret # use a secret with the name `password-secret`

secrets:
  password-secret: # create a secret with the name `password-secret`
    file: secrets/password.txt # use the contents of this file as the value for the secret
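If you aren't using Compose, the same build-time secret can also be passed directly on the command line (a sketch, assuming the same secrets/password.txt file exists):

```shell
# BuildKit reads the file and exposes it to RUN --mount instructions
# under the id `password-secret`; it never becomes part of an image layer.
docker build --secret id=password-secret,src=secrets/password.txt --tag 'secrets-app' .
```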
Our Dockerfile is now able to mount the secret using the following syntax.
FROM alpine:latest
RUN --mount=type=secret,id=password-secret,env=PASSWORD_ENV \
    echo $PASSWORD_ENV
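The env= flavor is what we use above; for completeness, the file flavor (a sketch reusing the same password-secret id) looks like this:

```dockerfile
FROM alpine:latest
# Without `env=`, the secret is mounted as a file at /run/secrets/<id>,
# but only for the duration of this single RUN instruction.
RUN --mount=type=secret,id=password-secret \
    wc -c /run/secrets/password-secret
```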
So why is this better? Docker secrets are temporarily mounted to /run/secrets/<id> for the duration of the command that uses them. You can think of this command as setting the PASSWORD_ENV variable inline, similar to what RUN --mount=type=secret,id=password-secret PASSWORD_ENV=$(cat /run/secrets/password-secret) <command> would achieve. In reality it is slightly different: inspecting the layers of the image doesn't even show that PASSWORD_ENV had been defined in the first place. This ephemeral handling guarantees that the secret will not be exposed in the layers of the image, and Docker is even nice enough to redact it from the build output should you choose to echo it.
Be warned though: the secret can still leak if you carelessly persist it into a file.
Glad you asked. Let's take a naive Dockerfile that builds a Next.js application and pre-builds some pages by connecting to a Content Management System using process.env.CMS_SECRET.
FROM node:22-alpine
ARG CMS_SECRET
ENV CMS_SECRET=$CMS_SECRET
WORKDIR /src
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
CMD ["npm", "start"]
We essentially need to follow the same steps as before:
1. Create a cms-secret.txt that includes our secret value
2. Create a compose.yaml that includes our secret
3. Remove the ARG and ENV usages that relate to our secret CMS_SECRET
After doing all that, we end up with the following setup:
services:
  app:
    build:
      context: .
      secrets:
        - cms-secret

secrets:
  cms-secret:
    file: cms-secret.txt
FROM node:22-alpine
WORKDIR /src
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN --mount=type=secret,id=cms-secret,env=CMS_SECRET \
    npm run build
ENTRYPOINT ["npm", "start"]
Done!
Well, sort of. While the image will be built successfully with our changes, our application still needs access to the secret at runtime as well.
While there isn't a built-in way to mount a secret as an environment variable for the entrypoint of a Dockerfile, we can make use of the following syntax.
FROM node:22-alpine
WORKDIR /src
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN --mount=type=secret,id=cms-secret,env=CMS_SECRET \
    npm run build
ENTRYPOINT ["sh", "-c", "CMS_SECRET=$(cat /run/secrets/cms-secret) exec npm start"]
To make the secret available during container runtime as well, we need to adjust our compose.yaml slightly.
services:
  app:
    build:
      context: .
      secrets:
        - cms-secret
    secrets: # runtime secrets are defined separately!
      - cms-secret

secrets:
  cms-secret:
    file: cms-secret.txt
Are there any downsides to this? Sadly, a few.
First of all, the elephant in the room: secrets are unencrypted plaintext files stored on both your host machine and in the container (you can verify this by running docker exec <container-name> cat /run/secrets/<secret-id>). The reason this is still preferable to ARG and ENV is that the secret is not persisted in the image, so if the image leaks, the secret does not leak with it.
My suggested solution for providing access to the secret as an environment variable also comes with the downside of it being exposed through /proc/<pid>/environ. Any process with access to the container can read it from there, potentially exposing sensitive data to attackers or debugging tools. This happens because the node process inherits the environment of the shell process that starts it. So while CMS_SECRET may be an inlined variable during the shell process, it is nevertheless part of the environment when the node process is invoked. In my opinion this is fine, since once you mount the secret into the running container it is already available in plaintext via /run/secrets/<secret-id> anyway.
Well, it depends.
You will never be able to make all secrets in your running container fully ephemeral, but you can reduce the surface area of exposure and make your life easier in case your secrets do leak. There is the option of using Docker Swarm, which gives you access to the docker secret commands and lets you create encrypted secrets on your host machine, but I believe this to be an unnecessary intermediary step.
Instead of maintaining n potential secrets during runtime, you could maintain just 1 by incorporating a secret manager, such as the self-hostable Infisical.
Secret managers allow you to rotate secrets in case of a leak without having to rebuild or redeploy your application. As you can imagine, this comes with the downside of needing not only to adopt a new application into your stack but also to adapt your code around it. Depending on your user base this could be a valid trade-off, especially if you store a large number of secrets. It may not be worth it for a side project with fewer than 10 active users per month, but it is something to consider once you hit 500 active users and can expect additional growth and app complexity.
When I first got to this point, I have to admit I felt a bit defeated. It's this sort of feeling you get when you thought for a second you found a silver bullet that will fix all of your problems. But this is reality and we can't expect a "one-size-fits-all"-solution. Security will always introduce tradeoffs, be it raw performance, user experience, complexity or something else entirely. Instead of being annoyed at what secret mounts don't give us, we should focus on what they do bring to the table.
Secret mounts helped us tremendously when it came to making sure that we do not accidentally leak secrets during container builds. Thanks to that, we can at least rest easy if our container image leaks to third parties, assuming we do not explicitly expose the secret. While the container runtime is still potentially unsafe, hardening your image and considering a secrets manager will prove useful in the long run. Hardening and making your Next.js docker image production ready will have to be a story for another time though.
If you're fine with "only" having a safer image build process, then providing environment variables through secret mounts is a low-effort change you can adopt. If you expect a bigger user base, consider creating a small helper function that reads and caches the contents of regular secret mounts directly. For easier integration during build processes, you could also fall back to environment variables if the secret isn't mounted. Introducing this single source of truth for secrets will make it easy to adopt a full-blown secret manager later down the line, as you would only need to update the helper function. Or you could simply set up a secret manager in the first place, if you are fine with the extra complexity it introduces.
Docker secret mounts will still be invaluable when building images, even if you use a secret manager — you have to get those values in there somehow after all.