
Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

64 Commits
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

Build Status

General purpose NodeJS images for Docker

This repository contains a set of fat, developer-friendly, general-purpose NodeJS images for Docker.

  • 2 variants available: standalone and apache (for serving an SPA that does not need a NodeJS backend server)
  • For each variant, you can select the build sub-variant that contains essential build tools (make, gcc, etc.) that may be needed to compile native extensions
  • Images are bundled with cron. Cron jobs can be configured using environment variables
  • Everything is done to limit file permission issues that often arise when using Docker. The image is actively tested on Linux, Windows and macOS
  • Base image is Debian Bullseye (with more variants to come)

Images

Images are published both to Docker Hub (thecodingmachine/nodejs) and to GitHub Container Registry (ghcr.io/thecodingmachine/nodejs) under the same tags.

| Name | NodeJS version | Variant | Build tools | Base distro |
| --- | --- | --- | --- | --- |
| thecodingmachine/nodejs:v2-16-bullseye | 16.x | standalone | No | Debian Bullseye |
| thecodingmachine/nodejs:v2-18-bullseye | 18.x | standalone | No | Debian Bullseye |
| thecodingmachine/nodejs:v2-20-bullseye | 20.x | standalone | No | Debian Bullseye |
| thecodingmachine/nodejs:v2-22-bullseye | 22.x | standalone | No | Debian Bullseye |
| thecodingmachine/nodejs:v2-24-bullseye | 24.x | standalone | No | Debian Bullseye |
| thecodingmachine/nodejs:v2-16-bullseye-build | 16.x | standalone | Yes | Debian Bullseye |
| thecodingmachine/nodejs:v2-18-bullseye-build | 18.x | standalone | Yes | Debian Bullseye |
| thecodingmachine/nodejs:v2-20-bullseye-build | 20.x | standalone | Yes | Debian Bullseye |
| thecodingmachine/nodejs:v2-22-bullseye-build | 22.x | standalone | Yes | Debian Bullseye |
| thecodingmachine/nodejs:v2-24-bullseye-build | 24.x | standalone | Yes | Debian Bullseye |
| thecodingmachine/nodejs:v2-16-apache-bullseye | 16.x | apache | No | Debian Bullseye |
| thecodingmachine/nodejs:v2-18-apache-bullseye | 18.x | apache | No | Debian Bullseye |
| thecodingmachine/nodejs:v2-20-apache-bullseye | 20.x | apache | No | Debian Bullseye |
| thecodingmachine/nodejs:v2-22-apache-bullseye | 22.x | apache | No | Debian Bullseye |
| thecodingmachine/nodejs:v2-24-apache-bullseye | 24.x | apache | No | Debian Bullseye |
| thecodingmachine/nodejs:v2-16-apache-bullseye-build | 16.x | apache | Yes | Debian Bullseye |
| thecodingmachine/nodejs:v2-18-apache-bullseye-build | 18.x | apache | Yes | Debian Bullseye |
| thecodingmachine/nodejs:v2-20-apache-bullseye-build | 20.x | apache | Yes | Debian Bullseye |
| thecodingmachine/nodejs:v2-22-apache-bullseye-build | 22.x | apache | Yes | Debian Bullseye |
| thecodingmachine/nodejs:v2-24-apache-bullseye-build | 24.x | apache | Yes | Debian Bullseye |

Note: we do not tag minor releases of NodeJS, only major versions. For example, you will find an image for NodeJS 18.x, but no tagged image for NodeJS 18.2. This is because NodeJS follows SemVer, so there is no good reason to pin explicitly to 18.2: when 18.3 is out, you almost certainly want to upgrade to this minor release automatically, since it is backward compatible.

Images are automatically updated when a new patch version of NodeJS is released, so the NodeJS 18.x image will always contain the most up-to-date version of the NodeJS 18.x branch.

Usage

Example with standalone:

$ docker run -it --rm --name my-running-script -v "$PWD":/usr/src/app thecodingmachine/nodejs:v2-18-bullseye node your-script.js

Example with Apache:

$ docker run -p 80:80 --name my-apache-app -v "$PWD":/var/www/html thecodingmachine/nodejs:v2-18-apache-bullseye

Example with Apache + Node 18.x in a Dockerfile:

Dockerfile

FROM thecodingmachine/nodejs:v2-18-apache-bullseye

COPY src/ /var/www/html/

RUN yarn install
RUN yarn run build

Default working directory

The working directory (the directory in which you should mount/copy your application) depends on the image variant you are using:

| Variant | Working directory |
| --- | --- |
| standalone | /usr/src/app |
| apache | /var/www/html |
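
For instance, a minimal Dockerfile sketch for the standalone variant could copy the application into that directory (the index.js entry point is only an illustrative placeholder):

Dockerfile

FROM thecodingmachine/nodejs:v2-18-bullseye

# Copy your application into the standalone working directory
COPY . /usr/src/app/
RUN yarn install

# Illustrative entry point
CMD ["node", "index.js"]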

Changing Apache document root

For the apache variant, you can change the document root of Apache (i.e. your "public" directory) by using the APACHE_DOCUMENT_ROOT variable:

# The root of your website is in the "public" directory:
APACHE_DOCUMENT_ROOT=public/
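
In a docker-compose file, this can be passed as a plain environment variable; a minimal sketch (the public/ path is only an example) could look like:

docker-compose.yml

version: '3'
services:
  my_app:
    image: thecodingmachine/nodejs:v2-18-apache-bullseye
    environment:
      # Serve the site from the "public" sub-directory of /var/www/html
      APACHE_DOCUMENT_ROOT: public/
    volumes:
      - .:/var/www/html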

Enabling/disabling Apache extensions

You can enable/disable Apache extensions using the APACHE_EXTENSION_[extension_name] environment variable.

For instance:

version: '3'
services:
  my_app:
    image: thecodingmachine/nodejs:v2-18-apache-bullseye
    environment:
      # Enable the DAV extension for Apache
      APACHE_EXTENSION_DAV: 1
      # Enable the SSL extension for Apache
      APACHE_EXTENSION_SSL: 1

As an alternative, you can use the APACHE_EXTENSIONS global variable:

APACHE_EXTENSIONS="dav ssl"
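
In docker-compose form, a sketch of the same configuration might look like:

version: '3'
services:
  my_app:
    image: thecodingmachine/nodejs:v2-18-apache-bullseye
    environment:
      # Space-separated list of Apache modules to enable
      APACHE_EXTENSIONS: "dav ssl"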

Apache modules enabled by default: access_compat, alias, auth_basic, authn_core, authn_file, authz_core, authz_host, authz_user, autoindex, deflate, dir, env, expires, filter, mime, mpm_prefork, negotiation, reqtimeout, rewrite, setenvif, status

Apache modules available: access_compat, actions, alias, allowmethods, asis, auth_basic, auth_digest, auth_form, authn_anon, authn_core, authn_dbd, authn_dbm, authn_file, authn_socache, authnz_fcgi, authnz_ldap, authz_core, authz_dbd, authz_dbm, authz_groupfile, authz_host, authz_owner, authz_user, autoindex, buffer, cache, cache_disk, cache_socache, cgi, cgid, charset_lite, data, dav, dav_fs, dav_lock, dbd, deflate, dialup, dir, dump_io, echo, env, ext_filter, file_cache, filter, headers, heartbeat, heartmonitor, ident, include, info, lbmethod_bybusyness, lbmethod_byrequests, lbmethod_bytraffic, lbmethod_heartbeat, ldap, log_debug, log_forensic, lua, macro, mime, mime_magic, mpm_event, mpm_prefork, mpm_worker, negotiation, proxy, proxy_ajp, proxy_balancer, proxy_connect, proxy_express, proxy_fcgi, proxy_fdpass, proxy_ftp, proxy_html, proxy_http, proxy_scgi, proxy_wstunnel, ratelimit, reflector, remoteip, reqtimeout, request, rewrite, sed, session, session_cookie, session_crypto, session_dbd, setenvif, slotmem_plain, slotmem_shm, socache_dbm, socache_memcache, socache_shmcb, speling, ssl, status, substitute, suexec, unique_id, userdir, usertrack, vhost_alias, xml2enc

Permissions

Ever faced file permission issues with Docker? Good news, this is a thing of the past!

If you are used to running Docker containers with the base NodeJS image, you probably noticed that when running commands (like yarn install) within the container, the files created are owned by the root user. This is because the base user of the image is "root".

When you mount your project directory into /var/www/html or /usr/src/app, it would be great if the default user used by Docker could be your current host user.

The problem with Docker is that the container and the host do not share the same list of users. For instance, you might be logged in on your host computer as superdev (ID: 1000), and the container has no user whose ID is 1000.

The thecodingmachine/nodejs images solve this issue with a bit of black magic:

The image contains a user named docker. On container startup, the startup script will look at the owner of the working directory (/var/www/html for Apache, or /usr/src/app for standalone). The script will then assume that you want to run commands as this user, so it will dynamically change the ID of the docker user to match the ID of the owner of the working directory.

Furthermore, the image changes the Apache default user/group to docker/docker (instead of www-data/www-data), so Apache will run with the same rights as the user on your host.

The direct result is that, in development:

  • Your NodeJS application can edit any file
  • Your container can edit any file
  • You can still edit any file created by Apache or by the container in CLI
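
A quick way to see the UID remapping in action with the standalone image (the output shown is illustrative and assumes your host user ID is 1000):

# On the host: note your user ID
$ id -u
1000

# Inside the container, the "docker" user is remapped to the same ID
$ docker run --rm -v "$PWD":/usr/src/app thecodingmachine/nodejs:v2-18-bullseye id -u
1000

Any file created in the mounted directory by the container will therefore show up on the host as owned by your own user.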

Setting up CRON jobs

You can set up CRON jobs using environment variables too.

To do this, you need to configure 3 variables:

# configure the user that will run cron (defaults to root)
CRON_USER=root
# configure the schedule for the cron job (here: run every minute)
CRON_SCHEDULE=* * * * *
# last but not least, configure the command
CRON_COMMAND=yarn run stuff

By default, CRON output will be redirected to Docker output.

If you have more than one job to run, you can suffix each group of environment variables with the same number. For instance:

CRON_USER_1=root
CRON_SCHEDULE_1=* * * * *
CRON_COMMAND_1=yarn run stuff
CRON_USER_2=www-data
CRON_SCHEDULE_2=* * * * *
CRON_COMMAND_2=yarn run other-stuff
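
In a docker-compose file, a sketch of the same setup could look like this (the commands and the second schedule are placeholders):

docker-compose.yml

version: '3'
services:
  my_app:
    image: thecodingmachine/nodejs:v2-18-bullseye
    environment:
      CRON_USER_1: root
      # run every minute
      CRON_SCHEDULE_1: '* * * * *'
      CRON_COMMAND_1: yarn run stuff
      CRON_USER_2: www-data
      # run every night at 3 am (placeholder schedule)
      CRON_SCHEDULE_2: '0 3 * * *'
      CRON_COMMAND_2: yarn run other-stuff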

Important: the cron runner we use is Supercronic and not the original "cron", which has a number of issues with containers. Even with Supercronic, cron was never designed with Docker in mind (cron is way older than Docker). It will run correctly on your container, but if at some point you want to scale and add more containers, it will run on all of them. At that point, if you only want to run a cron task once for your application (and not once per container), you might want to have a look at alternative solutions like Tasker or one of the many other alternatives.

Launching commands on container startup

You can launch commands on container startup using the STARTUP_COMMAND_XXX environment variables. This can be very helpful to install dependencies or apply database patches, for instance:

STARTUP_COMMAND_1=yarn install
STARTUP_COMMAND_2=yarn run watch &

As an alternative, the images will look into the container for an executable file named /etc/container/startup.sh.

If such a file is mounted in the image, it will be executed on container startup.

docker run -it --rm --name my-running-script -v "$PWD":/usr/src/myapp -w /usr/src/myapp \
       -v "$PWD"/my-startup-script.sh:/etc/container/startup.sh thecodingmachine/nodejs:v2-18-bullseye node your-script.js
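
The startup script itself is an ordinary shell script; a minimal sketch (the yarn commands are placeholders) could be:

my-startup-script.sh

#!/bin/bash
# Runs once on container startup, before the main command
set -e

yarn install
yarn run build

Remember to make the script executable on the host (chmod +x my-startup-script.sh) before mounting it.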

Registering SSH private keys

If your NodeJS project has a dependency on a package stored in a private Git repository, your yarn install commands will not work unless you register your private key in the container.

You have several options to do this.

Option 1: mount your keys in the container directly

This option is the easiest way to go if you are using the image in a development environment.

docker-compose.yml

version: '3'
services:
  my_app:
    image: thecodingmachine/nodejs:v2-18-bullseye
    volumes:
      - ~/.ssh:/home/docker/.ssh

Option 2: store the keys from environment variables or build arguments

Look at this option if you are building a Dockerfile from this image.

The first thing to do is to get the signature of the server you want to connect to.

$ ssh-keyscan myserver.com

Copy the output and put it in an environment variable. We assume the content is stored in $SSH_KNOWN_HOSTS.

Now, let's write a Dockerfile.

Dockerfile

FROM thecodingmachine/nodejs:v2-18-bullseye

ARG SSH_PRIVATE_KEY
ARG SSH_KNOWN_HOSTS

# Let's register the private key
RUN ssh-add <(echo "$SSH_PRIVATE_KEY")

# Let's add the server to the list of known hosts.
RUN echo "$SSH_KNOWN_HOSTS" >> ~/.ssh/known_hosts

Finally, when triggering the build, you must pass the 2 variables as build arguments:

$ docker build -t my_image --build-arg SSH_PRIVATE_KEY="$SSH_PRIVATE_KEY" --build-arg SSH_KNOWN_HOSTS="$SSH_KNOWN_HOSTS" .

Usage in Kubernetes

If you plan to use this image in Kubernetes, please be aware that the image internally uses sudo. This is because the default user (docker) needs to be able to edit NodeJS config files as root.

Kubernetes has a security setting (allowPrivilegeEscalation) that can disallow the use of sudo. The use of this flag breaks the image, and in the logs you will find the message:

sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?

Please be sure that this option is never set to false:

apiVersion: v1
kind: Pod
# ...
spec:
  containers:
    - name: foobar
      image: thecodingmachine/nodejs:v2-18-bullseye
      securityContext:
        allowPrivilegeEscalation: true # never use "false" here.

Contributing

The Dockerfiles are generated from a template using Orbit.

If you want to modify a Dockerfile, you should instead edit utils/Dockerfile.blueprint and then run the command:

$ orbit run generate

This command will generate all the files from the "blueprint" templates.

You can then test your changes using the build-and-test.sh command:

TAG=v2-18-bullseye VARIANT=18-bullseye ./build-and-test.sh
