Demisto's Dockerfiles and Image Build Management


This repository's master branch tracks images pushed to the official Demisto Docker Hub organization. Other branches' images are pushed to devdemisto.

Note: We generate nightly information about packages and OS dependencies used in each of Demisto's Docker images. Check out the dockerfiles-info project README for a full listing.

Contributing

Contributions are welcome and appreciated.

You can contribute in the following ways:

  • Create a new Docker image for use in an XSOAR/XSIAM integration or script.

  • Update existing Docker images to fix issues or security concerns in our Dockerfiles.

  • Enhancements, such as optimizations, helper scripts, or improved configurations.

  • Documentation: Documentation is crucial. If you find gaps or errors in ours, please help us improve it by suggesting clarifications or additions to make it more user-friendly.

Prerequisites

Make sure you meet the following prerequisites:

  • An active GitHub account.
  • A fork of the repository cloned on your local machine or in a Codespace.
  • Python 3, git, Docker Engine, and pipenv or poetry installed locally or in a Codespace.

In the cloned repository of the fork, create a new branch to hold the proposed work:

git checkout -b my_new_branch

Create a new Docker Image

To create a new Docker Image, you can either:

Use the helper script docker/create_new_docker_image.py:

❯ python docker/create_new_docker_image.py --help
usage: create_new_docker_image.py [-h] [-t {python,powershell}] [-p {two,three}] [-l {alpine,debian}] [--pkg PKG] name

Create a new docker image

positional arguments:
  name                  The image name to use without the organization prefix. For example: ldap3. We use kebab-case naming convention.

options:
  -h, --help            show this help message and exit
  -t {python,powershell}, --type {python,powershell}
                        Specify type of image to create (default: python)
  -p {two,three}, --python {two,three}
                        Specify python version to use (default: three)
  -l {alpine,debian}, --linux {alpine,debian}
                        Specify linux distribution to use (default: alpine)
  --pkg PKG             Specify a package to install. Can be specified multiple times. Each package needs to be specified with --pkg. For example: --pkg google-cloud-storage --pkg oath2client (default: None)

For example, to create a new image named ldap using Python 3 and with the Python package ldap3, run the following:
./docker/create_new_docker_image.py -p three --pkg ldap3 ldap

The above command will create a directory docker/ldap with all relevant files set up for building a Docker image. You can now build the image locally by following: Building a Docker Image Locally.

Manually: All image directories are located under the docker directory, and each Docker image is managed in its own directory there. The image directory should be named the same as the image name (without the organization prefix); we use the kebab-case naming convention for image names. For example, to create a Docker image named hello-world, you would create a new directory, docker/hello-world.

The image directory should contain a Dockerfile which will be used for building the Docker image. When an image is built, it is tagged with the commit hash and version.
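As a sketch of the manual layout (the base image, tag, and package handling here are illustrative, not prescriptive), creating a hypothetical hello-world image directory could look like this:

```shell
# Create the image directory (kebab-case, no organization prefix).
mkdir -p docker/hello-world

# A minimal Dockerfile; the base image and tag are illustrative --
# pick the base image and a properly pinned tag that fit your needs.
cat > docker/hello-world/Dockerfile <<'EOF'
FROM demisto/python3:latest

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
EOF
```

From there, the directory is built and tagged by the same process described below.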

Building a Docker Image Locally

It is possible to run a local build to verify that the build process is working.

Requirements:

  • Local install of docker
  • Local install of pipenv or poetry (depending on whether the image folder contains a Pipfile or a pyproject.toml, respectively)

The script docker/build_docker.sh is used to build all modified Docker images. The script detects modified directories using git by comparing against origin/master.

If you want to test how the script detects commit changes, make sure you are working on a branch and the changes are committed. If you haven't committed the changes and want to run a local build, you can run the script with an image name (which corresponds to a directory name) to run the build on. For example:

./docker/build_docker.sh ldap

The above example will then run the build against the ldap directory.

When running locally, the script will use a docker organization of devtesting and will tag the image with a testing tag and a version which has a timestamp as a revision. If you would like to test with a different organization name, set the env variable DOCKER_ORG. If you would like to test the push functionality, set the env variable DOCKERHUB_USER. It is also possible to set DOCKERHUB_PASSWORD to avoid being prompted for the password during the build process.

Additionally, if you are working on multiple folders and would like to test only a specific one, the script supports an env var DOCKER_INCLUDE_GREP which will be used in an extended grep to choose which directories to process.

Example of running with an org name of mytest and a grep extended expression which will process only the python dir (and not the python3 dir):

DOCKER_ORG=mytest DOCKER_INCLUDE_GREP=/python$ docker/build_docker.sh

CLA Licenses

All contributors are required to sign a contributor license agreement (CLA) to ensure that the community is free to use your contributions.

When opening a new pull request, a bot will evaluate whether you have signed the CLA. If required, the bot will comment on the pull request, including a link to accept the agreement. The CLA document is also available for review as a PDF. Visit our Frequently Asked Questions article for CLA-related issues.

After opening a pull request, and in order for the reviewer to understand the context, make sure to link to the corresponding pull request in the Content repo where this Docker image will be used.

Build configuration

The build script will check for a build.conf file in the target image directory and will read name=value properties from it. Supported properties:

  • version: The version to use for tagging. Default: 1.0.0. See Dynamic Versioning for non-static versions. Note that, additionally, the CI build number is always appended to the version as a revision (for example: 1.0.0.15519) to create a unique version per build.
  • devonly: If set, the image will be pushed only to the devdemisto org on Docker Hub and will not be pushed to the demisto org. Should be used for images which are for development purposes only (such as the image used in CI to build this project).
  • deprecated: If set, the image will be listed as deprecated in the deprecated_images.json file and the image will be forbidden from use in integrations/automations.
  • deprecated_reason: Free text that explains the deprecation reason.
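Put together, a build.conf for a development-only image might look like the following (the values are illustrative; every property is optional):

```shell
# Write an illustrative build.conf for a hypothetical dev-only image.
cat > build.conf <<'EOF'
version=1.0.0
devonly=true
EOF
```

The build script reads these name=value lines as described above; omitting the file entirely is equivalent to accepting all defaults.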

Dynamic Versioning

It can be convenient to set the version of the Docker image dynamically, instead of as an entry in build.conf. For example, if the Docker image is meant to track a particular package, the version of the image should always be the same as that package's. Dependabot relocking the dependencies can cause the real package version and the entry in build.conf to fall out of sync.

As a solution to this, you can add a dynamic_version.sh file to the image's folder. This will be run in the built Docker container, and the result will be used to set the image's version on Docker Hub. See here for an example.
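A minimal sketch of such a script, assuming the image exists to track a single package pinned in requirements.txt (the package name ldap3 and pinned version are stand-ins); a real dynamic_version.sh runs inside the built container and could just as well import the package and print its version attribute:

```shell
# Stand-in pinned requirements for demonstration; a real image directory
# already contains this file.
printf 'ldap3==2.9.1\n' > requirements.txt

# dynamic_version.sh sketch: print the pinned version of the tracked
# package so the published image version always matches the dependency.
sed -n 's/^ldap3==//p' requirements.txt
```

Whatever the script prints becomes the image version, so keep its output to a single bare version string.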

Base Python Images

There are 2 base Python images which should be used when building a new image which is based upon Python:

Which image to choose as a base?

If you are using pure Python dependencies, then choose the alpine image with the proper Python version which fits your needs (two or three). The alpine-based images are smaller and recommended for use. If you require installing binaries or pre-compiled binary Python dependencies (manylinux), you are probably best choosing the debian-based images. See the following link: docker-library/docs#904.

If you are using the Python cryptography package, we recommend using demisto/crypto as a base image. This base image takes care of properly installing the cryptography package. There is no need to include the cryptography package in the Pipfile when using this image as a base.
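For instance, the Dockerfile for such an image could start from the crypto base (the tag here is a placeholder; pin a real tag from Docker Hub in practice):

```shell
# Write an illustrative Dockerfile head; demisto/crypto already ships the
# cryptography package, so it is not re-listed in the image's own deps.
cat > Dockerfile <<'EOF'
FROM demisto/crypto:latest

COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
EOF
```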

Adding a verify.py script

As part of the build, we support running a verify.py script in the created image. This allows you to add logic which tests and checks that the Docker image built matches what you expect.

Adding this file is highly recommended.

Simply create a file named verify.py. It may contain any Python code; all it needs to do is exit with status 0 as a sign of success. Once the Docker image is built, if the script is present it will be run within the image using the following command:

cat verify.py | docker run --rm -i <image_name> python '-'

An example of a Docker image with a simple verify.py script can be seen here.
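A minimal verify.py sketch might look like the following (the imported module is a stand-in; import the packages your image actually installs):

```shell
# Write a minimal verify.py; exit status 0 signals success to the build.
cat > verify.py <<'EOF'
import sys

try:
    import json  # stand-in: import the real packages your image ships
except ImportError as exc:
    print("verification failed: %s" % exc)
    sys.exit(1)

print("ok")
EOF

# Smoke-test locally before piping it through the built image.
python3 verify.py
```

During the build the same file is piped into the container with the cat | docker run command shown above.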

PowerShell Images

We support building PowerShell Core Docker images. To create the Dockerfile for a PowerShell image, use the docker/create_new_docker_image.py script with the -t or --type argument set to powershell. For example:

./docker/create_new_docker_image.py -t powershell --pkg Az pwsh-azure

The above command will create a directory docker/pwsh-azure with all relevant files set up for building a PowerShell Docker image which imports the Az PowerShell module. You can now build the image locally by following: Building a Docker Image Locally.

Naming convention: To differentiate PowerShell images, name the images with a prefix of either pwsh- or powershell-.

Base PowerShell Images

There are 3 base PowerShell images which should be used when building a new image which is based upon PowerShell:

We recommend using the default Alpine-based image. The Debian and Ubuntu images are provided mainly for cases where there is a need to install additional OS packages.

Adding a verify.ps1 script

Similar to the verify.py script for Python images, you can add a verify.ps1 script to test and check the image you created.

Once the docker image is built, if the script is present it will be run within the image using the following command:

cat verify.ps1 | docker run --rm -i <image_name> pwsh -c '-'

Docker Image Deployment

When you first open a PR, a development Docker image is built under the devdemisto Docker organization. So, for example, if your image is named ldap3, an image with the name devdemisto/ldap3 will be built.

If the PR is on a local branch of the dockerfiles GitHub project (relevant only for members of the project with commit access), the image will be deployed to the devdemisto Docker Hub organization. A bot will add a comment to the PR stating that the image has been deployed and is available. You can then test the image simply by running docker pull <image_name> (instructions will be included in the comment added to the PR).

If you are contributing (thank you!!) via an external fork, the built image will not be deployed to Docker Hub. It will be available to download from the build artifacts, and a comment with instructions will be posted on the PR. You can download the image and load it locally by running the docker load command.

Once merged into master, an additional build will run and create a production-ready Docker image which will be deployed on Docker Hub under the demisto organization. A bot will add a comment to the original PR about the production deployment, and the image will then be fully available for use. An example production comment added to a PR can be seen here.

Advanced

Support for Pipenv (Pipfile) and Poetry (pyproject.toml)

It is recommended to use Pipenv or Poetry to manage Python dependencies, as they ensure that the build produces a deterministic list of Python dependencies.

The standard for denoting the Python version within the context of the Pipfile is the X.Y format. For the pyproject.toml file, the convention is the ~X.Y format, where X is the major Python version and Y is the minor Python version.

If a Pipfile or pyproject.toml file is detected and a requirements.txt file is not present, the file will be used to generate a requirements.txt file before invoking docker buildx build. The file is generated by running pipenv lock for pipenv, or poetry export -f requirements.txt --output requirements.txt --without-hashes for poetry. This allows the build process in the Dockerfile to simply install Python dependencies via:

RUN pip install --no-cache-dir -r requirements.txt

If the requirements shouldn't be generated before docker build, for example if you need system requirements installed in order to successfully install the dependencies, you can add dont_generate_requirements=true to the build.conf file, and the file will not be generated by the build.

Note: the build will fail if a Pipfile is detected without a corresponding Pipfile.lock file, or a pyproject.toml file is found without a corresponding poetry.lock.

Poetry quick start

If you want to use Poetry, make sure you have it installed by running poetry --version, or install it by running curl -sSL https://install.python-poetry.org | python3. Then follow:

  • In the relevant folder, initialize the poetry environment using poetry init.
  • Install dependencies using poetry add <dependency>. For example: poetry add requests
  • Make sure to commit both the pyproject.toml and poetry.lock files
  • To see the locked dependencies, run: poetry export -f requirements.txt --output requirements.txt --without-hashes

pipenv quick start

The preferred tool to manage Python dependencies is Poetry. However, pipenv is also supported.

If you want to use pipenv manually, make sure you first meet the prerequisites as specified in the Prerequisites section. Then follow:

  • In the relevant folder, initialize the pipenv environment:

    PIPENV_MAX_DEPTH=1 pipenv --three
  • Install dependencies using pipenv install <dependency>. For example: pipenv install requests

  • Make sure to commit both the Pipfile and Pipfile.lock files

  • To see the locked dependencies, run: pipenv lock

Installing a Common Dependency

If you want to install a new common dependency in all Python base images, use the script install_common_python_dep.sh. Usage:

Usage: ./docker/install_common_python_dep.sh [packages]

Install a common python dependency in all docker python base images.
Will use pipenv to install the dependency in each directory.

Base images:
   python
   python3
   python-deb
   python3-deb

For example: ./docker/install_common_python_dep.sh dateparser

Note: By default, pipenv will install the specified dependency and also update all other dependencies if possible. If you want to only install a dependency and not update the existing dependencies, run the script with the env variable PIPENV_KEEP_OUTDATED. For example:

PIPENV_KEEP_OUTDATED=true ./docker/install_common_python_dep.sh dateparser

Automatic updates via Dependabot

We use dependabot for automated dependency updates. When a new image is added to the repository, the proper config needs to be added to .github/dependabot.yml. If you used ./docker/create_new_docker_image.py to create the Docker image, this config will have been added automatically by the script. Otherwise, you will need to add the proper dependabot config yourself; the build will fail without it. You can add the dependabot config by running the script:

./docker/add_dependabot.sh <folder path to new docker image>

For example:

./docker/add_dependabot.sh docker/nmap

How to mark an image as deprecated

To mark an image as deprecated, follow these steps:

  1. Add the following two keys to the build.conf of the image:

     • deprecated=true
     • deprecated_reason=free text

     (i.e.: version=1.0.0, deprecated=true, deprecated_reason="the image was merged into py3-tools")

  2. Build the Docker image by running docker/build_docker.sh (i.e.: docker/build_docker.sh emoji). By running the build script, the image information will be added to deprecated_images.json and two new environment variables will be introduced in the Docker image:

     • DEPRECATED_IMAGE=true
     • DEPRECATED_REASON="the same text as the deprecated_reason key from the build.conf file"

  3. Commit all changed files, including deprecated_images.json, and create a new PR.

The Native Image Docker Validator and the native image approved label

If you've updated a Docker image that is supported by the native image, you need to make sure you take the necessary steps to ensure that the native image keeps running smoothly in our build. When such a Docker image is updated, the validation will fail in order to alert you that the native image might need updates matching the changes made to the supported image. For example, if you added a new package to the image, chances are you will need to add the same package to the native image.

Check whether the native image is already compatible with this change. If it is, great. Otherwise, add compatibility, and add the relevant integration to the ignore conf as necessary. After the required changes are done in this repository and in the content repository, the reviewer should add the 'native image approved' label, which will re-trigger the workflow and pass the validation.

