newrelic-forks/dcos (forked from dcos/dcos)

DC/OS - The Datacenter Operating System

The easiest way to run microservices, big data, and containers in production.

What is DC/OS?

Like traditional operating systems, DC/OS is system software that manages computer hardware and software resources and provides common services for computer programs.

Unlike traditional operating systems, DC/OS spans multiple machines within a network, aggregating their resources to maximize utilization by distributed applications.

To learn more, see the DC/OS Overview.

How Do I...?

Releases

DC/OS releases are publicly available at http://dcos.io/releases/

Release artifacts are managed by Mesosphere on Amazon S3, using a CloudFront cache.

To find the git SHA of any given release, check the latest commit in the versioned branches on GitHub: https://github.com/dcos/dcos/branches/

| Release Type | URL Pattern |
| --- | --- |
| Latest Stable | https://downloads.dcos.io/dcos/stable/dcos_generate_config.sh |
| Latest Master | https://downloads.dcos.io/dcos/testing/master/dcos_generate_config.sh |
| Specific PR, Latest Build | https://downloads.dcos.io/dcos/testing/pull/<github-pr-number>/dcos_generate_config.sh |
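The PR pattern in the table can be filled in mechanically from a pull request number. A minimal sketch (1234 is a placeholder PR number, not a real build):

```shell
# Sketch: construct the download URL for a PR build from its number.
# pr=1234 is a placeholder; substitute the GitHub PR number you care about.
pr=1234
url="https://downloads.dcos.io/dcos/testing/pull/${pr}/dcos_generate_config.sh"
echo "$url"
```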

Development Environment

Linux is required for building and testing DC/OS.

  1. Linux distribution:
    • Docker doesn't have all the features needed on OS X or Windows
    • tar needs to be GNU tar for the set of flags used
    • unzip needs to be installed
  2. tox
  3. git 1.8.5+
  4. Docker 1.11+
    • Install instructions are available for various distributions. Docker needs to be configured so your user can run Docker containers. The command docker run alpine /bin/echo 'Hello, World!', when run in a new terminal as your user, should simply print "Hello, World!". If it says something like "Unable to find image 'alpine:latest' locally", re-run it and the message should go away.
  5. Python 3.6
    • Arch Linux: sudo pacman -S python
    • Fedora 23 Workstation: already installed by default; no steps needed
    • Ubuntu 16.04 LTS:
      • pyenv-installer
      • Python dependencies: sudo apt-get install make build-essential libssl-dev zlib1g-dev libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils liblzma-dev python3-venv
      • Install Python 3.6.3: pyenv install 3.6.3
      • Create the DC/OS virtualenv: pyenv virtualenv 3.6.3 dcos
      • Activate the environment: pyenv activate dcos
  6. Over 10 GB of free disk space and 8 GB of RAM
    • The build makes use of hard links, so if you're using VirtualBox the build directory cannot be on a synced folder.
  7. Optional: pxz (speeds up package and bootstrap compression)
    • Arch Linux: pxz-git in the AUR. The pxz package corrupts tarballs fairly frequently.
    • Fedora 23: sudo dnf install pxz
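The version minimums above (git 1.8.5+, Docker 1.11+) can be checked with a small shell helper. This is a sketch, not part of the repo's tooling; the version numbers in the example are illustrative:

```shell
# Sketch: pre-flight check for the toolchain above (git 1.8.5+, Docker 1.11+).
# version_ge A B succeeds when dotted version A is at least version B.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

git_ok()    { version_ge "$1" 1.8.5; }
docker_ok() { version_ge "$1" 1.11; }

# Example: a machine with git 2.7.4 and Docker 1.13.1 meets both minimums.
git_ok 2.7.4 && docker_ok 1.13.1 && echo "toolchain ok"
```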

Unit Tests

Unit tests can be run locally but require the development environment specified above.

tox

Tox is used to run the codebase unit tests, as well as coding standard checks. The config is in tox.ini.

Integration Tests

Integration tests can be run on any deployed DC/OS cluster. For installation instructions, see https://dcos.io/install/.

Integration tests are installed via the dcos-integration-test Pkgpanda package.

Integration test files are stored on the DC/OS master node at /opt/mesosphere/active/dcos-integration-test. Therefore, in order to test changes to test files, move files from packages/dcos-integration-test/extra/ in your checkout to /opt/mesosphere/active/dcos-integration-test on the master node.
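A minimal sketch of that copy step follows. MASTER is a placeholder for your master node's address, and the command is echoed rather than executed so you can inspect it first:

```shell
# Hypothetical sketch: copy locally modified integration-test files to the
# master node. MASTER is a placeholder; substitute your master's address.
MASTER="master.example.com"
SRC="packages/dcos-integration-test/extra/"
DEST="root@${MASTER}:/opt/mesosphere/active/dcos-integration-test/"
echo scp -r "$SRC" "$DEST"    # preview the command; drop 'echo' to copy for real
```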

The canonical source of the test suite's results is the continuous integration system. There may be differences between the results of running the integration tests as described in this document and the results given by the continuous integration system. In particular, some tests may pass on the continuous integration system and fail locally, or vice versa.

Minimum Requirements

  • 1 master node
  • 2 private agent nodes
  • 1 public agent node
  • The resources allocated to test tasks are currently negligibly small
  • DC/OS itself requires at least 2 (virtual) CPU cores on each node

Instructions

  1. SSH into a master node. The tests are run via pytest while SSH'd as root into a master node of the cluster to be tested.

  2. Switch to root

    sudo su -
  3. Add the test user

    dcos-shell python /opt/mesosphere/active/dcos-oauth/bin/dcos_add_user.py albert@bekstil.net

     Running the above command should produce the output:

    User albert@bekstil.net successfully added

     This test user has a known login token with a far-future expiration. DO NOT USE IN PRODUCTION. After the test, remember to delete the test user.

     For more information, see User Management.

  4. Run the tests using pytest in the cluster.

     cd /opt/mesosphere/active/dcos-integration-test
     dcos-shell pytest

Using DC/OS Docker

One way to run the integration tests is to use DC/OS Docker.

  1. Set up DC/OS in containers using DC/OS Docker. One way to do this is to use the DC/OS Docker Quick Start tool.
  2. Run make test.

Build

DC/OS can be built locally but requires the development environment specified above.

DC/OS builds are packaged as a self-extracting Docker image wrapped in a bash script called dcos_generate_config.sh.

WARNING: Building a release from scratch for the first time on a modern dev machine (4 cores / 8 hyperthreads, SSD, reasonable internet bandwidth) takes about 1 hour.

Instructions

./build_local.sh

That will run a simple local build and output the resulting DC/OS installers to $HOME/dcos-artifacts. You can run the created dcos_generate_config.sh like so:

$ $HOME/dcos-artifacts/testing/`whoami`/dcos_generate_config.sh

Build Details

The bash script build_local.sh contains the individual commands, each with a description.

The general flow is to:

  1. Check that the environment is reasonable
  2. Write a release tool configuration if one doesn't exist
  3. Set up a Python virtualenv in which to install and run the DC/OS Python tools
  4. Install the DC/OS Python tools into the virtualenv
  5. Build the release using the release tool

These steps can all be done by hand and customized/tweaked like standard Python projects. You can create a virtual environment by hand, and then do an editable pip install (pip install -e) to have a "live" working environment: as you change code you can run the tool and see the results.
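A sketch of that manual flow, under the assumption that the DC/OS Python tools are installed from the repo checkout (the editable install is shown as a comment since the exact directory to install from depends on your checkout):

```shell
# Sketch: hand-create a virtualenv for the DC/OS Python tools.
python3 -m venv dcos-env          # hand-created virtual environment
. dcos-env/bin/activate           # subsequent python/pip use this env
# Editable ("live") install, run from the relevant tool directory in the checkout:
#   pip install -e .
python -c 'import sys; print(sys.prefix)'   # now points inside dcos-env
```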

Release Tool Configuration

The release tool always loads the config in dcos-release.config.yaml in the current directory.

The config is YAML, with two main sections: storage, a dictionary of storage providers to which the built artifacts should be sent, and options, which sets general DC/OS build configuration options.

Config values can either be specified directly, or via $-prefixed environment variables (the environment variable must supply the whole value).
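For example, a storage path could be supplied from the environment. This is a sketch; the variable name DCOS_ARTIFACT_PATH is illustrative, not one the tool defines:

```yaml
# Sketch: dcos-release.config.yaml pulling a value from the environment.
# DCOS_ARTIFACT_PATH is an illustrative name; it must hold the entire value.
storage:
  local:
    kind: local_path
    path: $DCOS_ARTIFACT_PATH
```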

Storage Providers

All the available storage providers are in release/storage. The configuration maps a reference name for the storage provider (local, aws, my_azure) to that provider's configuration.

Each storage provider (e.g. aws.py) is an available kind prefix. The dictionary factories defines the suffix for a particular kind. For instance, kind: aws_s3 would map to the S3StorageProvider.
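The prefix/suffix split described above can be illustrated with a short shell sketch (the actual lookup happens in the release tool's Python code; this only mirrors the naming convention):

```shell
# Sketch: how a kind value selects a provider, per the description above.
# "aws_s3" -> module prefix "aws" (release/storage/aws.py) and factory suffix "s3".
kind="aws_s3"
module="${kind%%_*}"     # everything before the first underscore
suffix="${kind#*_}"      # everything after the first underscore
echo "module=$module suffix=$suffix"
```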

The configuration options for a storage provider are the storage provider's constructor parameters.

Sample config with storage that will save to my home directory (/home/cmaloney):

    storage:
      local:
        kind: local_path
        path: /home/cmaloney/dcos-artifacts

Sample config that will store to a local archive path as well as AWS S3. To authenticate with AWS S3, see the boto3 docs for how to configure access.

    storage:
      aws:
        kind: aws_s3
        bucket: downloads.dcos.io
        object_prefix: dcos
        download_url: https://downloads.dcos.io/dcos/
      local:
        kind: local_path
        path: /mnt/big_artifact_store/dcos/

Repo Structure

DC/OS itself is composed of many individual components precisely configured to work together in concert.

This repo contains the release and package building tools necessary to produce installers for various on-premises and cloud platforms.

| Directory | Contents |
| --- | --- |
| cloud_images | Base OS image building tools |
| config | Release configuration |
| docs | Documentation |
| flake8_dcos_lint | Flake8 plugin for testing code quality |
| dcos_installer | Backend for the Web, SSH, and some bits of the Advanced installer. Code is being cleaned up. |
| gen | Python library for rendering YAML config files for various platforms into packages, with utilities for things like "late binding" config set by CloudFormation |
| packages | Packages which make up DC/OS (Mesos, Marathon, Admin Router, etc.). These packages are built by pkgpanda and combined into a "bootstrap" tarball for deployment. |
| pkgpanda | DC/OS baseline/host package management system. Tools for building, deploying, upgrading, and bundling packages that live on the root filesystem of a machine, underneath Mesos. |
| release | Release tools for DC/OS (building releases, building installers for releases, promoting between channels) |
| ssh | AsyncIO-based parallel SSH library used by the installer |
| test_util | Various scripts and utilities to help with integration testing |

Pull Request Statuses

Pull requests automatically trigger a new DC/OS build and several test runs. These are the details of the various status checks run against a DC/OS pull request.

| Status Check | Purpose | Source and Dependencies |
| --- | --- | --- |
| continuous-integration/jenkins/pr-head | Admin Router endpoint tests | dcos/dcos/packages/adminrouter/extra/src/test-harness; Docker dependency: dcos/dcos/packages/adminrouter |
| mergebot/enterprise/build-status/aggregate | EE test enforcement | Private mesosphere/dcos-enterprise repo is tested against the SHA |
| mergebot/enterprise/has_ship-it | Code review enforcement | Private Mergebot service in prod cluster |
| mergebot/enterprise/review/approved/min_2 | Code review enforcement | Mergebot service in prod cluster |
| mergebot/has_ship-it | Code review enforcement | Mergebot service in prod cluster |
| mergebot/review/approved/min_2 | Code review enforcement | Mergebot service in prod cluster |
| teamcity/dcos/build/dcos | Builds DC/OS image (dcos_generate_config.sh) | gen/build_deploy/bash.py |
| teamcity/dcos/build/tox | Runs check-style and unit tests | tox.ini |
| teamcity/dcos/test/aws/cloudformation/simple | Deployment using single-master-cloudformation.json, runs integration tests | gen/build_deploy/aws.py; uses dcos-launch binary in CI |
| teamcity/dcos/test/aws/onprem/static | Installation via dcos_generate_config.sh, runs integration tests | gen/build_deploy/bash.py; uses dcos-launch binary in CI |
| teamcity/dcos/test/azure/arm | Deployment using acs-1master.azuredeploy.json, runs integration tests | gen/build_deploy/azure.py; uses dcos-launch binary in CI |
| teamcity/dcos/test/docker | Exercises dcos-docker by launching it against this PR and running integration tests within the Docker cluster | dcos-docker repo |
| teamcity/dcos/test/docker/smoke | Exercises dcos-docker by launching it against this PR and running smoke tests within the Docker cluster | dcos-docker repo |
| teamcity/dcos/test/upgrade | Upgrade from stable minor version | mesosphere/advanced-tests repo (transitively, dcos/dcos-test-utils, dcos/dcos-launch) |
| teamcity/dcos/test/upgrade-from-previous-major | Upgrade from previous major version | mesosphere/advanced-tests repo (transitively, dcos/dcos-test-utils, dcos/dcos-launch) |
| teamcity/dcos/test/upgrade-to-next-major | Upgrade to next major version | mesosphere/advanced-tests repo (transitively, dcos/dcos-test-utils, dcos/dcos-launch) |
