Local and Production Setups
The setup is split into three configuration scenarios, where each scenario has a dedicated directory of the same name. The directory structure is as follows:
- `jenkins_ci`: setup for Jenkins CI
- `nginx`: setup and configuration for the webserver (i.e. nginx)
- `dev`: deployment setup for local development
- `staging`: deployment setup for the staging environment
- `prod`: deployment setup used in production
Each deployment setup is composed of infrastructure components and the actual microservices. A utility script named `run*.sh` can be found in the directory of each setup.
Top Level Components:
Nginx is used as a reverse proxy for each component.

- Configuration: `nginx/docker-compose.yml`
- Nginx Configuration: `nginx/nginx.conf`
- Manual Deployment: `fab deploy logs -H <username>@<host>` (requires the Fabric deployment tool)
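For illustration, an invocation could look like this (user and host are placeholders for the actual deployment target; assumes Fabric and the repository's fabfile are available):

```bash
# Hypothetical example: run the 'deploy' and 'logs' tasks on a remote host.
fab deploy logs -H deploy@nimble.example.org
```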
Jenkins is used for continuous integration.

- Configuration: `jenkins_ci/docker-compose.yml`
- Docker File: `jenkins_ci/Dockerfile`
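A minimal sketch of bringing the CI container up from that compose file (a standard docker-compose workflow, not a command documented by this repository):

```bash
# Start Jenkins in the background using the provided compose file.
cd jenkins_ci
docker-compose up -d

# Follow the Jenkins logs to watch the startup.
docker-compose logs -f
```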
Production Setup:

- Location: `prod`
In general, the platform is split into two kinds of components: (1) infrastructure components (directory `infra`) and (2) microservice components (directory `services`). These components are part of the virtual network named `nimbleinfraprod_default`. More information can be obtained by executing `docker network inspect nimbleinfraprod_default` on the Docker host.
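For example, the following lists just the containers attached to that network (a generic Docker command; the `--format` template is illustrative):

```bash
# Print the name and IPv4 address of every container on the production network.
docker network inspect nimbleinfraprod_default \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}'
```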
- Marmotta
  - Configuration: `prod/infra/docker-compose-marmotta.yml`
  - Container Name: `nimbleinfraprod_marmotta_1`
- Keycloak
  - Configuration: `prod/keycloak/docker-compose-prod.yml`
  - Container Name: `nimbleinfraprod_keycloak_1`
- ELK Stack
  - Configuration: `prod/elk-prod/docker-compose-elk.yml`
  - Container Names: `nimbleinfraprod_kibana_1`, `nimbleinfraprod_elasticsearch_1`, `nimbleinfraprod_logstash_1`
The definition can be found in `prod/services/docker-compose-prod.yml`, which consists of the following components:
- Config Server
  - ServiceID: `config-server`
  - Container Name: `nimbleinfraprod_config-server_1`
- Service Discovery
  - ServiceID: `service-discovery`
  - Container Name: `nimbleinfraprod_service-discovery_1`
- Gateway Proxy
  - ServiceID: `gateway-proxy`
  - Container Name: `nimbleinfraprod_gateway-proxy_1`
- Hystrix Dashboard (not used at the moment)
  - ServiceID: `hystrix-dashboard`
Definition and configuration of the deployment can be found in `prod/services/docker-compose-prod.yml`, which defines the following services:
- Identity Service
  - ServiceID: `identity-service`
  - Container Name: `nimbleservicesprod_identity-service_1`
- Business Process Service
  - ServiceID: `business-process-service`
  - Container Name: `nimbleservicesprod_business-process-service_1`
- Catalog Service
  - ServiceID: `catalog-service-srdc`
  - Container Name: `nimbleservicesprod_catalog-service-srdc_1`
- Frontend Service
  - ServiceID: `frontend-service`
  - Container Name: `nimbleservicesprod_frontend-service_1`
- Frontend Service Sidecar
  - ServiceID: `frontend-service-sidecar`
  - Container Name: `nimbleservicesprod_frontend-service-sidecar_1`
Configuration is done via environment variables, which are defined in `prod/services/env_vars`. Secrets are stored in `prod/services/env_vars-prod` (this file is adapted on the hosting machine).
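To verify what a running container actually received from these files, its environment can be inspected directly (a generic Docker technique, not a feature of the setup scripts; the container name is taken from the list above):

```bash
# Dump the environment of the running identity service container.
docker exec nimbleservicesprod_identity-service_1 env | sort
```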
A small utility script, `run-prod.sh`, provides the following functionalities:
- `run-prod.sh infra`: starts all infrastructure components
- `run-prod.sh keycloak`: starts the Keycloak container
- `run-prod.sh marmotta`: starts the Marmotta container
- `run-prod.sh elk`: starts all ELK components
- `run-prod.sh services`: starts all services (note: make sure the infrastructure is set up appropriately)
- `run-prod.sh infra-logs`: prints logs of all infrastructure components to stdout
- `run-prod.sh services-logs`: prints logs of all services to stdout
- `run-prod.sh restart-single <serviceID>`: restarts a single service
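For example, a complete production bring-up in the order implied above, followed by restarting a single service, could look like this (the service ID is taken from the service list above; the ordering is a convention, not enforced by the script):

```bash
# Start the infrastructure first, then the remaining components and services.
./run-prod.sh infra
./run-prod.sh keycloak
./run-prod.sh marmotta
./run-prod.sh elk
./run-prod.sh services

# Later: restart one service by its service ID, e.g. the identity service.
./run-prod.sh restart-single identity-service
```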
Staging Setup:

- Location: `staging`

The staging setup is not yet active.
Local Development Setup:

This section provides detailed information on how to set up a local development deployment using Docker. Required files are located in the `dev` directory:
```bash
cd dev
```
Recommended System Requirements (for Docker):

- 16GB Memory
- 4 CPUs

Minimum System Requirements (for Docker):

- 10GB Memory
- 2 CPUs
A utility script called `run-dev.sh` provides the following main commands:
- `run-dev.sh infrastructure`: starts all microservice infrastructure components
- `run-dev.sh services`: starts all NIMBLE core services (note: make sure the infrastructure is running before)
- `run-dev.sh start`: starts infrastructure and services (not recommended for the first run)
- `run-dev.sh stop`: stops all services
It is recommended to start the infrastructure and the services in separate terminals for easier debugging.
`./run-dev.sh infrastructure`: log output will be shown in the terminal
Before continuing to start services, check the infrastructure components as follows:
`docker ps` should show the following new containers up and running:
```
$ docker ps --format 'table {{.Names}}\t{{.Ports}}'
NAMES                             PORTS
nimbleinfra_gateway-proxy_1       0.0.0.0:80->80/tcp
nimbleinfra_service-discovery_1   0.0.0.0:8761->8761/tcp
nimbleinfra_keycloak_1            0.0.0.0:8080->8080/tcp, 0.0.0.0:8443->8443/tcp
nimbleinfra_kafka_1               0.0.0.0:9092->9092/tcp
nimbleinfra_keycloak-db_1         5432/tcp
nimbleinfra_config-server_1       0.0.0.0:8888->8888/tcp
nimbleinfra_zookeeper_1           2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp
nimbleinfra_maildev_1             25/tcp, 0.0.0.0:8025->80/tcp
nimbleinfra_solr_1                0.0.0.0:8983->8983/tcp
```

In case of port binding errors, the shown default port mappings can be adapted to local system requirements in `infra/docker-compose.yml`.
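A quick way to confirm that everything in this group is actually running (a generic Docker check, assuming the default `nimbleinfra_` name prefix shown above):

```bash
# Show the status of all infrastructure containers by their name prefix.
docker ps --filter 'name=nimbleinfra_' --format 'table {{.Names}}\t{{.Status}}'
```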
The infrastructure services can be tested with the following HTTP requests:

- http://localhost:8888/env => lists configuration properties from `nimbleinfra_config-server_1`
- http://localhost:8761/ => lists services registered with Eureka (`nimbleinfra_service-discovery_1`); only "gateway-proxy" in the beginning
- http://localhost/mappings => lists the mappings provided by `nimbleinfra_gateway-proxy_1`
- http://localhost:8080 / https://localhost:8443 => administration console for managing identities and access control from `nimbleinfra_keycloak_1`; login with user `admin` and password `password`
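The same checks can be scripted, e.g. with curl (assuming the default ports from above; `-s` just silences the progress output):

```bash
# Probe the infrastructure endpoints from the command line.
curl -s http://localhost:8888/env        # config server properties
curl -s http://localhost:8761/           # Eureka dashboard (HTML)
curl -s http://localhost/mappings        # gateway route mappings
```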
`./run-dev.sh services`: log output will be shown in the terminal
`docker ps` should show the additional service containers up and running:
```
$ docker ps --format 'table {{.Names}}\t{{.Ports}}'
NAMES                                          PORTS
nimbleservices_business-process-service_1      0.0.0.0:8085->8085/tcp
nimbleservices_catalog-service-srdc_1          0.0.0.0:10095->8095/tcp
nimbleservices_identity-service_1              0.0.0.0:9096->9096/tcp
nimbleservices_trust-service_1                 9096/tcp, 0.0.0.0:9098->9098/tcp
nimbleservices_marmotta_1                      0.0.0.0:8082->8080/tcp
nimbleservices_ubl-db_1                        0.0.0.0:5436->5432/tcp
nimbleservices_camunda-db_1                    0.0.0.0:5435->5432/tcp
nimbleservices_identity-service-db_1           0.0.0.0:5433->5432/tcp
nimbleservices_frontend-service_1              0.0.0.0:8081->8080/tcp
nimbleservices_business-process-service-db_1   0.0.0.0:5434->5432/tcp
nimbleservices_trust-service-db_1              5432/tcp
nimbleservices_frontend-service-sidecar_1      0.0.0.0:9097->9097/tcp
nimbleservices_marmotta-db_1                   0.0.0.0:5437->5432/tcp
nimbleservices_category-db_1                   5432/tcp
nimbleservices_sync-db_1                       5432/tcp
nimbleservices_binary-content-db_1             0.0.0.0:5438->5432/tcp
nimbleservices_indexing-service_1              0.0.0.0:9101->8080/tcp
...
```

Port mappings can be adapted in `services/docker-compose.yml`.
Once the services are up, they should show up in the Eureka service discovery (http://localhost:8761/). Depending on the available resources, this may take a while.
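Registration can also be checked without a browser via Eureka's standard REST API (a stock Netflix Eureka endpoint, not specific to this setup; default port assumed):

```bash
# List all applications currently registered with Eureka as JSON.
curl -s -H 'Accept: application/json' http://localhost:8761/eureka/apps
```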
If they are all up, they can be tested via the NIMBLE frontend at http://localhost:8081 (the default port mapping of `nimbleservices_frontend-service_1`).