This repo contains microservices including:
- Payout service (WIP)
- Payin service (WIP)
- Ledger service (WIP)
- Webhook service (WIP)
The `payment-service` is a Kubernetes pod with two containers:
- The `web` container, running a gunicorn-uvicorn-fastapi asyncio web service, where:
  - `fastapi` runs the asyncio-based web application
  - `uvicorn` runs on top of the `fastapi` web app as an ASGI web server, providing the event loop implementation
  - `gunicorn` runs as a process manager, spawning multiple uvicorn workers for parallelism
- The `runtime` sidecar, used for configuration management
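As a rough illustration of how these three layers fit together, here is a minimal sketch; the module name, endpoint, port, and worker count below are assumptions for illustration, not the service's actual configuration:

```python
# app.py -- hypothetical minimal example of the fastapi layer
from fastapi import FastAPI

app = FastAPI()


@app.get("/health")
async def health() -> dict:
    # an asyncio-native handler, executed on uvicorn's event loop
    return {"status": "ok"}

# gunicorn acts as the process manager, spawning uvicorn workers that serve the app, e.g.:
#   gunicorn app:app -k uvicorn.workers.UvicornWorker --workers 4 --bind 0.0.0.0:8000
```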
The following technologies/frameworks are used across the entire stack:
- helm for deployments and rollbacks
- docker as container engine
- kubernetes as container orchestrator
- python3.7 for application code
- fastapi as web framework for building asyncio web service
- uvicorn as ASGI web server / interface for contained fastapi service
- gunicorn as process manager of uvicorn worker
- pipenv for dependency management and deterministic builds
- pytest for unit testing
- mypy for static type checking
- flake8 for linting
- black for code formatting
- pre-commit for pre-commit hooks
- DoorDash doorctl for image tagging/pushing
- DoorDash Pulse for semantic monitoring and system testing
- DoorDash Pressure for load testing
- DoorDash Runtime for configuration management
- Artifactory to store docker images
- Jenkins/Groovy for CI/CD
Only authorized personnel can access the payment-service k8s deployment (i.e. execute kubectl commands).
- To access the staging payment-service deployment, you need to request permission in the payments-team@ google group.
- For prod, unlike staging, you need to be in this list.
- After you are authorized in step 1:

```
$ ssh <YOUR USERNAME>@bastion.doordash.com
$ kubeswitch payments  # switch kube context to payment-service namespace
```
Payment-service uses ddops to control releases and k8s deployments:
- Promote pipeline
- Release build pipeline
- CI pipeline
- Migration pipeline
- Dev CD pipeline (Always deploy merged code to staging only)
- All PRs run through the CI steps defined in the CI pipeline.
- Approved PRs are merged to the master branch, without being deployed to the k8s cluster.
- Payment-service deploy pilots are authorized to perform regular deployments, hotfixes, rollbacks, migrations and bounces via ddops commands through the deployment slack channel.
- Deployment procedures
- Regular deployment:

```
/ddops cut-release payment-service                          # -> output: <releaseTag>
/ddops build payment-service <releaseTag>
/ddops migrate payment-service <releaseTag> --env=staging   # Migrate staging ONLY
/ddops migrate payment-service <releaseTag>                 # Migrate prod ONLY
/ddops promote payment-service <releaseTag> --env=staging   # Promote staging ONLY
/ddops promote payment-service <releaseTag>                 # Promote prod ONLY
```
- Hotfix:
- Rollback:

```
/ddops rollback payment-service <rollback-to-release-tag> <rollback reason>
```
- Bounce:

```
/ddops bounce payment-service   # rolling upgrade to bounce payment-service-web and payment-service-webhook with the current live release
```
As of 1/21/2020, the payment service has adopted blue-green deployment for the web and webhook k8s services. Keep these facts in mind:
- As part of blue-green deployment, a k8s rollout resource was created on top of the existing k8s service, instead of a k8s deployment resource. Therefore, for any blue-green enabled k8s service, use the following command to get the current rollout:

```
kubectl -n payment-service get rollouts
```

- Each blue-green deployment sets green (the new replica set) to serve traffic directly, alongside blue (the old replica set). After a certain delay, blue is deleted.
- In case the green service behaves unexpectedly during deployment, running

```
/ddops rollback payment-service <blue release tag>
```

will immediately remove the green service and keep the existing blue service unchanged.
- Follow the Install Argo Rollouts Kubectl Plugin guide to install the argo-rollouts kubectl plugin locally, which gives a friendly view of the blue-green deployment (argo rollout) process. Example:
```
kubectl-argo-rollouts get rollout payment-service-web -n payment-service

Name:            payment-service-web
Namespace:       payment-service
Status:          ◌ Progressing
Strategy:        BlueGreen
Images:          611706558220.dkr.ecr.us-west-2.amazonaws.com/payment-service:2af851a235cc327f4de02a89a72f76c2acffb5c2
                 611706558220.dkr.ecr.us-west-2.amazonaws.com/payment-service:f5a565c05dc7756a9f0ac2f8b7e7372c17a6d2db (active)
                 ddartifacts-docker.jfrog.io/runtime:latest (active)
Replicas:        Desired: 2  Current: 4  Updated: 2  Ready: 4  Available: 2

NAME                                              KIND        STATUS         AGE   INFO
⟳ payment-service-web                             Rollout     ◌ Progressing  121m
├──# revision:3
│  └──⧉ payment-service-web-5b5bdcc678            ReplicaSet  ✔ Healthy      75s   active
│     ├──□ payment-service-web-5b5bdcc678-gzr85   Pod         ✔ Running      75s   ready:2/2
│     └──□ payment-service-web-5b5bdcc678-q826k   Pod         ✔ Running      75s   ready:2/2,restarts:1
├──# revision:2
│  └──⧉ payment-service-web-7c855dcb5b            ReplicaSet  ✔ Healthy      59m
│     ├──□ payment-service-web-7c855dcb5b-pbzlk   Pod         ✔ Running      59m   ready:2/2
│     └──□ payment-service-web-7c855dcb5b-vvvvj   Pod         ✔ Running      59m   ready:2/2
└──# revision:1
   └──⧉ payment-service-web-5547bd48cf            ReplicaSet  • ScaledDown   121m
```
- Make sure you have followed the New eng setup guide to properly set up your PIP install indices via the environment variables `ARTIFACTORY_URL`, `ARTIFACTORY_USERNAME`, `ARTIFACTORY_PASSWORD` and `PIP_EXTRA_INDEX_URL`.
- After step #1, also include `FURY_TOKEN` in your `~/.bash_profile` by running:

```
echo "export FURY_TOKEN=$(echo $PIP_EXTRA_INDEX_URL | sed -E -e 's/.*\/([^/]+):@repo.fury.io\/doordash.*/\1/')" >> ~/.bash_profile
source ~/.bash_profile
```
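If the sed expression looks opaque: it just pulls the token out of the index URL. A rough Python equivalent, with a made-up placeholder URL rather than a real credential:

```python
# Illustrative only: what the sed expression above extracts.
import re

# hypothetical placeholder; the real value comes from your environment
pip_extra_index_url = "https://SOME_TOKEN:@repo.fury.io/doordash/"

match = re.search(r".*/([^/]+):@repo\.fury\.io/doordash.*", pip_extra_index_url)
fury_token = match.group(1) if match else None
print(fury_token)  # -> SOME_TOKEN
```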
Install the python version specified in `Pipfile.lock` (v3.7) through pyenv and pipenv, into a newly created python virtual environment:

```
brew install pyenv pipenv
brew upgrade pyenv pipenv
# install all dependencies needed for development, including the ones installed with the --dev argument
make sync-pipenv
```
After step #1, a python virtual environment should be created.
- To find where the environment is located, run:

```
$ pipenv --venv
```

- To start a sub-shell within the created python virtual environment, run:

```
pipenv shell  # Try `pipenv shell --fancy` if you want to preserve your customized shell configuration from ~/.bashrc
```

- To go back to your original shell and deactivate the python virtual env, simply run:

```
$ exit
```
To test that everything works, you can try:
- Activate the python virtual env:

```
$ pipenv shell
```

- Run unit tests, linter and mypy:

```
$ make test
```
Install `pre-commit` and initialize the pre-commit hooks; these will run before every commit:

```
brew install pre-commit
pre-commit install
```
Run Pre-Commit Hooks
If you want to manually run all pre-commit hooks on the repository, run `pre-commit run --all-files`. To run individual hooks, use `pre-commit run <hook_id>`. Pre-commit `<hook_id>`s are defined in `.pre-commit-config.yaml`. Example commands:

```
pre-commit run --all-files
pre-commit run black --all-files
```
Run the service locally:

```
pipenv shell
make local-server
```

- The local service runs in development and debug mode: any code change triggers a service reload and is therefore reflected in real time.
- Confirm the local server is up and running:

```
curl localhost:8000/health  # port is defined in Makefile::local-server
```
- Activate the python virtual env:

```
$ pipenv shell
```

- Available test targets:

```
make test          # runs unit tests, linter, mypy (not pre-commit hooks)
make test-unit     # runs unit tests only
make test-typing   # runs mypy only
make test-lint     # runs linter only
make test-hooks    # runs pre-commit hooks only
```
Run the service in docker:

```
make start-local-docker-server
```

- Similarly to `local-server`, the ./app directory is bound as a volume under the web docker container, so live reload is available here as well.
- Confirm the server is up and running in docker:

```
curl localhost:8001/health  # docker port mapping defined in docker-compose.yml web container
```
Refer to run tests locally.
We categorize test cases into 3 groups within the payment-service repo. (Pulse tests will be covered separately.)
Unit tests: test cases should focus on testing unit functionality, without any real connections to remote dependencies like a database, redis or stripe. Any remote dependency should be mocked out in unit test cases; see the sketch after this list.
- Make target: `make test-unit`
- Directory: within each top-level component (e.g. commons, payin, payout, ledger, middleware...), there is a `test_unit` folder where corresponding unit test files should be created.
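A minimal sketch of what "mocked out" can look like in practice; the file, function, and test names here are hypothetical, not actual payment-service code:

```python
# test_unit/test_fees.py -- hypothetical example of mocking a remote dependency
from unittest.mock import MagicMock


def charge_with_fee(stripe_client, amount_cents: int) -> int:
    """Hypothetical unit under test: charges via stripe, returns amount + 1% fee."""
    total = amount_cents + amount_cents // 100
    stripe_client.create_charge(amount=total)
    return total


def test_charge_with_fee_mocks_stripe():
    stripe_client = MagicMock()  # stands in for the real stripe client: no network call
    assert charge_with_fee(stripe_client, 1000) == 1010
    stripe_client.create_charge.assert_called_once_with(amount=1010)
```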
Integration tests: test cases should focus on validating the integration contract and sanity with remote dependencies. Alternatively, if you really have to wire up dependency components to test some integration behavior, you can create the test cases as integration tests.
- Make targets:
  - `make test-integration`: runs all integration tests, including test cases depending on remote dependencies we own (e.g. DB) and on external dependencies like stripe.
  - `make test-external`: only runs test cases marked with `pytest.mark.external`, which usually are tests depending on external dependencies like stripe (see the sketch after this list).
- Directory: within each top-level component (e.g. commons, payin, payout, ledger, middleware...), there is a `test_integration` folder where corresponding integration test files should be created.
- Note: DO NOT place integration tests outside of a `test_integration` folder, otherwise our test scripts won't start up the corresponding dependencies for the test.
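A minimal sketch of how the `external` marker mentioned above might be applied (hypothetical file and test names):

```python
# test_integration/test_stripe_contract.py -- hypothetical example
import pytest


@pytest.mark.external  # depends on an external service; selected by `make test-external`
def test_create_charge_contract_with_stripe():
    # would exercise the real (sandbox) stripe API here
    ...
```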
The `payment-service` uses `pipenv` to ensure deterministic builds. To add or update a dependency, do the following:
- Add or update the dependency in `Pipfile` via your text editor.
- Run the following command to update `Pipfile.lock` and install from the updated lock file:

```
make update-pipenv
```

- After you are done, remember to open a PR to check in the changes!
Payment-service is integrated with Ninox as the source of all secret configurations, such as DB credentials and Stripe private keys.
Make sure you are in the google group eng-payment@; otherwise, please ask one of the payment team members to add you.
Fetch the `okta-prod-payment-eng-user` okta-aws profile via:

```
okta-aws init
```

In case you don't have the `okta-aws` cli installed, follow here. Verify you have successfully fetched the aws profile:

```
grep okta-prod-payment-eng-user ~/.aws/credentials
# Expected output
>> [okta-prod-payment-eng-user]
```
Install Ninox:

```
brew install Ninox
```

If this fails, ensure you have tapped into the doordash homebrew taps.
Verify the Ninox user role is working:

```
cd YOUR_PAYMENT_SERVICE_REPO
ninox -s staging_user config
```

You should see output similar to the following, without errors:

```
Loading team staging_user
{'backend': 'dbd',
 'ignore_entropy_check': False,
 'kms_key_alias': 'alias/ninox/payment-service',
 'prefix': '/ninox/payment-service/',
 'profile': 'okta-prod-payment-eng-user',
 'region': 'us-west-2',
 'role': 'arn:aws:iam::016116557778:role/ninox_payment-service_xacct_user_staging',
 'session': Session(region_name='us-west-2'),
 'table': 'ninox-payment-service-staging'}
```
Ninox secret create, update, retrieve from cli
- As a payment engineer, I want to create or update a secret:

```
cd PAYMENT_REPO
ninox -s [staging_user|prod_user] [create|update] <secret_name_all_lower_case>
```

Note: as of 07/25/2019, the Ninox cli doesn't support `ls` for the user role yet.
- As a payment engineer, I want to see whether the secret was created/updated as expected. This is really not a good practice!!, but if you really want to, you need to log in to one of the staging or prod payment-service-web pods and run:

```
ninox -s [staging|prod] get <secret_name_all_lower_case>
```

Note: once we have a better way to validate, we will update this.
- Set up your pycharm remote debugger via Run -> Edit Configurations -> + -> Python Remote Debug
- Name it `debug-server` and have it listen on localhost:9001 as follows:
- Start up your debugger from the menu via Run -> Debug... -> debug-server
- The same remote debugger can be used to live debug either the local server or tests:
  - For local-server: `make local-server DEBUGGER=enabled`
  - For tests: `make test-unit DEBUGGER=enabled`
Here's a reference to all available `make` commands:

```
make docker-build          # use docker to build the service image
make build-ci-container    # build the docker container for CI, using docker-compose.ci.yml
make run-ci-container      # start the docker container for CI in daemon mode
make local-server          # run service on your local host within python virtual env
make local-docker-server   # run service with docker-compose
make test                  # runs unit tests, linter, mypy (not pre-commit hooks)
make test-unit             # runs unit tests only
make test-integration      # runs integration tests (incl. database tests) only
make test-external         # runs external tests only (i.e. tests that talk to an external dependency)
make test-typing           # runs mypy only
make test-lint             # runs linter only
make test-install-hooks    # installs pre-commit hooks
make test-hooks            # runs pre-commit hooks only
```