Commit 328caa5

Github workflows for building environments (#195)
* Run the correct installer file
* Run the installer from the root directory
* Try a self hosted github runner
* Reduce logging for docker pull.
* Adds quiet flag to docker pull command
* Pull the images before expect to reduce run time
* Install docker early in order to speed up install
* Builds the right docker-compose file
* Increase timeout for linux install expect script
* Change timeout on expect script
* Change the way expect watches the script
* Expand the timeout when waiting for Elasticsearch
* Search for more output in the expect script
* Change the match for the dots in expect
* Change the regex for matching dots
* Change the output for catching dots
* Add chrome to Dockerfile for selenium
* Import selenium tests and run python tests
* Activate venv when running tests
* Correct path for venv in the container
* Correct path for venv in the container
* Running only linux tests
* Adjust scripts to run as a non super user
* Change the permissions on the output log to source for environment variables later
* Check for output log
* Make output log available to test instantiation
* Change pytest cache dir to home for user
* Change pytest cache dir to home for user
* Change pytest cache dir permissions
* Hide get-docker.sh from installs
* Cleanup test files in workflow
* Add the cluster workflow for github actions
* Adds a cluster build
* Run the test cluster in pwsh
* Fail pipeline when commands fail
* Catch the error from powershell
* Remove duplicate run command
* Set env vars explicitly
* Modify the escape char for env vars
* Try a different method of catching errors in pwsh script
* Check failure of pwsh script
* Test successful run of build_cluster
* Test failure of script
* Capture the output from the az commands
* Continue on error condition
* Simplify run command
* Try catching failures in a new way.
* Test failure capture
* Setting error action to continue
* Remove ErrorAction
* Use docker-compose run instead
* Capture exit code to fail step
* Try propagating errors from pwsh
* Capture external command exit code
* Send lastexitcode
* Don't exit right away
* Disable immediate stop on exit
* Run simple test for exit code
* Cd to docker compose file
* Catch exec exit code
* Remove unneeded flags from the command
* Adds back in the build script
* Adds an explicit exit for powershell script
* Remove spaces after escape character
* Escape the exitcode variable in the shell command
* Remove extra exit from build_cluster.ps1
* Add a passing command for build_cluster.ps1
* Move to the install directory
* Run setup testbed to get an error
* Try to build a cluster with the build_cluster.ps1 script.
* Check resource group variable
* Set the resource group name differently
* Build a cluster using the generated resource group
* Make the paths relative in the build_cluster script
* Move to the right directory to do an install
* Destroy cluster on pipeline finish
* Change the owner of the files to match the host in the development container
* Su user to remove testing files
* Run the docker-compose as root to clean up
* Run as root to clean up containers
* Build the cluster in azure
* List the files in the current directory on exec
* Run the files from the new path
* Investigate more about the file environment
* Update the environment for building the cluster
* Update the environment users before docker up
* Try to start hung job
* List all the files with their owners in the container
* Escape the powershell commands
* Check the paths and files with bash
* Find the path we are on
* Check powershell environment
* Cd to home directory in powershell
* Cd to home directory in powershell
* Rebuild docker compose as the right user
* Change directory to source directory for powershell
* Change to proper directory for powershell
* Build a full cluster in pipeline
* Run the linux tests and check permissions of files
* Change permissions on output file with sudo
* Turn off cluster creation for speed
* Comment out building cluster in steps
* Only delete the resource group if it exists
* Adds ability to get the public ip for fw rules
* Put the tags in quotes when creating nsg rules
* Output the command being run for nsg rules
* Remove tags for nsg port definitions
* Install lme on the cluster
* Builds the full cluster install
* Cleans up the usage of the environment variables in pipeline
* Extract environment variables from the build script and use them in the GitHub workflow.
* Do a minimal linux install
* Fix the path for retrieving env vars
* Check setting of github env
* Source the env file and push it to github env
* Print some debug information to the console
* Check setting of each key in functions
* Parse the output for the passwords better
* Uses a unique id instead of run_id to make sure it is unique
* Double quote the file name for sed in output.log
* Changes the way we get passwords from output.log
* Make sure key doesn't have newline
* Escape dollar sign
* Properly escape double quotes inside of docker-compose command
* Escape all of the dollar signs in the compose command
* Write the environment variables to the github environment
* Clean up debugging output
* Remove more debugging output
* Remove set e
* Adds function to write passwords to a file for actions
* List files in directory after writing passwords
* Export the env vars in the github file
* Fail the workflow if the environment is not set correctly
* Clean up the environment vars for the container
* Set the variables on run of the pwsh command
* Run commands on the domain controller
* Get the environment checker to pass
* Update passing variables to remote script
* Escape the powershell environment variables
* Change the case of the resource group env var
* Don't destroy cluster so we can manually test
* Build the entire cluster to run commands against
* Run a command on the linux machine
* Run remote tests
* Run minimal installs to debug tests
* Fix escaping for test commands
* Move to the correct directory for tests
* Add continuation characters to the lines in the script
* Remove nested double quotes
* Uses the ip of LS1 to run the tests on
* Put the cluster build command on one line
* Destroy clusters at the end
* Quote output log correctly on build
* Run all api tests on cluster
* Build full cluster and add verbose logging to pytest
* Stop deleting the cluster in the destroy_cluster.ps1 script
* Modify installer to use the new winlogbeat index pattern
* Try to get the dns to resolve ls1
* Add ls1 to the hosts file so it resolves always
* Modify tests to pass on a working cluster
* Skip the fragile test for mapping
* Set up to run selenium tests on the cluster
* Testing
* Rerun build after rebasing to the right branch
* Pass the minimal install flag to install lme
* Build complete cluster and run all tests
* Pull the images quietly if running without a terminal.
* Run the simple tests on PR checkin and the longer ones when triggered
* Build the linux docker container upon check in of a pr
* Build lme container fresh before install
* Runs an end to end build in docker and cluster
* Print out the download feedback when pulling images
* Build 1.4.0 branch
* Build the cluster using the main branch of the repository
* Allow passing branch to installers from the pipeline
* Run tests from a different base branch
* Remove the ampersand typo
* Allow passing arguments to the installer scripts
* Rearrange install arguments
* Test passing arguments in install lme
* Build lme without arguments
* Install lme with no arguments
* Run command as string in install_lme.ps1
* Build by passing arguments
* Run a complete build using arguments
* Update the sources to allow for updating in the pipeline
* Build the cluster using the latest branch
* Set up the latest branch var
* Runs an upgrade in the pipeline
* Run the upgrade in the remote linux machine
* Run upgrade on minimal install
* Checks out the current branch to run an upgrade on linux
* Capture the exit code of the upgrade script
* Check the directories we are working in
* Clone the git repository to run the upgrade
* Checkout the proper branch from origin
* Get the remote username and home dir for the remote server
* Set the home directory for the az user
* Use origin when checking out in the upgrade script
* Revert the changes to deploy.sh
* Set a dumb terminal to avoid terminal errors
* Export the terminal variable correctly
* Capture the output of the upgrade script to fail pipeline if it fails
* Revert previous changes as they seemed to break upgrade
* Use a different format for executing the pwsh script
* Destroy the cluster when done
* Output the upgrade information to the terminal
* Try capturing the docker-compose output
* Directly capture the output of the compose command
* Fixes unbalanced quote
* Build and run full cluster with an upgrade
* Builds the current brand for the cluster
* Add a unique id for the docker-compose so you can run multiple instances of the same docker-compose file
* Adds upgrade.yml to gh workflows
* Runs both a build and an upgrade
* Adds upgrade to the gh workflows
* Get gh to notice new workflow
* Match build names to parent branch
* Trigger gh to see the workflow
* Get gh actions to trigger workflow
* Update code to get gh to see the actions
* Update code to use the new workflow module.
* Trigger gh actions to run
* Get gh to run workflows
* Try to get gh to run workflows
* Change upgrade branch pulling
* Checking out branch for upgrade in a new way
* Rename workflow for upgrade
* Convert to docker compose
* Run all three builds using docker compose and -p
* Clean up docker containers
* Build the docker containers fresh for the linux_only workflow
* Adds readme and checks an upgrade where the upgrade version is the same as the current version
* Fixes typo in the workflow file
* Runs docker as sudo
* Remove the privileged flag from the lme container
* Try leaving the swarm on the host if running in non privileged environment
* Leave the swarm on the host
* Reset to run docker as privileged
* Installs the current branch in linux only
* Stop pruning system to see if elastic starts faster
* Don't take down the docker containers to see why they aren't working
* Removes the gh actions shell escape vulnerability
* Remove the docker containers at end of run
1 parent fcb7199 commit 328caa5

28 files changed: 1243 additions, 727 deletions
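
A large share of the commits above deal with passing values (the unique project id, branch name, and extracted passwords) from one workflow step to the next. The mechanism, visible throughout the diffs below, is appending key=value lines to the file that $GITHUB_ENV points at. A minimal sketch of that pattern, reusing the variable names from the workflows below:

    # Earlier step: publish values for later steps by appending to $GITHUB_ENV.
    UNIQUE_ID=$(openssl rand -hex 3 | head -c 6)          # short random project id
    echo "UNIQUE_ID=$UNIQUE_ID" >> "$GITHUB_ENV"
    echo "BRANCH_NAME=${GITHUB_REF##*/}" >> "$GITHUB_ENV"

    # Later step in the same job: the values arrive as ordinary environment variables
    # and are also reachable as ${{ env.UNIQUE_ID }} in workflow expressions.
    echo "building project $UNIQUE_ID from branch $BRANCH_NAME"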

.github/README.md

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+See the readme in `testing/development` for more information about these workflows and how to develop for them.

.github/workflows/cluster.yml

Lines changed: 253 additions & 4 deletions
@@ -2,10 +2,259 @@ name: Cluster Run
 
 on:
   workflow_dispatch:
+  # pull_request:
+  #   branches:
+  #     - '*'
+
 jobs:
-  manual-job:
-    runs-on: ubuntu-latest
+  build-and-test-cluster:
+    runs-on: self-hosted
+    env:
+      UNIQUE_ID:
+      IP_ADDRESS:
+      LS1_IP:
+      BRANCH_NAME:
+      elastic:
 
     steps:
-      - name: Greet
-        run: echo "This is a manually triggered workflow."
+      - name: Checkout repository
+        uses: actions/checkout@v4.1.1
+
+      - name: Setup environment variables
+        run: |
+          PUBLIC_IP=$(curl -s https://api.ipify.org)
+          echo "IP_ADDRESS=$PUBLIC_IP" >> $GITHUB_ENV
+          echo "UNIQUE_ID=$(openssl rand -hex 3 | head -c 6)" >> $GITHUB_ENV
+
+      - name: Get branch name
+        shell: bash
+        run: |
+          if [ "${{ github.event_name }}" == "pull_request" ]; then
+            echo "BRANCH_NAME=${{ github.head_ref }}" >> $GITHUB_ENV
+          else
+            echo "BRANCH_NAME=${GITHUB_REF##*/}" >> $GITHUB_ENV
+          fi
+
+      - name: Set up Docker Compose
+        run: |
+          sudo curl -L "https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-$(uname -s)-$(uname -m)" \
+            -o /usr/local/bin/docker-compose
+          sudo chmod +x /usr/local/bin/docker-compose
+
+      - name: Set the environment for docker-compose
+        run: |
+          cd testing/development
+          # Get the UID and GID of the current user
+          echo "HOST_UID=$(id -u)" > .env
+          echo "HOST_GID=$(id -g)" >> .env
+
+      # - name: Run Docker Compose Build to fix a user id issue in a prebuilt container
+      #   run: |
+      #     cd testing/development
+      #     docker compose -p ${{ env.UNIQUE_ID }} build --no-cache
+
+      - name: Run Docker Compose
+        run: docker compose -p ${{ env.UNIQUE_ID }} -f testing/development/docker-compose.yml up -d
+
+      - name: List docker containers to wait for them to start
+        run: |
+          docker ps
+
+      - name: List files in home directory
+        run: |
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T lme bash -c "pwd && ls -la"
+
+      - name: Check powershell environment
+        run: |
+          set +e
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& {
+            cd /home/admin.ackbar/LME; \
+            ls -la; \
+            exit \$LASTEXITCODE;
+          }"
+          EXIT_CODE=$?
+          echo "Exit code: $EXIT_CODE"
+          set -e
+          if [ "$EXIT_CODE" -ne 0 ]; then
+            exit $EXIT_CODE
+          fi
+
+      - name: Build the cluster
+        run: |
+          set +e
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& {
+            cd /home/admin.ackbar/LME/testing; \
+            \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \
+            \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \
+            \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \
+            \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \
+            \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \
+            \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \
+            \$env:IP_ADDRESS='${{ env.IP_ADDRESS }}'; \
+            ./development/build_cluster.ps1 -IPAddress \$env:IP_ADDRESS; \
+            exit \$LASTEXITCODE;
+          }"
+          EXIT_CODE=$?
+          echo "Exit code: $EXIT_CODE"
+          set -e
+          if [ "$EXIT_CODE" -ne 0 ]; then
+            exit $EXIT_CODE
+          fi
+          cd ..
+          . configure/lib/functions.sh
+          extract_ls1_ip 'LME-pipe-${{ env.UNIQUE_ID }}.cluster.output.log'
+          echo "LS1_IP=$LS1_IP" >> $GITHUB_ENV
+
+      - name: Install lme on cluster
+        run: |
+          set +e
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& {
+            cd /home/admin.ackbar/LME/testing; \
+            \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \
+            \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \
+            \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \
+            \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \
+            \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \
+            \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \
+            ./development/install_lme.ps1 -b '${{ env.BRANCH_NAME }}'; \
+            exit \$LASTEXITCODE;
+          }"
+          EXIT_CODE=$?
+          echo "Exit code: $EXIT_CODE"
+          set -e
+          if [ "$EXIT_CODE" -ne 0 ]; then
+            exit $EXIT_CODE
+          fi
+
+      - name: Set the environment passwords for other steps
+        run: |
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T lme bash -c "
+            cd /home/admin.ackbar/LME/testing \
+            && . configure/lib/functions.sh \
+            && extract_credentials 'LME-pipe-${{ env.UNIQUE_ID }}.password.txt' \
+            && write_credentials_to_file '${{ env.UNIQUE_ID }}.github_env.sh' \
+          "
+          . ../${{ env.UNIQUE_ID }}.github_env.sh
+          rm ../${{ env.UNIQUE_ID }}.github_env.sh
+          echo "elastic=$elastic" >> $GITHUB_ENV
+          echo "kibana=$kibana" >> $GITHUB_ENV
+          echo "logstash_system=$logstash_system" >> $GITHUB_ENV
+          echo "logstash_writer=$logstash_writer" >> $GITHUB_ENV
+          echo "dashboard_update=$dashboard_update" >> $GITHUB_ENV
+
+      - name: Check that the environment variables are set
+        run: |
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T lme bash -c "
+            if [ -z \"${{ env.elastic }}\" ]; then
+              echo 'Error: env.elastic variable is not set' >&2
+              exit 1
+            else
+              echo 'Elastic password is set'
+            fi
+          "
+
+      # - name: Run a command on the domain controller
+      #   run: |
+      #     set +e
+      #     cd testing/development
+      #     docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& {
+      #       cd /home/admin.ackbar/LME/testing; \
+      #       \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \
+      #       \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \
+      #       \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \
+      #       \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \
+      #       \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \
+      #       \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \
+      #       az login --service-principal -u \$env:AZURE_CLIENT_ID -p \$env:AZURE_SECRET --tenant \$env:AZURE_TENANT; \
+      #       az vm run-command invoke \
+      #         --command-id RunPowerShellScript \
+      #         --name DC1 \
+      #         --resource-group \$env:RESOURCE_GROUP \
+      #         --scripts 'ls C:\'; \
+      #       exit \$LASTEXITCODE;
+      #     }"
+      #     EXIT_CODE=$?
+      #     echo "Exit code: $EXIT_CODE"
+      #     set -e
+      #     if [ "$EXIT_CODE" -ne 0 ]; then
+      #       exit $EXIT_CODE
+      #     fi
+
+      - name: Run a command on the linux machine
+        run: |
+          set +e
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& {
+            cd /home/admin.ackbar/LME/testing; \
+            \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \
+            \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \
+            \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \
+            \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \
+            \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \
+            \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \
+            az login --service-principal -u \$env:AZURE_CLIENT_ID -p \$env:AZURE_SECRET --tenant \$env:AZURE_TENANT; \
+            az vm run-command invoke \
+              --command-id RunShellScript \
+              --name LS1 \
+              --resource-group \$env:RESOURCE_GROUP \
+              --scripts 'ls -lan'; \
+            exit \$LASTEXITCODE;
+          }"
+          EXIT_CODE=$?
+          echo "Exit code: $EXIT_CODE"
+          set -e
+          if [ "$EXIT_CODE" -ne 0 ]; then
+            exit $EXIT_CODE
+          fi
+
+      # This only passes when you do a full install
+      - name: Run api tests in container
+        run: |
+          set +e
+          cd testing/development
+          docker-compose -p ${{ env.UNIQUE_ID }} exec -T -u admin.ackbar lme bash -c " cd testing/tests \
+            && echo export elastic=${{ env.elastic }} > .env \
+            && echo export ES_HOST=${{ env.LS1_IP }} >> .env \
+            && cat .env \
+            && python3 -m venv /home/admin.ackbar/venv_test \
+            && . /home/admin.ackbar/venv_test/bin/activate \
+            && pip install -r requirements.txt \
+            && sudo chmod ugo+w /home/admin.ackbar/LME/ -R \
+            && pytest -v api_tests/"
+
+      - name: Run selenium tests in container
+        run: |
+          set +e
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T -u admin.ackbar lme bash -c " cd testing/tests \
+            && echo export ELASTIC_PASSWORD=${{ env.elastic }} > .env \
+            && . .env \
+            && python3 -m venv /home/admin.ackbar/venv_test \
+            && . /home/admin.ackbar/venv_test/bin/activate \
+            && pip install -r requirements.txt \
+            && sudo chmod ugo+w /home/admin.ackbar/LME/ -R \
+            && python selenium_tests.py --domain ${{ env.LS1_IP }} -v"
+
+      - name: Cleanup environment
+        if: always()
+        run: |
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& {
+            cd /home/admin.ackbar/LME/testing; \
+            \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \
+            \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \
+            \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \
+            \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \
+            \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \
+            \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \
+            ./development/destroy_cluster.ps1; \
+            exit \$LASTEXITCODE;
+          }"
+          docker compose -p ${{ env.UNIQUE_ID }} down
+          docker system prune --force
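
Every pwsh step above repeats the same exit-code plumbing so that a failure inside the container fails the Actions step. A stripped-down sketch of that pattern, with the `lme` service name, project id, and script path taken from the workflow (the single-quoted command here avoids the `\$` escaping the workflow needs because it also interpolates `${{ }}` expressions):

    #!/usr/bin/env bash
    # Propagate a failure from inside `docker compose exec ... pwsh` to the calling step.
    set +e                                # keep bash alive long enough to read the code
    docker compose -p "$UNIQUE_ID" exec -T lme pwsh -Command '& {
      ./development/build_cluster.ps1 -IPAddress $env:IP_ADDRESS
      exit $LASTEXITCODE                  # surface the external command status from pwsh
    }'
    EXIT_CODE=$?                          # what pwsh exited with
    echo "Exit code: $EXIT_CODE"
    set -e
    if [ "$EXIT_CODE" -ne 0 ]; then
      exit "$EXIT_CODE"                   # fail the workflow step
    fi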

.github/workflows/linux_only.yml

Lines changed: 96 additions & 6 deletions
@@ -1,11 +1,101 @@
-name: Linux Only
+name: Linux Only
 
 on:
   workflow_dispatch:
-jobs:
-  manual-job:
-    runs-on: ubuntu-latest
+  pull_request:
+    branches:
+      - '*'
 
+jobs:
+  build-and-test-linux-only:
+    # runs-on: ubuntu-latest
+    runs-on: self-hosted
+
+    env:
+      UNIQUE_ID:
+      BRANCH_NAME:
+
     steps:
-      - name: Greet
-        run: echo "This is a manually triggered workflow."
+      - name: Checkout repository
+        uses: actions/checkout@v4.1.1
+
+      - name: Setup environment variables
+        run: |
+          echo "UNIQUE_ID=$(openssl rand -hex 3 | head -c 6)" >> $GITHUB_ENV
+
+      - name: Setup environment variables
+        run: |
+          echo "AZURE_CLIENT_ID=${{ secrets.AZURE_CLIENT_ID }}" >> $GITHUB_ENV
+          echo "AZURE_SECRET=${{ secrets.AZURE_SECRET }}" >> $GITHUB_ENV
+          echo "AZURE_CLIENT_SECRET=${{ secrets.AZURE_SECRET }}" >> $GITHUB_ENV
+          echo "AZURE_TENANT=${{ secrets.AZURE_TENANT }}" >> $GITHUB_ENV
+          echo "AZURE_SUBSCRIPTION_ID=${{ secrets.AZURE_SUBSCRIPTION_ID }}" >> $GITHUB_ENV
+
+      - name: Set Branch Name
+        shell: bash
+        env:
+          EVENT_NAME: ${{ github.event_name }}
+          HEAD_REF: ${{ github.head_ref }}
+          GITHUB_REF: ${{ github.ref }}
+        run: |
+          if [ "$EVENT_NAME" == "pull_request" ]; then
+            echo "BRANCH_NAME=$HEAD_REF" >> $GITHUB_ENV
+          else
+            BRANCH_REF="${GITHUB_REF##*/}"
+            echo "BRANCH_NAME=$BRANCH_REF" >> $GITHUB_ENV
+          fi
+
+      - name: Set up Docker Compose
+        run: |
+          sudo curl -L "https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-$(uname -s)-$(uname -m)" \
+            -o /usr/local/bin/docker-compose
+          sudo chmod +x /usr/local/bin/docker-compose
+
+      - name: Set the environment for docker-compose
+        run: |
+          cd testing/development
+          # Get the UID and GID of the current user
+          echo "HOST_UID=$(id -u)" > .env
+          echo "HOST_GID=$(id -g)" >> .env
+
+      - name: Run Docker Build
+        run: docker compose -p ${{ env.UNIQUE_ID }} -f testing/development/docker-compose.yml build lme --no-cache
+
+      - name: Run Docker Compose
+        run: docker compose -p ${{ env.UNIQUE_ID }} -f testing/development/docker-compose.yml up -d
+
+      - name: List docker containers to wait for them to start
+        run: |
+          docker ps
+
+      - name: Execute commands inside ubuntu container
+        run: |
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T ubuntu bash -c "echo 'Ubuntu container built'"
+
+      - name: Install LME in container
+        run: |
+          set -x
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T lme bash -c "./testing/development/build_docker_lme_install.sh -b ${{ env.BRANCH_NAME }} \
+            && sudo chmod go+r /opt/lme/Chapter\ 3\ Files/output.log"
+
+      - name: Run api tests in container
+        run: |
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T -u admin.ackbar lme bash -c ". testing/configure/lib/functions.sh \
+            && sudo cp /opt/lme/Chapter\ 3\ Files/output.log . \
+            && extract_credentials output.log \
+            && sudo rm output.log \
+            && sudo docker ps \
+            && . /home/admin.ackbar/venv_test/bin/activate \
+            && sudo chmod ugo+w /home/admin.ackbar/LME/ \
+            && pytest testing/tests/api_tests/linux_only/ "
+
+      - name: Cleanup Docker Compose
+        if: always()
+        run: |
+          cd testing/development
+          docker compose -p ${{ env.UNIQUE_ID }} exec -T -u root lme bash -c "rm -rf /home/admin.ackbar/LME/.pytest_cache"
+          docker compose -p ${{ env.UNIQUE_ID }} down
+          docker system prune --force
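
Both workflows key every docker compose invocation to a throwaway project name so that concurrent runs on the same self-hosted runner do not collide. A minimal sketch of that isolation pattern, using the same commands that appear in the steps above:

    # Generate a short random project id, as the "Setup environment variables" steps do.
    UNIQUE_ID=$(openssl rand -hex 3 | head -c 6)

    # Every compose command carries -p "$UNIQUE_ID", so this run's containers, networks,
    # and volumes are namespaced away from other runs of the same compose file.
    docker compose -p "$UNIQUE_ID" -f testing/development/docker-compose.yml up -d
    docker compose -p "$UNIQUE_ID" exec -T lme bash -c "echo running in project $UNIQUE_ID"

    # Tear down only this run's resources; the global prune is left to the cleanup step.
    docker compose -p "$UNIQUE_ID" down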
