A pipeline is a group of jobs executed by GitLab CI.
List pipelines for a project:
pipelines = project.pipelines.list(get_all=True)
Get a pipeline for a project:
pipeline = project.pipelines.get(pipeline_id)
Get variables of a pipeline:
variables = pipeline.variables.list(get_all=True)
Create a pipeline for a particular reference with custom variables:
pipeline = project.pipelines.create({'ref': 'main', 'variables': [{'key': 'MY_VARIABLE', 'value': 'hello'}]})
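Note that pipeline variables are passed as a list of key/value dicts, not as a plain mapping. If your variables live in an ordinary dict, a small helper (hypothetical, not part of python-gitlab) can build the expected payload:

```python
def to_pipeline_variables(variables):
    """Convert {'KEY': 'value'} into GitLab's [{'key': ..., 'value': ...}] payload."""
    return [{'key': k, 'value': v} for k, v in variables.items()]

payload = to_pipeline_variables({'MY_VARIABLE': 'hello'})
# payload == [{'key': 'MY_VARIABLE', 'value': 'hello'}]
```

The result can then be passed as the 'variables' entry when creating the pipeline.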
Retry the failed builds for a pipeline:
pipeline.retry()
Cancel builds in a pipeline:
pipeline.cancel()
Delete a pipeline:
pipeline.delete()
Get latest pipeline:
project.pipelines.latest(ref="main")
Triggers provide a way to interact with GitLab CI. Using a trigger, a user or an application can run a new build/job for a specific commit.
List triggers:
triggers = project.triggers.list(get_all=True)
Get a trigger:
trigger = project.triggers.get(trigger_token)
Create a trigger:
trigger = project.triggers.create({'description': 'mytrigger'})
Remove a trigger:
project.triggers.delete(trigger_token)
# or
trigger.delete()
Full example with wait for finish:
import time

def get_or_create_trigger(project):
    trigger_description = 'my_trigger_id'
    for t in project.triggers.list(iterator=True):
        if t.description == trigger_description:
            return t
    return project.triggers.create({'description': trigger_description})

trigger = get_or_create_trigger(project)
pipeline = project.trigger_pipeline('main', trigger.token, variables={"DEPLOY_ZONE": "us-west1"})
while pipeline.finished_at is None:
    pipeline.refresh()
    time.sleep(1)
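The polling loop above can be factored into a small timeout-aware helper. This is a sketch, not a python-gitlab API; the poll interval and timeout values are arbitrary:

```python
import time

def wait_for_pipeline(pipeline, timeout=3600, interval=1):
    """Poll until the pipeline has a finished_at timestamp, or raise TimeoutError."""
    deadline = time.monotonic() + timeout
    while pipeline.finished_at is None:
        if time.monotonic() > deadline:
            raise TimeoutError("pipeline did not finish in time")
        time.sleep(interval)
        pipeline.refresh()
    return pipeline
```

Raising on timeout avoids polling forever when a pipeline is stuck or was cancelled out of band.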
You can trigger a pipeline using token authentication instead of user authentication. To do so, create an anonymous Gitlab instance and use lazy objects to get the associated project:
gl = gitlab.Gitlab(URL)  # no authentication
project = gl.projects.get(project_id, lazy=True)  # no API call
project.trigger_pipeline('main', trigger_token)
Reference: https://docs.gitlab.com/ci/triggers/#trigger-token
You can schedule pipeline runs using a cron-like syntax. Variables can be associated with the scheduled pipelines.
List pipeline schedules:
scheds = project.pipelineschedules.list(get_all=True)
Get a single schedule:
sched = project.pipelineschedules.get(schedule_id)
Create a new schedule:
sched = project.pipelineschedules.create({'ref': 'main', 'description': 'Daily test', 'cron': '0 1 * * *'})
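GitLab expects a standard five-field cron expression (minute, hour, day of month, month, day of week). A very rough client-side sanity check can catch malformed strings before the API call; this is only a sketch, not validation GitLab itself performs:

```python
def looks_like_cron(expr):
    """Rough check: a cron expression has exactly five whitespace-separated fields."""
    return len(expr.split()) == 5
```

For example, looks_like_cron('0 1 * * *') is True, while a keyword like 'hourly' would be rejected.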
Update a schedule:
sched.cron = '1 2 * * *'
sched.save()
Take ownership of a schedule:
sched.take_ownership()
Trigger a pipeline schedule immediately:
sched = project.pipelineschedules.get(schedule_id)
sched.play()
Delete a schedule:
sched.delete()
List schedule variables:
# note: you need to use get() to retrieve the schedule variables. The
# attribute is not present in the response of a list() call
sched = project.pipelineschedules.get(schedule_id)
vars = sched.attributes['variables']
Create a schedule variable:
var = sched.variables.create({'key': 'foo', 'value': 'bar'})
Edit a schedule variable:
var.value = 'new_value'
var.save()
Delete a schedule variable:
var.delete()
List all pipelines triggered by a pipeline schedule:
pipelines = sched.pipelines.list(get_all=True)
Jobs are associated to projects, pipelines and commits. They provide information on the jobs that have been run, and methods to manipulate them.
Jobs are usually automatically triggered, but you can explicitly trigger a new job:
project.trigger_build('main', trigger_token, {'extra_var1': 'foo', 'extra_var2': 'bar'})
List jobs for the project:
jobs = project.jobs.list(get_all=True)
Get a single job:
project.jobs.get(job_id)
List the jobs of a pipeline:
project = gl.projects.get(project_id)
pipeline = project.pipelines.get(pipeline_id)
jobs = pipeline.jobs.list(get_all=True)
Note
Job methods (play, cancel, and so on) are not available on ProjectPipelineJob objects. To use these methods, create a ProjectJob object:
pipeline_job = pipeline.jobs.list(get_all=False)[0]
job = project.jobs.get(pipeline_job.id, lazy=True)
job.retry()
Get the artifacts of a job:
build_or_job.artifacts()
Get the artifacts of a job by its name from the latest successful pipeline of a branch or tag:
project.artifacts.download(ref_name='main',job='build')
Warning
Artifacts are entirely stored in memory in this example.
You can download artifacts as a stream. Provide a callable to handle the stream:
with open("archive.zip", "wb") as f:
    build_or_job.artifacts(streamed=True, action=f.write)
You can also directly stream the output into a file, and unzip it afterwards:
import os
import subprocess

zipfn = "___artifacts.zip"
with open(zipfn, "wb") as f:
    build_or_job.artifacts(streamed=True, action=f.write)
subprocess.run(["unzip", "-bo", zipfn])
os.unlink(zipfn)
Or, you can also use the underlying response iterator directly:
artifact_bytes_iterator = build_or_job.artifacts(iterator=True)
This can be used with frameworks that expect an iterator (such as FastAPI/Starlette's StreamingResponse) to forward a download from GitLab without having to download the entire content server-side first:
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get("/download_artifact")
def download_artifact():
    artifact_bytes_iterator = build_or_job.artifacts(iterator=True)
    return StreamingResponse(artifact_bytes_iterator, media_type="application/zip")
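Outside a web framework, the same iterator can be consumed chunk by chunk, for example to write the archive to disk without ever holding the whole file in memory. A minimal sketch, where save_chunks is a hypothetical helper and the chunks stand in for a real artifact download:

```python
def save_chunks(chunks, path):
    """Write an iterable of byte chunks to a file, returning the total byte count."""
    total = 0
    with open(path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)
            total += len(chunk)
    return total
```

Used as save_chunks(build_or_job.artifacts(iterator=True), "artifacts.zip"), this keeps memory usage bounded by the chunk size rather than the artifact size.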
Delete all artifacts of a project that can be deleted:
project.artifacts.delete()
Get a single artifact file:
build_or_job.artifact('path/to/file')
Get a single artifact file by branch and job:
project.artifacts.raw('branch','path/to/file','job')
Mark a job artifact as kept when expiration is set:
build_or_job.keep_artifacts()
Delete the artifacts of a job:
build_or_job.delete_artifacts()
Get a job log file / trace:
build_or_job.trace()
Warning
Traces are entirely stored in memory unless you use the streaming feature. See the artifacts example.
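Applied to traces, the same streaming pattern writes the log to disk incrementally. A sketch, assuming trace() accepts the same streamed/action arguments as artifacts(); save_trace is a hypothetical helper:

```python
def save_trace(job, path):
    """Stream a job's log to a file instead of buffering it in memory."""
    with open(path, "wb") as f:
        job.trace(streamed=True, action=f.write)
```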
Cancel/retry a job:
build_or_job.cancel()
build_or_job.retry()
Play (trigger) a job:
build_or_job.play()
Erase a job (artifacts and trace):
build_or_job.erase()
Get a list of bridge jobs (including child pipelines) for a pipeline.
List bridges for the pipeline:
bridges=pipeline.bridges.list(get_all=True)
Get a pipeline’s complete test report.
Get the test report for a pipeline:
test_report=pipeline.test_report.get()
Get a pipeline’s test report summary.
Get the test report summary for a pipeline:
test_report_summary=pipeline.test_report_summary.get()