Commit bb37973

S3 bucket permissions (#750)
* Update test reports and aws integrations
* Update content

Added xrefs and related links in all topics

1 parent 4733b44, commit bb37973

11 files changed: +367 -75 lines changed


_data/home-content.yml

Lines changed: 3 additions & 1 deletion
```diff
@@ -46,7 +46,9 @@
 - title: Docker Registries
   localurl: /docs/integrations/docker-registries/
 - title: Secret Storage
-  localurl: /docs/integrations/secret-storage/
+  localurl: /docs/integrations/secret-storage/
+- title: Cloud Storage
+  localurl: /docs/integrations/cloud-storage/
 - title: Helm
   localurl: /docs/integrations/helm/
 - title: Argo CD
```

_data/nav.yml

Lines changed: 2 additions & 0 deletions
```diff
@@ -254,6 +254,8 @@
   url: "/digital-ocean-container-registry"
 - title: Other Registries
   url: "/other-registries"
+- title: Cloud Storage
+  url: "/cloud-storage"
 - title: Secret Storage
   url: "/secret-storage"
 - title: Hashicorp Vault
```

_docs/integrations/amazon-web-services.md

Lines changed: 39 additions & 17 deletions
```diff
@@ -7,22 +7,29 @@ toc: true
 
 Codefresh has native support for AWS in the following areas:
 
-- [Connecting to Amazon registries]({{site.baseurl}}/docs/integrations/docker-registries/amazon-ec2-container-registry/)
-- [Deploying to Amazon EKS]({{site.baseurl}}/docs/integrations/kubernetes/#adding-eks-cluster)
-- [Using Amazon S3 for Test reports]({{site.baseurl}}/docs/testing/test-reports/#connecting-an-s3-bucket)
-- [Using Amazon S3 for Helm charts]({{site.baseurl}}/docs/deployments/helm/helm-charts-and-repositories/)
+- [Amazon container registries: ECR](#amazon-container-registries)
+- [Amazon Kubernetes clusters: EKS](amazon-kubernetes-clusters)
+- Amazon S3 buckets:
+  - [For Test reports](#amazon-s3-bucket-for-test-reports)
+  - [For Helm charts](#amazon-s3-bucket-for-helm-charts)
 
+See also [other Amazon deployments](#other-amazon-deployments).
 
-## Using Amazon ECR
+## Amazon Container Registries
 
-Amazon Container Registries are fully compliant with the Docker registry API that Codefresh follows. Follow the instruction under [Amazon EC2 Container Registry]({{site.baseurl}}/docs/integrations/docker-registries/amazon-ec2-container-registry/) to connect.
+Amazon Container Registries are fully compliant with the Docker registry API that Codefresh follows.
+
+Codefresh supports integration with Amazon ECR.
+To connect, follow the instructions described in [Amazon EC2 Container Registry]({{site.baseurl}}/docs/integrations/docker-registries/amazon-ec2-container-registry/).
 
 Once the registry is added, you can use the [standard push step]({{site.baseurl}}/docs/pipelines/steps/push/) in your pipelines. See [working with Docker registries]({{site.baseurl}}/docs/ci-cd-guides/working-with-docker-registries/) for more information.
 
-## Deploying to Amazon Kubernetes
+## Amazon Kubernetes clusters
 
-Codefresh has native support for connecting an EKS cluster in the [cluster configuration screen]({{site.baseurl}}/docs/integrations/kubernetes/#connect-a-kubernetes-cluster).
+Codefresh has native support for connecting an EKS cluster through the integration options for Kubernetes in Pipeline Integrations.
+See [Adding an EKS cluster]({{site.baseurl}}/docs/integrations/kubernetes/#adding-eks-cluster) in [Kubernetes pipeline integrations]({{site.baseurl}}/docs/integrations/kubernetes/).
 
+<!-- ask Kostis which is correct?
 {%
 include image.html
 lightbox="true"
@@ -32,12 +39,24 @@ alt="Connecting an Amazon cluster"
 caption="Connecting a Amazon cluster"
 max-width="40%"
 %}
-
+-->
+{%
+include image.html
+lightbox="true"
+file="/images/integrations/kubernetes/eks-cluster-option.png"
+url="/images/integrations/kubernetes/eks-cluster-option.png"
+alt="Connecting an Amazon EKS cluster"
+caption="Connecting a Amazon EKS cluster"
+max-width="40%"
+%}
 Once the cluster is connected, you can use any of the [available deployment options]({{site.baseurl}}/docs/deployments/kubernetes/) for Kubernetes clusters. You also get access to all other Kubernetes dashboards such as the [cluster dashboard]({{site.baseurl}}/docs/deployments/kubernetes/manage-kubernetes/) and the [environment dashboard]({{site.baseurl}}/docs/deployments/kubernetes/environment-dashboard/).
 
-## Storing test reports in Amazon S3 bucket
+## Amazon S3 bucket for test reports
+
+Codefresh has native support for storing test reports in different storage buckets, including Amazon's S3 storage bucket.
+You can connect an Amazon S3 bucket storage account to Codefresh through the Cloud Storage options in Pipeline Integrations.
+
 
-Codefresh has native support for test reports. You can store the reports on Amazon S3.
 
 {% include
 image.html
@@ -49,11 +68,15 @@ caption="Amazon cloud storage"
 max-width="60%"
 %}
 
-See the full documentation for [test reports]({{site.baseurl}}/docs/testing/test-reports/).
+For detailed instructions, to set up an integration with your S3 storage account in Amazon in Codefresh, see [Cloud storage integrations for pipelines]({{site.baseurl}}/docs/integrations/cloud-storage/), and to create and store test reports through Codefresh pipelines, see [Creating test reports]({{site.baseurl}}/docs/testing/test-reports/).
+
+## Amazon S3 bucket for Helm charts
+
+You can also connect an Amazon S3 bucket as a Helm repository through the Helm Repository integration options in Pipeline Integrations.
 
-## Using Amazon S3 for storing Helm charts
+For detailed instructions, see [Helm charts and repositories]({{site.baseurl}}/docs/deployments/helm/helm-charts-and-repositories/).
+Once you connect your Helm repository, you can use it any [Codefresh pipeline with the Helm step]({{site.baseurl}}/docs/deployments/helm/using-helm-in-codefresh-pipeline/).
 
-You can connect an Amazon S3 bucket as a Helm repository in the [integrations screen]({{site.baseurl}}/docs/deployments/helm/helm-charts-and-repositories/).
 
 {% include
 image.html
@@ -65,12 +88,11 @@ caption="Using Amazon for Helm charts"
 max-width="80%"
 %}
 
-Once you connect your Helm repository you can use it any [Codefresh pipeline with the Helm step]({{site.baseurl}}/docs/deployments/helm/using-helm-in-codefresh-pipeline/).
 
 
-## Traditional Amazon deployments
+## Other Amazon deployments
 
-For any other Amazon deployment you can use the [Amazon CLI from a Docker image](https://hub.docker.com/r/amazon/aws-cli){:target="\_blank"} in a [freestyle step]({{site.baseurl}}/docs/pipelines/steps/freestyle/).
+For any other Amazon deployment, you can use the [Amazon CLI from a Docker image](https://hub.docker.com/r/amazon/aws-cli){:target="\_blank"} in a [freestyle step]({{site.baseurl}}/docs/pipelines/steps/freestyle/).
 
 `YAML`
 {% highlight yaml %}
```
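
The hunk ends just before the YAML example that follows `{% highlight yaml %}` in the source file. As a minimal sketch of the kind of freestyle step this section describes — assuming AWS credentials are supplied as pipeline variables, and with `my-app-bucket` as a placeholder bucket name — such a step could look like:

```yaml
version: "1.0"
steps:
  run_aws_cli:
    title: Run AWS CLI commands
    image: amazon/aws-cli
    # AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION are assumed
    # to be defined as pipeline variables or injected from a secret store.
    commands:
      - aws s3 sync ./public s3://my-app-bucket --delete
```

Any other AWS CLI command can be placed in `commands` in the same way.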

_docs/integrations/argocd.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -5,7 +5,7 @@ group: integrations
 toc: true
 ---
 
-> Important:
+> **IMPORTANT**:
 We are planning to deprecate the ArgoCD agent for Codefresh pipelines. It has now been replaced with the GitOps runtime, that offers a superset of the functionality of the agent, and is also better integrated
 with the Codefresh dashboards.
 
```

_docs/integrations/cloud-storage.md (new file)

Lines changed: 201 additions & 0 deletions

---
title: "Cloud Storage pipeline integrations"
description: "How to use Codefresh with Cloud Storage providers"
group: integrations
toc: true
---

Codefresh integrations with cloud storage providers provide a convenient solution for storing test reports.
With Codefresh, you can easily configure your pipelines to store test reports in your preferred Cloud Storage provider, such as Amazon S3, Google Cloud Storage, Azure, and MinIO.

For every cloud storage provider, you need to first create a storage bucket in your storage provider account, connect the account with Codefresh to create an integration, and configure your pipelines to [create and upload test reports]({{site.baseurl}}/docs/testing/test-reports/).

## Connecting your storage account to Codefresh

When you connect your storage provider account to Codefresh, Codefresh creates subfolders in the storage bucket for every build, with the build IDs as folder names. Test reports generated for a build are uploaded to the respective folder. The same bucket can store test reports from multiple pipeline builds.

1. In the Codefresh UI, on the toolbar, click the Settings icon, and then from the sidebar select **Pipeline Integrations**.
1. Scroll down to **Cloud Storage**, and click **Configure**.

{% include
image.html
lightbox="true"
file="/images/pipeline/test-reports/cloud-storage-integrations.png"
url="/images/pipeline/test-reports/cloud-storage-integrations.png"
alt="Cloud storage Integrations"
caption="Cloud storage Integrations"
max-width="80%"
%}

{:start="3"}
1. Click **Add Cloud Storage**, and select your cloud provider for test report storage.
1. Define settings for your cloud storage provider, as described in the sections that follow.

## Connecting a Google bucket

**In Google**

1. Create a bucket either from the Google cloud console or the `gsutil` command line tool.
   See the [official documentation](https://cloud.google.com/storage/docs/creating-buckets#storage-create-bucket-console){:target="\_blank"} for the exact details.

**In Codefresh**

1. [Connect your storage account](#connecting-your-storage-account) and select **Google Cloud Storage**.

{% include
image.html
lightbox="true"
file="/images/pipeline/test-reports/cloud-storage-google.png"
url="/images/pipeline/test-reports/cloud-storage-google.png"
alt="Google cloud storage"
caption="Google cloud storage"
max-width="80%"
%}

{:start="2"}
1. Define the settings:
   * Select **OAuth2** as the connection method, which is the easiest way.
   * Enter an arbitrary name for your integration.
   * Select **Allow access to read and write into storage** as Codefresh needs to both write to and read from the bucket.
1. Click **Save**.
1. When Codefresh asks for extra permissions from your Google account, accept the permissions.

The integration is ready. You will use the name of the integration as an environment variable in your Codefresh pipeline.

> **NOTE**:
An alternative authentication method is to use **JSON Config** with a [Google service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey){:target="\_blank"}.
In that case, download the JSON file locally and paste its contents in the **JSON config** field.
For more information, see the [official documentation](https://cloud.google.com/iam/docs/creating-managing-service-account-keys){:target="\_blank"}.

## Connecting an Amazon S3 bucket

**Create an S3 bucket in AWS (Amazon Web Services)**

1. Create an S3 bucket in AWS.
   See the [official documentation](https://docs.aws.amazon.com/quickstarts/latest/s3backup/step-1-create-bucket.html){:target="\_blank"}, or use the [AWS CLI](https://docs.aws.amazon.com/cli/latest/reference/s3api/create-bucket.html){:target="\_blank"}.
1. Define the necessary IAM (Identity and Access Management) policy settings.
   Here's an example IAM policy that you can use as a reference:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::cf-backup*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::cf-backup*/*"
      ]
    }
  ]
}
```

1. Note down the **Access** and **Secret** keys generated when you created the S3 bucket.

**Define S3 settings in Codefresh**

1. Select **Amazon Cloud Storage** as your [Cloud Storage provider](#connecting-your-storage-account).
1. Define the settings:
   * Enter an arbitrary name for your integration.
   * Paste the **AWS Access Key ID** and **AWS Secret Access Key**.
1. Click **Save**.

{% include
image.html
lightbox="true"
file="/images/pipeline/test-reports/cloud-storage-s3.png"
url="/images/pipeline/test-reports/cloud-storage-s3.png"
alt="S3 cloud storage"
caption="S3 cloud storage"
max-width="80%"
%}

After setting up and verifying the S3 bucket integration, you can use:
* The name of the integration as an environment variable in your Codefresh pipeline, as shown in the sketch after this list.
* Any [external secrets that you have defined]({{site.baseurl}}/docs/integrations/secret-storage/) (such as Kubernetes secrets) as values, by clicking the lock icon that appears next to the field:
  * If you have already specified the resource field during secret definition, enter the name of the secret directly in the text field, for example, `my-secret-key`.
  * If you didn't include a resource name during secret creation, enter the full name in the field, for example, `my-secret-resource@my-secret-key`.
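
A minimal sketch of the pipeline step referenced in the first bullet, following the pattern documented in [Creating test reports]({{site.baseurl}}/docs/testing/test-reports/); the bucket `my-codefresh-reports` and the integration name `my-s3-reports` are placeholders, not values from this page:

```yaml
unit_test_reporting_step:
  title: Upload test report to S3
  image: codefresh/cf-docker-test-reporting
  working_directory: '${{CF_VOLUME_PATH}}/'
  environment:
    - REPORT_DIR=coverage                      # folder that contains the generated report
    - REPORT_INDEX_FILE=lcov-report/index.html # entry page of the report
    - BUCKET_NAME=my-codefresh-reports         # placeholder: the S3 bucket created above
    - CF_STORAGE_INTEGRATION=my-s3-reports     # placeholder: the integration name defined on this page
```

The same step works with the other providers on this page by changing `CF_STORAGE_INTEGRATION` to the relevant integration name.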
## Connecting Azure Blob/File storage

**Create a storage account in Azure**

1. For Azure, create a storage account.
   See the [official documentation](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create){:target="\_blank"}.
1. Find one of the [two access keys](https://docs.microsoft.com/en-us/azure/storage/common/storage-account-keys-manage){:target="\_blank"} already created.
1. Note down the **Account Name** and **Access key for the account**.

**Define Azure settings in Codefresh**

1. Select **Azure File/Blob Storage** as your [Cloud Storage provider](#connecting-your-storage-account).
1. Define the settings:
   * Enter an arbitrary name for your integration.
   * Paste the **Azure Account Name** and **Azure Account Key**.
1. Click **Save**.

{% include
image.html
lightbox="true"
file="/images/pipeline/test-reports/cloud-storage-azure.png"
url="/images/pipeline/test-reports/cloud-storage-azure.png"
alt="Azure cloud storage"
caption="Azure cloud storage"
max-width="60%"
%}

After setting up and verifying the Azure File/Blob integration, you can use:
* The name of the integration as an environment variable in your Codefresh pipeline.
* Any [external secrets that you have defined]({{site.baseurl}}/docs/integrations/secret-storage/) (such as Kubernetes secrets) as values, by clicking the lock icon that appears next to the field:
  * If you have already specified the resource field during secret definition, enter the name of the secret directly in the text field, for example, `my-secret-key`.
  * If you didn't include a resource name during secret creation, enter the full name in the field, for example, `my-secret-resource@my-secret-key`.

## Connecting MinIO storage

**Create a storage account in MinIO**

1. Configure the MinIO server.
   See the [official documentation](https://docs.min.io/docs/minio-quickstart-guide.html){:target="\_blank"}.
1. Copy the Access and Secret keys.

**Set up a MinIO integration in Codefresh**

1. Select **MinIO Cloud Storage** as your [Cloud Storage provider](#connecting-your-storage-account).
1. Define the settings:
   * **NAME**: The name of the MinIO storage. Any name that is meaningful to you.
   * **ENDPOINT**: The URL to the storage service object.
   * **PORT**: Optional. The TCP/IP port number. If not defined, defaults to port `80` for HTTP, and `443` for HTTPS.
   * **Minio Access Key**: The ID that uniquely identifies your account, similar to a user ID.
   * **Secret Minio Key**: The password of your account.
   * **Use SSL**: Select to enable secure HTTPS access. Not selected by default.
1. Click **Save**.

{% include
image.html
lightbox="true"
file="/images/pipeline/test-reports/cloud-storage-minio.png"
url="/images/pipeline/test-reports/cloud-storage-minio.png"
alt="MinIO cloud storage"
caption="MinIO cloud storage"
max-width="60%"
%}

## Related articles
[Amazon Web Services (AWS) pipeline integration]({{site.baseurl}}/docs/integrations/amazon-web-services/)
[Microsoft Azure pipeline integration]({{site.baseurl}}/docs/integrations/microsoft-azure/)
[Google Cloud pipeline integration]({{site.baseurl}}/docs/integrations/google-cloud/)
[Creating test reports]({{site.baseurl}}/docs/testing/test-reports/)
[Codefresh YAML for pipeline definitions]({{site.baseurl}}/docs/pipelines/what-is-the-codefresh-yaml/)
[Steps in pipelines]({{site.baseurl}}/docs/pipelines/steps/)

_docs/integrations/docker-registries/amazon-ec2-container-registry.md

Lines changed: 9 additions & 6 deletions
```diff
@@ -1,5 +1,5 @@
 ---
-title: "Amazon EC2 Container Registry"
+title: "Amazon ECR Container Registry"
 description: "Use the Amazon Docker Registry for pipeline integrations"
 group: integrations
 sub_group: docker-registries
@@ -36,15 +36,17 @@ Codefresh makes sure to automatically refresh the AWS token for you.
 
 For more information on how to obtain the needed tokens, read the [AWS documentation](http://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys){:target="_blank"}.
 
-> Note:
+> **NOTE**:
 You must have an active registry set up in AWS.<br /><br />
 Amazon ECR push/pull operations are supported with two permission options: user-based and resource-based.
 
 
-* User-based permissions: User account must apply `AmazonEC2ContainerRegistryPowerUser` policy (or custom based on that policy).
+* Identity-based policies
+User account must apply `AmazonEC2ContainerRegistryPowerUser` policy (or custom based on that policy).
 For more information and examples, click [here](http://docs.aws.amazon.com/AmazonECR/latest/userguide/ecr_managed_policies.html){:target="_blank"}.
-* Resource-based permissions: Users with resource-based permissions must be allowed to call `ecr:GetAuthorizationToken` before they can authenticate to a registry, and push or pull any images from any Amazon ECR repository, than you need provide push/pull permissions to specific registry.
-For more information and examples, click [here](http://docs.aws.amazon.com/AmazonECR/latest/userguide/RepositoryPolicies.html){:target="_blank"}.
+* Resource-based policy
+Users with resource-based policies must be allowed to call `ecr:GetAuthorizationToken` before they can authenticate to a registry, and push or pull any images from any Amazon ECR repository, than you need provide push/pull permissions to specific registry.
+For more information and examples, click [here](http://docs.aws.amazon.com/AmazonECR/latest/userguide/RepositoryPolicies.html){:target="_blank"}.
 
 
 ## Set up ECR integration for service account
@@ -168,7 +170,8 @@ max-width="40%"
 3. Click **Promote**.
 
 
-> It is possible to change the image name if you want, but make sure that the new name exists as a repository in ECR.
+> **NOTE**:
+It is possible to change the image name if you want, but make sure that the new name exists as a repository in ECR.
 
 
 ## Related articles
```
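
For orientation, a hedged sketch of how a connected ECR integration is consumed from a pipeline: a build step followed by a push step. The integration name `my-ecr-registry` and image name `my-app` are placeholders; `my-app` must already exist as a repository in ECR.

```yaml
version: "1.0"
steps:
  build_image:
    title: Build the application image
    type: build
    image_name: my-app                  # placeholder; must exist as an ECR repository
    tag: ${{CF_BRANCH_TAG_NORMALIZED}}
  push_to_ecr:
    title: Push to Amazon ECR
    type: push
    candidate: ${{build_image}}
    registry: my-ecr-registry           # placeholder; name of the ECR integration in Codefresh
    tag: ${{CF_BRANCH_TAG_NORMALIZED}}
```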

_docs/integrations/google-cloud.md

Lines changed: 5 additions & 2 deletions
```diff
@@ -52,7 +52,9 @@ You also get access to all other Kubernetes dashboards such as the [cluster dash
 
 ## Storing test reports in Google Cloud storage
 
-Codefresh has native support for test reports. You can store the reports on Google Cloud storage.
+Codefresh has native support for storing test reports in different storage buckets, including Google Cloud storage.
+You can connect your Google Cloud storage account to Codefresh through the Cloud Storage options in Pipeline Integrations.
+
 
 {% include
 image.html
@@ -64,7 +66,8 @@ caption="Google cloud storage"
 max-width="50%"
 %}
 
-See the full documentation for [test reports]({{site.baseurl}}/docs/testing/test-reports/).
+For detailed instructions, to set up an integration with your Google Cloud storage account in Codefresh, see [Cloud storage integrations for pipelines]({{site.baseurl}}/docs/integrations/cloud-storage/), and to create and store test reports through Codefresh pipelines, see [Creating test reports]({{site.baseurl}}/docs/testing/test-reports/).
+
 
 ## Using Google Storage for storing Helm charts
 
```