Google.Cloud.Vision.V1
Google.Cloud.Vision.V1 is a .NET client library for the Google Cloud Vision API.
Note: This documentation is for version 3.8.0 of the library. Some samples may not work with other versions.
Installation
Install the Google.Cloud.Vision.V1 package from NuGet. Add it to your project in the normal way (for example by right-clicking on the project in Visual Studio and choosing "Manage NuGet Packages...").
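If you prefer the command line, the package can also be added with the .NET CLI (this assumes the dotnet SDK is installed and is run from the project directory):

```shell
# Adds a PackageReference for the Vision client library to the current project
dotnet add package Google.Cloud.Vision.V1
```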
Authentication
When running on Google Cloud, no action needs to be taken to authenticate.
Otherwise, the simplest way of authenticating your API calls is to set up Application Default Credentials. The credentials will automatically be used to authenticate. See Set up Application Default Credentials for more details.
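For local development, Application Default Credentials are typically set up with the gcloud CLI (assuming it is installed), or by pointing the GOOGLE_APPLICATION_CREDENTIALS environment variable at a service account key file. The key file path below is just an example:

```shell
# Log in and store Application Default Credentials on this machine
gcloud auth application-default login

# Alternatively, use a service account key file (example path)
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json
```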
Getting started
All operations are performed through the following client classes:

- ImageAnnotatorClient
- ProductSearchClient
Create a client instance by calling the static Create or CreateAsync methods. Alternatively, use the builder class associated with each client class (e.g. ImageAnnotatorClientBuilder for ImageAnnotatorClient) as an easy way of specifying custom credentials, settings, or a custom endpoint. Clients are thread-safe, and we recommend using a single instance across your entire application unless you have a particular need to configure multiple client objects separately.
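As a sketch of the builder approach (the credentials file path and endpoint below are illustrative placeholders, not values from this page), a client with custom credentials and a custom endpoint might be built like this:

ImageAnnotatorClient client = new ImageAnnotatorClientBuilder
{
    // Hypothetical path to a service account key file
    CredentialsPath = "/path/to/service-account.json",
    // Custom endpoint (shown here with the default Vision endpoint)
    Endpoint = "vision.googleapis.com:443"
}.Build();

Once built, the client is used exactly as one created with ImageAnnotatorClient.Create().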
Using the REST (HTTP/1.1) transport
This library defaults to performing RPCs using gRPC with the binary Protocol Buffer wire format. However, it also supports HTTP/1.1 and JSON, for situations where gRPC doesn't work as desired (typically due to an incompatible proxy or other network issue). To create a client using HTTP/1.1, specify a RestGrpcAdapter reference for the GrpcAdapter property in the client builder. Sample code:
var client = new ImageAnnotatorClientBuilder
{
    GrpcAdapter = RestGrpcAdapter.Default
}.Build();

For more details, see the transport selection page.
Working with single images
The "core" method BatchAnnotateImages can perform multiple (potentially different) annotations on multiple images, but convenience methods are provided for the common cases of working with a single image, and of performing a single annotation operation on a single image.
Sample code
Constructing an Image object
There are various factory methods on the Image class that allow instances to be constructed from files, streams, byte arrays and URIs.
Image image1 = Image.FromFile("Pictures/LocalImage.jpg");

// Fetched locally by the client, then uploaded to the server
Image image2 = Image.FetchFromUri("https://cloud.google.com/images/devtools-icon-64x64.png");

// Fetched by the Google Cloud Vision server
Image image3 = Image.FromUri("https://cloud.google.com/images/devtools-icon-64x64.png");

// Google Cloud Storage URI
Image image4 = Image.FromUri("gs://my-bucket/my-file");

byte[] bytes = ReadImageData(); // For example, from a database
Image image5 = Image.FromBytes(bytes);

using (Stream stream = OpenImageStream()) // Any regular .NET stream
{
    Image image6 = Image.FromStream(stream);
}

All IO-related methods have async equivalents.
Detect faces in a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
IReadOnlyList<FaceAnnotation> result = client.DetectFaces(image);
foreach (FaceAnnotation face in result)
{
    string poly = string.Join(" - ", face.BoundingPoly.Vertices.Select(v => $"({v.X}, {v.Y})"));
    Console.WriteLine($"Confidence: {(int)(face.DetectionConfidence * 100)}%; BoundingPoly: {poly}");
}

Detect text in a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
IReadOnlyList<EntityAnnotation> textAnnotations = client.DetectText(image);
foreach (EntityAnnotation text in textAnnotations)
{
    Console.WriteLine($"Description: {text.Description}");
}

Detect document text in a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
TextAnnotation text = client.DetectDocumentText(image);
Console.WriteLine($"Text: {text.Text}");
foreach (var page in text.Pages)
{
    foreach (var block in page.Blocks)
    {
        string box = string.Join(" - ", block.BoundingBox.Vertices.Select(v => $"({v.X}, {v.Y})"));
        Console.WriteLine($"Block {block.BlockType} at {box}");
        foreach (var paragraph in block.Paragraphs)
        {
            box = string.Join(" - ", paragraph.BoundingBox.Vertices.Select(v => $"({v.X}, {v.Y})"));
            Console.WriteLine($"  Paragraph at {box}");
            foreach (var word in paragraph.Words)
            {
                Console.WriteLine($"    Word: {string.Join("", word.Symbols.Select(s => s.Text))}");
            }
        }
    }
}

Detect labels in a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
IReadOnlyList<EntityAnnotation> labels = client.DetectLabels(image);
foreach (EntityAnnotation label in labels)
{
    Console.WriteLine($"Score: {(int)(label.Score * 100)}%; Description: {label.Description}");
}

Detect landmarks in a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
IReadOnlyList<EntityAnnotation> result = client.DetectLandmarks(image);
foreach (EntityAnnotation landmark in result)
{
    Console.WriteLine($"Score: {(int)(landmark.Score * 100)}%; Description: {landmark.Description}");
}

Detect logos in a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
IReadOnlyList<EntityAnnotation> logos = client.DetectLogos(image);
foreach (EntityAnnotation logo in logos)
{
    Console.WriteLine($"Description: {logo.Description}");
}

Perform "safe search" processing on a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
SafeSearchAnnotation annotation = client.DetectSafeSearch(image);
// Each category is classified as Very Unlikely, Unlikely, Possible, Likely or Very Likely.
Console.WriteLine($"Adult? {annotation.Adult}");
Console.WriteLine($"Spoof? {annotation.Spoof}");
Console.WriteLine($"Violence? {annotation.Violence}");
Console.WriteLine($"Medical? {annotation.Medical}");

Perform image property processing on a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
ImageProperties properties = client.DetectImageProperties(image);
ColorInfo dominantColor = properties.DominantColors.Colors.OrderByDescending(c => c.PixelFraction).First();
Console.WriteLine($"Dominant color in image: {dominantColor}");

Suggest crop hints for a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
CropHintsAnnotation cropHints = client.DetectCropHints(image);
foreach (CropHint hint in cropHints.CropHints)
{
    Console.WriteLine("Crop hint:");
    string poly = string.Join(" - ", hint.BoundingPoly.Vertices.Select(v => $"({v.X}, {v.Y})"));
    Console.WriteLine($"  Poly: {poly}");
    Console.WriteLine($"  Confidence: {hint.Confidence}");
    Console.WriteLine($"  Importance fraction: {hint.ImportanceFraction}");
}

Perform analysis for other web references on a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
WebDetection webDetection = client.DetectWebInformation(image);
foreach (WebDetection.Types.WebImage webImage in webDetection.FullMatchingImages)
{
    Console.WriteLine($"Full image: {webImage.Url} ({webImage.Score})");
}
foreach (WebDetection.Types.WebImage webImage in webDetection.PartialMatchingImages)
{
    Console.WriteLine($"Partial image: {webImage.Url} ({webImage.Score})");
}
foreach (WebDetection.Types.WebPage webPage in webDetection.PagesWithMatchingImages)
{
    Console.WriteLine($"Page with matching image: {webPage.Url} ({webPage.Score})");
}
foreach (WebDetection.Types.WebEntity entity in webDetection.WebEntities)
{
    Console.WriteLine($"Web entity: {entity.EntityId} / {entity.Description} ({entity.Score})");
}

Detect localized objects in a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
IReadOnlyList<LocalizedObjectAnnotation> annotations = client.DetectLocalizedObjects(image);
foreach (LocalizedObjectAnnotation annotation in annotations)
{
    string poly = string.Join(" - ", annotation.BoundingPoly.NormalizedVertices.Select(v => $"({v.X}, {v.Y})"));
    Console.WriteLine(
        $"Name: {annotation.Name}; ID: {annotation.Mid}; Score: {annotation.Score}; Bounding poly: {poly}");
}

Detect faces and landmarks in a single image
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
AnnotateImageRequest request = new AnnotateImageRequest
{
    Image = image,
    Features =
    {
        new Feature { Type = Feature.Types.Type.FaceDetection },
        // By default, no limits are put on the number of results per annotation.
        // Use the MaxResults property to specify a limit.
        new Feature { Type = Feature.Types.Type.LandmarkDetection, MaxResults = 5 },
    }
};
AnnotateImageResponse response = client.Annotate(request);
Console.WriteLine("Faces:");
foreach (FaceAnnotation face in response.FaceAnnotations)
{
    string poly = string.Join(" - ", face.BoundingPoly.Vertices.Select(v => $"({v.X}, {v.Y})"));
    Console.WriteLine($"  Confidence: {(int)(face.DetectionConfidence * 100)}%; BoundingPoly: {poly}");
}
Console.WriteLine("Landmarks:");
foreach (EntityAnnotation landmark in response.LandmarkAnnotations)
{
    Console.WriteLine($"Score: {(int)(landmark.Score * 100)}%; Description: {landmark.Description}");
}
if (response.Error != null)
{
    Console.WriteLine($"Error detected: {response.Error}");
}

Detect faces in one image and logos in another
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
// Perform face recognition on one image, and logo recognition on another.
AnnotateImageRequest request1 = new AnnotateImageRequest
{
    Image = image1,
    Features = { new Feature { Type = Feature.Types.Type.FaceDetection } }
};
AnnotateImageRequest request2 = new AnnotateImageRequest
{
    Image = image2,
    Features = { new Feature { Type = Feature.Types.Type.LogoDetection } }
};
BatchAnnotateImagesResponse response = client.BatchAnnotateImages(new[] { request1, request2 });
Console.WriteLine("Faces in image 1:");
foreach (FaceAnnotation face in response.Responses[0].FaceAnnotations)
{
    string poly = string.Join(" - ", face.BoundingPoly.Vertices.Select(v => $"({v.X}, {v.Y})"));
    Console.WriteLine($"  Confidence: {(int)(face.DetectionConfidence * 100)}%; BoundingPoly: {poly}");
}
Console.WriteLine("Logos in image 2:");
foreach (EntityAnnotation logo in response.Responses[1].LogoAnnotations)
{
    Console.WriteLine($"Description: {logo.Description}");
}
foreach (Status error in response.Responses.Select(r => r.Error))
{
    Console.WriteLine($"Error detected: {error}");
}

Product search
After creating and populating a product set, the products can be detected within images.
ProductSetName productSetName = new ProductSetName(projectId, locationId, productSetId);
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
ProductSearchParams searchParams = new ProductSearchParams
{
    ProductCategories = { "apparel" },
    ProductSetAsProductSetName = productSetName,
};
ProductSearchResults results = client.DetectSimilarProducts(image, searchParams);
foreach (var result in results.Results)
{
    Console.WriteLine($"{result.Product.DisplayName}: {result.Score}");
}

A filter can be applied to the search, to match only products with specific labels.
ProductSetName productSetName = new ProductSetName(projectId, locationId, productSetId);
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
ProductSearchParams searchParams = new ProductSearchParams
{
    ProductCategories = { "apparel" },
    ProductSetAsProductSetName = productSetName,
    Filter = "style=womens"
};
ProductSearchResults results = client.DetectSimilarProducts(image, searchParams);
foreach (var result in results.Results)
{
    Console.WriteLine($"{result.Product.DisplayName}: {result.Score}");
}

Error handling
All the methods which annotate a single image (and therefore have a single response) throw AnnotateImageException if the response contains an error.
// We create a request which passes simple validation, but isn't a valid image.
Image image = Image.FromBytes(new byte[10]);
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
try
{
    IReadOnlyList<EntityAnnotation> logos = client.DetectLogos(image);
    // Normally use logos here...
}
catch (AnnotateImageException e)
{
    AnnotateImageResponse response = e.Response;
    Console.WriteLine(response.Error);
}

The BatchAnnotateImages method does not throw this exception, but BatchAnnotateImagesResponse.ThrowOnAnyError() checks that all responses are successful, throwing an AggregateException if there are any errors. The AggregateException contains one AnnotateImageException for each response that contains an error.
// We create a request which passes simple validation, but isn't a valid image.
Image image = Image.FromBytes(new byte[10]);
// Just a single request in this example, but usually BatchAnnotateImages would be
// used with multiple requests.
var request = new AnnotateImageRequest
{
    Image = image,
    Features = { new Feature { Type = Feature.Types.Type.SafeSearchDetection } }
};
ImageAnnotatorClient client = ImageAnnotatorClient.Create();
try
{
    BatchAnnotateImagesResponse response = client.BatchAnnotateImages(new[] { request });
    // ThrowOnAnyError will throw if any individual response in response.Responses
    // contains an error. Other responses may still have useful results.
    // Errors can be detected manually by checking the Error property in each
    // individual response.
    response.ThrowOnAnyError();
}
catch (AggregateException e)
{
    // Because a batch can have multiple errors, the exception thrown is AggregateException.
    // Each inner exception is an AnnotateImageException.
    foreach (AnnotateImageException innerException in e.InnerExceptions)
    {
        Console.WriteLine(innerException.Response.Error);
    }
}

Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.
Last updated 2025-11-06 UTC.