
Object detection with Faster R-CNN deep learning in C#

The sample walks through how to run a pretrained Faster R-CNN object detection ONNX model using the ONNX Runtime C# API.

The source code for this sample is available here.


Prerequisites

To run this sample, you’ll need the following things:

  1. Install .NET Core 3.1 or higher for your OS (Mac, Windows, or Linux).
  2. Download the Faster R-CNN ONNX model to your local system.
  3. Download this demo image to test the model. You can also use any image you like.
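The prerequisites don't call out the NuGet packages the code depends on. As a rough sketch, a project file along these lines should restore everything the snippets below reference; the package names are the public ONNX Runtime and ImageSharp packages, but the versions here are placeholders, so check the sample source for what it actually pins:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Real package names; pin concrete versions in a real project. -->
    <PackageReference Include="Microsoft.ML.OnnxRuntime" Version="*" />
    <PackageReference Include="SixLabors.ImageSharp" Version="*" />
    <PackageReference Include="SixLabors.ImageSharp.Drawing" Version="*" />
  </ItemGroup>
</Project>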

Get started

Now that we have everything set up, we can start adding code to run the model on the image. We'll do this in the main method of the program for simplicity.
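The snippets that follow assume the usual using directives for ONNX Runtime and ImageSharp are already in scope at the top of the file. A sketch of the set the code below needs (the sample source has the authoritative list):

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.ML.OnnxRuntime;
using Microsoft.ML.OnnxRuntime.Tensors;
using SixLabors.Fonts;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Drawing.Processing;
using SixLabors.ImageSharp.Formats;
using SixLabors.ImageSharp.PixelFormats;
using SixLabors.ImageSharp.Processing;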

Read paths

Firstly, let's read the path to the model, the path to the image we want to test, and the path for the output image:

string modelFilePath = args[0];
string imageFilePath = args[1];
string outImageFilePath = args[2];
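The sample takes the three paths straight from args. If you want to fail fast when an argument is missing, a simple guard (not part of the original sample) could look like:

// Sketch: validate the argument count before touching any files.
if (args.Length < 3)
{
    Console.Error.WriteLine("usage: dotnet run <model> <image> <output-image>");
    return;
}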

Read image

Next, we will read the image in using the cross-platform image library ImageSharp:

using Image<Rgb24> image = Image.Load<Rgb24>(imageFilePath, out IImageFormat format);

Note, we're specifically reading the Rgb24 type so we can efficiently preprocess the image in a later step.

Resize image

Next, we will resize the image to the size the model expects; it is recommended to resize the image such that both height and width are within the range of [800, 1333].

float ratio = 800f / Math.Min(image.Width, image.Height);
using Stream imageStream = new MemoryStream();
image.Mutate(x => x.Resize((int)(ratio * image.Width), (int)(ratio * image.Height)));
image.Save(imageStream, format);
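Note that scaling the shorter side to 800 can push the longer side beyond 1333 for very elongated images; the sample doesn't guard against that, but a variant that also caps the longer side (an illustrative sketch, not the original code) would be:

// Sketch: choose a ratio that targets 800 on the short side but never
// lets the long side exceed 1333, per the recommended input range.
float ratio = Math.Min(
    800f / Math.Min(image.Width, image.Height),
    1333f / Math.Max(image.Width, image.Height));
image.Mutate(x => x.Resize((int)(ratio * image.Width), (int)(ratio * image.Height)));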

Preprocess image

Next, we will preprocess the image according to the requirements of the model:

var paddedHeight = (int)(Math.Ceiling(image.Height / 32f) * 32f);
var paddedWidth = (int)(Math.Ceiling(image.Width / 32f) * 32f);
var mean = new[] { 102.9801f, 115.9465f, 122.7717f };

// Preprocessing image
// We use DenseTensor for multi-dimensional access
DenseTensor<float> input = new(new[] { 3, paddedHeight, paddedWidth });
image.ProcessPixelRows(accessor =>
{
    for (int y = paddedHeight - accessor.Height; y < accessor.Height; y++)
    {
        Span<Rgb24> pixelSpan = accessor.GetRowSpan(y);
        for (int x = paddedWidth - accessor.Width; x < accessor.Width; x++)
        {
            input[0, y, x] = pixelSpan[x].B - mean[0];
            input[1, y, x] = pixelSpan[x].G - mean[1];
            input[2, y, x] = pixelSpan[x].R - mean[2];
        }
    }
});

Here, we're creating a tensor of the required size (channels, paddedHeight, paddedWidth), accessing the pixel values, preprocessing them, and finally assigning them to the tensor at the appropriate indices. The padding rounds each dimension up to the next multiple of 32; for example, a resized image of 800×1066 pixels yields a padded tensor of 800×1088. Note that the channels are written in BGR order with the per-channel mean subtracted, matching the model's expected input.

Setup inputs

Next, we will create the inputs to the model:

// Pin DenseTensor memory and use it directly in the OrtValue tensor
// It will be unpinned on ortValue disposal
using var inputOrtValue = OrtValue.CreateTensorValueFromMemory(
    OrtMemoryInfo.DefaultInstance,
    input.Buffer,
    new long[] { 3, paddedHeight, paddedWidth });

var inputs = new Dictionary<string, OrtValue>
{
    { "image", inputOrtValue }
};

To check the input node names for an ONNX model, you can use Netron to visualize the model and see the input/output names. In this case, this model has image as the input node name.
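You can also query the names programmatically from the session metadata rather than opening the model in Netron; a minimal sketch:

// Sketch: list the model's input and output names and static shapes
// (dynamic dimensions show up as -1).
using var inspectSession = new InferenceSession(modelFilePath);
foreach (var pair in inspectSession.InputMetadata)
    Console.WriteLine($"input: {pair.Key} [{string.Join(",", pair.Value.Dimensions)}]");
foreach (var pair in inspectSession.OutputMetadata)
    Console.WriteLine($"output: {pair.Key} [{string.Join(",", pair.Value.Dimensions)}]");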

Run inference

Next, we will create an inference session and run the input through it:

using var session = new InferenceSession(modelFilePath);
using var runOptions = new RunOptions();
using IDisposableReadOnlyCollection<OrtValue> results = session.Run(runOptions, inputs, session.OutputNames);

Postprocess output

Next, we will need to postprocess the output to get the boxes, and the associated label and confidence score for each box:

var boxesSpan = results[0].GetTensorDataAsSpan<float>();
var labelsSpan = results[1].GetTensorDataAsSpan<long>();
var confidencesSpan = results[2].GetTensorDataAsSpan<float>();

const float minConfidence = 0.7f;
var predictions = new List<Prediction>();

for (int i = 0; i < boxesSpan.Length - 4; i += 4)
{
    var index = i / 4;
    if (confidencesSpan[index] >= minConfidence)
    {
        predictions.Add(new Prediction
        {
            Box = new Box(boxesSpan[i], boxesSpan[i + 1], boxesSpan[i + 2], boxesSpan[i + 3]),
            Label = LabelMap.Labels[labelsSpan[index]],
            Confidence = confidencesSpan[index]
        });
    }
}

Note, we're only keeping boxes that have a confidence of at least 0.7, to remove false positives.
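The Prediction and Box types, along with LabelMap.Labels (an array mapping class indices to label strings), are small helpers defined in the sample source. An illustrative sketch of what they might look like:

// Illustrative helper types; the sample source defines its own versions.
public class Box
{
    public Box(float xmin, float ymin, float xmax, float ymax)
    {
        Xmin = xmin; Ymin = ymin; Xmax = xmax; Ymax = ymax;
    }

    public float Xmin { get; }
    public float Ymin { get; }
    public float Xmax { get; }
    public float Ymax { get; }
}

public class Prediction
{
    public Box Box { get; set; }
    public string Label { get; set; }
    public float Confidence { get; set; }
}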

View prediction

Next, we'll draw the boxes and the associated labels and confidence scores on the image to see how the model performed:

using var outputImage = File.OpenWrite(outImageFilePath);
Font font = SystemFonts.CreateFont("Arial", 16);
foreach (var p in predictions)
{
    image.Mutate(x =>
    {
        x.DrawLines(Color.Red, 2f, new PointF[] {
            new PointF(p.Box.Xmin, p.Box.Ymin), new PointF(p.Box.Xmax, p.Box.Ymin),
            new PointF(p.Box.Xmax, p.Box.Ymin), new PointF(p.Box.Xmax, p.Box.Ymax),
            new PointF(p.Box.Xmax, p.Box.Ymax), new PointF(p.Box.Xmin, p.Box.Ymax),
            new PointF(p.Box.Xmin, p.Box.Ymax), new PointF(p.Box.Xmin, p.Box.Ymin)
        });
        x.DrawText($"{p.Label}, {p.Confidence:0.00}", font, Color.White,
            new PointF(p.Box.Xmin, p.Box.Ymin));
    });
}
image.Save(outputImage, format);

For each prediction, we use ImageSharp to draw red lines forming the box, and to draw the label and confidence text at the box's top-left corner.
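If you want to sanity-check the raw numbers without rendering anything, you could also dump the predictions to the console (a sketch, not part of the original sample):

// Sketch: print each prediction before (or instead of) drawing it.
foreach (var p in predictions)
{
    Console.WriteLine(
        $"{p.Label} ({p.Confidence:0.00}): " +
        $"[{p.Box.Xmin:0.#}, {p.Box.Ymin:0.#}, {p.Box.Xmax:0.#}, {p.Box.Ymax:0.#}]");
}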

Running the program

Now that the program is created, we can run it with the following command:

dotnet run [path-to-model] [path-to-image] [path-to-output-image]

e.g. running:

dotnet run ~/Downloads/FasterRCNN-10.onnx ~/Downloads/demo.jpg ~/Downloads/out.jpg

detects the objects in the demo image and saves the annotated result to ~/Downloads/out.jpg.


