Title:Extracting Insights from Biological Images
Version:1.2.0
Description:Combines the 'magick' and 'imager' packages to streamline image analysis, focusing on feature extraction and quantification from biological images, especially microparticles. By providing high throughput pipelines and clustering capabilities, 'biopixR' facilitates efficient insight generation for researchers (Schneider J. et al. (2019) <doi:10.21037/jlpm.2019.04.05>).
License:LGPL (≥ 3)
VignetteBuilder:knitr
BuildVignettes:true
Depends:R (≥ 4.2.0), imager, magick
Imports:data.table, cluster
Suggests:tcltk, knitr, rmarkdown, doParallel, kohonen, imagerExtra, GPareto, foreach
Encoding:UTF-8
RoxygenNote:7.3.2
LazyData:true
LazyLoad:yes
NeedsCompilation:no
Language:en-US
URL:https://github.com/Brauckhoff/biopixR
BugReports:https://github.com/Brauckhoff/biopixR/issues
Packaged:2024-11-11 13:54:36 UTC; brauctim
Author:Tim Brauckhoff [aut, cre], Stefan Roediger [ctb], Coline Kieffer [ctb]
Maintainer:Tim Brauckhoff <brauctile@disroot.org>
Repository:CRAN
Date/Publication:2024-11-11 14:20:05 UTC

Connects line ends with the nearest labeled region

Description

The function scans an increasing radius around a line end and connects it with the nearest labeled region.

Usage

adaptiveInterpolation(
  end_points_df,
  diagonal_edges_df,
  clean_lab_df,
  img,
  radius = 5
)

Arguments

end_points_df

data.frame with the coordinates of all line ends (can be obtained by using image_morphology)

diagonal_edges_df

data.frame with the coordinates of diagonal line ends (can also be obtained by using image_morphology)

clean_lab_df

data of type data.frame, containing the x, y and value information of every labeled region in an image (only the edges should be labeled)

img

image providing the dimensions of the output matrix (import by importImage)

radius

maximal radius that should be scanned for another cluster

Details

This function is designed to be part of the fillLineGaps function, which performs the thresholding and line end detection preprocessing. The adaptiveInterpolation function generates a matrix with dimensions matching those of the original image. Initially, the matrix contains only background values (0), corresponding to a black image. The function then searches for line ends and identifies the nearest labeled region within a given radius of the line end. It should be noted that the cluster of the line end in question is not considered a nearest neighbor. In the event that another cluster is identified, the interpolatePixels function is employed to connect the line end to the aforementioned cluster. This entails transforming the specified pixels of the matrix to a foreground value of (1). It is important to highlight that diagonal line ends receive special treatment, as they are always treated as a separate cluster by the labeling function. This makes it challenging to reconnect them. To address this issue, diagonal line ends not only ignore their own cluster but also that of their direct neighbor. Thereafter, the same procedure is repeated, with pixel values being changed according to the interpolatePixels function.
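The core of the search can be illustrated with a minimal base-R sketch (a hypothetical helper, not the package's internal code): grow a square window around a line end until a pixel belonging to a foreign labeled cluster is found.

```r
# Illustrative sketch of the radius scan (not biopixR's internal code):
# around a line end, grow a square window until a pixel from a different
# labeled cluster is found, then return that pixel.
nearestCluster <- function(end, clean_lab_df, own_cluster, radius = 5) {
  for (r in seq_len(radius)) {
    hits <- subset(
      clean_lab_df,
      abs(x - end[1]) <= r & abs(y - end[2]) <= r & value != own_cluster
    )
    if (nrow(hits) > 0) return(hits[1, ])  # nearest foreign cluster pixel
  }
  NULL  # nothing found within the given radius
}
```

In the real function, such a hit would then be handed to interpolatePixels to draw the connecting pixels.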

Value

Binary matrix that can be applied as an overlay, for example with imager.combine, to fill the gaps between line ends.

Examples

# Creating an artificial binary image
mat <- matrix(0, 8, 8)
mat[3, 1:2] <- 1
mat[4, 3] <- 1
mat[7:8, 3] <- 1
mat[5, 6:8] <- 1
mat_cimg <- as.cimg(mat)
plot(mat_cimg)

# Preprocessing / LineEnd detection / labeling (done in fillLineGaps())
mat_cimg_m <- mirror(mat_cimg, axis = "x")
mat_magick <- cimg2magick(mat_cimg)
lineends <- image_morphology(mat_magick, "HitAndMiss", "LineEnds")
diagonalends <- image_morphology(mat_magick, "HitAndMiss", "LineEnds:2>")
lineends_cimg <- magick2cimg(lineends)
diagonalends_cimg <- magick2cimg(diagonalends)
end_points <- which(lineends_cimg == TRUE, arr.ind = TRUE)
end_points_df <- as.data.frame(end_points)
colnames(end_points_df) <- c("x", "y", "dim3", "dim4")
diagonal_edges <- which(diagonalends_cimg == TRUE, arr.ind = TRUE)
diagonal_edges_df <- as.data.frame(diagonal_edges)
colnames(diagonal_edges_df) <- c("x", "y", "dim3", "dim4")
lab <- label(mat_cimg_m)
df_lab <- as.data.frame(lab) |> subset(value > 0)
alt_x <- list()
alt_y <- list()
alt_value <- list()
for (g in seq_len(nrow(df_lab))) {
  if (mat_cimg_m[df_lab$x[g], df_lab$y[g], 1, 1] == 1) {
    alt_x[g] <- df_lab$x[g]
    alt_y[g] <- df_lab$y[g]
    alt_value[g] <- df_lab$value[g]
  }
}
clean_lab_df <- data.frame(
  x = unlist(alt_x),
  y = unlist(alt_y),
  value = unlist(alt_value)
)

# Actual function
overlay <- adaptiveInterpolation(
  end_points_df,
  diagonal_edges_df,
  clean_lab_df,
  mat_cimg
)
parmax(list(mat_cimg_m, as.cimg(overlay$overlay))) |> plot()

Image of microbeads

Description

This fluorescence image, formatted as 'cimg' with dimensions of 117 x 138 pixels, shows microbeads. With a single color channel, the image provides an ideal example for in-depth analysis of microbead structures.

Usage

beads

Format

The image was imported using imager and is therefore of class: "cimg" "imager_array" "numeric"

Details

Dimensions: width - 117; height - 138; depth - 1; channel - 1

References

The image was provided by Coline Kieffer.

Examples

data(beads)
plot(beads)

Image of microbeads

Description

This fluorescence image, formatted as 'cimg' with dimensions of 492 x 376 pixels, shows microbeads. With a single color channel, the image provides an ideal example for in-depth analysis of microbead structures. The image's larger size encompasses a greater number of microbeads, offering a broader range of experimental outcomes for examination.

Usage

beads_large1

Format

The image was imported using imager and is therefore of class: "cimg" "imager_array" "numeric"

Details

Dimensions: width - 492; height - 376; depth - 1; channel - 1

References

The image was provided by Coline Kieffer.

Examples

data(beads_large1)
plot(beads_large1)

Image of microbeads

Description

This fluorescence image, formatted as 'cimg' with dimensions of 1384 x 1032 pixels, shows microbeads. With a single color channel, the image provides an ideal example for in-depth analysis of microbead structures. The image's larger size encompasses a greater number of microbeads, offering a broader range of experimental outcomes for examination.

Usage

beads_large2

Format

The image was imported using imager and is therefore of class: "cimg" "imager_array" "numeric"

Details

Dimensions: width - 1384; height - 1032; depth - 1; channel - 3

References

The image was provided by Coline Kieffer.

Examples

data(beads_large2)
plot(beads_large2)

Change the color of pixels

Description

The function allows the user to alter the color of a specified set of pixels within an image. In order to achieve this, the coordinates of the pixels in question must be provided.

Usage

changePixelColor(img, coordinates, color = "purple", visualize = FALSE)

Arguments

img

image (import by importImage)

coordinates

specifies which pixels are to be colored (an x|y data frame)

color

color to be applied to the specified pixels:

  • a color from the list of colors defined by colors

  • an object of class factor

visualize

if TRUE, the resulting image is plotted

Value

Object of class 'cimg' with changed colors at desired positions.

References

https://CRAN.R-project.org/package=countcolors

Examples

coordinates <- objectDetection(beads,
                               method = 'edge',
                               alpha = 1,
                               sigma = 0)
changePixelColor(
  beads,
  coordinates$coordinates,
  color = factor(coordinates$coordinates$value),
  visualize = TRUE
)

Image of microbeads in luminescence channel

Description

The image shows red fluorescent rhodamine microbeads measuring 151 x 112 pixels. The fluorescence channel was used to obtain the image, resulting in identical dimensions and positions of the beads as in the original image (droplets).

Usage

droplet_beads

Format

The image was imported using imager and is therefore of class: "cimg" "imager_array" "numeric"

Details

Dimensions: width - 151; height - 112; depth - 1; channel - 3

References

The image was provided by Coline Kieffer.

Examples

data(droplet_beads)
plot(droplet_beads)

Droplets containing microbeads

Description

The image displays a water-oil emulsion with droplets observed through brightfield microscopy. It is formatted as 'cimg' and sized at 151 × 112 pixels. The droplets vary in size, and some contain microbeads, which adds complexity. Brightfield microscopy enhances the contrast between water and oil, revealing the droplet arrangement.

Usage

droplets

Format

The image was imported using imager and is therefore of class: "cimg" "imager_array" "numeric"

Details

Dimensions: width - 151; height - 112; depth - 1; channel - 1

References

The image was provided by Coline Kieffer.

Examples

data(droplets)
plot(droplets)

Canny edge detector

Description

Adapted code from the 'imager' cannyEdges function without the usage of 'dplyr' and 'purrr'. If the threshold parameters are missing, they are determined automatically using a k-means heuristic. Use the alpha parameter to adjust the automatic thresholds up or down. The thresholds are returned as attributes. The edge detection is based on a smoothed image gradient, with the degree of smoothing set by the sigma parameter.
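The automatic threshold selection can be pictured with a small base-R sketch (an illustration of the idea only; imager's actual heuristic may differ in detail): cluster the gradient magnitudes and place the two thresholds between the cluster centers, scaled by alpha.

```r
# Illustrative k-means threshold heuristic (not imager's exact code):
# cluster the gradient magnitudes into three groups and place t1/t2
# between the sorted centers; alpha scales both thresholds up or down.
autoThresholds <- function(gradient, alpha = 1) {
  km <- kmeans(as.numeric(gradient), centers = 3, nstart = 5)
  cts <- sort(as.numeric(km$centers))
  c(t1 = alpha * mean(cts[1:2]),   # weak-edge threshold
    t2 = alpha * mean(cts[2:3]))   # strong-edge threshold
}
```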

Usage

edgeDetection(img, t1, t2, alpha = 1, sigma = 2)

Arguments

img

image (import by importImage)

t1

threshold for weak edges (if missing, both thresholds are determined automatically)

t2

threshold for strong edges

alpha

threshold adjustment factor (default 1)

sigma

smoothing (default 2)

Value

Object of class 'cimg', displaying detected edges.

References

https://CRAN.R-project.org/package=imager

Examples

edgeDetection(beads, alpha = 0.5, sigma = 0.5) |> plot()

Reconnecting discontinuous lines

Description

The function attempts to fill in edge discontinuities in order to enable normal labeling and edge detection.

Usage

fillLineGaps(
  contours,
  objects = NULL,
  threshold = "13%",
  alpha = 1,
  sigma = 2,
  radius = 5,
  iterations = 2,
  visualize = TRUE
)

Arguments

contours

image that contains discontinuous lines like edges or contours

objects

image that contains objects that should be removed before applying the fill algorithm

threshold

"in %" (from threshold)

alpha

threshold adjustment factor for edge detection (from edgeDetection)

sigma

smoothing (from edgeDetection)

radius

maximal radius that should be scanned for another cluster

iterations

how many times the algorithm should find line ends and reconnect them to their closest neighbor

visualize

if TRUE (default), a plot is displayed highlighting the added pixels in the original image

Details

The function pre-processes the image in order to enable the implementation of the adaptiveInterpolation function. The pre-processing stage encompasses a number of operations, including thresholding, the optional removal of objects, the detection of line ends and diagonal line ends, and the labeling of pixels. The threshold should be set to allow for the retention of some "bridge" pixels between gaps, thus facilitating the subsequent process of reconnection. For further details regarding the process of reconnection, please refer to the documentation on adaptiveInterpolation. The subsequent post-processing stage entails the reduction of line thickness in the image. With regard to the possibility of object removal, the coordinates associated with these objects are collected using the objectDetection function. Subsequently, the pixels of the detected objects are set to null in the original image, thus allowing the algorithm to proceed without the objects.

Value

Image with continuous edges (closed gaps).

Examples

fillLineGaps(droplets)

k-medoids clustering of images according to the Haralick features

Description

This function performs k-medoids clustering on images using Haralick features, which describe texture. By evaluating contrast, correlation, entropy, and homogeneity, it groups images into clusters with similar textures. K-medoids is chosen for its outlier resilience, using actual images as cluster centers. This approach simplifies texture-based image analysis and classification.
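The clustering step itself relies on k-medoids (PAM). A toy illustration with the 'cluster' package, using synthetic feature vectors as stand-ins for the Haralick features that haralickCluster would compute from images:

```r
library(cluster)

# Toy k-medoids run (synthetic data standing in for Haralick features):
# two well-separated feature groups should map onto two medoid clusters,
# each centered on an actual data point (the medoid).
set.seed(1)
features <- rbind(
  matrix(rnorm(20, mean = 0), ncol = 2),  # texture group A
  matrix(rnorm(20, mean = 5), ncol = 2)   # texture group B
)
fit <- pam(features, k = 2)
fit$clustering  # cluster assignment per row (i.e., per image)
```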

Usage

haralickCluster(path)

Arguments

path

directory path to folder with images to be analyzed

Value

data.frame containing file names, MD5 sums, and cluster numbers.

References

https://cran.r-project.org/package=radiomics

Examples

path2dir <- system.file("images", package = 'biopixR')
result <- haralickCluster(path2dir)
print(result)

Image analysis pipeline

Description

This function serves as a pipeline that integrates tools for complete start-to-finish image analysis. It enables the handling of images from different channels, for example the analysis of dual-color microparticles. This approach simplifies the workflow, providing a straightforward method to analyze complex image data.

Usage

imgPipe(
  img1 = img,
  color1 = "color1",
  img2 = NULL,
  color2 = "color2",
  img3 = NULL,
  color3 = "color3",
  method = "edge",
  alpha = 1,
  sigma = 2,
  sizeFilter = FALSE,
  upperlimit = "auto",
  lowerlimit = "auto",
  proximityFilter = FALSE,
  radius = "auto"
)

Arguments

img1

image (import by importImage)

color1

name of color in img1

img2

image (import by importImage)

color2

name of color in img2

img3

image (import by importImage)

color3

name of color in img3

method

choose method for object detection ('edge' / 'threshold') (from objectDetection)

alpha

threshold adjustment factor (numeric / 'static' / 'interactive' / 'gaussian') (from objectDetection)

sigma

smoothing (numeric / 'static' / 'interactive' / 'gaussian') (from objectDetection)

sizeFilter

applying the sizeFilter function (default - FALSE)

upperlimit

highest accepted object size (numeric / 'auto') (only needed if sizeFilter = TRUE)

lowerlimit

smallest accepted object size (numeric / 'auto') (only needed if sizeFilter = TRUE)

proximityFilter

applying the proximityFilter function (default - FALSE)

radius

distance from one object in which no other centers are allowed (in pixels) (only needed if proximityFilter = TRUE)

Value

list of 2 to 3 objects:

See Also

objectDetection(), sizeFilter(), proximityFilter(), resultAnalytics()

Examples

result <- imgPipe(
  beads,
  alpha = 1,
  sigma = 2,
  sizeFilter = TRUE,
  upperlimit = 150,
  lowerlimit = 50
)

# Highlight remaining microparticles
plot(beads)
with(
  result$detailed,
  points(
    result$detailed$x,
    result$detailed$y,
    col = "darkgreen",
    pch = 19
  )
)

Import an Image File

Description

This function is a wrapper around the load.image and image_read functions; it imports an image file and returns the image as a 'cimg' object. The following file formats are supported: TIFF, PNG, JPG/JPEG, and BMP. In the event that the image in question contains an alpha channel, that channel is omitted.

Usage

importImage(path2file)

Arguments

path2file

path to file

Value

An image of class 'cimg'.

Examples

path2img <- system.file("images/beads_large1.bmp", package = 'biopixR')
img <- importImage(path2img)
img |> plot()

path2img <- system.file("images/beads_large2.png", package = 'biopixR')
img <- importImage(path2img)
img |> plot()

Interactive object detection

Description

This function uses the objectDetection function to visualize the detected objects at varying input parameters.

Usage

interactive_objectDetection(img, resolution = 0.1, return_param = FALSE)

Arguments

img

image (import by importImage)

resolution

resolution of the slider

return_param

if TRUE, the final parameter values for alpha and sigma are printed to the console (TRUE | FALSE)

Details

The function provides a graphical user interface (GUI) that allows users to interactively adjust the alpha (threshold adjustment) and sigma (smoothing) parameters for object detection.

The GUI also includes a button to switch between the two detection methods ('edge' and 'threshold').

Value

Values of alpha, sigma, and the applied method.

References

https://CRAN.R-project.org/package=magickGUI

Examples

if (interactive()) {
  interactive_objectDetection(beads)
}

Pixel Interpolation

Description

Connects two points in a matrix, array, or an image.

Usage

interpolatePixels(row1, col1, row2, col2)

Arguments

row1

row index for the first point

col1

column index for the first point

row2

row index for the second point

col2

column index for the second point

Value

Matrix containing the coordinates to connect the two input points.

Examples

# Simulate two points in a matrix
test <- matrix(0, 4, 4)
test[1, 1] <- 1
test[3, 4] <- 1
as.cimg(test) |> plot()

# Connect them with each other
link <- interpolatePixels(1, 1, 3, 4)
test[link] <- 1
as.cimg(test) |> plot()

Object detection

Description

This function identifies objects in an image using either edge detection or thresholding methods. It gathers the coordinates and centers of the identified objects, highlighting the edges or overall coordinates for easy recognition.

Usage

objectDetection(img, method = "edge", alpha = 1, sigma = 2, vis = TRUE)

Arguments

img

image (import by importImage)

method

choose method for object detection ('edge' / 'threshold')

alpha

threshold adjustment factor (numeric / 'static' / 'interactive' / 'gaussian') (only needed for 'edge')

sigma

smoothing (numeric / 'static' / 'interactive' / 'gaussian') (only needed for 'edge')

vis

creates an image where object edges/coordinates (purple) and detected centers (green) are highlighted (TRUE | FALSE)

Details

The objectDetection function provides several methods for calculating the alpha and sigma parameters, which are critical for edge detection:

  1. Input of a Numeric Value:

    • Users can directly input numeric values for alpha and sigma, allowing for precise control over the edge detection parameters.

  2. Static Scanning:

    • When both alpha and sigma are set to "static", the function systematically tests all possible combinations of these parameters within the range (alpha: 0.1 - 1.5, sigma: 0 - 2). This exhaustive search helps identify the optimal parameter values for the given image. (Note: takes a lot of time)

  3. Interactive Selection:

    • Setting the alpha and sigma values to "interactive" initiates a Tcl/Tk graphical user interface (GUI). This interface allows users to adjust the parameters interactively, based on visual feedback. To achieve optimal results, the user must input the necessary adjustments to align the parameters with the specific requirements of the image. The user can also switch between the methods through the interface.

  4. Multi-Objective Optimization:

    • For advanced parameter optimization, the function easyGParetoptim is used for multi-objective optimization with Gaussian process models. This method leverages the 'GPareto' package to perform the optimization. It involves building Gaussian process models for each objective and running the optimization to find the best parameter values.

Value

list of 3 objects:

Examples

res_objectDetection <- objectDetection(beads,
                                       method = 'edge',
                                       alpha = 1,
                                       sigma = 0)
res_objectDetection$marked_objects |> plot()

res_objectDetection <- objectDetection(beads,
                                       method = 'threshold')
res_objectDetection$marked_objects |> plot()

Proximity-based exclusion

Description

In order to identify objects within a specified proximity, their centers are calculated and used to determine the distance between them. Pairs that are in close proximity are discarded. (Input can be obtained by the objectDetection function.)

Usage

proximityFilter(centers, coordinates, radius = "auto", elongation = 2)

Arguments

centers

center coordinates of objects (mx|my|value data frame)

coordinates

all coordinates of the objects (x|y|value data frame)

radius

distance from one center in which no other centers are allowed (in pixels) (numeric / 'auto')

elongation

factor by which the radius should be multiplied to create the area of exclusion (default 2)

Details

The automated radius calculation in the proximityFilter function is based on the presumption of circular-shaped objects. The radius is calculated using the following formula:

\sqrt{\frac{A}{\pi}}

where A is the area of the detected objects. The function will exclude objects that are too close by extending the calculated radius by one radius length beyond the assumed circle, effectively doubling the radius to create an exclusion zone. Therefore, the elongation factor is set to 2 by default, with one radius covering the object and an additional radius creating the area of exclusion.
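Under that assumption, the automatic radius is simply the equivalent-circle radius scaled by the elongation factor, as in this base-R sketch (a hypothetical helper, not the package's internal code):

```r
# Illustrative computation of the exclusion radius (not biopixR's code):
# r = sqrt(A / pi) for a circle of area A; the elongation factor
# (default 2) extends this radius to form the exclusion zone.
exclusionRadius <- function(sizes, elongation = 2) {
  mean_area <- mean(sizes)          # mean object area in pixels
  elongation * sqrt(mean_area / pi)
}

exclusionRadius(c(95, 100, 105))    # ~11.3 px for objects of ~100 px area
```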

Value

list of 2 objects:

Examples

res_objectDetection <- objectDetection(beads,
                                       alpha = 1,
                                       sigma = 0)
res_proximityFilter <- proximityFilter(
  res_objectDetection$centers,
  res_objectDetection$coordinates,
  radius = "auto"
)
changePixelColor(
  beads,
  res_proximityFilter$coordinates,
  color = "darkgreen",
  visualize = TRUE
)

Result Calculation and Summary

Description

This function summarizes the data obtained by the previous functions: objectDetection, proximityFilter, or sizeFilter. It extracts information such as the amount, intensity, size, and density of the objects present in the image.

Usage

resultAnalytics(img, coordinates, unfiltered = NULL)

Arguments

img

image (import by importImage)

coordinates

all filtered coordinates of the objects (x|y|value data frame)

unfiltered

all coordinates from every object before applying filter functions

Details

The resultAnalytics function provides a comprehensive summary of the objects detected in an image:

  1. Summary

    • Generates a summary of all detected objects, including the total number of objects, their mean size, size standard deviation, mean intensity, intensity standard deviation, estimated rejected objects, and coverage.

  2. Detailed Object Information

    • Provides detailed information for each object, including size, mean intensity, intensity standard deviation, and coordinates.

Value

list of 2 objects:

See Also

objectDetection(),sizeFilter(),proximityFilter()

Examples

res_objectDetection <- objectDetection(beads,
                                       alpha = 1,
                                       sigma = 0)
res_sizeFilter <- sizeFilter(
  res_objectDetection$centers,
  res_objectDetection$coordinates,
  lowerlimit = 50, upperlimit = 150
)
res_proximityFilter <- proximityFilter(
  res_sizeFilter$centers,
  res_objectDetection$coordinates,
  radius = "auto"
)
res_resultAnalytics <- resultAnalytics(
  coordinates = res_proximityFilter$coordinates,
  unfiltered = res_objectDetection$coordinates,
  img = beads
)
print(res_resultAnalytics$summary)
plot(beads)
with(
  res_objectDetection$centers,
  points(
    res_objectDetection$centers$mx,
    res_objectDetection$centers$my,
    col = "red",
    pch = 19
  )
)
with(
  res_resultAnalytics$detailed,
  points(
    res_resultAnalytics$detailed$x,
    res_resultAnalytics$detailed$y,
    col = "darkgreen",
    pch = 19
  )
)

Scan Directory for Image Analysis

Description

This function scans a specified directory, imports images, and performs various analyses, including object detection, size filtering, and proximity filtering. Optionally, it can perform these tasks in parallel and log the process.

Usage

scanDir(
  path,
  parallel = FALSE,
  backend = "PSOCK",
  cores = "auto",
  method = "edge",
  alpha = 1,
  sigma = 2,
  sizeFilter = FALSE,
  upperlimit = "auto",
  lowerlimit = "auto",
  proximityFilter = FALSE,
  radius = "auto",
  Rlog = FALSE
)

Arguments

path

directory path to folder with images to be analyzed

parallel

processing multiple images at the same time (default - FALSE)

backend

'PSOCK' or 'FORK' (see makeCluster)

cores

number of cores for parallel processing (numeric / 'auto') ('auto' uses 75% of the available cores)

method

choose method for object detection ('edge' / 'threshold') (from objectDetection)

alpha

threshold adjustment factor (numeric / 'static' / 'interactive' / 'gaussian') (from objectDetection)

sigma

smoothing (numeric / 'static' / 'interactive' / 'gaussian') (from objectDetection)

sizeFilter

applying the sizeFilter function (default - FALSE)

upperlimit

highest accepted object size (only needed if sizeFilter = TRUE)

lowerlimit

smallest accepted object size (numeric / 'auto')

proximityFilter

applying the proximityFilter function (default - FALSE)

radius

distance from one center in which no other centers are allowed (in pixels) (only needed if proximityFilter = TRUE)

Rlog

creates a log markdown document summarizing the results (default - FALSE)

Details

The function scans a specified directory for image files, imports them, and performs analysis using the designated methods. The function is capable of parallel processing, utilizing multiple cores to accelerate computation. Additionally, it can log the results into an R Markdown file. Duplicate images are identified through the use of MD5 sums. In addition, a variety of filtering options are available to refine the analysis. If logging is enabled, the results can be saved and rendered into a report. When Rlog = TRUE, an R Markdown file and a CSV file are generated in the current directory. More detailed information on individual results can be accessed through saved RDS files.
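The duplicate check can be sketched with base R's tools::md5sum (an illustration of the idea only, not scanDir's internal code; the helper name is hypothetical):

```r
# Illustrative duplicate detection by MD5 checksum (not scanDir's code):
# files with identical checksums are grouped; groups of size > 1 are
# duplicates that only need to be analyzed once.
findDuplicates <- function(path) {
  files <- list.files(path, full.names = TRUE)
  sums <- tools::md5sum(files)
  groups <- split(basename(files), sums)  # files grouped by checksum
  groups[lengths(groups) > 1]             # keep only duplicate groups
}
```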

Value

data.frame summarizing each analyzed image, including details such as the number of objects, average size and intensity, estimated rejections, and coverage.

See Also

imgPipe(), objectDetection(), sizeFilter(), proximityFilter(), resultAnalytics()

Examples

if (interactive()) {
  path2dir <- system.file("images", package = 'biopixR')
  results <- scanDir(path2dir, alpha = 'interactive', sigma = 'interactive')
  print(results)
}

Extraction of Shape Features

Description

This function analyzes the objects detected in an image and calculates distinct shape characteristics for each object, such as circularity, eccentricity, radius, and perimeter. The resulting shape attributes can then be grouped using a Self-Organizing Map (SOM) from the 'kohonen' package.

Usage

shapeFeatures(
  img,
  alpha = 1,
  sigma = 2,
  xdim = 2,
  ydim = 1,
  SOM = FALSE,
  visualize = FALSE
)

Arguments

img

image (import by load.image)

alpha

threshold adjustment factor (numeric / 'static' / 'interactive' / 'gaussian') (from objectDetection)

sigma

smoothing (numeric / 'static' / 'interactive' / 'gaussian') (from objectDetection)

xdim

x-dimension for the SOM grid (grid = hexagonal)

ydim

y-dimension for the SOM grid (xdim * ydim = number of neurons)

SOM

if TRUE, runs the SOM algorithm on the extracted shape features, grouping the detected objects

visualize

visualizes the groups computed by the SOM

Value

data.frame containing detailed information about every single object.

See Also

objectDetection(), resultAnalytics(), som

Examples

shapeFeatures(
  beads,
  alpha = 1,
  sigma = 0,
  SOM = TRUE,
  visualize = TRUE
)

Size-based exclusion

Description

Takes the size of the objects in an image and discards objects based on a lower and an upper size limit. (Input can be obtained by the objectDetection function.)

Usage

sizeFilter(centers, coordinates, lowerlimit = "auto", upperlimit = "auto")

Arguments

centers

center coordinates of objects (value|mx|my|size data frame)

coordinates

all coordinates of the objects (x|y|value data frame)

lowerlimit

smallest accepted object size (numeric / 'auto' / 'interactive')

upperlimit

highest accepted object size (numeric / 'auto' / 'interactive')

Details

The sizeFilter function is designed to filter detected objects based on their size, either through automated detection or user-defined limits. The automated detection of size limits uses the 1.5*IQR method to identify and remove outliers. This approach is most effective when dealing with a large number of objects (typically more than 50) and when the sizes of the objects are relatively uniform. For smaller samples, or when the sizes of the objects vary significantly, the automated detection may not be as accurate, and manual limit setting is recommended.
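The 1.5*IQR rule can be sketched in a few lines of base R (a hypothetical helper, not the package's internal implementation):

```r
# Illustrative 1.5*IQR size limits (not sizeFilter's internal code):
# objects outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] are treated as outliers.
iqrLimits <- function(sizes) {
  q <- unname(quantile(sizes, c(0.25, 0.75)))
  iqr <- q[2] - q[1]
  c(lower = q[1] - 1.5 * iqr, upper = q[2] + 1.5 * iqr)
}

sizes <- c(2, 90:110, 400)          # uniform objects plus two outliers
limits <- iqrLimits(sizes)
keep <- sizes >= limits["lower"] & sizes <= limits["upper"]
sizes[!keep]                        # the outliers that would be discarded
```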

Value

list of 2 objects:

Examples

res_objectDetection <- objectDetection(
  beads,
  method = 'edge',
  alpha = 1,
  sigma = 0
)
res_sizeFilter <- sizeFilter(
  centers = res_objectDetection$centers,
  coordinates = res_objectDetection$coordinates,
  lowerlimit = 50, upperlimit = 150
)
changePixelColor(
  beads,
  res_sizeFilter$coordinates,
  color = "darkgreen",
  visualize = TRUE
)
