massiv

Efficient Haskell Arrays featuring Parallel computation


`massiv` is a Haskell library for array manipulation. Performance is one of its main goals, thus it is capable of seamless parallelization of most of the operations provided by the library.

The name for this library comes from the Russian word Massiv (Масси́в), which means an array.

Status

Hackage, Stackage Nightly and LTS version badges, together with GitHub Actions CI, Coveralls and Gitter badges, are maintained for the massiv, massiv-test and haskell-scheduler packages on the project page.

Introduction

Everything in the library revolves around an `Array r ix e` - a data family for anything that can be thought of as an array. The type variables, from the end, are:

  • `e` - an element of the array.
  • `ix` - an index that will map to an actual element. The index must be an instance of the `Index` class, with the default one being the `Ix n` type family and an optional one being tuples of `Int`s.
  • `r` - the underlying representation. There are two main categories of representations, described below.

Manifest

These are your classical arrays that are located in memory and allow constant time lookup of elements. Another main property they share is that they have a mutable interface. An `Array` with a manifest representation can be thawed into a mutable `MArray` and then frozen back into its immutable counterpart after some destructive operation is applied to the mutable copy. The difference among the representations below is in the way that elements are accessed in memory:

  • `P` - Array with elements that are an instance of the `Prim` type class, i.e. common Haskell primitive types: `Int`, `Word`, `Char`, etc. It is backed by unpinned memory and based on `ByteArray`.
  • `U` - Unboxed arrays. The elements are instances of the `Unbox` type class. Usually just as fast as `P`, but has a slightly wider range of data types that it can work with. Notable data types that can be stored as elements are `Bool`, tuples and `Ix n`.
  • `S` - Storable arrays. Backed by pinned memory and based on `ForeignPtr`, while elements are instances of the `Storable` type class.
  • `B` - Boxed arrays that don't have restrictions on their elements, since they are represented as pointers to elements, thus making them the slowest type of array, but also the most general. Arrays of this representation are element strict, in other words their elements are kept in Weak-Head Normal Form (WHNF).
  • `BN` - Also boxed arrays, but unlike the other representation `B`, its elements are in Normal Form, i.e. in a fully evaluated state, so no thunks or memory leaks are possible. It does require an `NFData` instance for the elements though.
  • `BL` - Boxed lazy arrays. Just like `B` and `BN`, except values are evaluated on demand.
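The thaw-mutate-freeze workflow can be sketched with base's bundled `array` boot package (a hedged illustration only: `bumpFirst` is a made-up helper, and massiv's own `thawS`/`freezeS` and `MArray` are the real, parallel-friendly interface):

```haskell
import Data.Array (Array, elems, listArray)
import Data.Array.ST (runSTArray, thaw, writeArray)

-- Thaw an immutable array into a mutable copy, destructively update it,
-- and freeze it back (runSTArray performs the final freeze for us).
bumpFirst :: Array Int Int -> Array Int Int
bumpFirst arr = runSTArray $ do
  marr <- thaw arr          -- copy into a mutable array
  writeArray marr 0 100     -- destructive update on the copy only
  pure marr                 -- frozen back into an immutable Array

main :: IO ()
main = print (elems (bumpFirst (listArray (0, 4) [1 .. 5])))
```

The original array is untouched; only the thawed copy is mutated, which is the same safety guarantee massiv's mutable interface provides.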

Delayed

The main trait of delayed arrays is that they do not exist in memory and instead describe the contents of an array as a function or a composition of functions. In fact, all of the fusion capabilities in massiv can be attributed to delayed arrays.

  • `D` - Delayed "pull" array is just a function from an index to an element: `(ix -> e)`. Therefore indexing into this type of array is not possible; instead elements are evaluated with the `evaluateM` function each time it is applied to an index. It gives us a nice ability to compose functions together when applied to an array and possibly even fold over it without ever allocating intermediate manifest arrays.
  • `DW` - Delayed windowed array is very similar to the version above, except it has two functions that describe it, one for the near-border elements and one for the interior, aka the window. This is used for `Stencil` computation and things that derive from it, such as convolution, for instance.
  • `DL` - Delayed "push" array contains a monadic action that describes how an array can be loaded into memory. This is most useful for composing arrays together.
  • `DS` - Delayed stream array is a sequence of elements, possibly even an infinite one. This is most useful for situations when we don't know the size of our resulting array ahead of time, which is common in operations such as `filter`, `mapMaybe`, `unfoldr`, etc. Naturally, in the end we can only load such an array into a flat vector.
  • `DI` - Is just like `D`, except loading is interleaved, which is useful for parallel loading of arrays with unbalanced computation, such as the Mandelbrot set or ray tracing, for example.
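To build intuition for how the delayed "pull" representation enables fusion, here is a minimal plain-Haskell model (all names here are made up for illustration; massiv's real `D` is a data family instance with multi-dimensional indices and parallel loading):

```haskell
-- A delayed "pull" array modeled as a size plus an index function.
data Delayed e = Delayed
  { dSize  :: Int
  , dIndex :: Int -> e
  }

-- Mapping composes functions instead of allocating a new array: fusion.
dMap :: (a -> b) -> Delayed a -> Delayed b
dMap f (Delayed sz g) = Delayed sz (f . g)

-- Only "computing" materializes the elements (here, just as a list).
dCompute :: Delayed e -> [e]
dCompute (Delayed sz g) = map g [0 .. sz - 1]

main :: IO ()
main = print (dCompute (dMap (^ (2 :: Int)) (Delayed 10 id)))
```

However many `dMap`s are chained, no intermediate array ever exists; only `dCompute` touches memory, mirroring massiv's `compute`.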

Construct

Creating a delayed type of array allows us to fuse any future operations we decide to perform on it. Let's look at this example:

```haskell
λ> import Data.Massiv.Array as A
λ> makeVectorR D Seq 10 id
Array D Seq (Sz1 10)
  [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
```

Here we created a delayed vector of size 10, which is in reality just an `id` function from its index to an element (see the Computation section for the meaning of `Seq`). So let's go ahead and square its elements:

```haskell
λ> vec = makeVectorR D Seq 10 id
λ> evaluateM vec 4
4
λ> vec2 = A.map (^ (2 :: Int)) vec
λ> evaluateM vec2 4
16
```

It's not that exciting yet, since every time we call `evaluateM` the element gets recomputed from scratch, therefore this function should be avoided at all costs! Instead we can use all of the functions that take `Source`-like arrays and then fuse that computation together by calling `compute`, or the handy `computeAs` function, and only afterwards apply an `indexM` function or its partial synonym `(!)`. Any delayed array can also be reduced using one of the folding functions, thus completely avoiding any memory allocation, or converted to a list, if that's what you need:

```haskell
λ> vec2U = computeAs U vec2
λ> vec2U
Array U Seq (Sz1 10)
  [ 0, 1, 4, 9, 16, 25, 36, 49, 64, 81 ]
λ> vec2U ! 4
16
λ> toList vec2U
[0,1,4,9,16,25,36,49,64,81]
λ> A.sum vec2U
285
```

There is a whole multitude of ways to construct arrays:

  • by using one of many helper functions: `makeArray`, `range`, `rangeStepFrom`, `enumFromN`, etc.
  • through conversion: from lists, from `Vector`s in the vector library, from `ByteString`s in bytestring;
  • with a mutable interface in `PrimMonad` (`IO`, `ST`, etc.), eg: `makeMArray`, `generateArray`, `unfoldrPrim`, etc.

It's worth noting that, in the next example, the nested lists will be loaded into an unboxed manifest array and the sum of its elements will be computed in parallel on all available cores.

```haskell
λ> A.sum (fromLists' Par [[0,0,0,0,0],[0,1,2,3,4],[0,2,4,6,8]] :: Array U Ix2 Double)
30.0
```

The above wouldn't run in parallel in ghci of course, as the program would have to be compiled with ghc using the `-threaded -with-rtsopts=-N` flags in order to use all available cores. Alternatively we could compile with just the `-threaded` flag and then pass the number of capabilities directly to the runtime with `+RTS -N<n>`, where `<n>` is the number of cores you'd like to utilize.

Index

The main `Ix n` closed type family can be somewhat confusing, but there is no need to fully understand how it works in order to start using it. GHC might ask you for the `DataKinds` language extension if `IxN n` is used in a type signature, but there are type and pattern synonyms for the first five dimensions: `Ix1`, `Ix2`, `Ix3`, `Ix4` and `Ix5`.

There are three distinguishable constructors for the index:

  • The first one is simply an int: `Ix1 = Ix 1 = Int`, therefore vectors can be indexed in the usual way without some extra wrapping data type, just as was demonstrated in a previous section.
  • The second one is `Ix2`, for operating on 2-dimensional arrays, and has a constructor `:.`

```haskell
λ> makeArrayR D Seq (Sz (3 :. 5)) (\ (i :. j) -> i * j)
Array D Seq (Sz (3 :. 5))
  [ [ 0, 0, 0, 0, 0 ]
  , [ 0, 1, 2, 3, 4 ]
  , [ 0, 2, 4, 6, 8 ]
  ]
```

  • The third one is `IxN n` and is designed for working with N-dimensional arrays; it has a similar looking constructor `:>`, except that it can be chained indefinitely on top of `:.`

```haskell
λ> arr3 = makeArrayR P Seq (Sz (3 :> 2 :. 5)) (\ (i :> j :. k) -> i * j + k)
λ> :t arr3
arr3 :: Array P (IxN 3) Int
λ> arr3
Array P Seq (Sz (3 :> 2 :. 5))
  [ [ [ 0, 1, 2, 3, 4 ]
    , [ 0, 1, 2, 3, 4 ]
    ]
  , [ [ 0, 1, 2, 3, 4 ]
    , [ 1, 2, 3, 4, 5 ]
    ]
  , [ [ 0, 1, 2, 3, 4 ]
    , [ 2, 3, 4, 5, 6 ]
    ]
  ]
λ> arr3 ! (2 :> 1 :. 4)
6
λ> ix10 = 10 :> 9 :> 8 :> 7 :> 6 :> 5 :> 4 :> 3 :> 2 :. 1
λ> :t ix10
ix10 :: IxN 10
λ> ix10 -- 10-dimensional index
10 :> 9 :> 8 :> 7 :> 6 :> 5 :> 4 :> 3 :> 2 :. 1
```
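Under the hood, manifest representations keep their elements flat in memory, so a multi-dimensional index ultimately maps to a single linear offset. The row-major scheme can be sketched in plain Haskell (`toLinear`/`fromLinear` are hypothetical names for illustration; massiv exposes similar conversions through its `Index` class):

```haskell
-- Row-major linearization of a 2-D index. The size is (rows, cols);
-- the row index varies slowest, so rows are contiguous in memory.
toLinear :: (Int, Int) -> (Int, Int) -> Int
toLinear (_rows, cols) (i, j) = i * cols + j

-- The inverse: recover (i, j) from a flat offset.
fromLinear :: (Int, Int) -> Int -> (Int, Int)
fromLinear (_rows, cols) k = k `divMod` cols

main :: IO ()
main = print (toLinear (3, 5) (2, 4), fromLinear (3, 5) 14)
```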

Here is how we can construct a 4-dimensional array and sum its elements in constant memory:

```haskell
λ> arr = makeArrayR D Seq (Sz (10 :> 20 :> 30 :. 40)) $ \ (i :> j :> k :. l) -> (i * j + k) * k + l
λ> :t arr -- a 4-dimensional array
arr :: Array D (IxN 4) Int
λ> A.sum arr
221890000
```

There are quite a few helper functions that can operate on indices, but these are only needed when writing functions that work for arrays of arbitrary dimension; as such they are scarcely used:

```haskell
λ> pullOutDim' ix10 5
(5,10 :> 9 :> 8 :> 7 :> 6 :> 4 :> 3 :> 2 :. 1)
λ> unconsDim ix10
(10,9 :> 8 :> 7 :> 6 :> 5 :> 4 :> 3 :> 2 :. 1)
λ> unsnocDim ix10
(10 :> 9 :> 8 :> 7 :> 6 :> 5 :> 4 :> 3 :. 2,1)
```

All of the `Ix n` indices are instances of `Num`, so basic numeric operations are made easier:

```haskell
λ> (1 :> 2 :. 3) + (4 :> 5 :. 6)
5 :> 7 :. 9
λ> 5 :: Ix4
5 :> 5 :> 5 :. 5
```

It is important to note that the size type is distinct from the index type, by way of the newtype wrapper `Sz ix`. There is a constructor `Sz`, which will make sure that none of the dimensions are negative:

```haskell
λ> Sz (2 :> 3 :. 4)
Sz (2 :> 3 :. 4)
λ> Sz (10 :> 2 :> -30 :. 4)
Sz (10 :> 2 :> 0 :. 4)
```

Same as with indices, there are helper pattern synonyms: `Sz1`, `Sz2`, `Sz3`, `Sz4` and `Sz5`.

```haskell
λ> Sz3 2 3 4
Sz (2 :> 3 :. 4)
λ> Sz4 10 2 (-30) 4
Sz (10 :> 2 :> 0 :. 4)
```

As well as the `Num` instance:

```haskell
λ> 4 :: Sz5
Sz (4 :> 4 :> 4 :> 4 :. 4)
λ> (Sz2 1 2) + 3
Sz (4 :. 5)
λ> (Sz2 1 2) - 3
Sz (0 :. 0)
```
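The clamping behaviour shown in these sessions can be modeled in a few lines of plain Haskell (a sketch of the assumed semantics only; `Sz2'`, `mkSz2` and `szSub` are invented names, not massiv's API):

```haskell
-- A toy 2-D size wrapper whose smart constructor clamps negative dimensions
-- to 0, mirroring what the Sz constructor does in the sessions above.
newtype Sz2' = Sz2' (Int, Int) deriving (Eq, Show)

mkSz2 :: Int -> Int -> Sz2'
mkSz2 i j = Sz2' (max 0 i, max 0 j)

-- Num-style subtraction of a scalar then also clamps at zero.
szSub :: Sz2' -> Int -> Sz2'
szSub (Sz2' (i, j)) n = mkSz2 (i - n) (j - n)

main :: IO ()
main = print (mkSz2 10 (-30), szSub (mkSz2 1 2) 3)
```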

Alternatively, tuples of `Int`s can be used for working with arrays, up to and including 5-tuples (type synonyms: `Ix2T` .. `Ix5T`), but since tuples are polymorphic it is necessary to restrict the resulting array type. Not all operations in the library support tuples, so it is advised to avoid them for indexing.

```haskell
λ> makeArray Seq (4,20) (uncurry (*)) :: Array P Ix2T Int
(Array P Seq ((4,20))
  [ [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
  , [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ]
  , [ 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38 ]
  , [ 0, 3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 51, 54, 57 ]
  ])
λ> :i Ix2T
type Ix2T = (Int, Int)
```

There are helper functions that can go back and forth between tuples and `Ix n` indices.

```haskell
λ> fromIx4 (3 :> 4 :> 5 :. 6)
(3,4,5,6)
λ> toIx5 (3,4,5,6,7)
3 :> 4 :> 5 :> 6 :. 7
```

Slicing

In order to get a subsection of an array there is no need to recompute it, unless we want to free up the no longer needed memory, of course. So there are a few slicing, resizing and extraction operators that can do it all in constant time, modulo the index manipulation:

```haskell
λ> arr = makeArrayR U Seq (Sz (4 :> 2 :. 6)) fromIx3
λ> arr !> 3 !> 1
Array M Seq (Sz1 6)
  [ (3,1,0), (3,1,1), (3,1,2), (3,1,3), (3,1,4), (3,1,5) ]
```

As you might suspect, all of the slicing, indexing, extracting and resizing operations are partial, and those are frowned upon in Haskell. So there are matching functions that can do the same operations safely by using `MonadThrow`, thus returning `Nothing`, `Left SomeException`, or throwing an exception in the case of `IO`, on failure. For example:

```haskell
λ> arr !?> 3 ??> 1
Array M Seq (Sz1 6)
  [ (3,1,0), (3,1,1), (3,1,2), (3,1,3), (3,1,4), (3,1,5) ]
λ> arr !?> 3 ??> 1 ?? 0 :: Maybe (Int, Int, Int)
Just (3,1,0)
```

In the above examples we first take a slice at the 4th page (index 3, since we start at 0), then another one at the 2nd row (index 1). In the last example we also take the element at position 0. Pretty neat, huh? Naturally, by doing a slice we always reduce the dimension by one. We can do slicing from the outside as well as from the inside:
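The way those safe operators compose can be mirrored on plain nested lists, with `Maybe` standing in for any `MonadThrow` (a sketch for intuition only: `safeIndex` is a made-up helper, and unlike massiv's O(1) slicing this is a linear list walk):

```haskell
-- Safe lookup into a list: Nothing instead of an out-of-bounds error.
safeIndex :: [a] -> Int -> Maybe a
safeIndex xs i
  | i >= 0 && i < length xs = Just (xs !! i)
  | otherwise               = Nothing

main :: IO ()
main = do
  -- A 4x2x6 "array" of index triples, mirroring the session above.
  let cube = [ [ [ (i, j, k) | k <- [0 .. 5] ] | j <- [0 .. 1] ] | i <- [0 .. 3] ]
  -- Failures short-circuit through the Maybe monad, like !?> and ?? do.
  print (safeIndex cube 3 >>= (`safeIndex` 1) >>= (`safeIndex` 0))
  print (safeIndex cube 9 >>= (`safeIndex` 1) >>= (`safeIndex` 0))
```

The first lookup yields `Just (3,1,0)`; the second fails at the first step and the whole chain collapses to `Nothing`.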

```haskell
λ> Ix1 1 ... 9
Array D Seq (Sz1 9)
  [ 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
λ> a <- resizeM (Sz (3 :> 2 :. 4)) $ Ix1 11 ... 34
λ> a
Array D Seq (Sz (3 :> 2 :. 4))
  [ [ [ 11, 12, 13, 14 ]
    , [ 15, 16, 17, 18 ]
    ]
  , [ [ 19, 20, 21, 22 ]
    , [ 23, 24, 25, 26 ]
    ]
  , [ [ 27, 28, 29, 30 ]
    , [ 31, 32, 33, 34 ]
    ]
  ]
λ> a !> 0
Array D Seq (Sz (2 :. 4))
  [ [ 11, 12, 13, 14 ]
  , [ 15, 16, 17, 18 ]
  ]
λ> a <! 0
Array D Seq (Sz (3 :. 2))
  [ [ 11, 15 ]
  , [ 19, 23 ]
  , [ 27, 31 ]
  ]
```

Or we can slice along any other available dimension:

```haskell
λ> a <!> (Dim 2, 0)
Array D Seq (Sz (3 :. 4))
  [ [ 11, 12, 13, 14 ]
  , [ 19, 20, 21, 22 ]
  , [ 27, 28, 29, 30 ]
  ]
```

In order to extract a sub-array while preserving dimensionality we can use `extractM` or `extractFromToM`.

```haskell
λ> extractM (0 :> 1 :. 1) (Sz (3 :> 1 :. 2)) a
Array D Seq (Sz (3 :> 1 :. 2))
  [ [ [ 16, 17 ]
    ]
  , [ [ 24, 25 ]
    ]
  , [ [ 32, 33 ]
    ]
  ]
λ> extractFromToM (1 :> 0 :. 1) (3 :> 2 :. 4) a
Array D Seq (Sz (2 :> 2 :. 3))
  [ [ [ 20, 21, 22 ]
    , [ 24, 25, 26 ]
    ]
  , [ [ 28, 29, 30 ]
    , [ 32, 33, 34 ]
    ]
  ]
```

Computation and parallelism

There is a data type `Comp` that controls how elements will be computed when calling the `compute` function. It has a few constructors, although most of the time either `Seq` or `Par` will be sufficient:

  • `Seq` - computation will be done sequentially on one core (capability in ghc).
  • `ParOn [Int]` - perform computation in parallel while pinning the workers to particular cores. Providing an empty list will result in the computation being distributed over all available cores, better known in Haskell as capabilities.
  • `ParN Word16` - similar to `ParOn`, except it simply specifies the number of cores to use, with `0` meaning all cores.
  • `Par` - isn't really a constructor but a pattern for constructing `ParOn []`, which will result in the scheduler using all cores, thus it should be used instead of `ParOn`.
  • `Par'` - similar to `Par`, except it uses `ParN 0` underneath.
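To picture what a `Par` computation does with a workload, here is a hedged sketch of static work splitting, one contiguous chunk per capability (`chunkRanges` is an invented helper for illustration; massiv's real scheduler, from the scheduler package, is considerably more sophisticated):

```haskell
-- Split n units of work into at most `workers` contiguous (start, length)
-- chunks, spreading the remainder over the first few workers.
chunkRanges :: Int -> Int -> [(Int, Int)]
chunkRanges workers n =
  let (q, r) = n `divMod` workers
      sizes  = replicate r (q + 1) ++ replicate (workers - r) q
      starts = scanl (+) 0 sizes
  in [ (s, len) | (s, len) <- zip starts sizes, len > 0 ]

main :: IO ()
main = print (chunkRanges 3 10)
```

Each worker would then load its own chunk of the array independently, which is why the flags described below matter: without `-threaded` there is only one capability to hand chunks to.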

Just to prevent a simple novice mistake, one which I have seen in the past: make sure your source code is compiled with `ghc -O2 -threaded -with-rtsopts=-N`, otherwise no parallelization and poor performance are waiting for you. Also, a bit later you might notice the `{-# INLINE funcName #-}` pragma being used. Oftentimes it is a good idea to add it, but it is not always required. It is worthwhile to benchmark and experiment.

Stencil

Instead of manually iterating over a multi-dimensional array and applying a function to each element while reading its neighboring elements (as you would do in an imperative language), in a functional language it is much more efficient to apply a stencil function and let the library take care of all of the bounds checking and iterating in a cache friendly manner.

What's a stencil? It is a declarative way of specifying a pattern for how elements of an array in a neighborhood will be used in order to update each element of the newly created array. In massiv a `Stencil` is a function that can read the neighboring elements of the stencil's center (the zero index), and only those, and then outputs a new value for the center element.
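The idea can be sketched in plain Haskell before looking at the real massiv code (`Stencil'`, `applyAt` and `avg3x3` are made-up names for illustration; they ignore borders and efficiency, both of which massiv handles for you):

```haskell
-- A "stencil" here is just a function of a getter that reads neighbours
-- relative to the centre and produces the new centre value.
type Stencil' e = ((Int, Int) -> e) -> e

-- Apply a stencil at one interior point of a nested-list grid.
applyAt :: [[e]] -> Stencil' e -> (Int, Int) -> e
applyAt g st (i, j) = st (\(di, dj) -> g !! (i + di) !! (j + dj))

-- A 3x3 averaging stencil over the Moore neighbourhood of the centre.
avg3x3 :: Stencil' Double
avg3x3 get = sum [ get (di, dj) | di <- [-1 .. 1], dj <- [-1 .. 1] ] / 9

main :: IO ()
main = print (applyAt [[1, 1, 1], [1, 10, 1], [1, 1, 1]] avg3x3 (1, 1))
```

The stencil only ever sees relative offsets around the centre, which is exactly the restriction that lets massiv iterate safely and cache-friendlily over the whole array.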

(Stencil illustration)

Let's create a simple but somewhat meaningful array and create an averaging stencil. There is nothing special about the array itself, but the averaging filter is a stencil that sums the elements in a Moore neighborhood and divides the result by 9, i.e. finds the average of a 3 by 3 square.

```haskell
arrLightIx2 :: Comp -> Sz Ix2 -> Array D Ix2 Double
arrLightIx2 comp arrSz = makeArray comp arrSz $ \ (i :. j) -> sin (fromIntegral (i * i + j * j))
{-# INLINE arrLightIx2 #-}

average3x3Filter :: Fractional a => Stencil Ix2 a a
average3x3Filter = makeStencil (Sz (3 :. 3)) (1 :. 1) $ \ get ->
  (  get (-1 :. -1) + get (-1 :. 0) + get (-1 :. 1) +
     get ( 0 :. -1) + get ( 0 :. 0) + get ( 0 :. 1) +
     get ( 1 :. -1) + get ( 1 :. 0) + get ( 1 :. 1)   ) / 9
{-# INLINE average3x3Filter #-}
```

Here is what it would look like in GHCi. We create a delayed array with some funky periodic function, and make sure it is computed prior to mapping an average stencil over it:

```haskell
λ> arr = computeAs U $ arrLightIx2 Par (Sz (600 :. 800))
λ> :t arr
arr :: Array U Ix2 Double
λ> :t mapStencil Edge average3x3Filter arr
mapStencil Edge average3x3Filter arr :: Array DW Ix2 Double
```

As you can see, that operation produced an array of the earlier mentioned representation: Delayed Windowed, `DW`. In its essence `DW` is an array type that does no bounds checking in order to gain performance, except when it's near the border, where it uses a border resolution technique supplied by the user (`Edge` in the example above). Currently it is used only in stencils, and not much else can be done to an array of this type besides further computing it into a manifest representation.

This example will be continued in the next section, but before that I would like to mention that some might notice that it looks very much like convolution, and in fact convolution can be implemented with a stencil. There is a helper function `makeConvolutionStencil` that lets you do just that. For the sake of example we'll do a sum of all neighbors by hand instead:

```haskell
sum3x3Filter :: Fractional a => Stencil Ix2 a a
sum3x3Filter = makeConvolutionStencil (Sz (3 :. 3)) (1 :. 1) $ \ get ->
  get (-1 :. -1) 1 . get (-1 :. 0) 1 . get (-1 :. 1) 1 .
  get ( 0 :. -1) 1 . get ( 0 :. 0) 1 . get ( 0 :. 1) 1 .
  get ( 1 :. -1) 1 . get ( 1 :. 0) 1 . get ( 1 :. 1) 1
{-# INLINE sum3x3Filter #-}
```

There is not a single plus or multiplication sign; that is because convolution is actually a summation of elements multiplied by kernel elements, so instead we have a composition of functions applied to an offset index and a multiplier. After we map that stencil, we can further divide each element of the array by 9 in order to get the average. Yeah, I lied a bit: `Array DW ix` is an instance of the `Functor` class, so we can map functions over it, which will be fused, just as with a regular Delayed array:

```haskell
computeAs U $ fmap (/9) $ mapStencil Edge sum3x3Filter arr
```

If you are still confused about what a stencil is, but you are familiar with Conway's Game of Life, this should hopefully clarify it a bit more. The function `life` below is a single iteration of Game of Life:

```haskell
lifeRules :: Word8 -> Word8 -> Word8
lifeRules 0 3 = 1
lifeRules 1 2 = 1
lifeRules 1 3 = 1
lifeRules _ _ = 0

lifeStencil :: Stencil Ix2 Word8 Word8
lifeStencil = makeStencil (Sz (3 :. 3)) (1 :. 1) $ \ get ->
  lifeRules (get (0 :. 0)) $ get (-1 :. -1) + get (-1 :. 0) + get (-1 :. 1) +
                             get ( 0 :. -1) +                 get ( 0 :. 1) +
                             get ( 1 :. -1) + get ( 1 :. 0) + get ( 1 :. 1)

life :: Array S Ix2 Word8 -> Array S Ix2 Word8
life = compute . mapStencil Wrap lifeStencil
```
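The same iteration can be mirrored without massiv on plain nested lists, with modular indexing playing the role of the `Wrap` border resolution (a toy-sized, quadratic sketch purely for intuition; `lifeStep` is an invented name):

```haskell
import Data.Word (Word8)

-- Conway's rules: a cell's next state from its current state and its
-- number of live neighbours (same rules as lifeRules above).
lifeRules :: Word8 -> Word8 -> Word8
lifeRules 0 3 = 1
lifeRules 1 2 = 1
lifeRules 1 3 = 1
lifeRules _ _ = 0

-- One Game of Life step over a rectangular grid with wrap-around borders.
lifeStep :: [[Word8]] -> [[Word8]]
lifeStep g =
  [ [ lifeRules (at i j) (neighbors i j) | j <- [0 .. cols - 1] ]
  | i <- [0 .. rows - 1] ]
  where
    rows = length g
    cols = length (head g)
    at i j = g !! (i `mod` rows) !! (j `mod` cols)  -- wrap-around lookup
    neighbors i j =
      sum [ at (i + di) (j + dj)
          | di <- [-1 .. 1], dj <- [-1 .. 1], (di, dj) /= (0, 0) ]

main :: IO ()
main = mapM_ print (lifeStep blinker)
  where
    -- A horizontal blinker; one step turns it vertical.
    blinker = [[0,0,0,0,0],[0,0,0,0,0],[0,1,1,1,0],[0,0,0,0,0],[0,0,0,0,0]]
```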

The full working example that uses GLUT and OpenGL is located in GameOfLife. You can run it if you have the GLUT dependencies installed:

```shell
$ cd massiv-examples && stack run GameOfLife
```

massiv-io

In order to do anything useful with arrays we often need to be able to read some data from a file. Considering that the most common array-like files are images, massiv-io provides an interface to read, write and display images in common formats, using the Haskell native JuicyPixels and Netpbm packages.

The Color package provides a variety of color spaces and conversions between them, which are used by the massiv-io package as pixels during reading and writing of images.

An earlier example wasn't particularly interesting, since we couldn't visualize what is actually going on, so let's expand on it:

```haskell
import Data.Massiv.Array
import Data.Massiv.Array.IO

main :: IO ()
main = do
  let arr = computeAs S $ arrLightIx2 Par (Sz (600 :. 800))
      toImage ::
           (Functor (Array r Ix2), Load r Ix2 (Pixel (Y' SRGB) Word8))
        => Array r Ix2 Double
        -> Image S (Y' SRGB) Word8
      toImage = computeAs S . fmap (PixelY' . toWord8)
      lightPath = "files/light.png"
      lightImage = toImage $ delay arr
      lightAvgPath = "files/light_avg.png"
      lightAvgImage = toImage $ mapStencil Edge (avgStencil 3) arr
      lightSumPath = "files/light_sum.png"
      lightSumImage = toImage $ mapStencil Edge (sumStencil 3) arr
  writeImage lightPath lightImage
  putStrLn $ "written: " ++ lightPath
  writeImage lightAvgPath lightAvgImage
  putStrLn $ "written: " ++ lightAvgPath
  writeImage lightSumPath lightSumImage
  putStrLn $ "written: " ++ lightSumPath
  displayImageUsing defaultViewer True . computeAs S
    =<< concatM 1 [lightAvgImage, lightImage, lightSumImage]
```

massiv-examples/vision/files/light.png:

Light

massiv-examples/vision/files/light_avg.png:

Light Average

The full example is in the example vision package, and if you have stack installed you can run it as:

```shell
$ cd massiv-examples && stack run avg-sum
```

Other libraries

A natural question might come to mind: why even bother with a new array library when we already have a few really good ones in the Haskell world? The main reasons for me are performance and usability. I personally felt like there was much room for improvement before I even started working on this package, and it seems like that turned out to be true. For example, the most common go-to library for dealing with multidimensional arrays and parallel computation used to be Repa, which I personally was a big fan of for quite some time, to the point that I even wrote a Haskell Image Processing library based on top of it.

Here is a quick summary of how massiv is better than Repa:

  • It is actively maintained.
  • Much more sophisticated scheduler. It is resumable and is capable of handling nested parallel computation.
  • Improved indexing data types.
  • Safe stencils for arbitrary dimensions, not only 2D convolution. Stencils are composable.
  • Improved performance on almost all operations.
  • Structural parallel folds (i.e. left/right - direction is preserved).
  • Super easy slicing.
  • Extensive mutable interface.
  • More fusion capabilities with delayed stream and push array representations.
  • Delayed arrays aren't indexable, only Manifest ones are (saving the user from the common pitfall in Repa of trying to read elements of a delayed array).

As far as usability of the library goes, it is very subjective, thus I'll let you be the judge of that. When talking about performance, though, it is the facts that do matter. So, rather than continuing this discussion in purely abstract terms, below is a glimpse into benchmarks against the Repa library, running with GHC 8.8.4 on an Intel® Core™ i7-3740QM CPU @ 2.70GHz × 8.

Matrix multiplication:

```
benchmarking Repa/MxM U Double - (500x800 X 800x500)/Par
time                 120.5 ms   (115.0 ms .. 127.2 ms)
                     0.998 R²   (0.996 R² .. 1.000 R²)
mean                 124.1 ms   (121.2 ms .. 127.3 ms)
std dev              5.212 ms   (2.422 ms .. 6.620 ms)
variance introduced by outliers: 11% (moderately inflated)

benchmarking Massiv/MxM U Double - (500x800 X 800x500)/Par
time                 41.46 ms   (40.67 ms .. 42.45 ms)
                     0.998 R²   (0.994 R² .. 0.999 R²)
mean                 38.45 ms   (37.22 ms .. 39.68 ms)
std dev              2.342 ms   (1.769 ms .. 3.010 ms)
variance introduced by outliers: 19% (moderately inflated)
```

Sobel operator:

```
benchmarking Sobel/Par/Operator - Repa
time                 17.82 ms   (17.30 ms .. 18.32 ms)
                     0.997 R²   (0.994 R² .. 0.998 R²)
mean                 17.42 ms   (17.21 ms .. 17.69 ms)
std dev              593.0 μs   (478.1 μs .. 767.5 μs)
variance introduced by outliers: 12% (moderately inflated)

benchmarking Sobel/Par/Operator - Massiv
time                 7.421 ms   (7.230 ms .. 7.619 ms)
                     0.994 R²   (0.991 R² .. 0.997 R²)
mean                 7.537 ms   (7.422 ms .. 7.635 ms)
std dev              334.3 μs   (281.3 μs .. 389.9 μs)
variance introduced by outliers: 20% (moderately inflated)
```

Sum all elements of a 2D array:

```
benchmarking Sum/Seq/Repa
time                 539.7 ms   (523.2 ms .. 547.9 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 540.1 ms   (535.7 ms .. 543.2 ms)
std dev              4.727 ms   (2.208 ms .. 6.609 ms)
variance introduced by outliers: 19% (moderately inflated)

benchmarking Sum/Seq/Vector
time                 16.95 ms   (16.78 ms .. 17.07 ms)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 17.23 ms   (17.13 ms .. 17.43 ms)
std dev              331.4 μs   (174.1 μs .. 490.0 μs)

benchmarking Sum/Seq/Massiv
time                 16.78 ms   (16.71 ms .. 16.85 ms)
                     1.000 R²   (1.000 R² .. 1.000 R²)
mean                 16.80 ms   (16.76 ms .. 16.88 ms)
std dev              127.8 μs   (89.95 μs .. 186.2 μs)

benchmarking Sum/Par/Repa
time                 81.76 ms   (78.52 ms .. 84.37 ms)
                     0.997 R²   (0.990 R² .. 1.000 R²)
mean                 79.20 ms   (78.03 ms .. 80.91 ms)
std dev              2.613 ms   (1.565 ms .. 3.736 ms)

benchmarking Sum/Par/Massiv
time                 8.102 ms   (7.971 ms .. 8.216 ms)
                     0.999 R²   (0.998 R² .. 1.000 R²)
mean                 7.967 ms   (7.852 ms .. 8.028 ms)
std dev              236.4 μs   (168.4 μs .. 343.2 μs)
variance introduced by outliers: 11% (moderately inflated)
```

There is also a blog post that compares the Performance of Haskell Array libraries through Canny edge detection.

Further resources on learning massiv:

