
spark-crowd

A package for dealing with crowdsourced big data. Website: https://enriquegrodrigo.github.io/spark-crowd/

Crowdsourced data introduces new problems that need to be addressed by machine learning algorithms. This illustration exemplifies the main issues of using this kind of data.

Crowdsourcing illustration

Installation

The package uses sbt for building the project, so we recommend installing this tool if you do not have it installed yet.

The simplest way to use the package is to add the following dependency directly to the build.sbt file of your project:

libraryDependencies += "com.enriquegrodrigo" %% "spark-crowd" % "0.2.1"
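
For context, a minimal build.sbt sketch for a project depending on spark-crowd might look like the following; the project name, Scala version, and Spark dependency are assumptions chosen only to match the Scala 2.11 artifacts mentioned below.

name := "my-crowd-project"

scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // Spark itself is usually marked "provided" because the cluster supplies it (version assumed)
  "org.apache.spark" %% "spark-sql" % "2.2.0" % "provided",
  "com.enriquegrodrigo" %% "spark-crowd" % "0.2.1"
)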

If this is not possible, you can compile the project and create a .jar file, or you can publish the project to a local repository, as explained below.

Creating a .jar file and adding it to a new project

In the spark-crowd folder, one should execute the command

> sbt package

to create a .jar file. Usually, it is located in target/scala-2.11/spark-crowd_2.11-0.2.1.jar.

This .jar can be added to new projects that use this library. In sbt, one can add .jar files to the lib folder.

Publishing to a local repository

In the spark-crowd folder, one should execute the command

> sbt publish-local

to publish the library to a local Ivy repository. The library can then be added to the build.sbt file of a new project with the following line:

    libraryDependencies += "com.enriquegrodrigo" %% "spark-crowd" % "0.2.1"

Usage

Running the examples


To run the examples of this package, one can use our Docker image with the latest version of spark-crowd. Let's see how to run the DawidSkeneExample.scala file:

docker run --rm -it -v $(pwd)/:/home/work/project enriquegrodrigo/spark-crowd DawidSkeneExample.scala

To run a spark-shell with the library preloaded, one can use:

docker run --rm -it -v $(pwd)/:/home/work/project enriquegrodrigo/spark-crowd

One can also generate a .jar file as shown above and use it with spark-shell or spark-submit. For example, with spark-shell:

spark-shell --jars spark-crowd_2.11-0.2.1.jar -i DawidSkeneExample.scala

Types

This package makes extensive use of the Spark DataFrame and Dataset APIs. The latter takes advantage of typed rows, which is beneficial for debugging purposes, among other things. As annotation data sets usually have a fixed structure, the package includes types for three annotation data sets (binary, multiclass, and real annotations), all of them with the following structure:

| example | annotator | value |
|---------|-----------|-------|
| 1       | 1         | 0     |
| 1       | 2         | 1     |
| 2       | 2         | 0     |
| ...     | ...       | ...   |
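
As an illustration (not taken from the package documentation), one could build such a typed data set by hand from a few records; this sketch assumes a SparkSession named spark is in scope and uses the BinaryAnnotation constructor shown later in this section.

import com.enriquegrodrigo.spark.crowd.types.BinaryAnnotation
import spark.implicits._

// The three records mirror the table above: (example, annotator, value)
val annotations = Seq(
  BinaryAnnotation(1L, 1L, 0),
  BinaryAnnotation(1L, 2L, 1),
  BinaryAnnotation(2L, 2L, 0)
).toDS()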

So the user needs to provide the annotations using these typed data sets to apply the learning methods. This is usually simple if the user has all the information above in a Spark DataFrame:

  • The example variable should be in the range [0..number of Examples]
  • The annotator variable should be in the range [0..number of Annotators]
  • The value variable should be in the range [0..number of Classes]
import com.enriquegrodrigo.spark.crowd.types.BinaryAnnotation

val df = annotationDataFrame
val converted = df.map(x => BinaryAnnotation(x.getLong(0), x.getLong(1), x.getInt(2)))
                  .as[BinaryAnnotation]

The process is similar for the other types of annotation data. The converted Spark Dataset is ready to be used with the methods described in the Methods subsection.
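
For example, here is a sketch of the analogous conversion for continuous annotations; the RealAnnotation constructor taking a Double value is an assumption that mirrors the BinaryAnnotation example above, and realAnnotationDataFrame is a hypothetical input DataFrame.

import com.enriquegrodrigo.spark.crowd.types.RealAnnotation

// Same pattern as above, reading the value column as Double
val convertedReal = realAnnotationDataFrame
  .map(x => RealAnnotation(x.getLong(0), x.getLong(1), x.getDouble(2)))
  .as[RealAnnotation]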

In the case of the feature dataset, the requirements are as follows (a small shaping sketch is given after the list):

  • Apart from the features, the data must have an example and a class column.
  • The example column must be of type Long.
  • The class column must be of type Integer or Double, depending on the type of class (discrete or continuous).
  • All features must be of type Double. For discrete features, one can use indicator variables.
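
The following sketch shows one way to meet these requirements; the SparkSession named spark and the feature columns f1 and f2 are assumptions made only for illustration.

import org.apache.spark.sql.functions.col
import spark.implicits._

// Hypothetical raw data: (example, class, f1, f2)
val rawFeatures = Seq(
  (0L, 1, 0.5, 1.0),
  (1L, 0, 0.1, 0.0)
).toDF("example", "class", "f1", "f2")

// Cast the columns to the required types: example as Long, a discrete class as Integer,
// and every feature as Double
val featureData = rawFeatures.select(
  col("example").cast("long"),
  col("class").cast("int"),
  col("f1").cast("double"),
  col("f2").cast("double")
)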

Methods

The methods implemented, as well as the types of annotations that they support, are summarised in the following table:

| Method         | Binary | Multiclass | Real | Reference |
|----------------|--------|------------|------|-----------|
| MajorityVoting | ✓      | ✓          | ✓    |           |
| DawidSkene     | ✓      | ✓          |      | JRSS      |
| IBCC           | ✓      | ✓          |      | AISTATS   |
| GLAD           | ✓      |            |      | NIPS      |
| CGLAD          | ✓      |            |      | IDEAL     |
| Raykar         | ✓      | ✓          | ✓    | JMLR      |
| CATD           |        |            | ✓    | VLDB      |
| PM             |        |            | ✓    | SIGMOD    |
| PMTI           |        |            | ✓    | VLDB2     |

The algorithm name links to the documentation of the implemented method in our application. The Reference column contains a link to where the algorithm was published. As an example, the following code shows how to use the DawidSkene method:

import com.enriquegrodrigo.spark.crowd.methods.DawidSkene
import com.enriquegrodrigo.spark.crowd.types.MulticlassAnnotation

// Dataset of annotations
val df = annotationDataset.as[MulticlassAnnotation]

// Parameters for the method
val eMIters = 10
val eMThreshold = 0.01

// Algorithm execution
val result = DawidSkene(df, eMIters, eMThreshold)
val annotatorReliability = result.params.pi
val groundTruth = result.dataset
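
Since the estimated ground truth is an ordinary Spark Dataset, the usual Spark APIs can be applied to it; a brief follow-up sketch (the output path below is hypothetical):

// Inspect and persist the estimated ground truth with standard Spark operations
groundTruth.show(10)
groundTruth.write.mode("overwrite").parquet("/tmp/dawidskene-ground-truth.parquet")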

The information obtained from each algorithm, as well as the parameters needed by each of them, can be found in the documentation.

Credits

Author:

  • Enrique G. Rodrigo

Contributors:

  • Juan A. Aledo
  • Jose A. Gamez

License

MIT License

Copyright (c) 2017 Enrique González Rodrigo

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

