# spark-crowd

A package for dealing with crowdsourced big data. Website: https://enriquegrodrigo.github.io/spark-crowd/
Crowdsourced data introduces new problems that need to be addressed by machine learning algorithms.
The package uses sbt for building the project, so we recommend installing this tool if you do not have it already.
The simplest way to use the package is adding the following dependency directly to the `build.sbt` file of your project:
```scala
libraryDependencies += "com.enriquegrodrigo" %% "spark-crowd" % "0.2.1"
```
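For context, a complete minimal `build.sbt` for a new project using the library might look like the sketch below. The project name and the Spark version are assumptions, not prescribed by the package:

```scala
// Hypothetical project name
name := "my-crowd-project"

// The published artifact targets Scala 2.11
scalaVersion := "2.11.8"

libraryDependencies += "com.enriquegrodrigo" %% "spark-crowd" % "0.2.1"
// Spark itself is needed as well; the exact version here is an assumption
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.1.0"
```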
If this is not a possibility, you can compile the project and create a `.jar` file, or you can publish the project to a local repository, as explained below.
In the `spark-crowd` folder, one should execute the command

```
> sbt package
```

to create a `.jar` file, which is usually located in `target/scala-2.11/spark-crowd_2.11-0.2.1.jar`. This `.jar` can be added to new projects using this library. In `sbt`, one can add `.jar` files to the `lib` folder.
In the `spark-crowd` folder, one should execute the command

```
> sbt publish-local
```

to publish the library to a local Ivy repository. The library can then be added to the `build.sbt` file of a new project with the following line:

```scala
libraryDependencies += "com.enriquegrodrigo" %% "spark-crowd" % "0.2.1"
```
For running the examples of this package, one can use our Docker image with the latest version of spark-crowd. Let's see how to run the `DawidSkeneExample.scala` file:

```
docker run --rm -it -v $(pwd)/:/home/work/project enriquegrodrigo/spark-crowd DawidSkeneExample.scala
```
For running a spark-shell with the library pre-loaded, one can use:

```
docker run --rm -it -v $(pwd)/:/home/work/project enriquegrodrigo/spark-crowd
```
One can also generate a `.jar` file as seen previously and use it with `spark-shell` or `spark-submit`. For example, with `spark-shell`:

```
spark-shell --jars spark-crowd_2.11-0.2.1.jar -i DawidSkeneExample.scala
```
This package makes extensive use of the Spark DataFrame and Dataset APIs. The latter takes advantage of typed rows, which is beneficial for debugging purposes, among other things. As annotation data sets usually have a fixed structure, the package includes types for three annotation data sets (binary, multiclass, and real annotations), all of them with the following structure:
| example | annotator | value |
|---------|-----------|-------|
| 1       | 1         | 0     |
| 1       | 2         | 1     |
| 2       | 2         | 0     |
| ...     | ...       | ...   |
So the user needs to provide the annotations using these typed data sets to apply the learning methods. This is usually simple if the user has all the information above in a Spark DataFrame:
- The `example` variable should be in the range `[0..number of examples]`
- The `annotator` variable should be in the range `[0..number of annotators]`
- The `value` variable should be in the range `[0..number of classes]`
```scala
import com.enriquegrodrigo.spark.crowd.types.BinaryAnnotation

// Convert an untyped DataFrame of annotations into a typed Dataset
val df = annotationDataFrame
val converted = df.map(x => BinaryAnnotation(x.getLong(0), x.getLong(1), x.getInt(2)))
                  .as[BinaryAnnotation]
```
The process is similar for the other types of annotation data. The `converted` Spark Dataset is ready to be used with the methods commented on in the Methods subsection.
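The `annotationDataFrame` used above could be obtained, for example, from a CSV file. The following is a minimal sketch, in which the file name and the explicit casts are assumptions rather than package requirements:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col

val spark = SparkSession.builder.getOrCreate()

// Hypothetical CSV with columns: example, annotator, value.
// Columns are cast explicitly so that the getLong/getInt calls
// in the conversion above match the underlying column types.
val annotationDataFrame = spark.read
  .option("header", "true")
  .csv("data/binary-annotations.csv")
  .select(col("example").cast("long"),
          col("annotator").cast("long"),
          col("value").cast("int"))
```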
In the case of the feature dataset, the requisites are that:

- Apart from the features, the data must have an `example` and a `class` column (a typed sketch is shown after this list).
- The example must be of type `Long`.
- The class must be of type `Integer` or `Double`, depending on the type of class (discrete or continuous).
- All features must be of type `Double`. For discrete features one can use indicator variables.
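As an illustration, a feature dataset meeting these requirements could be typed as below. The case class name, the feature names, and the sample values are hypothetical, not part of the package:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

// Hypothetical row type: an example id, two continuous features,
// and a discrete class label. Backticks are needed because
// `class` is a reserved word in Scala.
case class FeaturePoint(example: Long, f1: Double, f2: Double, `class`: Int)

// A small in-memory dataset with the required column types
val features = Seq(
  FeaturePoint(0L, 0.5, 1.2, 0),
  FeaturePoint(1L, 0.1, 3.4, 1)
).toDS()
```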
The methods implemented, as well as the types of annotations that they support, are summarised in the following table:
| Method         | Binary | Multiclass | Real | Reference |
|----------------|--------|------------|------|-----------|
| MajorityVoting | ✅     | ✅         | ✅   |           |
| DawidSkene     | ✅     | ✅         |      | JRSS      |
| IBCC           | ✅     | ✅         |      | AISTATS   |
| GLAD           | ✅     |            |      | NIPS      |
| CGLAD          | ✅     |            |      | IDEAL     |
| Raykar         | ✅     | ✅         | ✅   | JMLR      |
| CATD           |        |            | ✅   | VLDB      |
| PM             |        |            | ✅   | SIGMOD    |
| PMTI           |        |            | ✅   | VLDB2     |
The algorithm name links to the documentation of the implemented method in our application. The Reference column contains a link to where the algorithm was published. As an example, the following code shows how to use the `DawidSkene` method:
```scala
import com.enriquegrodrigo.spark.crowd.methods.DawidSkene
import com.enriquegrodrigo.spark.crowd.types.MulticlassAnnotation

// Dataset of annotations
val df = annotationDataset.as[MulticlassAnnotation]

// Parameters for the method
val eMIters = 10
val eMThreshold = 0.01

// Algorithm execution
val result = DawidSkene(df, eMIters, eMThreshold)
val annotatorReliability = result.params.pi
val groundTruth = result.dataset
```
The information obtained from each algorithm, as well as the parameters needed by each of them, can be found in the documentation.
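As a second, simpler example, `MajorityVoting` needs no parameters. The sketch below assumes the `transformMulticlass` entry point described in the package documentation and reuses the annotations dataset from the previous example:

```scala
import com.enriquegrodrigo.spark.crowd.methods.MajorityVoting

// Each example is assigned its most frequently annotated class
val mvGroundTruth = MajorityVoting.transformMulticlass(df)
```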
Author:
- Enrique G. Rodrigo
Contributors:
- Juan A. Aledo
- Jose A. Gamez
MIT License
Copyright (c) 2017 Enrique González Rodrigo
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.