SynapseML

Synapse Machine Learning

SynapseML (previously known as MMLSpark) is an open-source library that simplifies the creation of massively scalable machine learning (ML) pipelines. SynapseML provides simple, composable, and distributed APIs for a wide variety of machine learning tasks such as text analytics, vision, anomaly detection, and many others. SynapseML is built on the Apache Spark distributed computing framework and shares the same API as the SparkML/MLLib library, allowing you to seamlessly embed SynapseML models into existing Apache Spark workflows.

With SynapseML, you can build scalable and intelligent systems to solve challenges in domains such as anomaly detection, computer vision, deep learning, text analytics, and others. SynapseML can train and evaluate models on single-node, multi-node, and elastically resizable clusters of computers. This lets you scale your work without wasting resources. SynapseML is usable across Python, R, Scala, Java, and .NET. Furthermore, its API abstracts over a wide variety of databases, file systems, and cloud data stores to simplify experiments no matter where data is located.

SynapseML requires Scala 2.12, Spark 3.4+, and Python 3.8+.
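
Because SynapseML estimators and transformers implement the SparkML API, they can be combined with standard SparkML stages in a single Pipeline. The following is a minimal PySpark sketch, not an official quickstart: it assumes the package is installed as described in the setup section below, and the tiny dataset and column names are purely illustrative.

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from synapse.ml.lightgbm import LightGBMClassifier

# Pull the SynapseML package onto the cluster; the version shown is illustrative
spark = (SparkSession.builder.appName("SynapseMLQuickstart")
         .config("spark.jars.packages", "com.microsoft.azure:synapseml_2.12:1.1.0")
         .getOrCreate())

# A hypothetical toy dataset with two numeric features and a binary label
df = spark.createDataFrame(
    [(1.0, 2.0, 0.0), (3.0, 4.0, 1.0), (5.0, 1.0, 0.0),
     (2.0, 5.0, 1.0), (4.0, 4.0, 1.0), (0.5, 1.5, 0.0)],
    ["f1", "f2", "label"],
).coalesce(1)  # keep the toy data in a single partition

assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
classifier = LightGBMClassifier(featuresCol="features", labelCol="label")

# SynapseML stages compose with SparkML stages in the same Pipeline
model = Pipeline(stages=[assembler, classifier]).fit(df)
model.transform(df).select("f1", "f2", "label", "prediction").show()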

Topics and links:

  • Build: Build Status, codecov, Code style: black
  • Version: Version, Release Notes, Snapshot Version
  • Docs: Website, Scala Docs, PySpark Docs, Academic Paper
  • Support: Gitter, Mail
  • Binder: Binder
  • Usage: Downloads
Table of Contents

  • Features
  • Documentation and Examples
  • Setup and installation
  • Papers
  • Learn More
  • Contributing & feedback
  • Other relevant projects

Features

  • Vowpal Wabbit on Spark: Fast, Sparse, and Effective Text Analytics
  • The Cognitive Services for Big Data: Leverage the Microsoft Cognitive Services at Unprecedented Scales in your existing SparkML pipelines
  • LightGBM on Spark: Train Gradient Boosted Machines with LightGBM
  • Spark Serving: Serve any Spark Computation as a Web Service with Sub-Millisecond Latency
  • HTTP on Spark: An Integration Between Spark and the HTTP Protocol, enabling Distributed Microservice Orchestration
  • ONNX on Spark: Distributed and Hardware Accelerated Model Inference on Spark
  • Responsible AI: Understand Opaque-box Models and Measure Dataset Biases
  • Spark Binding Autogeneration: Automatically Generate Spark bindings for PySpark and SparklyR
  • Isolation Forest on Spark: Distributed Nonlinear Outlier Detection
  • CyberML: Machine Learning Tools for Cyber Security
  • Conditional KNN: Scalable KNN Models with Conditional Queries

Documentation and Examples

For quickstarts, documentation, demos, and examples please see our website.

Setup and installation

First, select the platform you are installing SynapseML on:

Microsoft Fabric

In Microsoft Fabric notebooks, SynapseML is already installed. To change the version, place the following in the first cell of your notebook:

%%configure -f
{
  "name": "synapseml",
  "conf": {
    "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:<THE_SYNAPSEML_VERSION_YOU_WANT>",
    "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
    "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
    "spark.yarn.user.classpath.first": "true",
    "spark.sql.parquet.enableVectorizedReader": "false"
  }
}

Synapse Analytics

In Azure Synapse notebooks, place the following in the first cell of your notebook:

  • For Spark 3.5 Pools:

%%configure -f
{
  "name": "synapseml",
  "conf": {
    "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:1.1.0",
    "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
    "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
    "spark.yarn.user.classpath.first": "true",
    "spark.sql.parquet.enableVectorizedReader": "false"
  }
}

  • For Spark 3.4 Pools:

%%configure -f
{
  "name": "synapseml",
  "conf": {
    "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:1.0.15",
    "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
    "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
    "spark.yarn.user.classpath.first": "true",
    "spark.sql.parquet.enableVectorizedReader": "false"
  }
}

  • For Spark 3.3 Pools:

%%configure -f
{
  "name": "synapseml",
  "conf": {
    "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:0.11.4-spark3.3",
    "spark.jars.repositories": "https://mmlspark.azureedge.net/maven",
    "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind",
    "spark.yarn.user.classpath.first": "true",
    "spark.sql.parquet.enableVectorizedReader": "false"
  }
}

To install at the pool level instead of the notebook level, add the Spark properties listed above to the pool configuration.
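
For reference, the notebook-level configure cell corresponds to Spark properties like the following; this is only a sketch of what the pool configuration might contain (spark-defaults style, version illustrative):

spark.jars.packages com.microsoft.azure:synapseml_2.12:1.1.0
spark.jars.repositories https://mmlspark.azureedge.net/maven
spark.jars.excludes org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind
spark.yarn.user.classpath.first true
spark.sql.parquet.enableVectorizedReader false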

Databricks

To install SynapseML on the Databricks cloud, create a new library from Maven coordinates in your workspace.

For the coordinates use com.microsoft.azure:synapseml_2.12:1.1.0 with the resolver https://mmlspark.azureedge.net/maven. Ensure this library is attached to your target cluster(s).

Finally, ensure that your Spark cluster has at least Spark 3.2 and Scala 2.12. If you encounter Netty dependency issues, please use DBR 10.1.

You can use SynapseML in both your Scala and PySpark notebooks. To get started with our example notebooks, import the following Databricks archive:

https://mmlspark.blob.core.windows.net/dbcs/SynapseMLExamplesv1.1.0.dbc

Python Standalone

To try out SynapseML on a Python (or Conda) installation, you can install Spark via pip with pip install pyspark. You can then use pyspark as in the above example, or from Python:

import pyspark

spark = pyspark.sql.SparkSession.builder.appName("MyApp") \
            .config("spark.jars.packages", "com.microsoft.azure:synapseml_2.12:1.1.0") \
            .getOrCreate()

import synapse.ml

Spark Submit

SynapseML can be conveniently installed on existing Spark clusters via the --packages option. Examples:

spark-shell --packages com.microsoft.azure:synapseml_2.12:1.1.0
pyspark --packages com.microsoft.azure:synapseml_2.12:1.1.0
spark-submit --packages com.microsoft.azure:synapseml_2.12:1.1.0 MyApp.jar
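
MyApp.jar above stands for your own application. If that application is a PySpark script instead of a jar, a minimal sketch of what it might contain follows; the file name, toy data, and column names are purely illustrative.

# MyApp.py -- submit with:
#   spark-submit --packages com.microsoft.azure:synapseml_2.12:1.1.0 MyApp.py
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from synapse.ml.lightgbm import LightGBMRegressor

spark = SparkSession.builder.appName("MyApp").getOrCreate()

# Hypothetical toy data: two numeric features and a numeric label, one partition
df = spark.createDataFrame(
    [(1.0, 2.0, 3.5), (2.0, 0.5, 1.2), (4.0, 1.0, 2.8),
     (3.0, 3.0, 4.1), (0.5, 2.5, 2.0)],
    ["f1", "f2", "label"],
).coalesce(1)
df = VectorAssembler(inputCols=["f1", "f2"], outputCol="features").transform(df)

# A SynapseML estimator is fit and applied like any SparkML estimator
model = LightGBMRegressor(featuresCol="features", labelCol="label").fit(df)
model.transform(df).show()

spark.stop()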

SBT

If you are building a Spark application in Scala, add the following lines to your build.sbt:

libraryDependencies += "com.microsoft.azure" % "synapseml_2.12" % "1.1.0"

Apache Livy and HDInsight

To install SynapseML from within a Jupyter notebook served by Apache Livy, the following configure magic can be used. You will need to start a new session after this configure cell is executed.

Excluding certain packages from the library may be necessary due to current issues with Livy 0.5.

%%configure -f
{
  "name": "synapseml",
  "conf": {
    "spark.jars.packages": "com.microsoft.azure:synapseml_2.12:1.1.0",
    "spark.jars.excludes": "org.scala-lang:scala-reflect,org.apache.spark:spark-tags_2.12,org.scalactic:scalactic_2.12,org.scalatest:scalatest_2.12,com.fasterxml.jackson.core:jackson-databind"
  }
}

Docker

The easiest way to evaluate SynapseML is via our pre-built Docker container. To do so, run the following command:

docker run -it -p 8888:8888 -e ACCEPT_EULA=yes mcr.microsoft.com/mmlspark/release jupyter notebook

Navigate to http://localhost:8888/ in your web browser to run the sample notebooks. See the documentation for more on Docker use.

To read the EULA for using the Docker image, run:

docker run -it -p 8888:8888 mcr.microsoft.com/mmlspark/release eula

R

To try out SynapseML using the R autogenerated wrappers, see our instructions. Note: This feature is still under development and some necessary custom wrappers may be missing.

Building from source

SynapseML has recently transitioned to a new build infrastructure. For detailed developer docs, please see the Developer Readme.

If you are an existing SynapseML developer, you will need to reconfigure your development setup. We now support platform-independent development and better integrate with IntelliJ and SBT. If you encounter issues, please reach out to our support email!

Papers

Learn More

Contributing & feedback

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

See CONTRIBUTING.md for contribution guidelines.

To give feedback and/or report an issue, open a GitHub Issue.

Other relevant projects

Apache®, Apache Spark, and Spark® are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

