tensorflow/neural-structured-learning

Training neural models with structured signals.

Neural Structured Learning (NSL) is a new learning paradigm to train neural networks by leveraging structured signals in addition to feature inputs. Structure can be explicit as represented by a graph [1,2,5] or implicit as induced by adversarial perturbation [3,4].

Structured signals are commonly used to represent relations or similarity among samples that may be labeled or unlabeled. Leveraging these signals during neural network training harnesses both labeled and unlabeled data, which can improve model accuracy, particularly when the amount of labeled data is relatively small. Additionally, models trained with samples that are generated by adversarial perturbation have been shown to be robust against malicious attacks, which are designed to mislead a model's prediction or classification.

NSL generalizes to Neural Graph Learning [1] as well as to Adversarial Learning [3]. The NSL framework in TensorFlow provides the following easy-to-use APIs and tools for developers to train models with structured signals:

  • Keras APIs to enable training with graphs (explicit structure) and adversarial perturbations (implicit structure); a minimal adversarial example is sketched right after this list.

  • TF ops and functions to enable training with structure when using lower-level TensorFlow APIs.

  • Tools to build graphs and construct graph inputs for training; a sketch of the graph workflow appears after the next paragraph.
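
To give a concrete feel for the Keras API, here is a minimal sketch of adversarial regularization, closely following the MNIST example from the NSL documentation; the dataset, architecture, and hyperparameter values are illustrative rather than prescriptive:

```python
import neural_structured_learning as nsl
import tensorflow as tf

# Prepare data (MNIST is used purely for illustration).
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Create a base model -- sequential, functional, or subclass all work.
model = tf.keras.Sequential([
    tf.keras.Input((28, 28), name='feature'),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])

# Wrap the model with adversarial regularization (implicit structure).
adv_config = nsl.configs.make_adv_reg_config(multiplier=0.2, adv_step_size=0.05)
adv_model = nsl.keras.AdversarialRegularization(model, adv_config=adv_config)

# Compile, train, and evaluate; inputs are passed as a feature/label dictionary.
adv_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
adv_model.fit({'feature': x_train, 'label': y_train}, batch_size=32, epochs=5)
adv_model.evaluate({'feature': x_test, 'label': y_test})
```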

The NSL framework is designed to be flexible and can be used to train any kind of neural network. For example, feed-forward, convolutional, and recurrent neural networks can all be trained using the NSL framework. In addition to supervised and semi-supervised learning (a low amount of supervision), NSL can in theory be generalized to unsupervised learning. Incorporating structured signals is done only during training, so the performance of the serving/inference workflow remains unchanged. Please check out our tutorials for a practical introduction to NSL.
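
As a rough illustration of the graph tooling and of the fact that structure is used only at training time, the sketch below strings together the graph workflow following the patterns in the NSL tutorials. The file paths, feature names, model architecture, and hyperparameters are placeholders, and the construction of the embeddings and of `train_dataset` (a tf.data.Dataset containing the packed neighbor features) is omitted:

```python
import neural_structured_learning as nsl
import tensorflow as tf

# 1. Build a graph from precomputed sample embeddings (paths are placeholders).
nsl.tools.build_graph(['/tmp/nsl/embeddings.tfr'], '/tmp/nsl/graph.tsv',
                      similarity_threshold=0.8)

# 2. Join labeled examples with their graph neighbors into augmented training
#    data ('' means no unlabeled examples are supplied).
nsl.tools.pack_nbrs('/tmp/nsl/train_data.tfr', '', '/tmp/nsl/graph.tsv',
                    '/tmp/nsl/nsl_train_data.tfr',
                    add_undirected_edges=True, max_nbrs=3)

# 3. Wrap a Keras base model with graph regularization for training. The feature
#    name and shapes below are placeholders for your own model.
base_model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,), name='feature'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(2, activation='softmax'),
])
graph_reg_config = nsl.configs.make_graph_reg_config(max_neighbors=3,
                                                     multiplier=0.1)
graph_model = nsl.keras.GraphRegularization(base_model, graph_reg_config)
graph_model.compile(optimizer='adam',
                    loss='sparse_categorical_crossentropy',
                    metrics=['accuracy'])

# train_dataset: a tf.data.Dataset of (feature dict, label) batches built from
# the packed TFRecords above, including the neighbor features (omitted here).
graph_model.fit(train_dataset, epochs=5)

# 4. At serving time, use the unwrapped base model; inference is unchanged by NSL.
base_model.save('/tmp/nsl/saved_model')
```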

Getting started

You can install the prebuilt NSL pip package by running:

pip install neural-structured-learning

For more detailed instructions on how to install NSL as a package or to build it from source in various environments, please see the installation guide.

Note that NSL requires a TensorFlow version of 1.15 or higher. NSL also supports TensorFlow 2.x with the exception of v2.1, which contains a bug that is incompatible with NSL.
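
A trivial check of the installed TensorFlow version can make this requirement concrete before importing NSL:

```python
import tensorflow as tf

# NSL needs TF >= 1.15; any 2.x release works except 2.1.
print(tf.__version__)
```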

Videos and Colab Tutorials

Get a jump-start on NSL by watching our video series on YouTube! It gives a complete overview of the framework and discusses several aspects of learning with structured signals.

  • Overall Framework
  • Natural Graphs
  • Synthetic Graphs
  • Adversarial Learning

We've also created hands-on Colab-based tutorials that will let you interactively explore NSL. You can find these tutorials and more examples under the examples directory.

Contributing to NSL

Contributions are welcome and highly appreciated - there are several ways to contribute to TF Neural Structured Learning:

  • Case studies: If you are interested in applying NSL, consider wrapping up your usage as a tutorial, a new dataset, or an example model that others could use for experiments and/or development. The examples directory could be a good destination for such contributions.

  • Product excellence: If you are interested in improving NSL's product excellence and developer experience, the best way is to clone this repo, make changes directly on the implementation in your local repo, and then send us a pull request to integrate your changes.

  • New algorithms: If you are interested in developing new algorithms for NSL, the best way is to study the implementations of NSL libraries, and to think of extensions to the existing implementation (or alternative approaches). If you have a proposal for a new algorithm, we recommend starting by staging your project in the research directory and including a Colab notebook to showcase the new features. If you develop new algorithms in your own repository, we would be happy to feature pointers to academic publications and/or repositories using NSL from this repository.

Please be sure to review the contribution guidelines.

Research

See our research directory for research projects in Neural Structured Learning.

Featured Usage

Please see the usage page to learn more about how NSL is being discussed and used in the open source community.

Issues, Questions, and Feedback

Please use GitHub issues to file issues, bugs, and feature requests. For questions, please direct them to Stack Overflow with the "nsl" tag. For feedback, please fill out this form; we would love to hear from you.

Release Notes

Please see the release notes for detailed version updates.

References

[1] T. Bui, S. Ravi and V. Ramavajjala. "Neural Graph Learning: Training Neural Networks Using Graphs." WSDM 2018

[2] T. Kipf and M. Welling. "Semi-supervised classification with graph convolutional networks." ICLR 2017

[3] I. Goodfellow, J. Shlens and C. Szegedy. "Explaining and harnessing adversarial examples." ICLR 2015

[4] T. Miyato, S. Maeda, M. Koyama and S. Ishii. "Virtual Adversarial Training: a Regularization Method for Supervised and Semi-supervised Learning." ICLR 2016

[5] D. Juan, C. Lu, Z. Li, F. Peng, A. Timofeev, Y. Chen, Y. Gao, T. Duerig, A. Tomkins and S. Ravi. "Graph-RISE: Graph-Regularized Image Semantic Embedding." WSDM 2020

