
distributed-deep-learning

Here are 37 public repositories matching this topic...

BigDL: Distributed TensorFlow, Keras and PyTorch on Apache Spark/Flink & Ray

  • Updated Nov 19, 2025
  • Jupyter Notebook
dkeras

Learn applied deep learning from zero to deployment using TensorFlow 1.8+

  • Updated Jul 31, 2018
  • Jupyter Notebook

A Portable C Library for Distributed CNN Inference on IoT Edge Clusters

  • Updated Mar 18, 2020
  • C

Chimera: bidirectional pipeline parallelism for efficiently training large-scale models.

  • Updated Mar 20, 2025
  • Python
sensAI

dnn-distributed

Distributed training of DNNs • C++/MPI Proxies (GPT-2, GPT-3, CosmoFlow, DLRM)

  • Updated Feb 22, 2024
  • C++

SHADE: Enable Fundamental Cacheability for Distributed Deep Learning Training

  • Updated Mar 1, 2023
  • Python

🚨 Prediction of the Resource Consumption of Distributed Deep Learning Systems

  • Updated Feb 6, 2023
  • Python

Intel® End-to-End AI Optimization Kit

  • Updated Jul 18, 2024
  • Jupyter Notebook

Ok-Topk is a scheme for distributed training with sparse gradients. It integrates a novel sparse allreduce algorithm (less than 6k communication volume, which is asymptotically optimal) with the decentralized parallel Stochastic Gradient Descent (SGD) optimizer, and its convergence is proven both theoretically and empirically. A minimal sketch of the top-k sparsification pattern it builds on appears after this entry.

  • Updated Dec 10, 2022
  • Python
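
A short, hedged sketch of the top-k sparsification pattern referenced above, written against mpi4py (an assumption; Ok-Topk's own code is not shown here). The dense allgather stands in for Ok-Topk's far more communication-efficient sparse allreduce, and the helper names topk_sparsify and sparse_average are invented for this illustration.

```python
# Minimal sketch of top-k gradient sparsification with error feedback.
# NOT the Ok-Topk algorithm: the allgather below is a naive stand-in for
# its asymptotically optimal sparse allreduce.
import numpy as np
from mpi4py import MPI  # assumes mpi4py is installed and the script runs under mpirun

comm = MPI.COMM_WORLD

def topk_sparsify(grad, k, residual):
    """Keep the k largest-magnitude entries; fold the dropped mass into the residual."""
    grad = grad + residual                         # error feedback from earlier steps
    idx = np.argpartition(np.abs(grad), -k)[-k:]   # indices of the k largest |g_i|
    vals = grad[idx]
    new_residual = grad.copy()
    new_residual[idx] = 0.0                        # what we send leaves the residual
    return idx, vals, new_residual

def sparse_average(idx, vals, dim):
    """Naive exchange: gather (index, value) pairs from every worker and rebuild a dense average."""
    all_idx = comm.allgather(idx)
    all_vals = comm.allgather(vals)
    out = np.zeros(dim)
    for i, v in zip(all_idx, all_vals):
        out[i] += v
    return out / comm.Get_size()
```

In a training loop, each worker would sparsify its local gradient, exchange the result, and apply the averaged update; the residual carries the dropped coordinates into later steps.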

TensorFlow (1.8+) Datasets, Feature Columns, Estimators and Distributed Training using Google Cloud Machine Learning Engine

  • Updated Jul 24, 2018
  • Jupyter Notebook

Decentralized Asynchronous Training on Heterogeneous Devices

  • Updated Nov 11, 2025
  • Python

Eager-SGD is a decentralized asynchronous SGD. It uses novel partial collective operations to accumulate gradients across all processes. A minimal sketch of the general non-blocking gradient-averaging pattern appears after this entry.

  • Updated Nov 18, 2021
  • Python
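
As a rough illustration of the non-blocking gradient-averaging pattern mentioned above (not eager-SGD's partial collectives themselves, which operate inside the collective at a finer granularity), here is a small mpi4py sketch; the AsyncAverager class and its method names are invented for this example.

```python
# Sketch: launch a non-blocking allreduce per step and fall back to the last
# completed (possibly stale) average when the current one has not finished.
import numpy as np
from mpi4py import MPI  # assumes mpi4py is installed and the script runs under mpirun

comm = MPI.COMM_WORLD

class AsyncAverager:
    def __init__(self, dim):
        self.send = np.zeros(dim)
        self.recv = np.zeros(dim)
        self.last = np.zeros(dim)   # most recent completed average
        self.req = None

    def submit(self, grad):
        """Start a non-blocking allreduce on this step's gradient (one in flight at a time)."""
        if self.req is None:
            self.send[:] = grad
            self.req = comm.Iallreduce(self.send, self.recv, op=MPI.SUM)

    def poll(self):
        """Non-blocking check; if the collective finished, refresh the cached average."""
        if self.req is not None and self.req.Test():
            self.last = self.recv / comm.Get_size()
            self.req = None
        return self.last
```

The design choice illustrated is the fallback: when the in-flight allreduce has not completed by the time the optimizer needs an average, the worker proceeds with the last completed (stale) one instead of waiting.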

WAGMA-SGD is a decentralized asynchronous SGD based on wait-avoiding group model averaging. Synchronization is relaxed by making the collectives externally triggerable, i.e., a collective can be initiated without requiring all processes to enter it. It partially reduces the data within non-overlapping groups of processes, improving the… A minimal sketch of plain (blocking) group model averaging appears after this entry.

  • Updated Jun 30, 2021
  • Python
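
Below is a short, blocking sketch of group model averaging with mpi4py: workers are split into non-overlapping groups and each group averages its parameters every few steps. The wait-avoiding, externally triggerable collectives that define WAGMA-SGD are not reproduced here, and GROUP_SIZE and AVG_EVERY are illustrative values.

```python
# Sketch: non-overlapping averaging groups with a blocking in-group allreduce.
import numpy as np
from mpi4py import MPI  # assumes mpi4py is installed and the script runs under mpirun

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

GROUP_SIZE = 4   # illustrative: 4 workers per averaging group
AVG_EVERY = 10   # illustrative: average once every 10 local SGD steps

# Non-overlapping groups: ranks 0-3, 4-7, ... each get their own sub-communicator.
group_comm = comm.Split(color=rank // GROUP_SIZE, key=rank)

def maybe_average(params, step):
    """Average model parameters inside this worker's group at the chosen interval."""
    if step % AVG_EVERY == 0:
        avg = np.empty_like(params)
        group_comm.Allreduce(params, avg, op=MPI.SUM)
        params[:] = avg / group_comm.Get_size()
    return params
```

WAGMA-SGD's contribution is relaxing the blocking Allreduce so that a group collective can be triggered without every member entering it; this sketch only fixes the grouping and averaging structure.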

Scalable NLP model fine-tuning and batch inference with Ray and Anyscale

  • Updated Apr 6, 2023
  • Jupyter Notebook

Distributed deep learning framework based on pytorch/numba/nccl and zeromq.

  • Updated Aug 10, 2023
  • Python

This repository contains implementations of a wide variety of deep learning projects across computer vision, NLP, federated learning, and distributed learning. These include university projects as well as projects implemented out of personal interest in deep learning.

  • Updated Sep 9, 2022
  • Jupyter Notebook

Collection of resources for automatic deployment of distributed deep learning jobs on a Kubernetes cluster

  • Updated Sep 18, 2018
  • Python

Improve this page

Add a description, image, and links to the distributed-deep-learning topic page so that developers can more easily learn about it.


Add this topic to your repo

To associate your repository with the distributed-deep-learning topic, visit your repo's landing page and select "manage topics."


