Zhou et al., 2019 - Google Patents

Scheduling-efficient framework for neural network on heterogeneous distributed systems and mobile edge computing systems

Zhou et al., 2019

Document ID
18056356495227394993
Authors
Zhou X, Zhang J, Wan J, Zhou L, Wei Z, Zhang J
Publication year
2019
Publication venue
IEEE Access

External Links

Snippet

As the volume of machine learning training data sets and the quantity of model parameters continue to grow, the pattern in which machine learning models are trained alone can no longer accommodate large-scale data environments. However, distributed systems and …
Continue reading at ieeexplore.ieee.org (PDF) (other versions)

Classifications

The classifications are assigned by a computer and are not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the classifications listed.
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for programme control, e.g. control unit
    • G06F9/06 Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Programme initiating; Programme switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for programme control, e.g. control unit
    • G06F9/06 Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061 Partitioning or combining of resources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for programme control, e.g. control unit
    • G06F9/06 Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505 Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for programme control, e.g. control unit
    • G06F9/06 Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
    • G06F9/46 Multiprogramming arrangements
    • G06F9/48 Programme initiating; Programme switching, e.g. by interrupt
    • G06F9/4806 Task transfer initiation or dispatching
    • G06F9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/485 Task life-cycle, e.g. stopping, restarting, resuming execution
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for programme control, e.g. control unit
    • G06F9/06 Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
    • G06F9/46 Multiprogramming arrangements
    • G06F9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 Techniques for rebalancing the load in a distributed system
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F9/00 Arrangements for programme control, e.g. control unit
    • G06F9/06 Arrangements for programme control, e.g. control unit using stored programme, i.e. using internal store of processing equipment to receive and retain programme
    • G06F9/46 Multiprogramming arrangements
    • G06F9/54 Interprogramme communication; Intertask communication
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50 Computer-aided design
    • G06F17/5009 Computer-aided design using simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/30 Information retrieval; Database structures therefor; File system structures therefor
    • G06F17/30286 Information retrieval; Database structures therefor; File system structures therefor in structured data stores
    • G06F17/30289 Database design, administration or maintenance
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F11/00 Error detection; Error correction; Monitoring
    • G06F11/30 Monitoring
    • G06F11/34 Recording or statistical evaluation of computer activity, e.g. of down time, of input/output operation; Recording or statistical evaluation of user activity, e.g. usability assessment
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F2201/00 Indexing scheme relating to error detection, to error correction, and to monitoring
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F15/00 Digital computers in general; Data processing equipment in general
    • G06F15/16 Combinations of two or more digital computers each having at least an arithmetic unit, a programme unit and a register, e.g. for a simultaneous processing of several programmes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F2209/00 Indexing scheme relating to G06F9/00
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00 Subject matter not provided for in other groups of this subclass
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06Q DATA PROCESSING SYSTEMS OR METHODS, SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL, SUPERVISORY OR FORECASTING PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management

Similar Documents

Crankshaw et al. - InferLine: latency-aware provisioning and scaling for prediction serving pipelines
CN105956021B (en) - An automated task-parallel method and system suitable for distributed machine learning
Jiang et al. - Heterogeneity-aware distributed parameter servers
Gu et al. - SHadoop: Improving MapReduce performance by optimizing job execution mechanism in Hadoop clusters
Zhang et al. - An adaptive synchronous parallel strategy for distributed machine learning
CN111079921A (en) - Efficient neural network training and scheduling method based on a heterogeneous distributed system
Xie et al. - Elan: Towards generic and efficient elastic training for deep learning
CN110705716A (en) - Multi-model parallel training method
Geng et al. - Interference-aware parallelization for deep learning workload in GPU cluster
Zhou et al. - Scheduling-efficient framework for neural network on heterogeneous distributed systems and mobile edge computing systems
Shu et al. - Resource demand prediction of cloud workloads using an attention-based GRU model
Cao et al. - SAP-SGD: Accelerating distributed parallel training with high communication efficiency on heterogeneous clusters
Yu et al. - Accelerating distributed training in heterogeneous clusters via a straggler-aware parameter server
Fan et al. - Model aggregation method for data parallelism in distributed real-time machine learning of smart sensing equipment
Wang et al. - OrientStream: A framework for dynamic resource allocation in distributed data stream management systems
Adefemi - What Every Computer Scientist Needs To Know About Parallelization
Yang et al. - Parameter communication consistency model for large-scale security monitoring based on mobile computing
Li et al. - Scheduling distributed deep learning jobs in heterogeneous cluster with placement awareness
Brown et al. - An Experiment in Knowledge-Based Signal Understanding Using Parallel Architectures
Ji et al. - EP4DDL: Addressing straggler problem in heterogeneous distributed deep learning
Koch et al. - SMiPE: estimating the progress of recurring iterative distributed dataflows
Yao et al. - LBB: load-balanced batching for efficient distributed learning on heterogeneous GPU cluster
Li et al. - Optimizing machine learning on Apache Spark in HPC environments
Wang et al. - SingleCaffe: an efficient framework for deep learning on a single node
Nylander et al. - Towards performance modeling of speculative execution for cloud applications
