Categories |
DEEP LEARNING, NEURAL, ALGORITHMS
About |
Deep learning is a rapidly growing field of machine learning that has proven successful in many domains, including computer vision, language translation, and speech recognition. Training deep neural networks is resource intensive, requiring compute accelerators such as GPUs as well as large amounts of storage, memory, and network bandwidth. In addition, preparing the training data demands substantial tooling for data cleansing, data merging, ambiguity resolution, and related tasks. Sophisticated middleware abstractions are needed to schedule resources, manage distributed training jobs, and visualize training progress. Likewise, serving large neural network models under low-latency constraints can require middleware to manage model caching, selection, and refinement.
Call for Papers |
This workshop solicits papers from both academia and industry on the state of practice and the state of the art in deep learning infrastructures. Topics of interest include, but are not limited to:
Summary |
DIDL 2020, the Fourth Workshop on Distributed Infrastructures for Deep Learning, will take place at UC Davis, CA. It is a five-day event starting on Dec 09, 2020 (Wednesday) and concluding on Dec 13, 2020 (Sunday). DIDL 2020 falls under the following areas: DEEP LEARNING, NEURAL, and ALGORITHMS. Submissions for this workshop are due by Aug 31, 2020, and authors can expect notification of the results by Sep 28, 2020. Upon acceptance, authors should submit the final version of the manuscript to the official website of the workshop on or before Oct 16, 2020. Events are generally strict about their deadlines, so please check the official event website for the complete list of deadlines and for possible changes before making any travel arrangements.
Credits and Sources |
[1] DIDL 2020 : Fourth Workshop on Distributed Infrastructures for Deep Learning