Data orchestration in deep learning accelerators
Krishna, Tushar
Record type:
Bibliographic - electronic resource : Monograph/item
Title/Author:
Data orchestration in deep learning accelerators / Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar.
Author:
Krishna, Tushar,
Other authors:
Kwon, Hyoukjun,
Extent:
1 online resource (166 p.)
Subject:
Neural networks (Computer science)
Electronic resource:
https://portal.igpublish.com/iglibrary/search/MCPB0006576.html
ISBN:
9781681738697
Data orchestration in deep learning accelerators
[electronic resource] / Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar. - 1 online resource (166 p.) - (Synthesis lectures on computer architecture ; 52).
Includes bibliographical references (pages 131-143).
Access restricted to authorized users and institutions.
This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth in deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of hyperparameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with data orchestration challenges for compressed and sparse DNNs and future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
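The abstract's central claim, that careful on-chip data reuse reduces costly DRAM traffic, can be illustrated with a back-of-the-envelope sketch. This is a hypothetical example (not taken from the book): it compares the DRAM word accesses of a naive matrix-multiply dataflow against one that blocks the computation into T x T tiles held in an on-chip buffer.

```python
# Hypothetical sketch: why on-chip data reuse cuts DRAM traffic.
# For C = A @ B with A (M x K) and B (K x N), a naive accelerator streams
# both operands from DRAM for every multiply-accumulate; a tiled dataflow
# keeps a T x T block of each operand in an on-chip buffer and reuses it.

def dram_accesses_naive(M, K, N):
    # Every MAC reads one element of A and one element of B from DRAM.
    return 2 * M * K * N

def dram_accesses_tiled(M, K, N, T):
    # Classic blocked matmul: each tile is fetched once and reused T times,
    # so operand traffic drops to roughly 2*M*K*N / T DRAM reads.
    return 2 * M * K * N // T

M = K = N = 512
print(dram_accesses_naive(M, K, N))      # 268435456
print(dram_accesses_tiled(M, K, N, 32))  # 8388608 -> 32x fewer DRAM reads
```

The book's dataflow and buffer-hierarchy chapters develop this trade-off in far more detail; the sketch only shows the first-order effect of reuse.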
Mode of access: World Wide Web.
ISBN: 9781681738697
Subjects--Topical Terms:
Neural networks (Computer science)
Index Terms--Genre/Form:
Electronic books.
LC Class. No.: Q342
Dewey Class. No.: 006.3
LDR    02284nmm a2200301 i 4500
001    603281
006    m eo d
007    cr cn |||m|||a
008    211117t20202020cau ob 000 0 eng d
020    $a 9781681738697
020    $a 9781681738703
020    $a 9781681738710
035    $a MCPB0006576
040    $a iG Publishing $b eng $c iG Publishing $e rda
050 00 $a Q342
082 00 $a 006.3
100 1  $a Krishna, Tushar, $e author. $3 899511
245 10 $a Data orchestration in deep learning accelerators $h [electronic resource] / $c Tushar Krishna, Hyoukjun Kwon, Angshuman Parashar, Michael Pellauer, Ananda Samajdar.
264  1 $a San Rafael, California : $b Morgan & Claypool Publishers, $c 2020.
264  4 $c ©2020
300    $a 1 online resource (166 p.)
336    $a text $b txt $2 rdacontent
337    $a computer $b c $2 rdamedia
338    $a online resource $b cr $2 rdacarrier
490 1  $a Synthesis lectures on computer architecture ; $v 52
504    $a Includes bibliographical references (pages 131-143).
506    $a Access restricted to authorized users and institutions.
520 3  $a This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth in deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of hyperparameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. It is well known that the cost of data movement today surpasses the cost of the actual computation; therefore, DNN accelerators require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with data orchestration challenges for compressed and sparse DNNs and future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
538    $a Mode of access: World Wide Web.
650  0 $a Neural networks (Computer science) $3 181982
650  0 $a Machine learning. $3 188639
650  0 $a Data flow computing. $3 348292
655  4 $a Electronic books. $2 local. $3 214472
700 1  $a Kwon, Hyoukjun, $e author. $3 899512
700 1  $a Parashar, Angshuman, $e author. $3 899513
700 1  $a Pellauer, Michael, $e author. $3 899514
700 1  $a Samajdar, Ananda, $e author. $3 899515
830  0 $a Synthesis lectures on computer architecture ; $v 52. $3 899516
856 40 $u https://portal.igpublish.com/iglibrary/search/MCPB0006576.html
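The raw MARC view lists each field as a tag, optional indicators, and $-prefixed subfields. As a rough illustration (a hypothetical helper, assuming the common single-line "TAG IND $a value …" display convention rather than the ISO 2709 binary MARC transmission format), such display lines can be split into structured pieces:

```python
# Hypothetical sketch: parse a display-style MARC line into
# (tag, indicators, subfields). Assumes "TAG IND $a value $b value" layout.
import re

def parse_marc_line(line):
    tag = line[:3]                       # first three chars are the field tag
    rest = line[3:].strip()
    # Text before the first "$" delimiter holds the indicators (may be empty).
    head, *subs = re.split(r"\$", rest)
    indicators = head.strip()
    # Each subfield is a one-letter/digit code followed by its value.
    subfields = [(s[0], s[1:].strip()) for s in subs if s]
    return tag, indicators, subfields

tag, ind, subs = parse_marc_line(
    "650  0 $a Neural networks (Computer science) $3 181982"
)
print(tag, ind, subs)
# 650 0 [('a', 'Neural networks (Computer science)'), ('3', '181982')]
```

Real catalog data would normally be handled with a proper MARC library; this sketch only makes the tag/indicator/subfield structure of the record above explicit.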
Holdings
Barcode: 000000202669
Location: Electronic collection
Circulation category: Book
Material type: E-book
Call number: EB Q342
Use type: Normal
Loan status: On shelf
Holds: 0
Multimedia
Multimedia file: https://portal.igpublish.com/iglibrary/search/MCPB0006576.html