Kartik Hegde
Recent Publications
Mind Mappings: Enabling Efficient Algorithm-Accelerator Mapping Space Search
Mind Mappings is a framework that enables first-order, gradient-descent-based optimization for mapping space search, a core challenge in deploying efficient programmable accelerators.
Kartik Hegde
,
Po-An Tsai
,
Sitao Huang
,
Vikas Chandra
,
Angshuman Parashar
,
Christopher W Fletcher
PDF
Cite
DOI
ExTensor: An Accelerator for Sparse Tensor Algebra
ExTensor is an accelerator for sparse tensor algebra, a key class of workloads that powers crucial areas such as deep learning. The key insight behind the design is to hierarchically eliminate the ineffectual work that arises from sparsity, yielding significant speed-ups.
Kartik Hegde
,
Hadi Asghari-Moghaddam
,
Michael Pellauer
,
Neal Crago
,
Aamer Jaleel
,
Edgar Solomonik
,
Joel Emer
,
Christopher W Fletcher
PDF
Cite
DOI
Buffets: An Efficient and Composable Storage Idiom for Explicit Decoupled Data Orchestration
A key issue in hardware accelerator design is the reusability of designs across different accelerators. With Buffets, we present a reusable, composable, and efficient storage idiom for programmable hardware accelerators.
Michael Pellauer
,
Yakun Sophia Shao
,
Jason Clemons
,
Neal Crago
,
Kartik Hegde
,
Rangharajan Venkatesan
,
Stephen W. Keckler
,
Christopher W Fletcher
,
Joel Emer
PDF
Cite
DOI
Morph: Flexible Acceleration for 3D CNN-based Video Understanding
Morph is a flexible hardware accelerator for 3D CNNs, a key workload in video understanding. The key insight behind the design is a flexible hardware organization that supports different mappings of 3D CNNs, maximizing performance for each layer.
Kartik Hegde
,
Rohit Agrawal
,
Yulun Yao
,
Christopher W Fletcher
PDF
Cite
DOI
UCNN: Exploiting Computational Reuse in Deep Neural Networks via Weight Repetition
A result of reducing precision in deep neural networks is increased repetition of parameters in DNN models. UCNN is a hardware-software co-design approach that uses algebraic reassociation to exploit this weight repetition, significantly improving the efficiency of DNN inference.
Kartik Hegde
,
Jiyong Yu
,
Rohit Agrawal
,
Mengjia Yan
,
Michael Pellauer
,
Christopher W Fletcher
PDF
Cite
DOI