Efficient Deep Learning and Its Applications

Date Issued
May 1, 2022
Author(s)
Wang, Zi  
Advisor(s)
Husheng Li
Additional Advisor(s)
Dali Wang
Wenjun Zhou
Jens Gregor
Permanent URI
https://trace.tennessee.edu/handle/20.500.14382/28492
Abstract

Deep neural networks (DNNs) have achieved huge successes in various tasks such as object classification and detection, image synthesis, game-playing, and biological developmental system simulation. State-of-the-art performance on these tasks is usually achieved by designing deeper and wider DNNs, at the cost of huge storage size and high computational complexity. However, the resulting over-parameterization constrains the deployment of DNNs on resource-limited devices, such as drones and mobile phones.


To address these concerns, many network compression approaches have been developed, such as quantization, neural architecture search, network pruning, and knowledge distillation. These approaches reduce the sizes and computational costs of DNNs while maintaining their performance.

In this dissertation, we first focus on two of the most popular network compression schemes, i.e., network pruning and knowledge distillation. We aim to (1) develop more efficient network pruning approaches that can remove a large percentage of parameters/FLOPs from DNNs while minimizing the performance degradation, and (2) train compact neural networks with the help of large, pre-trained networks under challenging scenarios in which only limited information about the pre-trained networks is accessible. In the second part, we develop efficient deep learning algorithms for a real-world application, i.e., modeling the biological cell migration process with deep reinforcement learning. The main contributions of this dissertation are summarized as follows.

We propose a novel network pruning approach that removes filters based on a redundancy measure computed in each layer. Unlike existing works that prune the least important filters across all layers, we find that pruning filters from the layer with the most redundancy performs better.
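
As a concrete illustration of the layer-selection idea, here is a minimal PyTorch sketch. It assumes mean pairwise cosine similarity between flattened filter weights as the redundancy measure; the abstract does not specify the dissertation's actual measure, so both the measure and the function names are illustrative.

    # Minimal sketch: pick the conv layer with the most redundant filters.
    # The redundancy measure (mean pairwise cosine similarity) is an
    # assumption for illustration, not necessarily the dissertation's choice.
    import torch

    def layer_redundancy(weight: torch.Tensor) -> float:
        """Mean pairwise cosine similarity among a conv layer's filters."""
        filters = weight.flatten(start_dim=1)        # one row per output filter
        filters = torch.nn.functional.normalize(filters, dim=1)
        sim = filters @ filters.T                    # pairwise cosine similarities
        n = sim.shape[0]
        off_diag = sim.sum() - sim.diagonal().sum()  # drop self-similarity
        return (off_diag / (n * (n - 1))).item()

    def most_redundant_layer(model: torch.nn.Module) -> str:
        """Return the name of the conv layer to prune from next."""
        scores = {
            name: layer_redundancy(m.weight.detach())
            for name, m in model.named_modules()
            if isinstance(m, torch.nn.Conv2d) and m.weight.shape[0] > 1
        }
        return max(scores, key=scores.get)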

We study knowledge distillation, which trains a compact network by mimicking the output of a pre-trained, over-parameterized network, under more challenging scenarios. Specifically, we explore the possibility of learning from the pre-trained model when (1) the training set is not accessible and (2) the pre-trained model returns only the top-1 index rather than class probabilities.
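
For context, standard knowledge distillation matches temperature-softened teacher probabilities, which is impossible when the teacher exposes only a top-1 index. The sketch below contrasts the two settings; it is a generic illustration under those assumptions, not the dissertation's specific method.

    # Generic contrast between standard (soft-label) distillation and the
    # harder setting where the teacher returns only its top-1 prediction.
    import torch
    import torch.nn.functional as F

    def soft_kd_loss(student_logits, teacher_logits, T=4.0):
        """Classic KD loss: KL divergence between temperature-softened outputs.
        Requires access to the teacher's full probability vector."""
        p_teacher = F.softmax(teacher_logits / T, dim=1)
        log_p_student = F.log_softmax(student_logits / T, dim=1)
        return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

    def hard_label_kd_loss(student_logits, teacher_top1):
        """When only the top-1 index is available, treat it as a hard label.
        teacher_top1: LongTensor of class indices from the black-box teacher."""
        return F.cross_entropy(student_logits, teacher_top1)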

We leverage efficient deep learning tools to model the biological cell migration process with reinforcement learning, which reduces the training time so that novel biological mechanisms can be discovered within an acceptable period of time.
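
As a toy illustration of framing cell migration as reinforcement learning, the sketch below trains a tabular Q-learning agent that chooses movement directions on a grid. The environment, state, and reward design here are placeholders; the dissertation's actual formulation is not specified in this abstract.

    # Toy sketch: a migrating cell as a tabular Q-learning agent on a grid.
    # The reward function is supplied by the caller; everything here is a
    # placeholder formulation, not the dissertation's actual environment.
    import random
    from collections import defaultdict

    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # up, down, right, left

    def q_learning_migration(reward_fn, episodes=500, steps=50,
                             alpha=0.1, gamma=0.9, eps=0.1):
        """Learn a movement policy; reward_fn(pos, nxt) scores each step."""
        Q = defaultdict(float)
        for _ in range(episodes):
            pos = (0, 0)                                  # starting position
            for _ in range(steps):
                if random.random() < eps:                 # epsilon-greedy exploration
                    a = random.randrange(len(ACTIONS))
                else:
                    a = max(range(len(ACTIONS)), key=lambda i: Q[(pos, i)])
                dx, dy = ACTIONS[a]
                nxt = (pos[0] + dx, pos[1] + dy)
                best_next = max(Q[(nxt, i)] for i in range(len(ACTIONS)))
                Q[(pos, a)] += alpha * (reward_fn(pos, nxt)
                                        + gamma * best_next - Q[(pos, a)])
                pos = nxt
        return Q

    # Example: reward migration toward increasing x (e.g., a chemoattractant).
    Q = q_learning_migration(lambda pos, nxt: float(nxt[0] > pos[0]))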

Subjects

efficient deep learning

reinforcement learning

network compression

network pruning

knowledge distillation

Disciplines
Signal Processing
Degree
Doctor of Philosophy
Major
Computer Engineering
File(s)
Name: PhD_Thesis__3_.pdf
Size: 23.97 MB
Format: Adobe PDF
Checksum (MD5): 0169579a8b0113df8738fb4a4296fc78