Biography
Dr. Zhang is a professor and director of the computer engineering program in the Department of Electrical and Computer Engineering at Virginia Commonwealth University (VCU). He received his Ph.D. in Computer Science and Engineering from Pennsylvania State University in August 2003. From 2003 to 2010, he worked as an assistant professor, and then as a tenured associate professor, in the Department of Electrical and Computer Engineering at Southern Illinois University Carbondale, before joining VCU in 2010.
Dr. Zhang’s research interests lie at the intersection of compilers and computer architecture, with a particular focus on real-time and high-performance embedded computing systems. He has developed novel compiler analyses and optimizations, architectural techniques, and hybrid compiler/architectural techniques that enable real-time and embedded computing systems to meet stringent constraints on time predictability, energy efficiency, reliability, and performance. His research has been funded by federal and state governments as well as industry, including numerous NSF grants and funding from IBM and Intel.
Industry Expertise (2)
Research
Education/Learning
Areas of Expertise (5)
Real-time and embedded computing systems
Compilers
Computer Architecture
Power-aware computing
Parallel Computing
Accomplishments (2)
Excellence Through Commitment Outstanding Scholar Award for the College of Engineering (professional)
Awarded by Southern Illinois University.
Excellence Through Commitment Undergraduate Teaching Enhancement Award (professional)
Awarded by Southern Illinois University.
Education (3)
Pennsylvania State University: Ph.D., Computer Science and Engineering 2003
Institute of Software, Chinese Academy of Sciences: M.S., Computer Science 2000
Peking University: B.S., Computer Science 1997
Selected Articles (3)
Time-Predictable L2 Cache Design for High-Performance Real-Time Systems
Proceedings of the 16th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, 2010
Unified L2 caches can lead to runtime interferences between instructions and data, making it very hard, if not impossible, to perform timing analysis for real-time systems. This paper proposes a priority cache to achieve both time predictability and high performance for real-time systems. The priority cache allows both the instruction and data streams to share the aggregate L2 cache; however, instructions and data cannot replace each other, which enables independent timing analyses for the instruction cache and the data cache. Our performance evaluation shows that the instruction priority cache outperforms separate L2 caches, both of which can achieve time predictability. On average, the number of execution cycles of the instruction priority cache is only 1.1% more than that of a unified L2 cache.
Loop-Based Instruction Prefetching to Reduce the Worst-Case Execution Time
IEEE Transactions on Computers, 2010
Estimating and optimizing worst-case execution time (WCET) is critical for hard real-time systems to ensure that different tasks can meet their respective deadlines. Recent work has shown that simple prefetching techniques such as Next-N-Line prefetching can enhance both the average-case and worst-case performance; however, the improvement in the worst-case execution time is rather limited and inefficient. This paper studies a loop-based instruction prefetching approach, which exploits program control-flow information to intelligently prefetch the instructions that are most likely to be needed. Our evaluation indicates that loop-based instruction prefetching outperforms Next-N-Line prefetching in both worst-case and average-case performance for real-time applications.
Computing and Minimizing Cache Vulnerability to Transient Errors
IEEE Design & Test of Computers, 2009
Using a cache vulnerability factor to measure the susceptibility of cache memories to transient errors at the architecture level can help designers make appropriate cost and reliability trade-offs early in the design cycle. Two early write-back strategies can also improve the reliability of write-back data caches without compromising performance.