Reducing Address Translation Overheads with Virtual Caching
Author: Hongil Yoon
Language: en
Total Pages: 126
Release: 2017

This dissertation research addresses the overheads of supporting virtual memory, especially the performance, power, and energy costs of virtual-to-physical address translation via a Translation Lookaside Buffer (TLB). To overcome these overheads, we revisit virtually indexed, virtually tagged caches. In practice, such caches have not been common in commercial microarchitecture designs; the crux of the problem is the complication of dealing with virtual address synonyms.

This thesis makes novel, empirical observations, based on real-world applications, that show temporal properties of synonym accesses. By exploiting these observations, we propose a practical virtual cache design with dynamic synonym remapping (VC-DSR), which effectively reduces the design complications of virtual caches. The proposed approach (1) dynamically decides a unique virtual page number for all the synonymous virtual pages that map to the same physical page, and (2) uses this unique page number to place and look up data in the virtual caches while data from the physical page resides there. Accesses via this unique page number proceed without any intervention. Accesses via other synonymous pages are dynamically detected and remapped to the corresponding unique virtual page number to correctly access data in the cache. Such remapping operations are rare, due to the temporal properties of synonyms, allowing our proposal to achieve most of the performance, power, and energy benefits of virtual caches without software involvement.

We evaluate the effectiveness of the proposed virtual cache design by integrating it into modern CPUs as well as GPUs in heterogeneous systems. For the proposed L1 virtual cache of CPUs, the experimental results show that our proposal saves about 92% of the dynamic energy consumed by TLB lookups and achieves most of the latency benefit (about 99.4%) of ideal (but impractical) virtual caches. For the proposed entire GPU virtual cache hierarchy, we see an average of 77% performance benefit over the conventional GPU MMU.
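The core VC-DSR mechanism described above can be illustrated with a small software model. This is a hedged sketch, not the dissertation's actual hardware design: the class name `VirtualCacheDSR`, the `translate` callback (standing in for the TLB/page-table walk), and the dictionary-based structures are all illustrative assumptions. It shows the two key steps: electing a unique "leading" virtual page number per physical page, and remapping later synonymous accesses to that leading page before the cache lookup.

```python
# Illustrative software model of dynamic synonym remapping (VC-DSR).
# Not the dissertation's hardware design; names and structures are assumptions.

class VirtualCacheDSR:
    def __init__(self):
        self.leading_vpn = {}  # physical page number -> elected leading VPN
        self.remap = {}        # non-leading (synonymous) VPN -> leading VPN
        self.cache = {}        # (leading VPN, offset) -> cached data

    def access(self, vpn, offset, translate):
        """Look up (vpn, offset); `translate` maps VPN -> PPN (the TLB stand-in)."""
        # Remapping step: rare, per the thesis's temporal-synonym observation.
        if vpn in self.remap:
            vpn = self.remap[vpn]
        key = (vpn, offset)
        if key in self.cache:
            return self.cache[key]  # virtual-cache hit: no translation needed
        # Miss path: only now consult the translation machinery.
        ppn = translate(vpn)
        if ppn not in self.leading_vpn:
            self.leading_vpn[ppn] = vpn  # first VPN seen becomes the leading one
        elif self.leading_vpn[ppn] != vpn:
            # Synonym detected: remember the remapping and retry under leading VPN.
            self.remap[vpn] = self.leading_vpn[ppn]
            key = (self.leading_vpn[ppn], offset)
            if key in self.cache:
                return self.cache[key]
        data = f"mem[{ppn}:{offset}]"  # stand-in for fetching from memory
        self.cache[key] = data
        return data


# Usage: two virtual pages (0x10, 0x20) are synonyms for physical page 0xA.
page_table = {0x10: 0xA, 0x20: 0xA}
vc = VirtualCacheDSR()
a = vc.access(0x10, 4, page_table.get)  # miss; 0x10 elected as leading VPN
b = vc.access(0x20, 4, page_table.get)  # synonym detected, remapped, hits same line
assert a == b
```

In this sketch, only the miss path and the first access through a non-leading synonym touch the translation machinery; subsequent hits under the leading VPN bypass it entirely, which is the source of the TLB energy savings the abstract reports.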


Toward Efficient and Protected Address Translation in Memory Management
Language: en
Pages: 205
Authors: Xiaowan Dong
Type: BOOK - Published: 2019

"Virtual memory is widely employed in most computer systems to make programming easy and provide isolation among different applications. Since virtual-to-physical…
Revisiting Virtual Memory
Language: en
Pages: 166
Type: BOOK - Published: 2013

Page-based virtual memory (paging) is a crucial piece of memory management in today's computing systems. However, I find that need, purpose and design constrain…
Efficient Fine-grained Virtual Memory
Language: en
Pages: 252
Authors: Tianhao Zheng (Ph. D.)
Type: BOOK - Published: 2018

Virtual memory in modern computer systems provides a single abstraction of the memory hierarchy. By hiding fragmentation and overlays of physical memory, virtual…
Operating Systems
Language: en
Pages: 714
Authors: Remzi H. Arpaci-Dusseau
Categories: Operating systems (Computers)
Type: BOOK - Published: 2018-09 - Publisher: Createspace Independent Publishing Platform


"This book is organized around three concepts fundamental to OS construction: virtualization (of CPU and memory), concurrency (locks and condition variables), and persistence."