
Selective memory

In a traditional computer, a microprocessor is mounted on a “package,” a small circuit board with a grid of electrical leads on its bottom.

As processors’ transistor counts have gone up, the relatively slow connection between the processor and main memory has become the chief impediment to improving computers’ performance.

In response, chip manufacturers have in the past few years started putting dynamic random-access memory, or DRAM, the type of memory traditionally used for main memory, right on the chip package.

At the recent IEEE/ACM International Symposium on Microarchitecture, researchers from MIT, Intel, and ETH Zurich presented a new cache-management scheme, dubbed Banshee, that improves the data rate of in-package DRAM caches by 33 to 50 percent.

Cache systems usually organize data using something called a “hash table.” Each block of data in the cache is identified by a “tag” derived from its address in main memory. When a processor seeks data with a particular tag, it first feeds the tag to a hash function, which processes it in a prescribed way to produce a new number that designates the data’s location in the cache.

That way, if a processor is relying heavily on data from a narrow range of addresses — if, for instance, it’s performing a complicated operation on one section of a large image — that data is spaced out across the cache so as not to cause a logjam at a single location.
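
To make that concrete, here is a minimal sketch of hashed cache indexing. The block size, slot count, and multiplicative hash constant are illustrative choices, not details from the paper:

```python
# A toy model of hashed cache indexing, assuming 64-byte blocks and a
# 1,024-slot cache; the hash constant is an illustrative stand-in.

BLOCK_BITS = 6    # 64-byte cache blocks
NUM_SLOTS = 1024  # toy cache size

def cache_slot(address: int) -> int:
    """Hash a memory address to a cache slot."""
    tag = address >> BLOCK_BITS  # drop the offset within the block
    return (tag * 2654435761) % (2 ** 32) % NUM_SLOTS

# Blocks from a narrow address range land in scattered slots:
for addr in (0x1000, 0x1040, 0x1080, 0x10c0):
    print(hex(addr), "->", cache_slot(addr))
```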

In earlier designs, the processor would request the first tag stored at a given hash location and, if it’s a match, send a second request for the associated data, so every cache hit costs two round trips to the in-package DRAM.
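
That two-access baseline can be sketched as follows. The dictionaries are stand-ins for the cache’s tag and data arrays, and each dictionary read models one round trip to the in-package DRAM:

```python
# A sketch of the tag-then-data lookup; layout and hash are assumptions.

BLOCK_BITS, NUM_SLOTS = 6, 1024
cache_tags = {}   # slot index -> tag of the block currently stored there
cache_data = {}   # slot index -> that block's data

def cache_slot(tag: int) -> int:
    return (tag * 2654435761) % (2 ** 32) % NUM_SLOTS  # illustrative hash

def lookup(address: int):
    tag = address >> BLOCK_BITS      # the tag identifies the block
    slot = cache_slot(tag)
    if cache_tags.get(slot) != tag:  # DRAM access no. 1: read the stored tag
        return None                  # miss: the request falls through to main memory
    return cache_data[slot]          # DRAM access no. 2: fetch the data itself
```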

Banshee avoids that second access by tracking the cache’s contents elsewhere. Each core, or processing unit, in a chip usually has a table that maps the virtual addresses used by individual programs to the actual addresses of data stored in main memory.

Banshee adds three bits of data to each entry in this table. One bit indicates whether the data at that virtual address can be found in the DRAM cache, and the other two indicate its location relative to any other data items with the same hash index.
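
A hedged sketch of those three bits, with invented field names and an invented routing helper rather than Banshee’s actual encoding:

```python
# Illustrative model of an address-translation entry extended with the
# extra bits described above; names and layout are assumptions.

from dataclasses import dataclass

@dataclass
class TranslationEntry:
    physical_addr: int    # where the block lives in main memory
    in_dram_cache: bool   # bit 1: is the block resident in the DRAM cache?
    way: int              # bits 2-3: position among blocks sharing a hash index

def route(entry: TranslationEntry):
    """Send the request straight to the right memory, with no tag probe."""
    if entry.in_dram_cache:
        return ("dram-cache", entry.way)
    return ("main-memory", entry.physical_addr)
```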

Those entries can become stale when data is moved into or out of the cache, so Banshee also adds a small circuit called a “tag buffer” that records recent remappings. Any request sent to either the DRAM cache or main memory by any core first passes through the tag buffer, which checks to see whether the requested tag is one whose location has been remapped.
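
A rough sketch of such a structure, with no claim to match Banshee’s actual hardware:

```python
# A small table of recently remapped tags, consulted before the cores'
# possibly stale translation bits are trusted; names are assumptions.

class TagBuffer:
    def __init__(self):
        self.remapped = {}  # tag -> (in_dram_cache, way) for recent remappings

    def record(self, tag, in_dram_cache, way):
        """Note a remapping not yet reflected in the cores' tables."""
        self.remapped[tag] = (in_dram_cache, way)

    def check(self, tag, stale_in_cache, stale_way):
        """Return the freshest known location for the requested tag."""
        return self.remapped.get(tag, (stale_in_cache, stale_way))
```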

The researchers’ simulations show that the time required for this one additional address lookup per memory access is trivial compared to the bandwidth savings Banshee affords.

