My research targets the memory systems of shared-memory multiprocessors and high-performance uniprocessors. Memory system design is important because it largely determines a computer's sustained performance. My work emphasizes quantitative analysis, often requiring new evaluation techniques, of system-level (not just hardware) performance.
Much of my recent work is part of the Wisconsin Wind Tunnel Project, with Profs. James R. Larus and David A. Wood and many students. The project expects that most future massively parallel computers will be built from workstation-like nodes and programmed in high-level parallel languages, such as HPF, that support a shared address space in which processes uniformly reference data. Our research seeks to develop a consensus about the middle-level interface, below languages and compilers and above system software and hardware. We recently proposed the Tempest interface, which enables programmers, compilers, and program libraries to implement and use message passing, transparent shared memory, and hybrid combinations of the two. We are developing Tempest implementations on a Thinking Machines CM-5, on a cluster of workstations (COW), and on hypothetical hardware platforms. The Wisconsin Wind Tunnel Project takes its name from the way we use our tools to cull the design space of parallel supercomputers, much as aeronautical engineers use conventional wind tunnels to design airplanes.
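To give a flavor of the idea, the following minimal sketch shows how transparent shared memory can be built on top of message passing in user-level software. It is not the actual Tempest API; the names (Node, BlockState, serve_read) and the single-reader protocol are invented here purely for illustration.

```python
# Illustrative sketch only -- hypothetical classes, not the real Tempest
# interface. The point: a user-level fault handler can implement shared
# memory by fetching remote blocks with explicit messages.
from enum import Enum

class BlockState(Enum):
    INVALID = 0
    READONLY = 1

class Node:
    """A node with a home copy of some blocks, a local cache of others,
    and a handler that services access faults by messaging the home node."""
    def __init__(self, node_id, memory):
        self.node_id = node_id
        self.memory = memory   # blocks this node is home for: id -> value
        self.cache = {}        # local copies fetched from elsewhere
        self.state = {}        # block id -> BlockState

    def read(self, block_id, home):
        # On a local miss, a user-level handler fetches the block from its
        # home node -- shared memory realized via message passing.
        if self.state.get(block_id, BlockState.INVALID) is BlockState.INVALID:
            self.cache[block_id] = home.serve_read(block_id)
            self.state[block_id] = BlockState.READONLY
        return self.cache[block_id]

    def serve_read(self, block_id):
        # Home-node handler: reply with the block's current value.
        return self.memory[block_id]

home = Node(0, {"x": 42})
remote = Node(1, {})
print(remote.read("x", home))   # first read fetches via the message path
print(remote.read("x", home))   # second read hits the local copy
```

A hybrid program could use this shared-memory path for irregular data while sending explicit messages for bulk transfers, which is the combination Tempest is designed to support.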
Other recent work, with Madhu Talluri, aims to improve translation lookaside buffer (TLB) and page table performance by clustering aligned groups of base pages. The options require changes to the hardware only (complete-subblocked TLBs), to the operating system only (clustered page tables), or to both (superpages and partial-subblocked TLBs).
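The clustering idea can be sketched as follows. This is a hypothetical model, not a real design: one complete-subblocked TLB entry tags an aligned group of base pages and holds a separate physical page number per base page, so one entry covers the whole group; the group size of 4 is an assumption for illustration.

```python
# Illustrative model (hypothetical structures): a complete-subblocked TLB
# entry maps an aligned group of base pages with one physical page number
# (PPN) per subblock, so the group shares a single TLB entry.

SUBBLOCKS = 4  # base pages per aligned group (assumed for illustration)

class SubblockedTLB:
    def __init__(self):
        self.entries = {}  # group tag -> list of SUBBLOCKS physical pages

    def insert(self, vpn, ppns):
        # vpn must start an aligned group; one entry then covers all
        # SUBBLOCKS consecutive virtual pages.
        assert vpn % SUBBLOCKS == 0 and len(ppns) == SUBBLOCKS
        self.entries[vpn // SUBBLOCKS] = ppns

    def lookup(self, vpn):
        # Any base page in the group hits on the same entry, so clustered
        # mappings need roughly 1/SUBBLOCKS as many TLB entries.
        group, offset = divmod(vpn, SUBBLOCKS)
        ppns = self.entries.get(group)
        return None if ppns is None else ppns[offset]

tlb = SubblockedTLB()
tlb.insert(8, [100, 101, 102, 103])  # map virtual pages 8..11
print(tlb.lookup(9))    # -> 101, served by the same entry as page 8
print(tlb.lookup(12))   # -> None, a different group would miss
```

Unlike a superpage, the physical pages here need not be contiguous, which is why this variant needs hardware changes only; clustered page tables apply the same grouping to the page table itself without touching the TLB.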