About
Our Mascot: The noble groundhog

The Prognostic Lab is an experimental systems research group in the Computer Science Department at the University of Pittsburgh, led by Assoc. Prof. Jack Lange. Our work focuses on the design of core systems software that can fully utilize next-generation hardware environments while remaining amenable to dynamic resource managers. Our research is grounded primarily in high-performance and extreme-scale computing. To broaden the applicability of high-performance systems, we seek to give unmodified applications transparent access to high-performance resources. Our methods are based on the design, implementation, and evaluation of experimental systems.


People
Former Members
Undergraduates
  • Charles Smith -- Built VM configuration framework for Palacios
  • Scott Whipkey -- Investigated Nested Virtualization I/O performance
Active Projects (Source code repositories are here)

    Leviathan Node Manager

    Leviathan is an intra-node management and information service for multi-enclave environments. Its goal is to explore the use of in-memory databases to manage and integrate enclave instances, each running an independent and isolated OS/R. Leviathan also serves to integrate many of our other projects under a common runtime API. At the heart of Leviathan is an information service built on an in-memory NoSQL database.
    (If you are looking for the Hobbes environment download this first, and run ./setup.sh)
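    The idea of a database-backed information service can be illustrated with a minimal sketch. This is not Leviathan's actual code or API; the class and field names below are hypothetical, and a plain Python dictionary stands in for the in-memory NoSQL database.

```python
# Conceptual sketch (not Leviathan's real implementation): an in-memory
# key-value store that tracks per-enclave records, in the spirit of a
# node-local information service for multi-enclave environments.
class EnclaveRegistry:
    def __init__(self):
        self._db = {}  # enclave_id -> metadata record

    def register(self, enclave_id, os_type, cpus, mem_mb):
        """Record a newly launched enclave and its resources."""
        self._db[enclave_id] = {
            "os_type": os_type,  # e.g. "linux" or "kitten"
            "cpus": cpus,
            "mem_mb": mem_mb,
            "state": "running",
        }

    def query(self, enclave_id):
        """Look up one enclave's record (None if unknown)."""
        return self._db.get(enclave_id)

    def find_by_os(self, os_type):
        """Return the ids of all enclaves running a given OS/R."""
        return [eid for eid, rec in self._db.items()
                if rec["os_type"] == os_type]

registry = EnclaveRegistry()
registry.register(0, "linux", cpus=4, mem_mb=4096)
registry.register(1, "kitten", cpus=8, mem_mb=8192)
print(registry.find_by_os("kitten"))  # -> [1]
```

    A real service of this kind must also survive enclave crashes and be reachable from every OS/R on the node, which is where a shared in-memory database earns its keep over per-enclave state.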

    Pisces Co-kernel

    Pisces is a lightweight co-kernel architecture designed to allow multiple native Operating Systems to run concurrently on the same local compute node. Each Operating System instance provides an isolated enclave to a co-located workload while ensuring that its performance is not impacted by other workloads on the same node. Pisces is primarily designed to support in-situ and composed HPC applications, which require strong performance isolation to prevent cross-workload interference.

    Palacios VMM

    Palacios is an OS-independent virtualization library that provides VMM functionality to a host Operating System. Palacios is highly configurable and designed to be embedded into different host operating systems, such as Linux and the Kitten lightweight kernel. It is a non-paravirtualized VMM that makes extensive use of the virtualization extensions in modern Intel and AMD x86 processors. Palacios is designed specifically for HPC environments and has been used to virtualize hardware ranging from desktop workstations to Top 500-ranked Cray supercomputers.
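    Because Palacios relies on these hardware extensions, a node must expose Intel VT-x (the "vmx" CPU flag) or AMD-V (the "svm" flag) before a VMM like it can run guests efficiently. The following is an illustrative, Linux-only check of those flags via /proc/cpuinfo; it is not part of Palacios itself.

```python
# Illustrative check (Linux-only, not Palacios code) for the hardware
# virtualization extensions a non-paravirtualized VMM depends on:
# "vmx" = Intel VT-x, "svm" = AMD-V, as reported in /proc/cpuinfo.
def has_virt_extensions(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            text = f.read()
    except OSError:
        return None  # not Linux, or /proc unavailable

    flags = set()
    for line in text.splitlines():
        if line.startswith("flags"):
            # "flags\t\t: fpu vme ... vmx ..." -> collect the flag words
            flags.update(line.split(":", 1)[1].split())

    if "vmx" in flags:
        return "vmx"
    if "svm" in flags:
        return "svm"
    return None

print(has_virt_extensions())
```

    Note that inside a VM these flags only appear if nested virtualization is enabled, which is one reason nested I/O performance (see People above) is worth studying in its own right.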

    XEMEM Shared Memory

    XEMEM is a cross-enclave, node-local shared memory architecture that allows applications to share memory directly even when they are deployed inside separate OS/R instances. XEMEM provides a common API that is portable across arbitrary enclave topologies and allows unmodified application binaries to be deployed to any OS/R instance based on runtime configuration decisions.
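    Within a single OS, processes can already share memory through standard mechanisms; XEMEM's contribution is extending that export/attach model across separate OS/R instances. As a point of comparison only, the sketch below shows the single-OS analogue using Python's standard-library shared memory (POSIX shared memory under the hood); it does not use XEMEM's actual API.

```python
# Single-OS analogue of the export/attach shared-memory model
# (Python stdlib, *not* XEMEM's API): one side creates and names a
# segment, the other side attaches to it by name.
from multiprocessing import shared_memory

# "Exporting" side: create a named segment and publish data into it.
seg = shared_memory.SharedMemory(create=True, size=64)
seg.buf[:5] = b"hello"

# "Attaching" side: a peer (normally another process) attaches by name
# and reads the shared bytes directly, with no copy through the kernel.
peer = shared_memory.SharedMemory(name=seg.name)
data = bytes(peer.buf[:5])
print(data)  # -> b'hello'

peer.close()
seg.close()
seg.unlink()
```

    XEMEM generalizes the attach-by-name step so that the exporting and attaching sides may live in different enclaves running entirely different OS/Rs.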

    Kitten Lightweight Kernel

    Kitten is a lightweight kernel (LWK) compute node operating system, similar to previous LWKs such as SUNMOS, Puma, Cougar, and Catamount. Kitten distinguishes itself from these prior LWKs by providing a Linux-compatible user environment, a more modern and extensible codebase, and a virtual machine monitor capability via Palacios that allows full-featured guest operating systems to be loaded on demand.

Recent Publications (Full List can be found here)
CLUSTER [2016] (pdf) B. Kocoloski, L. Piga, W. Huang, I. Paul, and J. Lange,
A Case for Criticality Models in Exascale Systems,
Proceedings of the 18th International Conference on Cluster Computing, (CLUSTER 2016)
VEE [2016] (pdf) J. Ouyang, J. Lange, and H. Zheng,
Shoot4U: Using VMM Assists to Optimize TLB Operations on Preempted vCPUs,
Proceedings of the 12th International Conference on Virtual Execution Environments, (VEE 2016)
TPDS [2016] (pdf) B. Kocoloski and J. Lange,
Lightweight Memory Management for High Performance Applications in Consolidated Environments,
IEEE Transactions on Parallel and Distributed Systems, Volume 27, Issue 2, pages 468-480, February 2016
HPDC [2015] (pdf) B. Kocoloski and J. Lange,
XEMEM: Efficient Shared Memory for Composed Applications on Multi-OS/R Exascale Systems,
Proceedings of the 24th International ACM Symposium on High Performance Parallel and Distributed Computing, (HPDC 2015)
ROSS [2015] (pdf) B. Kocoloski, J. Lange, H. Abbasi, D. Bernholdt, T. Jones, J. Dayal, N. Evans, M. Lang, J. Lofstead, K. Pedretti, P. Bridges,
System-Level Support for Composition of Applications,
Proceedings of the 5th International Workshop on Runtime and Operating Systems for Supercomputers, (ROSS 2015)
Completed Projects

    HPMMAP

    HPMMAP (High Performance Memory Mapping and Allocation Platform) is a lightweight memory manager for commodity operating systems. It provides a memory management stack that can support unmodified high performance computing (HPC) applications running on Linux.



All content and images © 2015 Jack Lange