Rice University

Research and Computing Resources

RESEARCH RESOURCES

Office of Research Development and Infrastructure

The mission of the Office of Research Development and Infrastructure (ORDI) is to foster Rice faculty members' effectiveness in securing extramural support for their multi-investigator, interdisciplinary scholarly and research activities. ORDI works closely with several research support facilities and strives to enhance Rice's position as a major research university and to garner national and international recognition for its research endeavors. For more information, please visit ORDI.

Fondren Library

 

COMPUTING RESOURCES

CTBP Cluster

This cluster is housed in Rice's off-campus computer operations center. It is essentially a smaller-scale DAVinCI system dedicated specifically to CTBP users, and consists of 12 IBM System x iDataPlex dx360 M4 SE "bricks" (Intel Xeon E5-2660 Sandy Bridge-EP 2.2 GHz (3 GHz Turbo Boost), 20 MB L3 cache, LGA 2011, 95 W, 8-core server processors; 146 GB 15K 6 Gbps SAS 2.5" SFF drives; 64 GB of 2Rx4 1.35 V PC3L memory in 8 GB DIMMs).

Research Computing Support Group
The Research Computing Support Group (RCSG) at Rice University currently manages a collection of computational resources used by faculty, students, and staff affiliated with Rice University. For more information about the clusters, including how to get an account, which system to request, fees, and general support, please visit Rice University IT DIY: Research Computing.

Below is a list of shared computational clusters managed by RCSG that are commonly used by CTBP personnel.

BlueGene/P
Rice University and IBM have partnered to bring the first Blue Gene supercomputer to Texas. The Rice Blue Gene/P is a massively parallel supercomputer featuring 24,576 PowerPC 450 compute cores, each 32-bit and running at 850 MHz. The system has 4 GB of RAM per node and 260 TB of GPFS shared storage. It supports both HPC and HTC (high-throughput/serial computing) workloads via three different modes of kernel operation. The minimum block size allowed is 128 compute nodes (512 cores). For more information, please visit Rice's Research Computing Support Group.
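
To make the block-size rule concrete: the scheduler allocates whole blocks of 128 nodes, so smaller requests are rounded up. The short Python sketch below is illustrative only (the function and constants are hypothetical, not part of any Rice tooling); it maps a requested node count to the block actually reserved, assuming the 4 cores per node implied by 128 nodes = 512 cores.

    import math

    # Illustrative constants taken from the description above:
    # the minimum allocatable block is 128 nodes, 4 PowerPC 450 cores each.
    BLOCK_NODES = 128
    CORES_PER_NODE = 4

    def allocated_block(requested_nodes: int):
        """Return (nodes, cores) actually reserved for a node request."""
        blocks = max(1, math.ceil(requested_nodes / BLOCK_NODES))
        nodes = blocks * BLOCK_NODES
        return nodes, nodes * CORES_PER_NODE

    if __name__ == "__main__":
        for req in (1, 128, 200, 512):
            nodes, cores = allocated_block(req)
            print(f"request {req:>4} nodes -> block of {nodes} nodes ({cores} cores)")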

STIC
STIC stands for Shared Tightly-Integrated Cluster. As opposed to a loosely integrated "farm" (like Sugar), a tightly-integrated parallel cluster is designed to run large multi-node jobs over a fast interconnect. STIC consists of 170 Appro GreenBlade E5530 nodes, each with two quad-core 2.4 GHz Intel Xeon (Nehalem) CPUs, as well as 44 Appro GreenBlade E5650 nodes with two six-core 2.6 GHz Intel Xeon (Westmere) CPUs, giving the system a total of 1920 compute cores. A maximum of 720 compute cores is available to all users; this number is subject to change due to special projects, maintenance tasks, and so on. The remaining cores are part of the research computing condominium. Each node has 12 GB of memory shared by all cores on the node. The system also has three file systems: an 11 TB Panasas volume ($SHARED_SCRATCH) provides fast I/O for running user applications, while an NFS server provides 1 TB for home directories ($HOME) and another 2 TB for group-based allocations ($PROJECTS). The inter-node message-passing fabric is DDR InfiniBand. For more information, please visit Rice's Research Computing Support Group.
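
As a concrete illustration of how a multi-node job might use STIC's file systems, the minimal Python/mpi4py sketch below has each MPI rank write one line to a shared directory under $SHARED_SCRATCH. It assumes mpi4py and an MPI runtime are available on the cluster (not confirmed by this document), and the directory and file names are hypothetical; a job would typically be launched with the site's MPI launcher, e.g. mpiexec.

    import os
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Use the fast Panasas scratch volume described above for run-time output;
    # fall back to /tmp if the variable is not set (illustrative default).
    scratch = os.environ.get("SHARED_SCRATCH", "/tmp")
    outdir = os.path.join(scratch, "mpi_demo")

    if rank == 0:
        os.makedirs(outdir, exist_ok=True)
    comm.Barrier()  # wait until rank 0 has created the output directory

    with open(os.path.join(outdir, f"rank_{rank}.txt"), "w") as fh:
        fh.write(f"Hello from rank {rank} of {size} on {MPI.Get_processor_name()}\n")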

Sugar
Sugar (Shared University Grid at Rice) is Rice's oldest running shared compute cluster. Composed of 134 Sun Microsystems Sun Fire X4150 nodes, Sugar is designed as a compute farm to run serial jobs and multi-core single-node jobs only. Each node contains two quad-core 2.83 GHz Intel Xeon Harpertown CPUs and 16 GB of RAM. A 14 TB Panasas volume ($SHARED_SCRATCH) provides fast I/O for running user applications, while an internal NFS server provides a 1 TB file system for user home directories and another 2 TB for group allocations. For more information, please visit Rice's Research Computing Support Group.
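
Because Sugar runs only serial and single-node multi-core jobs, parallelism stays within one node. The sketch below uses only the Python standard library and is a hedged illustration (the output file name and the 8-core cap are assumptions based on the node description above): it spreads trivial work across the cores of a single node and writes results to $SHARED_SCRATCH.

    import multiprocessing as mp
    import os

    def square(x: int) -> int:
        return x * x

    if __name__ == "__main__":
        # Each Sugar node has two quad-core CPUs, so cap the pool at 8 workers.
        ncores = min(mp.cpu_count(), 8)
        with mp.Pool(processes=ncores) as pool:
            results = pool.map(square, range(32))

        # Write results to the shared scratch volume described above
        # (falling back to /tmp if the variable is not set).
        scratch = os.environ.get("SHARED_SCRATCH", "/tmp")
        with open(os.path.join(scratch, "squares.txt"), "w") as fh:
            fh.write("\n".join(map(str, results)) + "\n")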