Computing to Support Research
Research Computing at Stanford is a joint effort of the Dean of Research and IT Services to build and support a comprehensive program to advance computational research at Stanford. That includes traditional high performance computing (HPC) as well as high-throughput and data-intensive computing.
One of the anchors of this new effort is the construction of a state-of-the-art data center, the Stanford Research Computing Facility (SRCF). A Stanford building located on SLAC’s land, the SRCF provides a highly efficient hub for the physical hosting of high-density compute and storage equipment, along with systems administration and support services. The SRCF opened for production use in November 2013. For more information on the new facility, see the section below.
In addition, Research Computing currently hosts and provides system administration services in a smaller, secure, centrally-managed data center in Forsythe Hall (RCF). As equipment in the RCF is life-cycled, replacement servers will be housed at the SRCF, returning the Forsythe space to non-research computing use.
Contact Ruth Marinshaw (firstname.lastname@example.org) if you would like to explore hosting your new equipment at the SRCF or want to know more about our services and offerings.
The Stanford Research Computing Facility
The Stanford Research Computing Facility (SRCF) provides the campus research community with data center facilities designed specifically to host high performance computing equipment. Supplementing the renovated area of the Forsythe data center, the SRCF is intended to meet Stanford’s research computing needs for the coming years. A Stanford building located on the SLAC campus, the SRCF was completed in the fall of 2013, with production HPC services being offered as of November 2013. The facility and services therein are managed by the Stanford Research Computing Center (SRCC).
Space and Power: The SRCF has 3 megawatts of power and can host 150 racks. While this implies an average rack density of 20kW, the infrastructure can support higher-density compute racks with power consumption requirements from 20 to 100 kW each. Of the estimated 150 racks, 25 compute racks will be for SLAC, 50 for the School of Medicine, and 75 for Stanford’s non-formula schools.
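The average rack density quoted above follows directly from the facility totals; a quick back-of-the-envelope check:

```python
# Check of the rack-density figure quoted above.
total_power_kw = 3_000  # 3 megawatts of facility power
rack_count = 150        # estimated number of racks

average_density_kw = total_power_kw / rack_count
print(average_density_kw)  # 20.0 kW per rack on average
```

Individual compute racks can draw well above this average (20 to 100 kW), so the 20 kW figure is a facility-wide mean, not a per-rack limit.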
The SRCF has a resilient but not redundant power infrastructure. The transmission-grade power delivered to SLAC and the SRCF is UPS- and generator-protected, providing significant assurance in the event of a regional power outage.
Cooling: The building’s design is non-traditional and especially energy efficient. The facility is cooled with ambient air fan systems for 90% of the year. For the hotter days and for equipment needing chilled water, high-efficiency air cooled chillers are available.
Network Connectivity: The SRCF has multiple redundant 10 gigabit networks linking it to the campus backbone, the Internet, Internet2, and other national research networks. Stanford is planning to deploy 100 gigabit network connectivity by 2015. That bandwidth, coupled with the OpenFlow communications protocol (developed at Stanford), will provide unprecedented flexibility and capability in meeting the network transport needs of the research communities using the facility.
Three service models are supported at the SRCF.
- Hosting: a researcher pays an annual hosting fee per rack or half-rack, based on the maximum power draw possible per rack; the researcher is responsible for the management and system administration of the equipment.
- Supported cluster: a researcher pays the Stanford Research Computing Center for system administration and support.
- Shared cluster: researchers pool their funds with each other and, with contribution from the Provost, acquire a larger shared cluster resource. The Provost has provided capital funding in the amount of $1,400,000 for computing equipment to encourage faculty to use the shared SRCC cluster model. This incentive represents access to additional HPC resources beyond those funded by grants and may greatly expand researchers’ computing capacity. Researchers may pay fees for system administration and support.
Note that the SRCF has been designed for hosting high-density racks. Toward this end, vendor pre-racked equipment is the preferred method for deployment. Hosting preference will be given to those researchers with high density, full racks of equipment, in order to make best use of the resources.
SRCC Service and Facility Features
- Assistance in specifying equipment, negotiating pricing, coordinating purchases, and planning deployment into the data center
- Technical specifications and boilerplate facility descriptions for inclusion in proposals
- Secured 24x7 entry
- Monitored temperature and environmental control systems
- Fire detection and fire suppression
For more information
Contact Ruth Marinshaw, email@example.com
The Stanford Research Computing Center (SRCC) partners with ICME to offer a variety of training opportunities around HPC technologies, methods, and tools. During 2014, we will offer training in SAP HANA, CUDA and GPU basics, and Python, as well as an introduction to Stanford HPC resources, among other topics.
For more information, contact us at firstname.lastname@example.org .
Stanford High Performance Computing Resources
Need access to compute resources beyond your desktop? There are a variety of compute clusters run by Stanford schools and departments. For example, the Stanford Research Computing Center (SRCC) manages HPC clusters for the Stanford Center for Genomics and Personalized Medicine, the Economics department, the Statistics department, the Army HPC Center, and the School of Humanities & Sciences, as well as for individual PIs or labs. If you are from one of those units, drop us a note at the email address above and we can get you started. If you are from another school or group at Stanford and need help, we can suggest options and talk with you about our services.
The SRCC also offers access to a small shared campus HPC resource, FarmShare. Open to anyone with a SUNet ID, FarmShare is intended to be a short-term, low-intensity computational resource for students, courses and researchers who are just getting started with computing.
See https://www.stanford.edu/group/farmshare/cgi-bin/wiki/index.php/Main_Page for the details on how to try it out.
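A first FarmShare session typically amounts to logging in with your SUNet ID and either running a short job interactively or handing it to the batch scheduler. The transcript below is purely illustrative; the hostname and scheduler commands are assumptions, so check the wiki linked above for the actual login hosts and submission syntax.

```
$ ssh your_sunet_id@corn.stanford.edu   # hypothetical login host; see the wiki
$ python my_analysis.py                 # small jobs can run interactively
$ qsub -cwd my_job.sh                   # longer jobs go to the batch scheduler
```

Because FarmShare is a shared, low-intensity resource, long-running or memory-hungry work is better suited to the dedicated and shared clusters described above.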
External High Performance Computing Resources
Stanford’s Research Computing service has access to the NSF-sponsored XSEDE high-end compute and storage resources. If your computing needs require resources at a scale beyond what can be met on campus, drop us a note and we can introduce you to XSEDE. We have access to most of the national resources so you can try them out to see if they will meet your needs. If they do, we can work with you to help craft an allocation proposal.
Facility Language for your Grant Proposal
Many program announcements for grant proposals require you to provide a description of local compute capabilities and facilities. We can help you out! Until we get that information posted, drop us a note at email@example.com and we can provide the needed text, tailored for your specific proposal.