1. Introduction
This policy articulates security practices and guidelines for research computing systems.
Research projects often process large amounts of electronic data, both in conducting experiments and in disseminating results. Unlike administrative data processing systems, research systems neither perform administrative functions nor provide critical services for administrative systems. Research systems are also more likely to run unconventional configurations of hardware and software, which may evolve rapidly in response to feedback from experiments and analysis over the course of a project.
Because the information processing world contains hostile actors, researchers need to responsibly safeguard data. In addition, since a significant risk posed by research systems is their unauthorized and malicious use to attack other machines, systems running with reduced defenses should employ commensurately enhanced misuse detection and mitigation measures.
2. Security Principles
Four principles guide the security practices and guidelines for research computing systems: safety, confidentiality, integrity, and availability.
Safety is the principle of “do no harm.” Safety is important because it defines what imposed security measures are seeking to prevent. Although a compromise might not directly impinge on research itself, it might lead to abuse of resources to attack others. Since the speculative and unpolished nature of research systems makes breaches more likely, research system safety often involves taking extra measures to reduce, detect, and ameliorate compromises.
Confidentiality is the property that information is not made available or disclosed to unauthorized individuals, entities, or processes. Confidentiality is important not only for raw research data, but also for notes and metadata. Failing to protect confidentiality can jeopardize the work and privacy of others. Controlling and auditing access is fundamental to confidentiality.
Integrity means maintaining and assuring the accuracy and completeness of data over its entire life cycle. Integrity is important because if data is compromised or deleted, it can corrupt the scientific method and the validity of research results. Research systems should have procedures to maintain the integrity of the data they store and to detect modification.
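One common procedure for detecting modification is to record a cryptographic digest of each data file and periodically verify the collection against that record. The sketch below is illustrative only, not a mandated mechanism; the function names (`build_manifest`, `detect_modifications`) are hypothetical, and SHA-256 is one reasonable choice of digest.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(data_dir: Path) -> dict[str, str]:
    """Record a digest for every file under data_dir."""
    return {
        str(p.relative_to(data_dir)): sha256_of(p)
        for p in sorted(data_dir.rglob("*"))
        if p.is_file()
    }


def detect_modifications(data_dir: Path, manifest: dict[str, str]) -> list[str]:
    """List files whose digest differs from the manifest, plus any
    files that were added or removed since the manifest was built."""
    current = build_manifest(data_dir)
    changed = [name for name, d in manifest.items() if current.get(name) != d]
    changed += [name for name in current if name not in manifest]
    return sorted(changed)
```

A manifest built at ingest time and re-checked on a schedule gives researchers a concrete, auditable record that stored data has not silently changed.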
Availability means that the system and its data can be used and accessed by authorized users when needed. Availability is important because without it, the systems cannot be used in research: systems that become inaccessible, or that must be rebooted, reinstalled, or taken off the network, all exhibit reduced availability.
3. Security Guidelines
The principal investigator (PI) is the primary person responsible for information handling on the project. The PI is responsible for managing information security in accordance with the principles above. Research computing systems must be overseen by a full-time information technology (IT) professional. The level of care required for research computing systems is determined by the highest designated risk classification of any of the project's data.
For projects involving only Low Risk data and systems, research computing systems must be patched as quickly as possible and must conform to the security policies of their IT professionals. Systems with public IP addresses must send security logs to a centralized logging system.
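Forwarding security logs to a central collector can often be done with a host's native syslog daemon, but application-level events can also be sent directly. The sketch below, assuming a syslog-compatible collector reachable over UDP, uses Python's standard `logging.handlers.SysLogHandler`; the logger name and the default `localhost` address are placeholders to be replaced with the institution's actual logging service.

```python
import logging
import logging.handlers


def make_security_logger(loghost: str = "localhost", port: int = 514) -> logging.Logger:
    """Build a logger that forwards security events to a central
    syslog collector over UDP.  `loghost` is a placeholder; substitute
    the address of your institution's centralized logging system."""
    logger = logging.getLogger("research.security")
    logger.setLevel(logging.INFO)
    if not logger.handlers:  # avoid adding duplicate handlers on re-import
        handler = logging.handlers.SysLogHandler(
            address=(loghost, port),
            facility=logging.handlers.SysLogHandler.LOG_AUTH,
        )
        handler.setFormatter(
            logging.Formatter("%(name)s: %(levelname)s %(message)s")
        )
        logger.addHandler(handler)
    return logger


# Example: record an authentication event so it reaches the collector.
log = make_security_logger()
log.info("ssh login accepted for user=%s from=%s", "alice", "203.0.113.7")
```

Centralizing logs in this way preserves evidence of misuse even if the originating system is later compromised or wiped.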
For other risk classifications, research projects may fulfill their security responsibilities under the aegis of the Dean of Research in one of two ways:
- adhering to the information security policies found in Chapter 6 of the Administrative Guide; or
- conducting a risk assessment, then collaborating with local IT professionals and consulting with the Information Security Office (ISO) as needed to document the security policies for the project.
The Dean of Research has the final authority on the interpretation of this policy.