Interim Guidelines for the Use of AI in University Operations

These interim guidelines provide a framework for using artificial intelligence (AI) in administrative operations at the Colorado State University System campuses. They are not intended to address the use of AI in teaching and learning or in research. They are provisional and will be reviewed and updated regularly, keeping University personnel informed as the landscape of AI and higher education evolves. AI applications handling data must comply with existing data classification, handling, and access policies.
The Colorado State University System recognizes that AI may improve the efficiency and effectiveness of administrative operations. As we proactively engage with AI, CSU employees should keep our institutional missions and our commitment to ethical conduct and public accountability at the center of that work.
The guidelines below are heavily informed by the Artificial Intelligence Risk Management Framework (National Institute of Standards and Technology, 2023) and the Principles for Trustworthy AI (Organisation for Economic Co-operation and Development, 2024), as well as by the progressive AI governance work at the University of Wisconsin-Madison and Michigan State University.
| Value | Guidelines |
|---|---|
Beneficence | Foremost, the use of AI should do no harm. AI should be used to maximize individual and organizational well-being while mitigating risk to the greatest extent possible. |
Equity and Fairness | Intentional efforts should be made to identify and mitigate bias and inaccuracies. These efforts necessitate a thorough understanding of the source data on which the AI is based. AI tools should be equally accessible to all members of the campus community. AI tools should not be used in decision-making processes where groups of people are treated differently. |
Transparency and Documentation | The use of AI should be disclosed to students, faculty, and staff clearly and concisely. Documentation about data sources, limitations, generalizability, reliability, etc., should be made available. |
Security and Privacy | Tolerance for security and privacy risks in the use of generative AI is extremely low.<br>• Do not use the same or a similar password as your NetID password to register for and/or access AI tools.<br>• Do not enter or request entry of “restricted” or “private” data, as defined in the IT Security policy, without consulting the Chief Information Security Officer (or their designee) to ensure appropriate data security.<br>• All data provided by or to AI (“AI Data”) is subject to state and federal legal authority. Understand that all collected AI Data related to students is considered part of their educational record and is therefore FERPA-protected.<br>• Ensure that access to AI Data is limited to authorized personnel with approval from the appropriate Data Steward.<br>• AI use should occur in a controlled environment that does not feed CSU data and information back to a central tool for training purposes unless otherwise approved by the Chief Information Security Officer or their designee.<br>• Before evaluating additional AI tools, consider AI tools already contained in platforms subject to existing contracts with CSU (for example, the AI chat/recording summarization tools available in Zoom and Teams). No unapproved third-party AI tools should be used. |
Training and Education | In partnership with the Division of IT, ongoing training should be made available to the campus community about which tools are in use, where they are in use, and the rationale for their use. The campus should continually engage in education about the ethical use of AI and the steps taken to mitigate risk while maximizing benefits. |
Human Interaction | Audit procedures with human oversight should monitor misinformation, appropriate data use, output clarity, etc., and processes should exist for challenging and overriding generative AI outputs, as necessary. Human judgment and decision-making are still required when AI is in use. |
Reliability and Accuracy | Prior to using generative AI, staff with expert knowledge should perform pilot testing to validate that the source data and outputs are correct and replicable. Ongoing validation should be standard practice. |
The CSU AI Taskforce and Data Governance Committee will review these interim guidelines. Because the landscape is changing rapidly, appropriate applications of AI and systemic risk mitigation will continue to be monitored.
Existing Policies and Regulations
These interim guidelines complement existing policies and regulations that should be considered in our use of AI.