Last Updated: 9/22/2025
Purpose and Scope
This document provides current guidelines for the safe, ethical, and responsible use of institutional data and artificial intelligence (AI) tools within Colorado State University (CSU). These guidelines apply to all individuals handling institutional data or using AI tools in administrative, research, clinical, and educational activities across the CSU system.
These guidelines are designed to evolve alongside technological and regulatory developments, ensuring continuous improvement of the university’s responsible information technology ecosystem.
Institutional Commitment
Colorado State University is committed to empowering innovation while protecting institutional data and maintaining transparency in data use by following applicable federal, state, local, industry, and university policies and standards. As a cutting-edge technology, AI has the potential to enhance productivity, creativity, and efficiency when used responsibly within a framework of strong data governance.
Core Value Guidelines
Beneficence
The use of AI should do no harm. AI should be used to maximize individual and organizational well-being while mitigating risk to the greatest extent possible.
Equity and Fairness
Intentional efforts should be made to identify and mitigate bias and inaccuracies. AI tools should be equally accessible to all members of the campus community and should not be used in decision-making processes where groups of people are treated differently.
Transparency and Documentation
The use of AI should be disclosed clearly and concisely. Documentation about data sources, limitations, generalizability, security, and reliability should be made available.
Security and Privacy
Tolerance for security and privacy risks in the use of AI at CSU is extremely low. All data protection measures must be rigorously followed.
1. Data Classification and Governance
The Colorado State University System (CSUS) has authority over the use of its Data as well as the Data of its Institutions and is the legal custodian of all those data. System and Institution Data are valuable assets, the use of which must be aligned with the System’s strategic goals. The System established a Data Governance Program to formally manage its data assets across all its institutions.
The CSU System Data Governance Policy governs any information, regardless of electronic or printed form or location, which is created, acquired, processed, transmitted, or stored on behalf of the System or an Institution on an Information Technology Resource. This includes data created, acquired, processed, transmitted, or stored by the Institution in environments in which the Institution does not own or operate the technology infrastructure. AI data generated for official university business is included in the scope of that policy. Research Data requires specific and unique data management according to relevant laws, policies, regulations, standards, and specific funding agency requirements. Therefore, Research Data as defined in the policy are exempt from the Data Governance Program. Guidance for the governance of research data is provided by the Institutional Review Board and the Office of the Vice President for Research.
The CSU system uses four data classification levels, which are based on access, privacy, security standards, and the risks associated with improper use. Data must be classified at the highest level based on the risk of improper use and may have only one classification, as levels are mutually exclusive. All data users are responsible for the use and protection of CSU System and institutional data throughout its lifecycle.
1.1 AI Use Requirements
Level 1 (Public)
- AI Use Requirements: Verify data classification and accuracy of AI output before use
Level 2 (Internal)
- AI Use Requirements: Verify data classification before use, adhere to approved CSU data and AI policies, use only CSU-approved tools; AI outputs should not be shared externally without prior approval
Level 3 (Confidential)
- AI Use Requirements: Verify data classification before use, adhere to all applicable policies and guidelines, use only CSU-approved tools with specific contractual data protection guarantees
Level 4 (Restricted)
- AI Use Requirements: Verify data classification before use, adhere to all applicable policies and guidelines, use only CSU-approved tools with the highest level of contractual data protection guarantees; requires additional approval for any AI use
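As a purely illustrative sketch (not a CSU tool or requirement), the Python example below shows one way a unit could express these level-based requirements as a pre-use checklist. The level numbers, field names, and approval flags are hypothetical assumptions; any real determination must follow the Data Governance Program and the approval processes described in this document.

```python
# Hypothetical sketch only: encoding the level-based AI use requirements above
# as a pre-use checklist. Names, levels, and flags are illustrative assumptions,
# not part of any CSU system; actual decisions must follow CSU policy and the
# approvals described in this document.
from __future__ import annotations
from dataclasses import dataclass


@dataclass
class AIUseRequest:
    data_level: int                        # 1 = Public, 2 = Internal, 3 = Confidential, 4 = Restricted
    tool_is_csu_approved: bool             # tool appears on the CSU-approved list
    has_contractual_data_protection: bool  # contractual data protection guarantees in place
    has_additional_approval: bool          # extra approval obtained (required for Level 4)


def ai_use_permitted(req: AIUseRequest) -> tuple[bool, str]:
    """Return (permitted, note) for a proposed AI use of data at the given level."""
    if req.data_level == 1:
        return True, "Level 1 (Public): verify classification and output accuracy before use."
    if not req.tool_is_csu_approved:
        return False, "Levels 2-4 require a CSU-approved tool."
    if req.data_level == 2:
        return True, "Level 2 (Internal): do not share AI outputs externally without prior approval."
    if not req.has_contractual_data_protection:
        return False, "Levels 3-4 require contractual data protection guarantees."
    if req.data_level == 3:
        return True, "Level 3 (Confidential): follow all applicable policies and guidelines."
    if req.data_level == 4:
        if req.has_additional_approval:
            return True, "Level 4 (Restricted): additional approval confirmed; highest protections apply."
        return False, "Level 4 (Restricted): additional approval is required before any AI use."
    return False, "Unknown classification level; classify the data before using AI."


# Example: Level 3 data, approved tool with contractual protections -> permitted
print(ai_use_permitted(AIUseRequest(3, True, True, False)))
```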
2. AI Use Principles
2.1 Ethical Use
- User Responsibility: Users are responsible for ethical use and consequences, ensuring alignment with CSU’s ethical standards and values.
- Harm Prevention: Avoid using AI for purposes that may harm individuals, groups, or society. Ensure AI does not discriminate based on protected characteristics.
- Bias Mitigation: Be aware of potential biases in AI outputs and actively work to identify and correct harmful biases. Report suspected bias to appropriate CSU offices.
2.2 Appropriate Use
- Productivity Enhancement: Use AI tools to enhance productivity, creativity, and efficiency while consulting with relevant stakeholders about permissibility.
- Accuracy Verification: Do not assume AI-generated content is accurate. Verify that critical information is correct and complete.
- Human Oversight: Avoid using AI for tasks requiring human judgment, emotional intelligence, or critical decision-making. AI may inform decisions but must not be the sole basis for high-stakes determinations.
- Prohibited Uses: Do not use AI to generate misleading, harmful, or offensive content. Use of AI for illegal activities is prohibited.
- API and Automation: External tool connections via APIs require formal security review. Autonomous AI agents must be limited to CSU-approved tools.
2.3 Transparency
- Communication Standards: Maintain transparency about AI use in internal and external communications:
  - Notify stakeholders when they interact with AI systems
  - Notify clients/users before AI tool usage and respect decisions to decline
  - Credit AI use in documents, presentations, and academic work
- Content Labeling: Clearly label AI-generated content to distinguish it from human-generated content.
2.4 Accountability
- Policy Compliance: Users are responsible for ensuring their AI use complies with all CSU policies and regulations.
- Incident Reporting: Report misuse or unintended consequences in a timely manner to appropriate CSU offices.
3. Compliance and Legal Considerations
3.1 Regulatory Compliance
- Research Review: All research involving human or animal subjects must undergo appropriate CSU regulatory review (IRB, IACUC, CRB, IBC).
- Legal Awareness: Users are responsible for understanding legal and contractual requirements relevant to their role and ensuring compliance.
3.2 Intellectual Property
- Rights Protection: Respect intellectual property rights and avoid AI-generated content that infringes on others’ rights.
- Ownership Understanding: Understand policies on AI-generated content ownership, often dictated by AI tool terms and conditions.
4. Human Interaction and Reliability
4.1 Human Oversight
Audit procedures with human oversight should monitor misinformation, appropriate data use, and output clarity. Processes must exist for challenging and overriding AI outputs when necessary.
4.2 Reliability and Accuracy
Prior to using AI tools, staff with expert knowledge should perform pilot testing to validate source data and outputs. Ongoing validation should be standard practice.
5. Training and Awareness
- Regular Training: Participate in training on safe and responsible AI use covering data privacy, security, ethical considerations, and compliance.
- Continuous Learning: Stay updated with guideline changes and visit designated CSU AI resources for current information.
- Cultural Development: Encourage responsible AI use culture throughout CSU.
- Sustainability: Consider sustainable AI use practices when possible.
Contacts and Support
- Data Stewardship: Contact the appropriate CSU Data Stewards for data-related questions
- Security: Contact the CSU Chief Information Security Officer for security-related questions
- AI Guidance: Contact the CSU AI Taskforce and Data Governance Committee for AI-related questions
Existing Policies and Regulations
These guidelines complement existing policies including:
- CSU Acceptable Use for Computing and Networking Resources Policy
- CSU Accessibility of Electronic Information and Technologies Policy
- CSU Central Administrative Data Governance Policy
- CSU Information Collection and Personal Records Privacy Policy
- CSU Information Technology Security Policy
- Health Insurance Portability and Accountability Act (HIPAA)
- Family Educational Rights and Privacy Act (FERPA)
- Americans with Disabilities Act (ADA)
- Colorado Open Records Act (CORA)
- State of Colorado SB 24-205
Conclusion
The responsible use of institutional data and AI is crucial for advancing CSU’s mission while safeguarding privacy and minimizing risks. By adhering to these guidelines, users can ensure data is handled securely and AI tools are used safely, ethically, and transparently. Regular review and updates will ensure these guidelines remain current with technological and regulatory developments.