In the Privacy and Security Guidance for using Microsoft 365 Copilot article, we introduced the S.A.F.E-A.I. principles — a way to help you engage with AI technologies responsibly and confidently. In this article, we’ll take a closer look at each principle, explore what they mean in practice and highlight how they’re being applied at SFU. By understanding and applying these principles, you’ll build the confidence and critical thinking skills needed to evaluate AI tools thoughtfully, ask the right questions and make responsible choices in your work.
What's Included
Secure (Privacy & Security)
One of the most pressing aspects of responsible AI is addressing privacy and security concerns. As AI systems, particularly generative AI, become more powerful, they introduce new risks to data security, personal privacy and even broader cybersecurity. There is a lot to consider in this section, so we'll outline some of the key concerns and challenges.
Data Privacy and Unintended Exposure
AI systems often train on large data sets that include personal or proprietary information. A major concern is unintended data leakage – models might inadvertently expose sensitive information that was in the training data. For instance, if employees input confidential text into a generative AI service, that data might get used to further train the model and later emerge in another user’s output.
Generative AI models can “remember” fragments of their training data, raising the risk that private information (like a person's contact info or proprietary code) could be regurgitated in responses. As a precaution, many organizations limit what data can be fed into public AI tools, and AI providers are working on techniques to better filter or forget sensitive training data.
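For example, a lightweight pre-filter can redact obvious identifiers before a prompt is sent to a public AI tool. The Python sketch below is a minimal illustration only; the regex patterns, placeholder labels, and example prompt are assumptions for demonstration, not an SFU-provided tool, and would not catch every kind of personal information.

```python
import re

# Minimal, illustrative patterns for obvious identifiers. These are assumptions
# for demonstration only; they are not exhaustive and not an SFU-provided tool.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "sin":   re.compile(r"\b\d{3}[\s-]\d{3}[\s-]\d{3}\b"),  # Canadian SIN-style number
}

def redact(text: str) -> str:
    """Replace matched identifiers with a labelled placeholder before the text
    is sent to any external AI service."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize the email from jane.doe@example.com, phone 604-555-0199."
    print(redact(prompt))
    # Summarize the email from [REDACTED EMAIL], phone [REDACTED PHONE].
```

A filter like this reduces accidental exposure but does not replace judgment: the safest option is still to avoid entering sensitive information into non-reviewed tools at all.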
NOTE: Copilot is an institutionally reviewed AI solution that includes security and privacy guardrails to safeguard data; it's safe to use for work, study and research at SFU. Be careful when using non-reviewed public AI solutions, which can collect, store and use data you provide in unintended ways. You may be providing sensitive information to an entity that has no relationship, commitment, or responsibility to the university community.
Intellectual Property and Copyright
Generative AI raises questions about intellectual property (IP). Models trained on vast internet data might output content that unintentionally copies copyrighted material. For example, a generative model might produce a passage of text almost identical to something in its training set (e.g., from a book or article), or an image model might generate something that resembles a specific artist’s work. This blurring of originality complicates who owns AI-generated content and whether using certain training data infringes copyrights. Organizations deploying generative AI must be careful to avoid legal issues arising from AI output. Responsible AI use entails respecting IP rights – for example, by curating training data or allowing authors to opt out – and guiding users to apply AI outputs ethically (not passing off AI-generated art as human-made if that violates terms, for instance).
NOTE: Copilot is included in Microsoft's Copyright Commitment, under which Microsoft assumes legal responsibility for copyright challenges, within reason. Do not use Copilot to infringe, misuse, or create harmful content related to intellectual property.
Regulatory Compliance
Privacy laws like FIPPA (in B.C., Canada) or GDPR (in Europe) and various data protection regulations worldwide impose requirements on automated decision-making and data handling. Organizations using AI need to ensure compliance, such as providing explanations for decisions, obtaining consent for personal data processing and enabling individuals to opt out of purely AI-based decisions. Future AI-specific regulations (like Canada's proposed Artificial Intelligence and Data Act) will likely enforce certain privacy and transparency standards. Staying ahead of these governance requirements is part of a responsible AI strategy for any institution.
SFU Guidance
- Ensure that any personal information entered into an AI system is used in accordance with the Collection Notice under which that information was collected. If personal information is entered, ensure it is handled responsibly, in accordance with the Freedom of Information and Protection of Privacy Act (RSBC 1996, c. 165) and any related university policy obligations you may have.
- If you are sharing and supporting an AI solution (such as a Copilot Agent), regularly review and ensure it continues to meet legal requirements applicable at SFU. Where appropriate, consult with the SFU Privacy Office to assess legal updates and changes.
Accountable
Accountable means that organizations and people remain responsible for the outcomes of AI systems. If an AI system causes harm or makes a mistake, there should be a clear path to address it and a person or team responsible for it. The idea is that AI should not be an excuse to “pass the buck” – those who use AI must ensure it is used correctly and take responsibility for its impacts.
SFU Guidance
- You remain accountable for content used and generated by the AI systems you use, including the impacts of that content when it is used elsewhere.
- You remain accountable for how decisions made by AI systems are arrived at, carried out, and explained. AI systems cannot be held responsible for any decision making.
Fair
AI systems should treat individuals and groups fairly, avoiding discrimination. This means the AI’s decisions or outputs should not be prejudiced against people on the basis of characteristics like race, gender, or other protected attributes. A known challenge is that AI models can inherit biases present in their training data. Mitigating bias requires careful data set curation and ongoing evaluation. Techniques include using diverse training data, testing outcomes for disparate impact and correcting any biased decision rules.
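One common way to test outcomes for disparate impact is to compare selection rates between groups, for example against the widely used four-fifths rule of thumb. The Python sketch below is a minimal illustration using invented decision records; the function names, threshold, and data are assumptions for demonstration, and a real evaluation would use the model's actual outputs and appropriate statistical tests.

```python
from collections import defaultdict

def selection_rates(records):
    """Return the acceptance rate for each group from (group, accepted) pairs."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, was_accepted in records:
        totals[group] += 1
        accepted[group] += int(was_accepted)
    return {group: accepted[group] / totals[group] for group in totals}

def disparate_impact_ratio(records, protected_group, reference_group):
    """Ratio of the protected group's selection rate to the reference group's.
    Values well below 1.0 suggest the protected group is selected less often."""
    rates = selection_rates(records)
    return rates[protected_group] / rates[reference_group]

if __name__ == "__main__":
    # Hypothetical model decisions: (applicant group, admitted?) -- invented data.
    decisions = [
        ("woman", True), ("woman", False), ("woman", True), ("woman", False),
        ("man", True), ("man", True), ("man", True), ("man", False),
    ]
    ratio = disparate_impact_ratio(decisions, "woman", "man")
    print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50 / 0.75 = 0.67
    # The "four-fifths rule" heuristic flags ratios below 0.8 for closer review.
    print("Flag for review" if ratio < 0.8 else "Within the four-fifths threshold")
```

A low ratio does not by itself prove discrimination, but it is a signal to investigate the model, its training data and the decision process more closely.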
SFU Guidance
- AI solutions may not weigh or consider biases when responding or making decisions. For example, if you’re using AI to predict whether a student is eligible for a program and gender is not a deciding factor, but historically more men have been accepted, the AI solution should not introduce any unfair advantage or disadvantage for a specific group.
- Ensure the content you curate while using an AI does not amplify biases or violate human rights, accessibility, or fairness obligations you have at the university.
Explainable
There is a need for transparency in how AI systems make decisions or produce outputs. For users and organizations to trust AI, they should be able to get some insight into what data was used and how the model works. This is difficult with complex models (the “black box” problem), but efforts like Explainable AI (XAI) aim to provide human-understandable explanations for AI decisions.
Transparency also involves being clear when content is generated or consumed by AI. Being transparent helps ensure accountability and allows individuals to contest or question an AI outcome.
NOTE: Some AI solutions may not detail how an answer is being reached. As part of your question you can ask for an explanation to be included in the answer. For example, "What is the best time to visit Vancouver? Explain your reasoning".
SFU Guidance
Responses from AI solutions can be inaccurate or misleading yet convincing at the same time. When interacting with AI, it’s important to always verify results, check them for accuracy and be transparent about how AI was used.
AI systems also excel at mimicking human-like behaviour, so it’s important to be transparent when AI is being used. This includes disclosing when something is generated by AI and making it clear to people how their data is being used, retained, or accessed when an AI system operates autonomously.
- AI-generated content must not be treated as a source of authority.
- You are responsible for verifying responses provided by AI. If you are unable to verify and explain results, then consider not using them.
- You remain responsible for how decisions informed or made by AI solutions are arrived at, carried out, and explained.
- Ensure AI-generated content doesn't falsely impersonate or misrepresent you or others, or violate other commitments you have (such as respecting copyrighted content).
- Be transparent with others when they are interacting with AI-generated content you created rather than human-generated content.
Auditable (Safety & Reliability)
AI should be reliable and safe, meaning it performs as intended under expected conditions and fails gracefully under unexpected conditions. For instance, if an AI system controls a vehicle or provides medical advice, it must undergo rigorous testing and auditing to handle edge cases and avoid dangerous failures.
Safety also involves the AI not causing harm through its output. In generative AI, safety can mean preventing the AI from generating violent or illegal instructions, hate speech, or disinformation.
SFU Guidance
- If you are using an AI solution, ensure the results or content it generates adhere to the legal commitments, codes of ethics, or other responsibilities you have in your role at the university. It will be incumbent on you to audit your use of AI in a reliable and safe manner. AI itself cannot take responsibility.
- When developing and supporting AI systems that will be used by others, understand your responsibility to maintain them in a reliable and safe manner. This can include implementing regular reviews of an AI system against a set of standards (such as the NIST AI Risk Management Framework, NIST AI 100-1).
Inclusive
AI should be designed to empower as many people as possible and avoid marginalizing any group. This principle emphasizes user-centric design – making AI accessible and useful to people with diverse backgrounds and abilities. For example, inclusive AI might involve ensuring that a voice assistant works for different accents and speech patterns, or that generative AI tools are available in many languages. In the context of responsible AI, inclusivity overlaps with fairness, but also encourages broad stakeholder input when designing AI (so that many perspectives are considered).
SFU Guidance
- Ensure AI solutions do not marginalize groups or individuals.