Responsible AI (RAI) while using Copilot

In the Privacy and Security Guidance for using Microsoft 365 Copilot article, we introduced the S.A.F.E-A.I. principles — a way to help you engage with AI technologies responsibly and confidently. In this article, we’ll take a closer look at each principle, explore what they mean in practice and highlight how they’re being applied at SFU. By understanding and applying these principles, you’ll build the confidence and critical thinking skills needed to evaluate AI tools thoughtfully, ask the right questions and make responsible choices in your work.


Secure

One of the most pressing aspects of responsible AI is addressing privacy and security concerns. As AI systems, particularly generative AI, become more powerful, they introduce new risks to data security, personal privacy and even broader cybersecurity. There is a lot to consider in this section, so we'll outline some of the key concerns and challenges.


Data Privacy and Unintended Exposure

AI systems often train on large data sets that include personal or proprietary information. A major concern is unintended data leakage – models might inadvertently expose sensitive information that was in the training data. For instance, if employees input confidential text into a generative AI service, that data might get used to further train the model and later emerge in another user’s output. 

Generative AI models can “remember” fragments of their training data, raising the risk that private information (like a person's contact info or proprietary code) could be regurgitated in responses. As a precaution, many organizations limit what data can be fed into public AI tools, and AI providers are working on techniques to better filter or forget sensitive training data.
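To make this concrete, here is a minimal sketch of the kind of pre-submission check a unit might run before text is pasted into a public AI tool. The patterns, function name, and sample text are hypothetical and intentionally simplistic; they are not an SFU tool or standard, and pattern matching alone is no substitute for the safeguards described in this article.

```python
import re

# Hypothetical pre-submission check: flag text that appears to contain
# personal or sensitive information before it is pasted into a public AI tool.
# The patterns below are illustrative only and are not exhaustive.
PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "9-digit ID number": re.compile(r"\b\d{9}\b"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of any sensitive-looking patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

if __name__ == "__main__":
    draft = "Summarize this: contact Jane at jane.doe@example.org or 604-555-1234."
    findings = flag_sensitive(draft)
    if findings:
        print("Stop and review; possible personal information:", ", ".join(findings))
    else:
        print("No obvious sensitive patterns found; human judgment is still required.")
```

Even when a check like this finds nothing, the decision about what is appropriate to share with an AI service remains with the person entering the data.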


Intellectual Property and Copyright

Generative AI raises questions about intellectual property (IP). Models trained on vast amounts of internet data might output content that unintentionally reproduces copyrighted material. For example, a generative model might produce a passage of text almost identical to something in its training set (e.g., from a book or article), or an image model might generate something that closely resembles a specific artist’s work. This blurring of originality complicates questions of who owns AI-generated content and whether using certain training data infringes copyright. Organizations deploying generative AI must take care to avoid legal issues arising from AI output. Responsible AI use entails respecting IP rights, perhaps by curating training data or allowing authors to opt out, and by guiding users to apply AI outputs ethically (not passing off AI-generated art as human-made if that violates terms, for instance).


Regulatory Compliance

Privacy laws like FIPPA (in B.C., Canada) or GDPR (in Europe) and various data protection regulations worldwide impose requirements on automated decision-making and data handling. Organizations using AI need to ensure compliance, such as providing explanations for decisions, obtaining consent for personal data processing and enabling individuals to opt out of purely AI-based decisions. Future AI-specific regulations (like Canada’s proposed Artificial Intelligence and Data Act) will likely enforce certain privacy and transparency standards. Staying ahead of these governance requirements is part of a responsible AI strategy for any institution.

SFU Guidance

AI systems must protect personal data and be built with strong safeguards against unauthorized access. Security ensures that information is handled responsibly throughout the system’s lifecycle and that robust security controls are in place to prevent abuse.

  • Use only university-reviewed systems when entering personal information into AI tools, to ensure compliance with institutional privacy and security standards.
  • Conduct Privacy Impact Assessments (PIAs) for new AI solutions, and update existing PIAs when a solution changes significantly.
  • Apply security safeguards proportional to data sensitivity.
  • While evaluating AI solutions, choose privacy-protective technologies.


Accountable

Accountable means that organizations and people remain responsible for the outcomes of AI systems. If an AI system causes harm or makes a mistake, there should be a clear path to address it and a person or team responsible for it. The idea is that AI should not be an excuse to “pass the buck” – those who use AI must ensure it is used correctly and take responsibility for its impacts.

SFU Guidance

Organizations must remain answerable for how AI systems are developed and used. This includes ensuring decisions can be traced, data is used legally, and people—not machines—are ultimately responsible.

  • You remain accountable for content generated by the AI solutions you use, including any impacts when that content is used elsewhere.
  • AI-generated responses must not be treated as a source of authority.
  • If you are an SFU employee, do not use AI to collect personal information from public sources (such as websites or social media) except where specifically authorized, and only after informing the individuals whose data is being collected.
  • Understand that when AI generates information about a specific person—even if it's guessed or inferred—it still counts as personal information and must be handled according to privacy laws.
  • Ensure that any personal information entered into AI systems is handled in accordance with the original Collection Notice and relevant privacy obligations, including the Freedom of Information and Protection of Privacy Act and applicable university policies for your area.
  • When considering AI solutions, identify the impacts they could have on individuals or groups, and evaluate whether each solution is necessary for your purpose, not merely convenient.


Fair

AI systems should treat individuals and groups fairly, avoiding discrimination. This means the AI’s decisions or outputs should not be prejudiced against people on the basis of characteristics like race, gender, or other protected attributes. A known challenge is that AI models can inherit biases present in their training data. Mitigating bias requires careful data set curation and ongoing evaluation. Techniques include using diverse training data, testing outcomes for disparate impact and correcting any biased decision rules.
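As a simple illustration of the “disparate impact” testing mentioned above, the sketch below compares selection rates between groups and applies the commonly cited “four-fifths rule” as a screening threshold. The group labels, counts, and the 0.8 cutoff are illustrative assumptions only, not SFU policy or a complete fairness evaluation.

```python
# Hypothetical illustration of a disparate impact check on an AI-assisted
# screening outcome. Group labels and counts are invented for this example.

def disparate_impact_ratios(selected_by_group: dict, total_by_group: dict) -> dict:
    """Compare each group's selection rate to the highest-rate group.

    Ratios well below 1.0 (commonly, below 0.8 under the "four-fifths rule")
    signal that the outcome deserves a closer fairness review.
    """
    rates = {g: selected_by_group[g] / total_by_group[g] for g in total_by_group}
    highest = max(rates.values())
    return {g: rate / highest for g, rate in rates.items()}

# Example: applications flagged as "qualified" by an AI-assisted screening step.
selected = {"group_a": 45, "group_b": 28}
totals = {"group_a": 100, "group_b": 100}

for group, ratio in disparate_impact_ratios(selected, totals).items():
    status = "needs review" if ratio < 0.8 else "within threshold"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
```

A low ratio does not prove bias on its own; it is a prompt for human review of the underlying data and decision rules, in line with the correction steps mentioned above.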

SFU Guidance

AI should treat individuals equitably and avoid bias in outcomes. Fairness means ensuring that no group is unfairly advantaged or disadvantaged by automated decisions.

  • Ensure the content you curate while using AI does not amplify biases or violate human rights, accessibility, or fairness obligations you have at the university.


Explainable (Transparency)

There is a need for transparency in how AI systems make decisions or produce outputs. For users and organizations to trust AI, they should be able to get some insight into what data was used and how the model works. This is difficult with complex models (the “black box” problem), but efforts like Explainable AI (XAI) aim to provide human-understandable explanations for AI decisions.

Transparency also involves being clear when content is generated or consumed by AI. Being transparent helps ensure accountability and allows individuals to contest or question an AI outcome.

SFU Guidance

Transparency in AI builds trust and enables scrutiny. Users must be able to understand how AI systems work, verify their outputs, and justify decisions informed by AI.

  • You are responsible for verifying responses provided by AI. If you are unable to verify and explain results, then consider not using them.
  • You remain responsible for the execution and transparency of decisions informed or made by AI solutions.
  • Ensure AI-generated content does not impersonate or misrepresent you or others, and does not breach other commitments you have (such as respecting copyrighted content).
  • Be transparent with others about whether the content they are interacting with is AI-generated or human-generated.
  • Mark outputs with significant impact as AI-generated.
  • Clearly inform people when AI is used in decision-making, with recourse options available.


Auditable (Safety & Reliability)

AI should be reliable and safe, meaning it performs as intended under expected conditions and fails gracefully under unexpected conditions. For instance, if an AI system controls a vehicle or provides medical advice, it must undergo rigorous testing and auditing to handle edge cases and avoid dangerous failures.

Safety also involves the AI not causing harm through its output. In generative AI, safety can mean preventing the AI from generating violent or illegal instructions, hate speech, or disinformation.

SFU Guidance

AI systems must be regularly reviewed to ensure reliability and safety. Auditing allows issues to be identified early and supports responsible oversight and continuous improvement.

  • If you are using an AI solution, ensure the results or content it generates adhere to the legal commitments, codes of ethics, and other responsibilities you have in your role at the university. It is incumbent on you to audit your use of AI in a reliable and safe manner; AI itself cannot take responsibility.
  • When developing and supporting AI solutions that will be used by others, understand your responsibility to maintain them in a reliable and safe manner. Establish regular audits and testing to ensure your AI solution does not cause harm (such as disinformation, hate speech, or violent/illegal instructions).
  • Remain informed about the limitations of AI solutions you use and assess whether they are suitable for your use. AI systems can provide inaccurate results if they aren't built for your context.


Inclusive

AI should be designed to empower as many people as possible and avoid marginalizing any group. This principle emphasizes user-centric design – making AI accessible and useful to people with diverse backgrounds and abilities. For example, inclusive AI might involve ensuring that a voice assistant works for different accents and speech patterns, or that generative AI tools are available in many languages. In the context of responsible AI, inclusivity overlaps with fairness, but also encourages broad stakeholder input when designing AI (so that many perspectives are considered).

SFU Guidance

AI systems should be designed to serve people of all backgrounds and abilities. Inclusivity means considering diverse needs to promote accessibility, representation, and equal participation.

  • Prioritize AI solutions that are designed to be accessible and supportive of people with diverse abilities, ensuring everyone can use and benefit from them.
  • Avoid AI solutions that marginalize groups or individuals; ensure that datasets and outputs reflect diversity and equity.