Privacy and Security Guidance for using Microsoft 365 Copilot


How your data is protected while using Microsoft 365 Copilot

Microsoft 365 Copilot and Copilot Chat are covered by the same enterprise data protection (EDP) that protects your emails and documents in OneDrive/SharePoint, so you can confidently work using the privacy and security features that safeguard your data.

While using Copilot:

  • Your privacy is preserved: Your data is only used to support your work, never to train AI models. Privacy protections align with global standards like GDPR and ISO/IEC 27018.

  • SFU organizational policies apply: Copilot respects your existing access controls (such as sensitivity labels and sharing permissions), so you're in control of the content you want to share with it. If you don't have access to something, Copilot won't have access to it either.

  • You're guarded against AI risks: Protections are in place to help prevent harmful content, prompt injections, and copyright issues, so you can use Copilot confidently.

  • Your data powers your experience, not the AI model: Prompts, responses, and data are accessed through the same mechanism (Microsoft Graph) that protects your emails and content to generate relevant answers for you, but are not used to train foundation models.

 

Responsible Use of Copilot at SFU

As you use Copilot (or any AI solution), it's important to keep in mind that these tools are designed to be very human-like but are not human. They may not weigh the same ethical implications or commitments that you might, and they do not carry the same responsibilities.

During your interactions, remain critical of results and understand the strengths, weaknesses, and limitations of these tools. Below are several guiding principles from an SFU perspective that can help you engage with AI responsibly. Also see our deep-dive into these topics so you can use AI in a confident and thoughtful way.

 

S.A.F.E.-A.I.

Secure

AI systems must protect personal data and be built with strong safeguards against unauthorized access. Security ensures that information is handled responsibly throughout the system’s lifecycle and that robust security controls are in place to prevent abuse.

  • Use only university-reviewed systems when entering personal information into AI tools, to ensure compliance with institutional privacy and security standards.
  • Conduct Privacy Impact Assessments (PIAs) for new AI solutions, and update PIAs when a solution changes significantly.
  • Apply security safeguards proportional to data sensitivity.
  • While evaluating AI solutions, choose privacy-protective technologies.

Accountable

Organizations must remain answerable for how AI systems are developed and used. This includes ensuring decisions can be traced, data is used legally, and people — not machines — are ultimately responsible.

  • You remain accountable for content generated by AI solutions you use, including the impacts of its use elsewhere.
  • AI-generated content must not be treated as a source of authority.
  • If you are an SFU employee, do not use AI to collect personal information from public sources (such as websites or social media) except where specifically authorized, and only after informing the individuals whose data is being collected.
  • Understand that when AI generates information about a specific person—even if it's guessed or inferred—it still counts as personal data and must be handled according to privacy laws.
  • Ensure that any personal information entered into AI systems is handled in accordance with the original Collection Notice and relevant privacy obligations, including the Freedom of Information and Protection of Privacy Act or applicable university policies for your area.
  • When considering AI solutions, identify the impacts they could have on individuals or groups. Evaluate whether a solution is necessary for your purpose, not merely convenient.

Fair

AI should treat individuals equitably and avoid bias in outcomes. Fairness means ensuring that no group is unfairly advantaged or disadvantaged by automated decisions.

  • Ensure the content you curate while using AI does not amplify biases or violate human rights, accessibility, or fairness obligations you have at the university.

Explainable (Transparency)

Transparency in AI builds trust and enables scrutiny. Users must be able to understand how AI systems work, verify their outputs, and justify decisions informed by AI.

  • You are responsible for verifying responses provided by AI. If you are unable to verify and explain results, then consider not using them.
  • You remain responsible for the execution and transparency of decisions informed or made by AI Solutions.
  • Ensure AI-generated content doesn't falsely impersonate or misrepresent you or others, or breach other commitments you have (such as respecting copyrighted content).
  • Be transparent with others about whether the content they are interacting with is AI-generated or human-generated.
  • Mark outputs with significant impact as AI-generated.
  • Clearly inform people when AI is used in decision-making, with recourse options available.

Auditable (Safety & Reliability)

AI systems must be regularly reviewed to ensure reliability and safety. Auditing allows issues to be identified early and supports responsible oversight and continuous improvement.

  • If you are using an AI solution, ensure results or content it generates adheres to legal commitments, code of ethics, or other responsibilities you have in your role at the university. It will be incumbent on you to audit your use of AI in a reliable and safe manner. AI itself cannot take responsibility.
  • When developing and supporting AI solutions that will be used by others, understand your responsibility to maintain them in a reliable and safe manner. Establish regular audits and testing to ensure your AI solution does not cause harm (such as disinformation, hate speech, or violent/illegal instructions).
  • Remain informed about the limitations of AI solutions you use and assess whether they are suitable for your use. AI systems can provide inaccurate results if they aren't built for your context.

Inclusive

AI systems should be designed to serve people of all backgrounds and abilities. Inclusivity means considering diverse needs to promote accessibility, representation, and equal participation.

  • Prioritize AI solutions that are designed to be accessible and supportive of people with diverse abilities, ensuring everyone can use and benefit from them.
  • Avoid AI marginalizing groups or individuals; ensure datasets and outputs reflect diversity and equity.

 

Data hygiene while using Copilot

When you're using Microsoft 365 and tools like Copilot, it's easy to forget how much information you're actually sharing and storing. That’s why it’s important to keep your data clean and organized. By being mindful about how you share files, label sensitive content, and manage old documents, you can make sure you're working safely and smartly. Here are a few simple ways to stay on top of your data hygiene while using Microsoft 365.

 

Guarding Against Oversharing

Microsoft 365 is a great platform for collaboration and productivity; however, content owners may inadvertently share more content than they intend (often called "oversharing"). Consider the following when sharing content on the Microsoft 365 platform:

  • Pick the right level of access for each link you make when you share a document. For example, you can share your work with anyone, only with people in your organization, or only the people you choose.
  • If you create "Anyone" links that can be shared publicly, set an expiry date on the link. This means your links will stop working after a time you define, which helps ensure that content isn't shared forever and reduces the burden of managing links overall.
  • Regularly review your links using the "My Content" feature in the M365 portal. Under the "Shared" tab, you can filter through all the links you've made and who can use them. You can also change or delete any links that you don't need or want anymore.
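If you manage sharing links programmatically (for example, in a departmental automation script), the expiry guidance above can be applied through the Microsoft Graph `createLink` action on a drive item. The sketch below only builds the request endpoint and body; the drive/item IDs and 30-day default lifetime are illustrative assumptions, and you would still need a valid Graph access token to send the request.

```python
from datetime import datetime, timedelta, timezone

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_create_link_request(drive_id, item_id, days_valid=30):
    """Build the endpoint URL and JSON body for Microsoft Graph's
    `createLink` action: a read-only "Anyone" link that expires.
    drive_id / item_id are placeholders for your own IDs."""
    expiry = datetime.now(timezone.utc) + timedelta(days=days_valid)
    endpoint = f"{GRAPH_BASE}/drives/{drive_id}/items/{item_id}/createLink"
    body = {
        "type": "view",        # read-only access
        "scope": "anonymous",  # an "Anyone" link, which supports expiry
        "expirationDateTime": expiry.strftime("%Y-%m-%dT%H:%M:%SZ"),
    }
    return endpoint, body

# Example: a link for a hypothetical file that stops working after 7 days.
endpoint, body = build_create_link_request("b!exampleDrive", "01EXAMPLEITEM",
                                           days_valid=7)
```

You would then POST `body` to `endpoint` with your access token in the `Authorization` header; organization-scoped links follow your tenant's sharing policies instead of an explicit expiry.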

For an in-depth overview of how each link works while sharing information see: How shareable links work in OneDrive and SharePoint in Microsoft 365.

 

Classifying Sensitive Data

As an AI companion, Copilot Chat can interact with documents and content that you share with it or have permission to view while using "Work" mode with the Microsoft 365 Copilot version. As a result, it may review documents that contain sensitive information such as:

  • Personal Information: Such as names, addresses, phone numbers, personal email addresses, social security numbers, etc.
  • Confidential or Sensitive Content: Including proprietary business information, financial details, legal documents, or any other information intended to be kept private

For documents that contain sensitive information, practice SFU's guidelines for data. This can include:

  • Data minimization: Where documents and content only contain the amount of data needed for a specific task.
  • Principle of least privilege: Where data is only shared with other SFU employees in alignment with their role and duties.

 

Practicing Data Lifecycle Management

Regularly review and remove documents or content that are no longer needed in Microsoft 365. For documents that need to be archived and retained long term for legal reasons, store them in the appropriate location identified by your department. For example, store the final version of a Word document in a network file share or other system that has been identified as the archive space for your area.

For more guidance about file lifecycle management in Microsoft 365, see the following recommendations from the SFU Archives and Records Management department:

 

Advisory Notice from the SFU Privacy Office on the use of Microsoft 365 Copilot

In using this service, you agree to limit the upload, prompting, submission, or provision of access to documents or information containing personal (non-business contact) or confidential information to this Microsoft 365 Copilot AI companion to what is strictly necessary. We urge all users to exercise caution and ensure that any personal information disclosed or used within Microsoft 365 Copilot is handled in strict accordance with the Collection Notice under which the information was originally collected.

  • For greater clarity, personal information may include student names, private non-business addresses, non-work-related phone numbers, non-work-related email addresses, social security numbers, employment or education history, etc. For a more expansive list of personal information data elements, please see: https://www.sfu.ca/content/dam/sfu/policies/files/information_policies/I10-11/I10.11 Schedule 1 - May 29 2021.pdf
  • For greater clarity, confidential or sensitive content includes regulated data, proprietary business information, financial details, legal documents, or any other information that is confidential in nature regarding the university.
     

Please avoid relying on any responses from this Microsoft 365 Copilot AI companion to make decisions concerning individuals unless you have verified the accuracy of the information provided to, and statements provided by, this Microsoft 365 Copilot AI companion.

Should you have any inquiries or concerns regarding the appropriate use of this platform, please contact the SFU Microsoft 365 team via the ITS Service Hub at https://servicehub.sfu.ca/.


We would like to caution users about the responsible use of Microsoft 365 Copilot AI. While AI enhances search, productivity, and the user experience, it is essential to be mindful of its potential consequences. Please consider the following when providing prompts to the AI:

  1. Ensure that personal information entered into prompts is used in accordance with the Collection Notice under which that information was collected. If any personal information is entered, please ensure that it is handled responsibly, in accordance with your obligations under the Freedom of Information and Protection of Privacy Act (RSBC 1996, c. 165) and related university policies.
  2. AI can inadvertently amplify misinformation. Critically evaluate generated information and cross-reference it with reliable sources to verify its accuracy.
  3. AI algorithms may unintentionally reflect biases. Be aware that search results can carry inherent biases, and it's crucial to critically evaluate information generated by AI.
  4. Please be advised that all prompts entered into Microsoft 365 Copilot AI products, as well as the responses generated, are stored within the system regardless of how these responses are subsequently used or handled. 


It is your responsibility to adhere to these guidelines to maintain the privacy and security of personal information you have access to.
