Microsoft Copilot and general guidelines for using generative AI tools

McGill offers a secure version of Microsoft Copilot that protects our data, known as the Commercial Data Protection version. Microsoft Copilot is a powerful generative AI chatbot that can help answer questions, generate content, and more. Microsoft Copilot can be accessed via Microsoft Edge.

It is crucial to use the Commercial Data Protection (secure) version for McGill-related tasks. A public version of Copilot is available, but it does not conform to the University’s privacy and data protection standards and should not be used for McGill-related purposes.  

Here’s how the Commercial Data Protection version works: when you are logged in with your McGill credentials, this version ensures that your data stays secure and is not shared with anyone. However, even when logged in with McGill credentials, you should not share any private or sensitive data, including Personally Identifiable Information (PII), Personal Health Information (PHI), Payment Card Industry (PCI) data, or other confidential information.

What is generative AI? 

Generative AI is a type of artificial intelligence that uses machine learning to create new media, including text, images, sound, and video. Users write short “prompts” to guide the AI in creating this content.

Any content displayed on McGill’s public websites, including content created with generative AI, must align with McGill’s Digital Standards. Visit the Web Services website for details about McGill’s Digital Standards and your responsibilities under McGill’s Web Governance Framework.

In addition, the Treasury Board of Canada Secretariat (TBS) has developed AI usage guidelines under the acronym FASTER: Fair, Accountable, Secure, Transparent, Educated, Relevant.

Let’s look at each element of FASTER in more detail. 

Fair 

Human biases may skew the data used to train an AI tool, or the algorithms behind it, resulting in content that reflects or amplifies those biases.

In addition to being aware of biases, the TBS suggests that before using an AI tool, you ensure it “complies with human rights, accessibility, and procedural and substantive fairness obligations.”

Accountable 

Generative AI tools are not infallible. In addition to the possibility of biases outlined above, AI can produce answers that are incorrect or misleading. 

Because of this, it is the user’s responsibility to diligently review any AI-produced responses for accuracy and potential biases. It’s up to you to validate the veracity of results—and to not rely on AI as the last word in decision-making.  

If you cannot confirm the validity or appropriateness of AI results, do not use them.  

Secure 

Data protection is a major concern when using any cloud-based solution. Generative AI tools are of particular concern because data entered into a generative AI tool may be added to its body of information.

This means that not only can the data you enter be added to this body of information, but it could be inadvertently revealed to other users who should not see it. It’s important to know how a generative AI tool will use your data before you consider using the tool. 

The Commercial Data Protection (secure) version of Microsoft Copilot has undergone a thorough data security and risk assessment. McGill chose this version because, unlike the public version of Microsoft Copilot, it conforms to the University’s privacy and data protection standards.

This means that when you are logged in with your McGill credentials, your data is not shared with either Microsoft or McGill. However, even when logged in with McGill credentials, we do not recommend sharing any confidential, personal, or proprietary information.

When possible, you can further mitigate potential privacy concerns by removing personally identifiable information (e.g., names, email addresses, phone numbers) from your prompts. For example, when writing a prompt to draft an email to Joe Smith, replace “Joe Smith” with “XYZ.”
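If you regularly prepare text before pasting it into a prompt, a short script can help strip common identifiers first. The sketch below is illustrative only and is not part of McGill’s guidance; the patterns and placeholder labels are assumptions and will not catch every form of personal information, so a manual review is still needed.

```python
import re

# Illustrative sketch only (not McGill guidance): basic patterns for
# common identifiers. These regexes are assumptions and will miss many
# real-world cases, so always review the output before prompting.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace email addresses and phone numbers with generic placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    draft = "Draft a reply to Joe Smith (joe.smith@example.com, 514-555-0199)."
    # Names still need to be replaced by hand (e.g., "Joe Smith" -> "XYZ").
    print(redact(draft))
```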

In short, do not assume that an AI tool is safe to use just because it is widely available (or even free). Unless a generative AI tool has been formally reviewed and approved by McGill, it is prohibited to use it for McGill-related purposes due to the risk of exposing personal, confidential, and sensitive University information to the Internet.

Read McGill’s Cloud 101 document for more details about the different levels of data sensitivity. 

Transparent 

Users should use their own content or publicly available content, and avoid using material for which the copyright holder has not given permission. Read more about copyright at McGill.

The TBS strongly recommends disclosing any use of generative AI in your work. This includes identifying content that has been produced using generative AI.

Educated 

There are two aspects to being an educated user of generative AI tools. 

The first is understanding the strengths, limitations, and responsible use of these tools. In addition to the FASTER considerations, take the time to familiarize yourself with the Policy on Responsible Use of McGill Information Technology Resources and read the set of approved recommendations from the Senate Sub-committee on generative AI for teaching and learning at McGill. You should also keep your training up to date, including the latest training available to you on McGill's cybersecurity awareness training platform.

The second aspect is knowing how to get the most out of generative AI. Learn how to write effective prompts to generate the best possible content. And whether you are reviewing content you have generated or content generated by others, be a critical consumer and learn how to identify potential weaknesses (such as biases).

Relevant 

The final part of FASTER is about keeping your use of generative AI relevant. Remember, AI is simply one of the many tools you have at your disposal. Just as you wouldn’t use a hammer to slice bread, generative AI is not always going to be appropriate for every task. The TBS suggests that, before using an AI tool, you ask yourself whether its use is genuinely relevant to the task at hand.

Although generative AI can be useful, and even fun, you should always ask yourself whether you even need to use it for a given task. 

Explore additional information through the following links: 
