FAQs

We have compiled these FAQs for students about large language models (LLMs), such as ChatGPT, Gemini, Copilot, and others. We will continue to update this information as the technologies evolve.

For students, the university’s code of academic integrity applies and determines what constitutes ethical versus unethical conduct in the use of generative AI tools. In particular, students are required to provide proper attribution in their submitted course assignments. Students should also consult each instructor's syllabus and policy regarding AI use and attribution.

It depends on the instructor and course policies. The use of generative AI in courses is allowed at the discretion of the instructor, and each course will have its own approach to when, how, or if generative AI may be used. If generative AI is allowed, the course will provide specific guidance on how to indicate when and how it was used. Always consult your instructors about the appropriate use of generative AI in their courses before using AI for assignments.

The following guide from the UA Libraries offers suggestions for attributing generative AI use in your work. Always consult your instructor on the appropriate methods of attribution for specific courses.

Always fact-check the accuracy of the AI’s outputs.

Many citation styles now include methods for attributing generative AI. See the guides below that may be relevant to your specific context or field.

A recently released report by the U.S. Copyright Office concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements.

AI tools may store or analyze your inputs. Avoid entering personal, confidential, sensitive, or university data into unapproved AI tools because of the data privacy and security risks involved. The UA Libraries also provide information on privacy protection. Exercise caution when engaging with new AI applications.

To protect our community and reduce information security risk, the University of Arizona is taking proactive measures, including blocking use of the DeepSeek mobile application and website on:

  • University-issued devices and personal devices used for university business, including computers, tablets, smartphones, and other internet-enabled equipment.
  • University wired and wireless networks, including residence halls, libraries, and Student Union facilities.

Please note: At this time, the security vulnerabilities are primarily associated with the DeepSeek mobile app and website, not with integrations through services and vendors such as Microsoft and Perplexity.

The university also strongly cautions against using DeepSeek on personal devices to prevent your information and data from being compromised. Visit DeepSeek AI - Overview and Security Risks to learn more.