Frequently Asked Questions

You ask the questions, we find the answers. 

We have compiled FAQs here for faculty, staff, and students about large language models (LLMs), such as ChatGPT, Gemini, Copilot, and others. We continue to update this information as the technologies evolve. If you have feedback about these resources or others you would like to see, please contact us. You can also find valuable information on the responsible use of generative AI at the UA Libraries and the University Center for Assessment, Teaching and Technology (UCATT).

What are the University's recommendations regarding the ethical use of generative AI for faculty? 

The University recommends that faculty provide guidance to their students about the use of these technologies in their courses. If a student is unsure about the proper use of generative AI in a course assignment, they should consult their instructor. Syllabus guidance for faculty is provided by the University Center for Assessment, Teaching and Technology (UCATT).

Does the University support generative AI use for faculty/instructors?  

The University of Arizona offers staff online workshops and training, including O’Reilly and Microsoft Learn as well as EDGE Learning and LinkedIn Learning. Other resources include the Process Automation Team, a community of practice open to the University community that tackles AI questions, among other topics, and UCATT, the UA Libraries, and DataLab.

How can generative AI be integrated into curriculum or research? 

Generative AI will likely be a part of our students’ future. Thus, teaching them how to effectively use these technologies within their academic disciplines can be beneficial. We anticipate that faculty will integrate these tools into their curricula, just as all new relevant technologies have been integrated into curricula and research over the years. 

Researchers are already using AI to generate code, analyze data and text, and for many other applications. Researchers need to be aware of compliance requirements and the implications for research, including human subjects research (whether or not it is covered by HIPAA), Indigenous data sovereignty, and other ethical considerations.

Always fact-check the accuracy of the AI’s outputs. 

How should faculty approach student work that may have been created with generative AI? 

Faculty generally assess the originality of coursework submitted by the students in their courses, often in consultation with their students (see Academic Freedom and Freedom of Expression). 

Detection tools should be used with caution when determining the presence of generative AI in writing and other submitted materials. As with any detection tool, false positive results can be expected, and taking the genesis of the work into account is crucial to making a full determination (see The false positives and false negatives of generative AI detection tools in education and academic research: The case of ChatGPT). Please see UCATT and the UA Libraries for more information.

What support is available for integrating or using generative AI in your course? 

UCATT and the UA Libraries offer resources for teaching and learning with AI, workshops, and information on faculty learning communities, assignment design and learning, academic integrity, and syllabus policies.  

How can I protect data privacy and security when using ChatGPT?

AI tools may store or analyze your inputs. Avoid inputting personal, confidential, sensitive, or university data into unapproved AI tools due to the potential risks associated with data privacy and security breaches. The UA Libraries also provide information on privacy protection. Exercise caution when engaging with new AI applications. 
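For readers who work with AI tools programmatically, the sketch below illustrates one modest way to act on this advice: screen text for a few obvious identifiers before it is ever sent to an external tool. It is a minimal, illustrative example only, not a University-approved or complete safeguard; the redact helper and the patterns it uses are hypothetical and will not catch most forms of sensitive data (names, grades, health information, and so on).

```python
import re

# Illustrative patterns only (hypothetical examples); simple regexes cannot
# recognize most sensitive data, such as names, grades, or health records.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace obvious identifiers with placeholders before sharing text."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REMOVED]", text)
    return text

draft = "Summarize this note for jdoe@example.edu (phone 520-555-0100)."
print(redact(draft))
# Prints: Summarize this note for [EMAIL REMOVED] (phone [PHONE REMOVED]).
```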

What are the copyright laws regarding the use of generative AI?

A recently released report by the U.S. Copyright Office concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements.

How can I design AI-related coursework while maintaining academic integrity?

Current conversations about AI offer a chance to reimagine the assessment of student learning. Whether students are creating an infographic, writing an annotated bibliography, or designing a graphic for a marketing campaign, AI can be used throughout. UCATT offers strategies and resources to guide teaching decisions.

How do I detect AI-generated submissions in student work?

Use AI detection tools cautiously, as they are not foolproof. Encourage students to submit drafts, explain their thought processes, and use AI ethically. Visit UCATT’s section on Getting Started with AI for more information. 

What ethical concerns should I consider when incorporating AI into my teaching?

Be mindful of biases in AI models, the potential spread of misinformation, and the digital divide that may limit student access to AI tools. UCATT offers more information on opportunities, ethical issues, and risks of generative AI. 

What are the University's policies regarding the ethical use of generative AI for students? 

For students, the University’s Code of Academic Integrity applies and can be used to determine what is ethical versus unethical conduct regarding the use of generative AI tools. Specifically, students are required to provide proper attribution in their submitted course assignments. Students should consult each instructor's syllabus and policy regarding AI use and attribution.

Can I use generative AI in my coursework? 

It depends on the instructor and course policies. The use of generative AI in courses is allowed at the discretion of the instructor, and each course will have its own approach to when, how, or if generative AI may be used. Where generative AI is allowed, each course will have specific guidance about how to indicate when and how it was used. Always consult each of your instructors on the appropriate use of generative AI in a specific course before using AI for assignments.

The following guide from the UA Libraries offers suggestions for attributing generative AI use in your work. Always consult with your instructor on the appropriate methods of attribution for specific courses.

Always fact-check the accuracy of the AI’s outputs. 

How do I create a citation indicating generative AI use? 

Many citation styles have methods for attributing generative AI use. Please see the guides below that may be relevant to your specific context or field.

Where can I find on-campus guidance and support for using generative AI in my assignments? 

Many campus units offer support for students using generative AI and other AI applications in their courses.  

What are the copyright laws regarding the use of generative AI?

A recently released report by the U.S. Copyright Office concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements.

How can I protect data privacy and security when using ChatGPT?

AI tools may store or analyze your inputs. Avoid inputting personal, confidential, sensitive, or university data into unapproved AI tools due to the potential risks associated with data privacy and security breaches. The UA Libraries also provide information on privacy protection. Exercise caution when engaging with new AI applications. 

What are the ethical concerns of using AI in research?

Consider bias in AI-generated results, data privacy, and the potential for AI to generate misleading information. The UA Libraries provide more information.

Using AI in research complicates the informed consent process with research participants, because the details of how data and information will be used in the future are unknown. Therefore, it is important to explain clearly in the informed consent process whether AI is used and to acknowledge any future use of the information collected.

Are there restrictions on using AI tools for data analysis?

Researchers must comply with university, funding agency, and IRB guidelines to ensure responsible AI use in handling sensitive or human subjects data. There are no blanket restrictions on the use of AI at this time; however, before using AI, researchers should consider implications for future use, data ownership, and the confidential nature of any information collected. Disclosure to the IRB and to research subjects is required when AI is used in human subjects research.

What are the copyright laws regarding the use of generative AI?

A recently released report by the U.S. Copyright Office concludes that the outputs of generative AI can be protected by copyright only where a human author has determined sufficient expressive elements.

How are federal agencies reacting to the use of generative AI in research proposal review?  

Currently, the National Institutes of Health (NIH), the National Science Foundation (NSF), and NASA, among other federal agencies, do not allow the use of generative AI in the peer review of grant proposals. This ban hinges on the confidential nature of the peer review process; uploading proposal information to a generative AI tool violates that confidentiality. Researchers are urged to stay updated on federal agencies’ current policies regarding the use of generative AI.

Do scientific journals accept manuscripts that have used generative AI for text, images, or video?

Each journal review board creates its own guidelines for generative AI use, so researchers are urged to stay updated on each journal's current policies regarding generative AI for each type of media. Many journals now require disclosure. Science and Springer Nature state that AI-assisted tools like ChatGPT and other large language models do not satisfy their authorship criteria, and they generally do not accept AI-generated images or videos for publication.

Always fact-check the accuracy of the AI’s outputs. 

Does the University support generative AI use for researchers?  

The University of Arizona offers staff online workshops and training, including O’Reilly and Microsoft Learn as well as EDGE Learning and LinkedIn Learning. Other resources include the Process Automation Team, a community of practice open to the University community that tackles AI questions, among other topics, and UCATT, the UA Libraries, and DataLab.

Can I use AI to draft university emails, reports, or website content?

AI can assist in drafting, but human oversight is essential to ensure accuracy, tone, and compliance with university branding and privacy policies. Always fact-check the accuracy of the AI’s outputs, and avoid inputting confidential, sensitive, or personally identifiable information into AI applications.

Does the University support generative AI use for staff?  

The University of Arizona offers staff online workshops and training, including O’Reilly and Microsoft Learn as well as EDGE Learning and LinkedIn Learning. Other resources include the Process Automation Team, a community of practice open to the University community that tackles AI questions, among other topics, and the UA Libraries and DataLab.

The newly formed AI Marketing and Communications Community of Practice is also planning to explore use cases, guidelines, tools, and training that will help MarCom professionals across campus integrate generative AI into their workflows.

Can AI be used for automating administrative tasks?

Yes, but always verify outputs for accuracy, security, and compliance with university regulations. Resources include the Process Automation Team, a community of practice open to the University community that tackles AI questions.

How can I protect data privacy and security when using ChatGPT?

AI tools may store or analyze your inputs. Avoid inputting personal, confidential, sensitive, or university data into unapproved AI tools due to the potential risks associated with data privacy and security breaches. The UA Libraries also provide information on privacy protection. Exercise caution when engaging with new AI applications.