
Cornell University

Guidelines for Artificial Intelligence

This article applies to: Artificial Intelligence (AI)

Cornell’s guidelines seek to balance the exciting new possibilities offered by AI tools with awareness of their limitations and the need for rigorous attention to accuracy, intellectual property, security, privacy, and ethical issues. These guidelines are upheld by existing university policies.

Exploring AI tools responsibly centers on making informed choices about which tools we use and on protecting the privacy of individuals' personal information and institutional data. Free AI tools that are not offered by Cornell do not provide any material protection of data and should not be used to share or process institutional academic or administrative information.

Accountability

You are accountable for your work, regardless of the tools you use to produce it. When using generative AI tools, always verify the information for errors and biases and exercise caution to avoid copyright infringement. Generative AI excels at applying predictions and patterns to create new content, but since it cannot understand what it produces, the results are sometimes misleading, outdated, or false.

Confidentiality and Privacy

If you are using public generative AI tools, you must not enter any Cornell information, or another person's information, that is confidential, proprietary, subject to federal or state regulations, or otherwise considered sensitive or restricted. Any information you provide to public generative AI tools is considered public and may be stored and used by anyone else.

As noted in the University Privacy Statement, Cornell strives to honor the Privacy Principles: Notice, Choice, Accountability for Onward Transfer, Security, Data Integrity and Purpose Limitation, Access, and Recourse.

Use for Education and Pedagogy

Cornell encourages a flexible framework in which faculty and instructors can choose to prohibit, to allow with attribution, or to encourage generative AI use. In addition to the CU Committee Report: Generative Artificial Intelligence for Education and Pedagogy delivered in July 2023 and resources from the Center for Teaching Innovation, check with your college, department, or instructor for specific guidance.  

Use for Research

The widespread availability of generative AI tools offers new opportunities for creativity and efficiency and, as with any new tool, depends on humans for responsible and ethical deployment in research and society. The Cornell University Task Force Report, Generative AI in Academic Research: Perspectives and Cultural Norms (December 2023), offers perspectives and practical guidelines to the Cornell community on the use of generative AI in the practice and dissemination of academic research.

Use for Administration and Other Purposes

By the end of 2023, Cornell aimed to offer or recommend a set of generative AI tools that meet the needs of staff doing administrative work while providing sufficient risk, security, and privacy protections. The use of generative AI for administrative purposes must comply with the guidelines of the Cornell Generative AI in Administration Task Force Report (January 2024).
