
Cornell University


Digital illustration of a brain overlaid with computer imagery.

Artificial intelligence is a tool with the potential to do great good in the world. Perhaps you’ve already benefited from it, using it to analyze data or solve a problem. But like any tool used irresponsibly, or in the wrong hands, artificial intelligence can do harm. If not used safely, it can contribute to data and privacy violations. Cybercriminals can exploit its ability to mimic and generate images and language.

 
  1. Make sure you're using Copilot safely.

    There are many valuable uses for Copilot in the workplace. But if you’re not careful, you might violate Cornell’s security policies or expose sensitive data to unauthorized access in the event of a breach. Learn more about how to use Copilot safely.

     

  2. Don’t get fooled by deepfakes.

    Some artificial intelligence tools have become so good at generating voices and video that, to an untrained eye, AI-generated content is easily mistaken for a real person. Learn more about deepfakes and how cybercriminals use them to phish for your information.

Find more Cybersecurity Awareness Month resources from Weill Cornell Medicine.

P.S. Take this 1-question survey and help the IT Security Office learn more about the Cornell community’s tech habits.

 


