Cybersecurity in the Age of Artificial Intelligence

This article applies to: National Cybersecurity Awareness Month

Artificial intelligence is a tool with the potential to do great good in the world. Perhaps you’ve already benefited from it, using it to analyze data or solve a problem. But like any tool, when used irresponsibly or in the wrong hands, artificial intelligence can do harm. If not used safely, it can contribute to data and privacy violations, and cybercriminals can exploit its ability to mimic and generate images and language.

Using Copilot Safely

Copilot has many valuable uses in the workplace. But if you’re not careful, you might violate Cornell’s security policies or put university data at risk of unauthorized access in the event of a data breach. Learn more about how to use Copilot safely.

Don’t Get Fooled by Deepfakes

Some artificial intelligence tools have become so good at generating voices and video that, to an untrained eye, the AI-generated content is easily mistaken for a real person. Learn more about deepfakes and how cybercriminals use them to phish for your information.

Find more Cybersecurity Awareness Month resources from Weill Cornell Medicine.
