
Cornell University

Generative AI in Administration Task Force Report (January 2024)


Executive Summary 

The Administrative AI Task Force was established with representatives from diverse administrative units, including external affairs, finance, budget, human resources, research and education administration, information technologies, audit, library, eCornell, and facilities, across central and distributed departments in Ithaca, Cornell Tech, and Weill Cornell Medicine (New York and Doha), to evaluate artificial intelligence (AI) for administrative purposes. The task force was charged with three primary objectives: 

  • Identify potential risks to university operations resulting from the improper use of AI and possible mitigation options. 
  • Review university policies to ensure alignment with potential administrative uses of AI. Identify gaps and propose updates. 
  • Create an overview of AI’s value across administrative domains, outlining specific examples of services where AI could enhance service delivery. Focus on near-term applications with likely rapid return on investment, separating out longer-term potential. 

AI use in administration is a strategic opportunity to elevate service standards, reconsider policies and procedures, and meet the evolving needs of Cornell’s stakeholders. The task force delved into the risks, use cases, and benefits of generative AI within and between the university’s administrative services, aiming to identify ways to support the Cornell mission effectively and efficiently. 

The broad adoption of AI is not just an incremental step; it is akin to the pivotal shift from centralized to distributed computing. Just as personal computers revolutionized every desk, AI promises to redefine how all university services are approached. Faculty and staff are already making use of consumer AI tools, and the sooner they are provided with campus-licensed tools that protect university data, the sooner the university can mitigate the risks this technology poses. 

In addressing the risks and concerns arising from the proliferation and anticipated ubiquity of AI, the task force examined national and international guidance, as well as statements from peer institutions and other organizations. It considered university-specific factors and formulated a representative list of potential risks, concerns, and mitigations. Recommended practices to protect individual and community well-being, build trustworthy AI, and promote fair and ethical AI deployment are outlined below. 

The identified use cases target areas where current pain points could be alleviated through an investment in AI, thereby enhancing service delivery. These use cases can be addressed with a variety of contracting and implementation approaches: vendor-provided AI services; university-wide availability of a secure generative AI chat tool; and custom solutions built on a generative AI platform. Effort for each opportunity was categorized as short term (less than six months), medium term (6-18 months), or long term (beyond 18 months), assuming reasonable institutional cooperation. Cost and return on investment were estimated and risks considered.

Introduction 

For this report, the Administrative AI Task Force referenced President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence for the definition of AI: 

The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3):

a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action. 

To help guide AI deployment to support Cornell’s administrative needs, the task force reviewed a growing body of national and international guidance, including the National Institute of Standards and Technology (NIST) AI Risk Management Framework, the European Union AI Act, and the California Privacy Protection Agency’s draft risk assessment regulations. These frameworks aim to ensure the safety, transparency, and accountability of AI systems. Each places a strong emphasis on risk assessments, accountable human oversight, testing and validation, stakeholder consultation, and appropriate data governance. These measures protect individual and community well-being, build trustworthy AI, and promote fair and ethical AI deployment. 

Key Principles 

Ethical and Safe Use 

  • AI services deployed must comply with law. 
  • In deploying AI, the university should focus on how faculty, students, and staff learn, understand, and use AI in their academic and administrative work, and should strive to improve services for all Cornell community members. 
  • Decision makers should understand the context in which AI will be used, including intended users, business value and goals, and the potential for positive and negative impacts. 
  • Leaders must take proactive measures with AI services to prevent discrimination. Impact assessments, representative data review, and independent evaluation should be performed, in proportion to potential risks, to help provide these protections.
  • A process must exist to decommission legacy services when appropriate. 

Privacy and Confidentiality 

  • Where AI systems will use or analyze personal data, including sensitive personal data, privacy impact assessments should be conducted. 
  • Personally identifiable information (PII) should be protected through focused data collection and respecting user permissions. 
  • Cornell should be transparent about automatic decision-making processes for significant decisions, including a description of how the process functions and an explanation of outcomes. 

Data Quality and Control 

  • Processes should ensure that relevant data is used and collected through legal and ethical means, and that data provenance is documented and managed. 
  • Systems must undergo testing and risk mitigation to demonstrate safety and effectiveness for intended use. 
  • Appropriate technological and organizational controls should be in place to deter bad actors. 

Human Oversight 

  • Meaningful human oversight and clear lines of authority over AI system deployment, use, and impacts should be ensured. 
  • AI governance processes should be documented, maintained, reviewed, and updated to ensure accountability. 
  • People should be able to ask for human alternatives where appropriate. Additional safeguards may be needed for high-risk systems. 
  • Contingency processes should be in place to handle failures or incidents with high-risk AI systems. 

Alignment with Audience 

  • Systems should be deployed in consultation with diverse stakeholders to identify risks.
  • Clear and adequate information should be provided to users regarding AI system capabilities and limitations. 

Risks, Concerns, and Mitigations 

To develop a representative list of potential concerns, the task force reviewed AI documents from peer institutions and other organizations and considered university-specific factors. 

Risk of AI-Enhanced Cyber and Related Attacks 

The misuse of AI by external entities can expose sensitive information through data mining and re-identification, leading to breaches of privacy and potential legal repercussions. One significant example is the potential misuse of AI to circumvent data de-identification strategies by piecing together seemingly harmless information (e.g., unidentified human genomic data) to re-identify individuals. AI can also be employed by malicious actors to orchestrate more sophisticated and believable phishing campaigns or spam messages that exploit human vulnerabilities with greater precision and effectiveness. 
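
As a toy illustration of the linkage-attack mechanism described above, the sketch below joins two individually harmless datasets on shared quasi-identifiers; all records and column names are fabricated for the example:

```python
# Toy linkage-attack illustration; every record and column name below is
# fabricated for the example. Two datasets that each look harmless can
# re-identify a person when joined on shared quasi-identifiers.
import pandas as pd

# "De-identified" records: no names, but quasi-identifiers remain.
research = pd.DataFrame({
    "zip": ["14850", "14850", "14853"],
    "birth_year": [1984, 1991, 1984],
    "sex": ["F", "M", "F"],
    "diagnosis": ["condition A", "condition B", "condition C"],
})

# Public directory data: names attached to the same quasi-identifiers.
directory = pd.DataFrame({
    "name": ["Alice Example", "Carol Example"],
    "zip": ["14850", "14853"],
    "birth_year": [1984, 1984],
    "sex": ["F", "F"],
})

# An inner join on the quasi-identifiers links names back to diagnoses.
linked = research.merge(directory, on=["zip", "birth_year", "sex"])
print(linked[["name", "diagnosis"]])
```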

Mitigation opportunity: The university must remain vigilant, implementing good contract oversight and robust security measures and fostering a culture of cybersecurity awareness to safeguard against these evolving risks associated with external AI utilization. Training and awareness campaigns for faculty, staff, and students will remain a critical component of mitigating external threat risk. 

Risk of Non-Action 

While the deployment of new technology carries inherent risks, the task force emphasizes that a failure of leadership to lean in and fund coordinated deployment also poses risk. Risks include inappropriate use; the cost of redundant, one-off product acquisitions by multiple departments and colleges; the security ramifications of myriad new AI data processors operating without appropriate oversight; poor cross-institutional integration; and erosion of Cornell’s competitive standing relative to its peers. 

Mitigation opportunity: Establish and publicize a university-wide formalized strategy and mechanisms for administrative AI system acquisition and service development. 

Risk of Poor Compliance Oversight 

The external pressures of the rapidly evolving global regulatory environment regarding AI and privacy will require an expanded audit, compliance, and privacy oversight framework and AI literacy. Risks resulting from mishandling regulated data could result in a variety of reputational, financial, regulatory, and legal consequences. 

Mitigation opportunity: Ensure coordination and evolution among the departments and groups across Cornell that are tasked with compliance, privacy, policy, governance, and the like to address compliance needs in the AI space. 

Risk of Poor Service, Information Quality, and Accuracy 

The potential for misinformation with AI solutions has been widely acknowledged. Identified types of inaccuracies include: 

  • Confabulations (sometimes called hallucinations), where an AI solution authoritatively suggests information that is false. 
  • Impersonation, where audio/video generation of key institutional figures is inappropriately used to simulate their delivery for deceptive or malicious purposes. 
  • Biases in the training model for AI, where existing disparities in decision making or information review are exacerbated. If ingested data sets are historically biased, those biases will be inherited and perpetuated. 
  • Bugs or errors in AI engines, which are often hard to detect from a user perspective but can generate confabulations or biases of their own. 

Mitigation opportunity: Appropriate staffing and training, with a focus on auditing and evaluating the training data and output of the tools, will be necessary. A mandatory, documented testing and quality assurance plan per AI solution is recommended as part of IT governance. A mechanism for users to report concerns and allow for analysis by units running the tools may be critical. Users will need to be advised that despite compelling and authoritative presentations, AI outputs must be validated. 

Risk of AI Energy Usage 

The use and development of AI models for Cornell’s research, teaching, and administrative needs will be energy intensive. This consumption may conflict with the commitment to carbon neutrality by 2035. 

Mitigation opportunity: To oversee and control the increased energy consumption, Cornell must maximize the value it derives from AI. In selecting vendors, the university should insist on an understanding of each vendor’s corporate carbon goals and consider that certain models are more energy efficient than others. 

Risk of AI Impact on Institutional Trust 

Inaccurate content and the failure to use AI ethically and with appropriate disclosure can be reputationally damaging and lead to community dissatisfaction and public relations issues. Community comfort with AI-generated messaging may be lower than with messaging penned by university leaders, especially in situations requiring empathy (such as national crises). 

Mitigation opportunity: Human review of AI outputs and, where relevant, training on carefully generating AI prompts, will be required. Guidelines around the use of AI in generating communications content already exist from University Relations and should be codified and widely shared. 

Risk of Legal Action Against Cornell 

As with any tool used in a regulated space, hasty or disordered deployment of AI for administrative purposes could lead to consequences such as bias in decision-making, plagiarism, and violation of privacy laws. 

Risk of Rapid Growth in the Number of Data Processing Vendors 

The low cost and widespread availability of AI services pose a risk that they will be deployed at Cornell without appropriate oversight and contractual controls in place. 

Mitigation opportunity: Adherence to IT governance procedures will ensure appropriate risk, privacy, and contractual review. 

Risk of Inadequate Staffing and Training 

As with any administrative tool, managers and unit leaders must familiarize themselves with AI, exercise oversight over how their teams explore, use, and implement it, and provide direction for skills development where appropriate. 

Mitigation opportunity: The university should ensure that staff have the appropriate training and skills to effectively use AI tools within administrative contexts. This training initiative should encompass comprehensive coverage of policies, best practices, and general instructions for employing this technology. Additionally, it is imperative to evaluate the potential for fraudulent use or inaccurate outcomes and institute suitable internal controls to mitigate such risks. 

Risk of Disenfranchised Staff 

There is concern that AI could displace staff positions through automation of routine tasks, and that overreliance on AI for decision-making could erode critical-analysis skills. This may create low morale, fear, or disengagement among employees. The key challenge is to integrate AI in a way that augments rather than replaces human capabilities. Removing drudgery and enhancing individual performance can lead to a more effective and innovative workforce. 

Mitigation opportunity: AI offers the opportunity to enhance employee skills and productivity through automation, data analysis, and the incorporation of a “human in the loop” approach, where human oversight and expertise guide and optimize AI-driven processes. Managers, leaders, and the university as a whole will need to apply transparency and engagement to help allay the anxiety that staff may feel over potential job displacement. 

Controls Framework for Administrative Use of AI 

In light of the noted risks and mitigation opportunities, the task force reviewed existing university controls to guide the ethical, safe, and compliant use of AI during this early phase of adaptation and adoption. The risks and mitigation opportunities can be addressed through Cornell’s Seven Elements of Ethics and Compliance Excellence: oversight and accountability (governance); policies and procedures; education and awareness; program evaluation and monitoring; effective reporting lines, authority, and roles; consistent enforcement of standards; and response and prevention. These elements work together to create resilient programs and are generally in place at the university. 

Three elements were identified by the task force as deserving additional attention: oversight and accountability (governance); education and awareness; and policies and procedures. Expansion of the existing AI webpage to serve as the repository for AI guidance and resources is recommended. 

Oversight and Accountability (Governance) 

Each university campus currently has committees (such as the Ithaca-based University Privacy Committee) that apply appropriate governance and policies to ensure maintenance of the highest ethical standards in the storage, collection, and use of personal data; careful consideration of issues of bias and consent associated with data analysis and predictive analytics; and consistent compliance with relevant international, federal, and state laws. The task force recommends combining, at least in part, the charges of these committees to support cross-campus coordination and review of issues and questions about ethical use of AI. 

Education and Awareness 

The most commonly identified mitigation opportunity and control gap is the need for staff education and awareness to increase AI literacy. The development of such a program, along with guidance on ethical and compliant administrative use, is strongly recommended. 

Policies and Procedures 

The task force evaluated the current University Policy landscape to better understand whether a standalone policy addressing AI is needed, especially given that AI technology and the regulatory landscape both continue to evolve rapidly. Given the corpus of existing policies focused on data security, data access, and data use, as well as the fact that contractual terms generally address acquisition and appropriate use of AI systems, the task force determined that it is not appropriate at this time to develop a standalone AI policy. 

Approximately 25 existing university-level policies (and additional college- and unit-level policies and procedures) apply to the acquisition and use of information technology, which includes AI. That said, the enhanced ease with which the Cornell community can apply such tools, and the potential for cavalier trust in large language models and other services, may require further policy review in the future. The task force also recommends that the university continue to monitor the evolving international and domestic AI regulatory landscape to drive future policy development. Regarding procedure, the task force emphasizes that IT governance on the Ithaca campus requires any administrative IT expenditure to be approved through the Statement of Need process. 

AI Resource Repository 

The task force recommends expanding Cornell’s general AI webpage to serve as an AI resource repository to guide the safe, ethical, and compliant administrative use of AI. 

  • Review and enhance the position statement with key stakeholders 
  • Expand guardrails and resources list based on the task force recommendations 
  • Provide links to the repository from other university websites and key forms (privacy, procurement, research, academic, policy, IT security, IT help desk, Statement of Need form, the university’s AI research websites, etc.) 
  • Determine the equivalent resource repository for Weill Cornell Medicine 

Opportunities 

With the risks fully explored, this report turns to recommendations for specific investments in AI use on campus. The task force strongly advocates for a two-pronged strategy: simultaneously pursue a focused portfolio of central enterprise AI use opportunities and make AI platforms and toolkits available to allow the full Cornell community to begin testing AI in their work in a technically and contractually safe environment. 

The task force believes that many initiatives will rely on the basic prerequisite of safe access to generative AI platforms, such as those available from Azure and AWS. In addition, AI capabilities offered through specific-purpose tools (for example, Zoom AI Companion and Microsoft Copilot) can provide individuals and departments with safe access to AI in the short term. 

As noted in the Risks, Concerns, and Mitigations section, if the university fails to provide safe, broad access to AI platforms and sandboxes, it is likely community members will seek out their own, possibly unvetted AI tools that could place Cornell’s data at significant risk. 

Forums for Educating Employees, Sharing Employee Expertise, and Developing AI Services

It is critical that AI adoption be democratized, with a focus on benefiting all employees. AI can be an asset across all Cornell service domains, embraced as an opportunity to supplement staff capabilities rather than a threat. This will require investment in staff education so employees can effectively leverage AI to support their work rather than feeling displaced by it. 

Tools for supporting and developing AI services are rapidly advancing, and numerous units across Cornell have already allocated resources to explore AI models, platforms, and vertical solutions. While this engagement is noteworthy, the skills learned and solutions built should ideally be coordinated and structured to achieve optimal value for individuals and the university’s objectives and needs. The task force strongly recommends that staff who are exploring AI value across Cornell actively participate in a formal enterprise AI collaboration to contribute to the development of AI strategy and services. 

AI Services Approach 

Three types of contracting and implementation approaches can address administrative AI use: vertical/embedded AI, a general end-user enterprise large language model (LLM), and a generative AI platform. 

Vertical/Embedded AI 

Vertical/embedded AI is part of existing tools. Examples include Microsoft 365 Copilot, Workday, Salesforce Einstein Copilot, Zoom AI Companion, Adobe Firefly, and ticketing systems that have AI chatbots. Some vendors are offering AI add-ons as part of existing contracts at no additional cost. 

Use cases include: 

  • Create minutes, notes, summaries, action items 
  • Generate concepts for images, reports, and presentations 
  • Automate routine support requests 
  • Create data visualizations 

End-User Enterprise LLM AI Services 

Deploying a large language model (LLM) at enterprise scale (for example, Microsoft Copilot, ChatGPT, Azure OpenAI, Llama 2, or Claude) would address a variety of administrative AI use cases around document creation, corpus querying, templating, and more, in a context with contractual data protections. Advanced data analysis capabilities would enable individuals to interact with their own data. 

Use cases include: 

  • Adjust reading level and clarity of documents and digital communications 
  • Improve the job description creation process 
  • Summarize and analyze resumes and job applications 
  • Generate drafts of letters and other documents based on semi-structured data (e.g., tenure and promotion letters) 

Generative AI Platforms 

More specialized campus tools that leverage AI can be readily built using a generative AI platform (e.g., Azure OpenAI). The platform would also support standardization of essential components required for in-house AI application development, such as a prompt library, vector store, hosting for open-source foundational models or trained models, inference APIs, and more. 

Use cases include: 

  • Review and regularize contracts, leases, etc. 
  • Screen purchases for preapproval or automated approval 
  • Identify trends and anomalies in a broad set of campus data 
  • Improve turnaround time for tech licensing tasks 
  • Assist with web-accessibility tasks 
  • Assist with labor-intensive tasks such as querying large volumes of documents, adjusting text to adhere to style guides, and doing compare/contrast analysis 
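
As a minimal sketch of what a custom tool on such a platform could look like, the example below assumes an Azure OpenAI deployment and uses the openai Python package’s AzureOpenAI client; the endpoint, key, and deployment name are placeholders, not actual Cornell infrastructure:

```python
# Minimal sketch of a custom tool built on a generative AI platform,
# here Azure OpenAI via the openai Python package. The endpoint and
# deployment name are placeholders, not actual Cornell infrastructure.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://example-resource.openai.azure.com",  # placeholder
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

def regularize_clause(clause: str) -> str:
    """Ask the model to restate a contract clause in standardized language."""
    response = client.chat.completions.create(
        model="example-gpt-deployment",  # placeholder deployment name
        messages=[
            {"role": "system",
             "content": "Rewrite contract clauses into plain, standardized "
                        "language without changing their legal meaning."},
            {"role": "user", "content": clause},
        ],
    )
    return response.choices[0].message.content

print(regularize_clause("Licensee shall indemnify and hold harmless Licensor ..."))
```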

Use cases by approach (* = use case could be addressed in multiple ways):

Vertical/Embedded AI

  • Call Trees and Text-to-Speech
  • Curriculum Exploration for Prospective Students
  • Project Management Resource Utilization
  • Zoom AI Companion
  • *Admissions Application Review
  • *Animation Generation
  • *Applicant Pool Credentials Review
  • *Audio/Video Description
  • *Development/Coding Assistant
  • *Document Creation
  • *Enterprise-wide Chatbot
  • *Recruitment Documents and Market Targets
  • *Transfer Credit Evaluation
  • *Web Accessibility
  • *Website Content Analysis

General End-User Enterprise LLM

  • *Animation Generation
  • *Audio/Video Description
  • *Development/Coding Assistant
  • *Document Creation

Generative AI Platform

  • Contract Analysis
  • Grant Opportunities
  • Internal Grant Review
  • Sponsored Research - Proposal Preparation
  • *Admissions Application Review
  • *Applicant Pool Credentials Review
  • *Enterprise-wide Chatbot
  • *Recruitment Documents and Market Targets
  • *Transfer Credit Evaluation
  • *Web Accessibility
  • *Website Content Analysis

Near-Term AI Opportunities 

These opportunities are expected to have an implementation period of less than 6 months, assuming needed institutional resources are allocated. 

Animation Generation 

Use AI to create animations for courses or other purposes based on contextual text or prompts. AI-generated animations could be used as-is (with human review) or as inspiration for final animations. 

Cost versus savings: Existing tools such as Adobe Firefly are building in this capability. Creating animations manually is labor intensive, with multiple iterations before the right animation is selected. An animator’s work would be significantly simplified by using AI to take text inputs and generate multiple options. Savings would primarily be reduction in time because of the improved efficiency and variety of options provided by AI. 

Development/Coding Assistant 

Enable developers to use AI-assisted programming for tasks such as syntax suggestions, writing unit tests, identifying helpful modules, and even generating entire functions or applications. Some AI tools can generate a prototype based on a quick sketch of an app or website. These capabilities should improve efficiency, reduce errors and risks, provide better documentation, and decrease the learning curve for newer developers or developers moving between languages and platforms. Examples of tools include GitHub Copilot, VSCode + ChatGPT, and Cursor. 

Cost versus savings: A competitive bid would be needed to select the tools. A very rough estimate based on pricing for one tool is less than $50K/year for the university’s estimated 500 developers. Savings would be from efficiency gains through substantially increased productivity and shorter project completion times.

Document Creation 

Enable the use of generative AI to do a wide variety of writing tasks across both campuses: 

  • Create minutes, notes, summaries and action items 
  • Generate concepts for images, reports, and presentations 
  • Create data visualizations 
  • Adjust reading level and clarity of documents and digital communications 
  • Improve the job description creation process to achieve consistency with tone, keywords, education requirements, experience, and other attributes 
  • Summarize and analyze resumes and job applications 
  • Draft tech briefs in preparation for marketing and licensing Cornell inventions 
  • Generate drafts of letters and other documents based on semi-structured data (e.g., tenure and promotion letters) 

Cost versus savings: Enterprise licensing costs for an LLM such as ChatGPT are not yet known. A very rough estimate is more than $200K, plus additional costs for staff training on risk handling, prompt engineering, best use, and restrictions. The savings are likely to be dramatic given that this access is a prerequisite for most other use cases and that enabling staff to use these foundational tools will allow the discovery of many additional use cases. 

Enterprise-wide Chatbot 

Deploy a chatbot at enterprise scale for automated customer support and basic self-service support. Interest is very high at Cornell for use in information technology, admissions, financial aid, human resources, benefits, payroll, registration, academic policies, billing, and more. Benefits include 24/7 automated handling of inquiries (with “live agent” handoff during business hours); support for web, texting, and phone interactions; the enablement of support staff to focus on complex troubleshooting and activities that require human intervention; and a unified “ask your question anywhere” experience for the community. 

NOTE: Basic use of chatbots to interact with information found on public webpages and other standard documents is a short-term use case (less than 6 months). The longer-term benefit to be realized is through providing personalized customer support and enabling individuals to retrieve information about themselves or initiate actions through the chatbot. This use falls in the 6-18-month range.
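
A minimal sketch of the short-term pattern (answering from public webpage content) follows; retrieval here is naive keyword overlap so the sketch runs standalone, and the page snippets are fabricated stand-ins for real support pages. A production chatbot would use embeddings, a vector store, and an LLM call instead:

```python
# Retrieval-then-answer sketch of the short-term chatbot pattern:
# find the most relevant public-webpage snippet, then hand it to an
# LLM as context. Retrieval here is naive keyword overlap so the
# sketch runs standalone; the snippets are fabricated examples.
PAGES = [
    "Reset your NetID password at the IT self-service portal.",
    "Financial aid applications for fall are due by March 1.",
    "Payroll is issued biweekly; pay stubs are available in Workday.",
]

def retrieve(question: str) -> str:
    """Return the snippet sharing the most words with the question."""
    words = set(question.lower().split())
    return max(PAGES, key=lambda text: len(words & set(text.lower().split())))

def answer(question: str) -> str:
    context = retrieve(question)
    # A production chatbot would send this prompt to an LLM with
    # contractual data protections; here we just show the prompt.
    return f"Answer using only this context: {context}\nQuestion: {question}"

print(answer("When are financial aid applications due?"))
```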

Cost versus savings: A competitive enterprise chatbot bid would be needed to accurately estimate cost. A very rough estimate is $250K. Operational cost is estimated at $25K-$50K for testing, performance analysis and tuning, and routine maintenance and support activities. The investment is expected to be cost neutral in that the university could easily expect to cover the cost through avoidance of increasing support staff headcount and potentially avoiding hiring in other areas by retraining some support staff to fill those needs. This supposition is based on the overall support volume across the university and the current state of insufficient support staff to keep up with the volume. Customers would also benefit from being able to do transactions 24/7, instead of being constrained to business hours or phone support. 

Zoom AI Companion 

Produce meeting summaries for meeting hosts to review, edit, and share. Other features that are anticipated to become available for higher education include in-meeting real-time catch-up for late arrivals. Benefits include clearer, more accurate, and quicker notes with actionable highlights. Absent participants would especially benefit. Standardization could also reduce lag time, inconsistency, and confirmation bias in notes. 

Cost versus savings: Zoom AI Companion is included in the existing university Zoom license. There would be nominal cost for implementation. Significant productivity improvements would be realized almost immediately. 

Medium-Term AI Use Opportunities 

These opportunities are expected to have an implementation period of 6-18 months, assuming needed institutional resources are allocated. 

Audio/Video Description 

Use AI to define actions, characters, scene changes, on-screen text, and other visual content in a video scene and generate an audio description of those activities. An AI tool to automate this workflow in a cost-effective way would help Cornell meet its web content accessibility goals. 

Cost versus savings: Descriptions are relatively costly to produce. An AI-based automation would not only help Cornell meet its accessibility goals, but also do so at a cost savings. 

Contract Analysis 

Reduce contract execution time and improve favorability of terms by using AI to simplify the submission process, identify the correct groups to negotiate the contract, highlight the terms of interest to each group, redline unusual terms and terms that Cornell does not typically accept, and compare rejected terms with previously accepted similar terms to increase leverage in negotiations. 

Cost versus savings: This opportunity would require a new vendor or in-house AI application development, so cost is to be determined. A contract reviewer typically takes 5-6 hours to do a full award analysis. If AI handled half of the tasks, it would save 3 hours per contract. With approximately 1,200 award contracts in the Office of Sponsored Programs alone, this would save approximately 1.7 FTE. Since the application would be used by multiple departments, the FTE savings are expected to be higher.
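
The FTE arithmetic behind this estimate can be reproduced as follows, assuming a standard 2,080-hour work year (an assumption not stated in the report):

```python
# Reproduce the contract-analysis savings estimate. The 2,080-hour
# work year is a standard assumption, not stated in the report.
HOURS_PER_FTE = 2080

hours_saved_per_contract = 3    # AI handles half of a 5-6 hour review
contracts_per_year = 1200       # Office of Sponsored Programs alone

total_hours_saved = hours_saved_per_contract * contracts_per_year
print(round(total_hours_saved / HOURS_PER_FTE, 1))  # -> 1.7 FTE
```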

Grant Opportunities 

Provide improved support for researchers to address challenges and gaps in finding relevant grant opportunities. Faculty currently rely on a manual, broadcast-style process to discover salient grant opportunities, and many may be missed. Using AI would allow both ad hoc querying and programmatic matching against known information (goals, expertise, past work) about faculty and researchers. Development would require direction from the Office of Sponsored Programs, research departments, and faculty to refine needs. 

Cost versus savings: This opportunity would require a new vendor or in-house AI application development, so cost is to be determined. Savings would be from increased grant funding and labor savings. 

Transfer Credit Evaluation 

Use AI to assist with transfer credit evaluations for admitted transfer students. The improved turnaround time could help these admitted students feel comfortable committing to Cornell and begin planning their academic journey earlier. 

Cost versus savings: This opportunity would require a new vendor or in-house AI application development, so cost is to be determined. Savings would be realized through reduction in faculty and staff time spent on transfer credit evaluations. 

Web Accessibility 

Use AI to generate ALT text for images, analyze HTML for accessibility issues, and provide not only clear guidance for remediation, but also fixes in the form of compliant code and text changes. 
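
A minimal sketch of the HTML-analysis half appears below, using BeautifulSoup to flag images with missing or empty ALT text; the follow-on LLM call that would draft replacement ALT text is omitted:

```python
# Flag <img> tags whose ALT text is missing or empty. A full tool
# would then ask an LLM to draft ALT text for each flagged image;
# that call is omitted here. Requires: pip install beautifulsoup4
from bs4 import BeautifulSoup

html = """
<html><body>
  <img src="tower.jpg" alt="McGraw Tower at sunset">
  <img src="chart.png" alt="">
  <img src="banner.jpg">
</body></html>
"""

soup = BeautifulSoup(html, "html.parser")
for img in soup.find_all("img"):
    if not img.get("alt"):  # catches missing and empty alt attributes
        print(f"Needs ALT text: {img.get('src')}")
```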

Cost versus savings: It is difficult to estimate cost for this use case. The market is evolving; it seems likely that a vendor such as Siteimprove would offer a solution, or it could be purpose-built within an AI platform. Savings would be a reduction in staff time spent on manually doing accessibility tasks. 

Long-Term AI Use Opportunities 

The experiences and knowledge gained through pilot AI initiatives will help inform estimates for the long-term AI use opportunities. A rough estimate is that these opportunities may take 18 months or more. 

Admissions Application Review 

Given the volume of applications Cornell receives for both undergraduate and graduate programs, an AI-powered application that augments human review has significant potential. It could speed up the review process (allowing applicants to receive an admission decision more quickly), reduce the need to hire temporary/seasonal part-time staff for application review, and reduce the amount of time that full-time staff spend on the review process. 

Cost versus savings: This opportunity would require a new vendor or in-house AI application development, so cost is to be determined. Savings would be a reduction in full-time staff time dedicated to application review and the need for seasonal temporary workers.

Internal Grant Review 

Improve grant submissions by leveraging an LLM trained on prior successful and unsuccessful grant submissions. These AI reviews could also serve as pre-reviews ahead of the typical internal human review, helping move submissions to second-draft quality. This is a more involved use case, since it would require custom training and review. 

Cost versus savings: It is unclear if Cornell would build such a tool internally or if the market will provide this as an integrated tool. Infrastructure costs would likely be under $50K, but training could involve expensive expertise. Savings would be a higher success rate of grant applications and a reduction in the work of the internal human review committees. A 1% improvement in successful grants would be a $21M increase in indirect cost recoveries.

Appendix 1: Additional AI Use Opportunities 

The Opportunities section outlined the use cases that the task force is recommending as the highest priority for the university to pursue. This appendix provides the additional use cases reviewed by the task force. 

Applicant Pool Credentials Review 

Short-term (less than 6 months)

Use AI to do preliminary analysis and selection in large application pools for institutional job postings, helping the hiring manager identify the best-fit candidates in a shorter timeframe. Possible tasks include: 

  • Anonymizing candidate information, using blind recruitment techniques, and flagging potential biases in job descriptions and assessments 
  • Automated screening of applications to ensure consistent and fair review regardless of when the application is received 
  • Analyzing candidate profiles against job descriptions to match skills, experience, and other relevant factors 
  • Summarizing resumes and cover letters 
  • Drafting responses to candidates and answering frequent questions 

Cost versus savings: Estimated cost is $50K-$200K for start-up costs with software, initial licensing, and training for hiring managers. Ongoing costs would be annual licensing for the solutions and training for first-time users. Savings would come from a shorter recruiting and applicant review process and from fewer good candidates lost to slow processes. An additional benefit would be a larger share of applicants receiving some level of review. 

Curriculum Exploration for Prospective Students 

Short-term (less than 6 months)

Enable prospective students to explore Cornell’s breadth of academic opportunities using AI to see the ways in which they could tailor their coursework to their interests and to determine which college or school they should apply to. 

Cost versus savings: This tool already exists for current students (pathways.cornell.edu), and while there could be some additional investment needed to scale this tool for prospective students, costs should be minimal. 

Recruitment Documents and Market Targets 

Short-term (less than 6 months)

Use AI to improve the overall applicant/candidate pool by providing document chat, review, and analysis of specific markets, market trends, peer postings, and enhancements to Cornell’s job postings and documentation. Possible tasks include: 

  • Assessing the language and tone of position descriptions and job postings to add appeal and inclusivity for the target audience within a specific area or market 
  • Gathering and analyzing data for job postings in a specific market area to better understand how Cornell job postings compare with those of peers, similar positions, or trades 
  • Identifying the most relevant keywords and phrases for a specific role in a particular area or market to increase the likelihood of Cornell job postings appearing in relevant searches 
  • Tracking the performance of job postings, including views, click-through rates, and application rates 

Cost versus savings: Estimated cost is $50K for start-up costs with software, initial licensing, and training for hiring managers. Ongoing costs would be annual licensing for the solutions and training for first-time users. Savings would be reduction in time spent on these activities, and gains from activities not currently being done. 

Sponsored Research - Proposal Preparation 

Short-term (less than 6 months)

Use AI to assist with the overall process of proposal preparation, including reviewing large volumes of text, writing standardized letters and proposal sections, and reviewing outputs against various proposal standards. Possible tasks include: 

  • Summarizing large Request for Proposal (RFP) or Notice of Funding Opportunity (NOFO) documents and extracting key information 
  • Ensuring conformance with agency-specific guidelines 
  • Assisting with proposal drafting, for example, creating an outline from a solicitation per sponsor formatting guidelines or drafting sections based on input data 
  • Providing standardized responses for sections such as travel that factor in institution rules, agency rules, specific government per diem rates, and flight/car/hotel estimates 
  • Assisting with drafting letters of support 

Cost versus savings: Set-up costs for staff and non-staff are estimated at less than $50K, including risk handling. Ongoing costs are expected to be $20K or less. 

To estimate savings, note that a typical NIH or NSF proposal takes 120 hours to prepare. The average number of proposals written per year for NIH (HHS) and NSF alone for the Ithaca-based campuses for FY18-FY22 was approximately 900. Considering only proposals for these two agencies, the average number of hours spent annually is estimated at approximately 108K hours, or 52 FTE. Expanding to include any proposal submitted from the Ithaca-based campuses, the number of proposals is approximately 2,400, equating to 290K hours or potentially 140 FTE. Implementation at other universities (New York University, Harvard, and others) suggests AI could reduce time spent on proposals by a factor of 10 or more. Using much more conservative estimates of only a 5% reduction in overall time and halving the hour estimate per proposal from 120 to 60 hours, annual savings are estimated at approximately 3.5 FTE. 
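
The conservative end of this estimate can be reproduced as follows, again assuming a standard 2,080-hour work year (an assumption not stated in the report):

```python
# Reproduce the conservative proposal-preparation estimate. The
# 2,080-hour work year is a standard assumption, not stated here.
HOURS_PER_FTE = 2080

proposals_per_year = 2400   # all proposals from the Ithaca-based campuses
hours_per_proposal = 60     # halved from the typical 120 hours
time_reduction = 0.05       # conservative 5% reduction from AI assistance

hours_saved = proposals_per_year * hours_per_proposal * time_reduction
print(round(hours_saved / HOURS_PER_FTE, 1))  # -> 3.5 FTE
```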

Call Trees and Text-to-Speech 

Medium-term (6-18 months) 

Use text-to-speech AI to enable faster recording times for new messages, easier tweaking of existing messages, and uniformity of voice across a variety of call tree services. Call trees could be modified more often to provide more up-to-date information and without the need to schedule voice talent for recording. 

Cost versus savings: Call tree services already under contract at Cornell may offer this functionality at no charge or as a paid add-on. No one-time costs are anticipated. Savings could be calculated by comparing the staff time it takes to record custom messages against the staff time needed to generate them with AI plus the subscription cost.

Project Management Resource Utilization 

Medium-term (6-18 months) 

Use AI in project management to analyze resource utilization, return-on-investment considerations, and redundancy reviews and provide a baseline for new projects and department initiatives. 

Cost versus savings: In-depth business analysis would be needed to assess cost for a university-wide solution. The scope and requirements for staff, non-staff, and risk handling would need to be defined, as well as the extent of integration with department-specific platforms to consume resource data and unique business processes/rules. Savings would be in the form of project efficiencies, streamlined information, and accurate forecasting/allocation of resources and elimination of unnecessary steps or tasks. 

Website Content Analysis 

Medium-term (6-18 months) 

Use AI to do analysis on websites to flag outdated information and highlight discrepancies in similar information across different webpages. 

Cost versus savings: This opportunity would require a new vendor or in-house AI application development, so cost is to be determined; a very rough estimate is $50K-$200K. While LLMs are clearly capable of this kind of work, it is unclear whether an off-the-shelf tool would work well in Cornell’s environment of thousands of websites. Savings would include a reduction in the overall volume of webpages being maintained (lower labor and hosting costs), reduced support cost and potentially litigation cost from people relying on inconsistent or inaccurate information, and freed staff capacity for higher-value work. 

Appendix 2: Task Force Members 


Ayham Boucher, Research Administration Information Services, Cornell University 
Laura Bradford, Office of General Counsel, Weill Cornell Medicine 
Seth Brahler, Human Resources, Cornell University 
Alexis Brubaker, Office of the Chief Risk Officer, Cornell University
Adam D. Cheriff, Internal Medicine, Weill Cornell Medicine 
Kelley Cooper, Facilities and Campus Services, Cornell University 
Dan Dickinson, External Affairs, Weill Cornell Medicine 
Nerida C. Dimasi, Administration, Weill Cornell Medicine - Qatar 
Dan Dwyer, University Controller, Cornell University 
Philip Dzwonczyk, Budget and Planning, Cornell University 
Ellen Finn, Human Resources, Weill Cornell Medicine 
Adam Garriga, Research Operations, Weill Cornell Medicine 
Beth Goelzer, Cornell Information Technologies, Cornell University 
Paula Herber, Information Technologies and Services, Weill Cornell Medicine 
Sarah Jewell, Information Technologies and Services, Weill Cornell Medicine 
Rebecca Joffrey, Cornell Information Technologies, Cornell University 
Maria Joseph, Information Technologies and Services, Weill Cornell Medicine 
Gloria Kao, External Affairs, Weill Cornell Medicine 
Debbi Kruszewski-Warner, Alumni Affairs and Development, Cornell University 
Thomas McGrath, Budget and Financial Strategy, Weill Cornell Medicine 
Michael T. Murphy, Facilities Management and Campus Operation, Weill Cornell Medicine
Prabhakaran Nagarajan, eCornell, Cornell University 
Andrew M. Page, Cornell Information Technologies, Cornell University 
Adam P. Palcich, University Relations, Cornell University 
Warren Petrofsky, Administration, College of Arts and Sciences 
John Ruffing, Cornell Information Technologies, Cornell University
Michael B Slade, Medical Education, Weill Cornell Medicine 
Cara Squicciarini, Financial Management, Weill Cornell Medicine 
Kelly Shawn Strickland, Cornell Information Technologies, Cornell University 
Dan Sweeney, Finance and Operations, Cornell University
Marie A. Taylor, Office of General Counsel, Cornell University 
Brian J. Tschinkel, Information Technologies and Services, Weill Cornell Medicine
Vinay I. Varughese, Information Technologies and Services, Weill Cornell Medicine 
R. David Vernon, Cornell Information Technologies, Cornell University 
Simeon Warner, Cornell University Library, Cornell University 
Rachel Weinert, Vice Provost for Enrollment, Cornell University 
Michael Weissman, Finance and Business Operations, Cornell University 
