
SandboxAI

""

Cornell is developing a platform that provides access to multiple Large Language Models (LLMs) hosted within Cornell's private cloud infrastructure and managed by Cornell staff. The first LLMs supported by this platform are Claude 3.5 Sonnet and OpenAI GPT-4o.

SandboxAI is currently in an exploratory phase and not yet widely available.

 

Description

To put Cornell at the forefront of artificial intelligence innovation and to help faculty and staff save time and effort, Cornell needs AI tools that allow its constituents to work with protected data (as defined by University policy and applicable federal and state laws).

CIT is developing a chatbot tool that allows Cornell to use AI in a Cornell-owned environment. The goals of this project include providing an AI tool that can be used with moderate-risk data.

SandboxAI is currently available to those with a demonstrated need for this secure environment. During this initial exploratory phase, CIT plans to continue developing SandboxAI. 

Changes to existing AI tools (like Copilot) may inform the future of this tool at Cornell.

Access SandboxAI

SandboxAI is not yet generally available. However, if you have an institutional problem that requires data-protected AI to solve, please submit a request to access this tool.

About the AI Models

OpenAI Models

GPT-5
A cutting-edge, multimodal model from OpenAI that replaces previous GPT versions. It is designed to provide expert-level insights and automatically adjusts its reasoning and response style based on the complexity of the user's query, showing significant improvements in writing, coding, and health-related topics.

GPT-5-chat
This model is a variant of GPT-5 that is specifically optimized for conversational AI applications. It excels at generating coherent, contextually aware, and engaging dialogue, making it ideal for advanced chatbots and virtual assistants.

GPT-5-mini
A more compact and cost-effective version of GPT-5, the Mini is designed for faster responses and lower latency. It is well-suited for tasks that require quick and accurate answers without the need for the deep reasoning capabilities of the full GPT-5 model.

GPT-5-nano
The smallest and fastest model in the GPT-5 family, Nano is optimized for developer tools and real-time applications where ultra-low latency is a critical factor.

GPT-4o-mini
A more compact and efficient version of the GPT-4o model. It's designed to offer a balance of performance and speed for a variety of tasks, with lower computational cost.

o3-mini
A streamlined and cost-effective version of the o3 model. It provides advanced reasoning capabilities with three user-selectable levels of processing effort, balancing computational power and response speed.
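For reference, these effort levels correspond to a single request parameter in OpenAI's public Chat Completions API. The sketch below assumes direct access to OpenAI's endpoint with an API key in the OPENAI_API_KEY environment variable; SandboxAI's own interface may expose this setting differently (for example, as an option in the chat tool), so treat this as illustrative only.

# Minimal sketch, assuming OpenAI's public API rather than SandboxAI itself.
# Requires the openai Python package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",  # user-selectable: "low", "medium", or "high"
    messages=[{"role": "user", "content": "Briefly compare the three effort levels."}],
)

print(response.choices[0].message.content)

Higher effort levels generally trade longer response times (and cost) for deeper reasoning; lower levels favor speed.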

Anthropic Models

Claude-3.5-Haiku
The fastest and most affordable model in Anthropic's Claude 3.5 family. It is optimized for real-time applications like customer service chatbots and content moderation, offering improved intelligence and instruction-following capabilities.

Claude-3.5-Sonnet 
The successor to Claude 3 Sonnet, this is Anthropic's most balanced model, offering a strong combination of intelligence, speed, and cost-effectiveness. It is designed for complex enterprise workloads, excelling at nuanced content creation, advanced reasoning, and code generation. It also features powerful vision capabilities for interpreting charts, graphs, and images.

Claude-4-Sonnet
Claude Sonnet 4 improves on Claude Sonnet 3.7 across a variety of areas, especially coding. It offers frontier performance that’s practical for most AI use cases, including user-facing AI assistants and high-volume tasks.

Cohere Models

Command-r
A large language model from Cohere optimized for enterprise use cases like retrieval-augmented generation (RAG), summarization, and question answering. It supports 10 languages and is designed to balance efficiency and accuracy.

Command-r-plus
A more powerful, 104-billion parameter version of Command R with advanced capabilities for complex enterprise tasks. It features multi-step tool use, allowing it to combine multiple tools to accomplish difficult assignments and self-correct.

Meta Models

LLAMA-3.2-90b-vision-instruct
A 90-billion parameter, instruction-tuned multimodal model from Meta. It's optimized for visual recognition, image reasoning, and generating text descriptions of visual data like charts and graphs.

LLAMA-3.2-11b-vision-instruct
An 11-billion parameter multimodal model from the Llama 3.2 family. It integrates image and text reasoning for tasks like visual question answering, image captioning, and document analysis.

LLAMA-3.2-3b-instruct
A 3-billion parameter text-generation model from Meta. It is designed for a variety of natural language processing tasks, including content generation, summarization, and translation, in a more compact size.

LLAMA-3.2-1b-instruct
A 1-billion parameter, lightweight language model designed for efficient performance in low-resource environments. It supports eight core languages and is suitable for tasks like summarization and dialogue.

LLAMA-3.1-405b-instruct
A massive 405-billion parameter model that is one of the largest publicly available language models. It is designed for high-performance, enterprise-level applications, excelling in general knowledge, advanced reasoning, and multilingual tasks.

Support for SandboxAI

SandboxAI has a support team separate from the IT Service Desk. For help, fill out this form.
