OpenAI

2302 Articles
EPFL Researchers Unveil FG2 at CVPR: A New AI Model That Slashes Localization Errors by 28% for Autonomous Vehicles in GPS-Denied Environments

Navigating the dense urban canyons of cities like San Francisco or New York can be a nightmare for GPS systems. The towering skyscrapers...

Microsoft AI Introduces Code Researcher: A Deep Research Agent for Large Systems Code and Commit History

Rise of Autonomous Coding Agents in System Software Debugging: The use of AI in software development has gained traction with the emergence of...

Building AI-Powered Applications Using the Plan → Files → Code Workflow in TinyDev

In this tutorial, we introduce the TinyDev class implementation, a minimal yet powerful AI code-generation tool that uses the Gemini API to transform...
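
The Plan → Files → Code workflow named in the title can be pictured with a short sketch. The snippet below is a minimal illustration, assuming the google-generativeai Python client; the model name, prompts, and three helper functions are placeholders of my own, not TinyDev's actual implementation from the tutorial.

```python
# Minimal sketch of a Plan -> Files -> Code loop with the Gemini API.
# Assumptions: the google-generativeai client is installed, GEMINI_API_KEY is
# set, and the model name and prompts below are illustrative, not TinyDev's.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

def plan(spec: str) -> str:
    """Step 1 (Plan): turn a natural-language spec into a numbered build plan."""
    return model.generate_content(
        f"Write a concise, numbered implementation plan for: {spec}"
    ).text

def files(build_plan: str) -> list[str]:
    """Step 2 (Files): ask for the file paths the plan needs, one per line."""
    raw = model.generate_content(
        "List only the file paths needed for this plan, one per line:\n" + build_plan
    ).text
    return [line.strip() for line in raw.splitlines() if line.strip()]

def code(build_plan: str, path: str) -> str:
    """Step 3 (Code): generate the contents of a single file from the plan."""
    return model.generate_content(
        f"Plan:\n{build_plan}\n\nWrite the complete contents of {path}."
    ).text

if __name__ == "__main__":
    spec = "a CLI tool that counts words in a text file"
    build_plan = plan(spec)
    for path in files(build_plan):
        print(f"--- {path} ---")
        print(code(build_plan, path))
```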

OThink-R1: A Dual-Mode Reasoning Framework to Cut Redundant Computation in LLMs

The Inefficiency of Static Chain-of-Thought Reasoning in LRMs: Recent large reasoning models (LRMs) achieve top performance by using detailed chain-of-thought (CoT) reasoning to solve complex tasks. However,...
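
The dual-mode idea can be pictured with a toy router: easy prompts take a short direct-answer path, harder ones take a full chain-of-thought path. The sketch below is purely illustrative; the difficulty heuristic, prompts, and llm callable are my own assumptions and do not reflect OThink-R1's actual switching mechanism.

```python
# Toy illustration of dual-mode routing: a fast "direct answer" path for easy
# prompts and a slower chain-of-thought path for hard ones. The heuristic and
# prompts are hypothetical placeholders, not OThink-R1's method.
from typing import Callable

def needs_full_reasoning(prompt: str) -> bool:
    """Hypothetical difficulty check based on surface cues in the prompt."""
    hard_markers = ("prove", "derive", "step by step", "debug", "optimize")
    return any(marker in prompt.lower() for marker in hard_markers)

def answer(prompt: str, llm: Callable[[str], str]) -> str:
    """Route the prompt to the fast path or the full-reasoning path."""
    if needs_full_reasoning(prompt):
        return llm(f"Think through this step by step, then answer:\n{prompt}")
    return llm(f"Answer concisely:\n{prompt}")
```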

AI-Generated Ad Created with Google’s Veo3 Airs During NBA Finals, Slashing Production Costs by 95%

A lone AI filmmaker, a cutting-edge generative video model, and a national TV spot during one of the year’s biggest sporting events. This...

Internal Coherence Maximization (ICM): A Label-Free, Unsupervised Training Framework for LLMs

Post-training methods for pre-trained language models (LMs) depend on human supervision through demonstrations or preference feedback to specify desired behaviors. However, this approach...

MemOS: A Memory-Centric Operating System for Evolving and Adaptive Large Language Models

LLMs are increasingly seen as key to achieving Artificial General Intelligence (AGI), but they face major limitations in how they handle memory. Most...

OpenThoughts: A Scalable Supervised Fine-Tuning (SFT) Data Curation Pipeline for Reasoning Models

The Growing Complexity of Reasoning Data Curation: Recent reasoning models, such as DeepSeek-R1 and o3, have shown outstanding performance in mathematical, coding, and...