OpenAI

1604 Articles
Frame-Dependent Agency: Implications for Reinforcement Learning and Intelligence

The study examines the concept of agency, defined as a system’s ability to direct outcomes toward a goal, and argues that determining whether...

OpenAI Introduces Competitive Programming with Large Reasoning Models

Competitive programming has long served as a benchmark for assessing problem-solving and coding skills. These challenges require advanced computational thinking, efficient algorithms, and...

A Step-by-Step Tutorial on Robustly Validating and Structuring User, Product, and Order Data with Pydantic in Python

In many modern Python applications, especially those that handle incoming data (e.g., JSON payloads from an API), ensuring that the data is valid,...
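The tutorial's actual models are not shown in this excerpt; the snippet below is a minimal sketch of the general pattern it describes, with hypothetical User, Product, and Order models and a made-up payload. It assumes Pydantic v2 (model_validate); on v1 you would construct Order(**payload) instead.

```python
# Minimal sketch: validating an incoming JSON-like payload with Pydantic.
# Model names, fields, and the sample payload are illustrative assumptions.
from typing import List

from pydantic import BaseModel, Field, ValidationError


class User(BaseModel):
    id: int
    name: str = Field(min_length=1)   # reject empty names
    email: str


class Product(BaseModel):
    sku: str
    price: float = Field(gt=0)        # reject non-positive prices
    quantity: int = Field(ge=0)


class Order(BaseModel):
    user: User
    items: List[Product]


payload = {
    "user": {"id": 1, "name": "Ada", "email": "ada@example.com"},
    "items": [{"sku": "A-100", "price": 19.99, "quantity": 2}],
}

try:
    order = Order.model_validate(payload)  # Pydantic v2 entry point
    print(order.items[0].price)
except ValidationError as exc:
    print(exc)  # structured report listing every failing field
```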

Building an AI Research Agent for Essay Writing

In this tutorial, we will build an advanced AI-powered research agent that can write essays on given topics. This agent follows a structured...
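The agent's actual prompts, tools, and model are not included in this excerpt; the sketch below only illustrates the general plan-then-draft shape such a pipeline can take, assuming the OpenAI Python SDK (v1 interface) and a placeholder model name.

```python
# Hypothetical plan -> draft essay pipeline; not the tutorial's exact code.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def ask(prompt: str) -> str:
    """Single chat-completion call via the OpenAI Python SDK."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the tutorial's model choice is unknown
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def write_essay(topic: str) -> str:
    # Step 1: plan the essay as a short outline.
    outline = ask(f"Write a five-point outline for an essay on: {topic}")
    # Step 2: draft the essay from that outline.
    return ask(f"Write a structured essay on '{topic}' following this outline:\n{outline}")


if __name__ == "__main__":
    print(write_essay("The impact of reinforcement learning on robotics"))
```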

Are Autoregressive LLMs Really Doomed? A Commentary on Yann LeCun’s Recent Keynote at AI Action Summit

Yann LeCun, Chief AI Scientist at Meta and one of the pioneers of modern AI, recently argued that autoregressive Large Language Models (LLMs)...

This AI Paper Introduces CodeSteer: Symbolic-Augmented Language Models via Code/Text Guidance

Large language models (LLMs) struggle with precise computations, symbolic manipulations, and algorithmic tasks, often requiring structured problem-solving approaches. While language models demonstrate strengths...

NuminaMath 1.5: Second Iteration of NuminaMath Advancing AI-Powered Mathematical Problem Solving with Enhanced Competition-Level Datasets, Verified Metadata, and Improved Reasoning Capabilities

Mathematical reasoning remains one of the most complex challenges in AI. While AI has advanced in NLP and pattern recognition, its ability to...

Advancing Scalable Text-to-Speech Synthesis: Llasa’s Transformer-Based Framework for Improved Speech Quality and Emotional Expressiveness

Recent advancements in LLMs, such as the GPT series and emerging “o1” models, highlight the benefits of scaling training and inference-time computing. While...