
Sony Researchers Propose TalkHier: A Novel AI Framework for LLM-MA Systems that Addresses Key Challenges in Communication and Refinement



LLM-based multi-agent (LLM-MA) systems enable multiple language model agents to collaborate on complex tasks by dividing responsibilities. These systems are used in robotics, finance, and coding but face challenges in communication and refinement. Text-based communication leads to long, unstructured exchanges, making it hard to track tasks, maintain structure, and recall past interactions. Refinement methods like debates and feedback-based improvements struggle as important inputs may be ignored or biased due to processing order. These issues limit the efficiency of LLM-MA systems in handling multi-step problems.

Currently, LLM-based multi-agent systems rely on debate, self-refinement, and multi-agent feedback to handle complex tasks. Because these techniques depend on free-form text interaction, they become unstructured and hard to control. Agents struggle to follow subtasks, remember previous interactions, and provide consistent responses. Various communication structures, including chain- and tree-based models, try to enhance efficiency but lack explicit protocols for structuring information. Feedback-refinement techniques aim to increase accuracy but suffer from biased or duplicate inputs, making evaluation unreliable. Without systematic communication and scalable feedback, such systems remain inefficient and error-prone.

To mitigate these issues, researchers from Sony Group Corporation, Japan, proposed TalkHier, a framework that improves communication and task coordination in multi-agent systems using structured protocols and hierarchical refinement. Unlike standard approaches, TalkHier explicitly defines how agents interact and how tasks are formulated and progressively refined, reducing errors and improving efficiency. Agents operate in formalized roles, and the system adapts automatically to problems of different scale, resulting in improved decision-making and coordination.
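To make the idea of a structured communication protocol concrete, here is a minimal Python sketch of what such a message object could look like, following the description below that each message carries content, background information, and intermediate outputs. All class and field names are illustrative assumptions for this sketch, not TalkHier's actual implementation.

```python
from dataclasses import dataclass, field

# Illustrative schema for structured agent-to-agent communication.
# Field names are assumptions for this sketch, not TalkHier's actual API.
@dataclass
class AgentMessage:
    sender: str                   # id of the sending agent
    recipient: str                # id of the receiving agent
    content: str                  # the main instruction, question, or answer
    background: str = ""          # context the recipient needs to act on the content
    intermediate_outputs: list[str] = field(default_factory=list)  # partial results so far

# Example: a supervisor delegating a subtask with attached context.
msg = AgentMessage(
    sender="team_supervisor",
    recipient="solver_agent",
    content="Answer the College Physics question below.",
    background="Question drawn from the MMLU College Physics split.",
    intermediate_outputs=["Draft reasoning produced by a previous agent."],
)
print(msg.content)
```

Packaging context and partial results alongside the instruction is what keeps exchanges trackable, in contrast to the long, free-form text threads of earlier systems.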

This framework structures agents as a graph in which each node is an agent and edges represent communication paths. Each agent has independent memory, allowing it to retain pertinent information and make informed decisions without relying on shared memory. Communication follows a formal protocol: messages contain the main content, background information, and intermediate outputs. Agents are organized into teams with supervisors monitoring the process, and some agents act as both members and supervisors, resulting in a nested hierarchy. Work is allocated, assessed, and improved over a series of iterations until it passes a quality threshold, with the goal of maximizing accuracy and minimizing errors, as sketched below.
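The supervisor-driven refinement described above can be summarized as a short loop: member agents produce or revise work, an evaluation supervisor scores it and returns feedback, and the cycle repeats until a quality threshold is met. The sketch below illustrates that control flow only; the agent and scoring functions, threshold, and iteration cap are placeholders, not TalkHier's actual components.

```python
import random

QUALITY_THRESHOLD = 0.9   # assumed acceptance threshold; the paper's criterion may differ
MAX_ITERATIONS = 5        # safety cap on the refinement loop

def member_agent(task: str, feedback: str) -> str:
    """Placeholder worker agent; a real system would call an LLM here."""
    return f"Draft answer for '{task}' (feedback considered: {feedback or 'none'})"

def evaluation_supervisor(draft: str) -> tuple[float, str]:
    """Placeholder evaluator returning a quality score and textual feedback."""
    score = random.random()
    feedback = "" if score >= QUALITY_THRESHOLD else "Tighten the reasoning in step 2."
    return score, feedback

def refine(task: str) -> str:
    draft, feedback = "", ""
    for _ in range(MAX_ITERATIONS):
        draft = member_agent(task, feedback)            # members produce or revise work
        score, feedback = evaluation_supervisor(draft)  # supervisor scores the draft
        if score >= QUALITY_THRESHOLD:                  # stop once quality is sufficient
            break
    return draft

print(refine("Classify the moral scenario"))
```

In the full system this loop is nested: teams of agents sit below supervisors, which themselves report upward, so refinement happens at multiple levels of the hierarchy.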

Upon evaluation, researchers assessed TalkHier across multiple benchmarks to analyze its effectiveness. On the MMLU dataset, covering Moral Scenario, College Physics, Machine Learning, Formal Logic, and US Foreign Policy, TalkHier, built on GPT-4o, achieved the highest accuracy of 88.38%, surpassing AgentVerse (83.66%) and single-agent baselines such as ReAct-7@ (67.19%) and GPT-4o-7@ (71.15%), demonstrating the benefits of hierarchical refinement. On the WikiQA dataset, it outperformed baselines in open-domain question answering with a ROUGE-1 score of 0.3461 (+5.32%) and a BERTScore of 0.6079 (+3.30%), exceeding AutoGPT (0.3286 ROUGE-1, 0.5885 BERTScore). An ablation study showed that removing the evaluation supervisor or structured communication significantly reduced accuracy, confirming their importance. On the Camera dataset for ad text generation, TalkHier outperformed OKG by 17.63% across Faithfulness, Fluency, Attractiveness, and Character Count Violation, with human evaluations validating its multi-agent assessments. Although OpenAI-o1's internal architecture was not disclosed, TalkHier posted competitive MMLU scores and clearly beat it on WikiQA, showing flexibility across tasks and an advantage over majority voting and open-source multi-agent systems.

In the end, the proposed framework improved communication, reasoning, and coordination in LLM multi-agent systems by combining a structured protocol with hierarchical refinement, resulting in better performance across several benchmarks. Messages that bundle content, intermediate results, and background information ensured structured interactions without sacrificing heterogeneous agent feedback. Even with increased API expenses, TalkHier set a new benchmark for scalable, objective multi-agent cooperation. The methodology can serve as a baseline for subsequent research, guiding improvements in effective communication mechanisms and low-cost multi-agent interactions and ultimately advancing LLM-based cooperative systems.


Check out the Paper and GitHub Page. All credit for this research goes to the researchers of this project.



Divyesh is a consulting intern at Marktechpost. He is pursuing a BTech in Agricultural and Food Engineering from the Indian Institute of Technology, Kharagpur. He is a Data Science and Machine learning enthusiast who wants to integrate these leading technologies into the agricultural domain and solve challenges.




