CONClave: Enhancing Security and Trust in Cooperative Autonomous Vehicle Networks Cooperative Infrastructure Sensors Environments
The cooperative operation of autonomous vehicles can greatly improve road safety and efficiency. However, securing these systems against unauthorized participants poses a significant challenge. The problem goes beyond keeping outsiders out: malicious participants can intentionally disrupt cooperative applications, and faulty vehicles can unintentionally cause disruptions through errors. Detecting and preventing both kinds of disruption is essential for successful cooperation among autonomous vehicles. The challenge grows further when vehicles must reach consensus without a trusted third party equipped with independent sensors in every vehicle area network.

Existing state-of-the-art methods address authentication, consensus, and trust scoring either separately or only in limited subsets of cooperative scenarios. One approach is blockchain-based event logging, but it fails to address unauthorized participants and lacks mechanisms to prevent authenticated users from fabricating events. Another uses specialized machine-learning-based trust scoring, but lacks integration with consensus methods, limiting its potential benefits. Some researchers have attempted to tackle authentication and consensus simultaneously, but their approaches depend on application-specific assumptions, making them unsuitable for general cooperative scenarios.

Researchers from Arizona State University in Tempe, Arizona, USA, have introduced CONClave, an application-level network protocol designed for sensor networks that require reliable and trustworthy data in Cooperative Autonomous Vehicle (CAV) and Cooperative Infrastructure Sensor (CIS) environments. CONClave tightly couples authentication, consensus, and trust scoring, providing comprehensive security and reliability for cooperative perception in autonomous vehicles. It reliably prevents security flaws, detects even minor sensing faults, and enhances the robustness and accuracy of cooperative perception in CAVs while minimizing overhead.

CONClave uses a three-step process to achieve secure consensus and trust scoring for reliable cooperative driving:

  • First, it authenticates all participants using a novel scheme that employs homomorphic hashing, incorporates manufacturer and government entities, and allows peer-to-peer authentication without constant communication with a trusted roadside unit (RSU). 
  • Second, it uses a modified Bosco consensus protocol to reach agreement on the sensor values submitted by each participant, ensuring that communication faults do not manifest as errors in the output.
  • Finally, it applies a trust scoring technique to the resulting sensor input set, using parameterized sensor-pipeline accuracy values instead of camera confidence values.
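The three stages above can be sketched as a pipeline. This is a highly simplified, illustrative sketch, not the paper's protocol: the toy SHA-256 credential check stands in for the homomorphic-hashing scheme, the quorum filter stands in for the modified Bosco round, and all names and data structures (`Reading`, `registry`, `quorum`) are assumptions made for the example.

```python
# Illustrative sketch of CONClave's three-stage flow:
# authenticate -> reach consensus -> score trust.
import hashlib
from dataclasses import dataclass

@dataclass
class Reading:
    vehicle_id: str
    credential: str   # toy stand-in for a homomorphic-hash credential
    value: float      # toy stand-in for a shared sensor observation

def authenticate(reading: Reading, registry: dict) -> bool:
    """Stage 1: accept only participants whose credential matches the
    manufacturer/government registry (toy stand-in for homomorphic hashing)."""
    expected = hashlib.sha256(reading.vehicle_id.encode()).hexdigest()
    return registry.get(reading.vehicle_id) == expected == reading.credential

def consensus(values: list, quorum: int) -> list:
    """Stage 2: keep only values reported by at least `quorum` participants,
    so isolated communication faults do not reach the output set."""
    return [v for v in set(values) if values.count(v) >= quorum]

def trust_score(value: float, agreed: list, accuracy: float) -> float:
    """Stage 3: weight a participant by how close its report is to the agreed
    values, scaled by its sensor pipeline's parameterized accuracy."""
    if not agreed:
        return 0.0
    error = min(abs(value - a) for a in agreed)
    return accuracy / (1.0 + error)

# Usage: two legitimate vehicles agree; a forged participant is rejected.
cred = lambda vid: hashlib.sha256(vid.encode()).hexdigest()
registry = {"cav1": cred("cav1"), "cav2": cred("cav2")}
readings = [Reading("cav1", cred("cav1"), 10.0),
            Reading("cav2", cred("cav2"), 10.0),
            Reading("intruder", "forged", 99.0)]

trusted = [r for r in readings if authenticate(r, registry)]
agreed = consensus([r.value for r in trusted], quorum=2)
print(agreed)  # [10.0] -- the intruder never reaches the consensus round
```

The key property the sketch mirrors is the pipelining: each stage only ever sees the output of the previous one, so an unauthenticated participant cannot influence consensus, and consensus-rejected values cannot influence trust scores.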

The performance of CONClave was evaluated against the state-of-the-art trust scoring method, TruPercept, using fault and malicious injection scenarios on 1/10-scale model autonomous vehicles. CONClave demonstrated superior detection rates across all categories: 96.7% for sensor extrinsic faults, 83.5% for software faults, 67.3% for malicious injections and removals, and 100% for communication faults. Moreover, CONClave’s mean time to detection was 1.83 times faster on average, and up to 6.23 times faster in the best case, compared to TruPercept.

In conclusion, the researchers proposed CONClave, a method for securing cooperative perception-based applications in connected autonomous vehicles. It contains three key components: an authentication method, a consensus round, and a trust scoring method, all pipelined for real-time operation. CONClave demonstrated superior performance compared to the state-of-the-art method TruPercept, detecting a wider range of faults and errors, both malicious and unintentional, while operating at higher speeds. Future research will aim to extend CONClave to cover all cooperative driving situations, including those that involve trust scoring for path planning.


Check out the Paper. All credit for this research goes to the researchers of this project.


Sajjad Ansari is a final year undergraduate from IIT Kharagpur. As a Tech enthusiast, he delves into the practical applications of AI with a focus on understanding the impact of AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.





