The virtuous cycle of AI research

We recently caught up with Petar Veličković, a research scientist at DeepMind. Along with his co-authors, Petar is presenting his paper The CLRS Algorithmic Reasoning Benchmark at ICML 2022 in Baltimore, Maryland, USA.

My journey to DeepMind…

Throughout my undergraduate courses at the University of Cambridge, computers’ inability to skilfully play the game of Go was seen as clear evidence of the shortcomings of modern-day deep learning systems. I always wondered whether mastering such games would ever enter the realm of possibility.

However, in early 2016, just as I started my PhD in machine learning, that all changed. DeepMind took on one of the best Go players in the world for a challenge match, which I spent several sleepless nights watching. DeepMind won, producing ground-breaking gameplay (e.g. “Move 37”) in the process.

From that point on, I thought of DeepMind as a company that could make seemingly impossible things happen. So, I focused my efforts on, one day, joining the company. Shortly after submitting my PhD in early 2019, I began my journey as a research scientist at DeepMind!

My role…

My role is a virtuous cycle of learning, researching, communicating, and advising. I’m always actively trying to learn new things (most recently Category Theory, a fascinating way of studying computational structure), read relevant literature, and watch talks and seminars.

Then, using these learnings, I brainstorm with my teammates about how we can broaden this body of knowledge to positively impact the world. From these sessions, ideas are born, and we leverage a combination of theoretical analysis and programming to formulate and validate our hypotheses. If our methods bear fruit, we typically write a paper sharing the insights with the broader community.

Researching a result is not nearly as valuable without appropriately communicating it, and empowering others to effectively make use of it. Because of this, I spend a lot of time presenting our work at conferences like ICML, giving talks, and co-advising students. This often leads to forming new connections and uncovering novel scientific results to explore, setting the virtuous cycle in motion one more time!

At ICML…

We’re giving a spotlight presentation on our paper, The CLRS Algorithmic Reasoning Benchmark, which we hope will support and enrich efforts in the rapidly emerging area of neural algorithmic reasoning. In this research, we task graph neural networks with executing thirty diverse algorithms from the Introduction to Algorithms textbook.

Many recent research efforts seek to construct neural networks capable of executing algorithmic computation, primarily to endow them with reasoning capabilities – which neural networks typically lack. Critically, every one of these papers generates its own dataset, which makes it hard to track progress and raises the barrier to entry into the field.

The CLRS benchmark, with its readily exposed dataset generators and publicly available code, seeks to address these challenges. We’ve already seen a great level of enthusiasm from the community, and we hope to channel it even further during ICML.
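To give a flavour of what “readily exposed dataset generators” means in practice, here is a minimal sketch of pulling training samples for one algorithm through the publicly released clrs package. The function name, arguments and return values below follow my reading of the repository’s README and may differ from the current API, so treat this as an illustrative assumption rather than a definitive usage guide.

```python
# Minimal sketch of consuming the CLRS benchmark's dataset generators.
# Assumes the public `clrs` package (github.com/deepmind/clrs) exposes a
# `create_dataset` helper roughly as shown; exact names/arguments may differ.
import clrs

# Build a train split for one of the thirty CLRS algorithms, e.g. BFS.
train_ds, num_samples, spec = clrs.create_dataset(
    folder='/tmp/CLRS30',   # where generated samples are cached
    algorithm='bfs',        # any of the thirty textbook algorithms
    split='train',
    batch_size=32)

# Each element bundles graph-structured inputs, "hints" exposing the
# algorithm's intermediate state, and the expected outputs.
for i, feedback in enumerate(train_ds.as_numpy_iterator()):
    print(type(feedback).__name__, num_samples, list(spec)[:3])
    break
```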

The future of algorithmic reasoning…

The main dream of our research on algorithmic reasoning is to capture the computation of classical algorithms inside high-dimensional neural executors. This would then allow us to deploy these executors directly over raw or noisy data representations, and hence “apply the classical algorithm” to inputs it was never designed to run on.
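One common way to realise such a neural executor (a simplification I sketch here, not the exact architecture from the papers) is the encode-process-decode recipe: an encoder lifts raw inputs into a latent space, a graph-network processor imitates the algorithm’s steps by passing messages in that space, and a decoder reads the answer back out. In the toy sketch below, all names, shapes, the random weights and the max aggregation are illustrative assumptions.

```python
# Deliberately simplified encode-process-decode "executor", framework-free in
# NumPy. Real models are trained GNNs; everything here is an assumption chosen
# only to show the overall structure, not DeepMind's implementation.
import numpy as np

rng = np.random.default_rng(0)

def encode(raw_inputs, w_enc):
    """Lift raw (possibly noisy) node features into a high-dimensional latent space."""
    return np.tanh(raw_inputs @ w_enc)

def process(h, adj, w_msg):
    """One message-passing step: neighbours send messages, aggregated by max,
    loosely mirroring how many classical algorithms propagate information."""
    messages = np.tanh(h @ w_msg)                                # per-node messages
    # adj[i, j] = 1 if node j can send to node i; take the max incoming message
    gathered = np.where(adj[..., None] > 0, messages[None, :, :], -np.inf)
    return np.maximum(h, gathered.max(axis=1))

def decode(h, w_dec):
    """Read the algorithm's output back out of the latent space."""
    return h @ w_dec

# Toy graph: 4 nodes, 3 raw features, 8 latent dimensions.
x = rng.normal(size=(4, 3))
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
w_enc, w_msg, w_dec = (rng.normal(size=s) for s in [(3, 8), (8, 8), (8, 1)])

h = encode(x, w_enc)
for _ in range(3):      # unroll a few "algorithm steps" in latent space
    h = process(h, adj, w_msg)
print(decode(h, w_dec).ravel())
```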

What’s exciting is that this method has the potential to enable data-efficient reinforcement learning. Reinforcement learning is packed with examples of strong classical algorithms, but most of them can’t be applied in standard environments (such as Atari), given that they require access to a wealth of privileged information. Our blueprint would make this type of application possible by capturing the computation of these algorithms inside neural executors, after which they can be directly deployed over an agent’s internal representations. We even have a working prototype that was published at NeurIPS 2021. I can’t wait to see what comes next!

I’m looking forward to…

I’m looking forward to the ICML Workshop on Human-Machine Collaboration and Teaming, a topic close to my heart. Fundamentally, I believe that the greatest applications of AI will come about through synergy with human domain experts. This approach is also very much in line with our recent work on empowering the intuition of pure mathematicians using AI, which was featured on the cover of Nature late last year.

The workshop organisers invited me to join a panel discussion on the broader implications of these efforts. I’ll be speaking alongside a fascinating group of co-panellists, including Sir Tim Gowers, whom I admired during my undergraduate studies at Trinity College, Cambridge. Needless to say, I’m really excited about this panel!

Looking ahead…

For me, major conferences like ICML represent a moment to pause and reflect on diversity and inclusion in our field. While hybrid and virtual conference formats make events accessible to more people than ever before, there’s much more we need to do to make AI a diverse, equitable, and inclusive field. AI-related interventions will impact us all, and we need to make sure that underrepresented communities remain an important part of the conversation.

This is exactly why I’m teaching a course on Geometric Deep Learning at the African Master’s in Machine Intelligence (AMMI) – the topic of a proto-book I recently co-authored. AMMI offers top-tier machine learning tuition to Africa’s brightest emerging researchers, building a healthy ecosystem of AI practitioners within the region. I’m so happy to have recently met several AMMI students who have gone on to join DeepMind for internship positions.

I’m also incredibly passionate about outreach opportunities in the Eastern European region, where I come from and which gave me the scientific grounding and curiosity necessary to master artificial intelligence concepts. The Eastern European Machine Learning (EEML) community is particularly impressive – through its activities, aspiring students and practitioners in the region are connected with world-class researchers and provided with invaluable career advice. This year, I helped bring EEML to my hometown of Belgrade, as one of the lead organisers of the EEML Serbian Machine Learning Workshop. I hope this is only the first in a series of events that strengthen the local AI community and empower the region’s future AI leaders.


