Equivariant Labs

We believe understanding AI systems is one of the defining scientific and societal challenges of our time.

We focus on the inner workings of AI systems, supporting their safe and ethical development through research, education, and communication.

Research. Educate.
Communicate.

At the intersection of mathematics, machine learning, and AI safety — building a deeper understanding of AI systems and their societal implications.

01

Consultancy

Technical research consulting in AI and AI safety, drawing on deep expertise in mathematics and machine learning.

02

Education

Workshops and lectures designed to explain how AI systems work, making complex research accessible to diverse audiences.

03

Communication

Translating the latest AI research into clear, accessible insights for policymakers, journalists, and the public.


Shivam Arora

Founder & Lead Researcher

Equivariant Labs is led by Shivam Arora, who brings expertise in mathematics and machine learning. The lab works collaboratively with contributors from the research, education, and governance communities to build a deeper collective understanding of AI systems and their societal implications.


Building the AI Safety Community

Facilitating courses, mentoring projects, and supporting the next generation of AI safety researchers.

BlueDot Impact

Technical AI Safety Course

Facilitated 30+ sessions covering evaluations, mechanistic interpretability, and AI control research. Mentored approximately 14 projects in technical AI safety through the AI Safety Project Course.

30+ sessions · ~14 projects mentored
Toronto AI Safety
University of Toronto

AI Alignment Fellowship

Facilitating the AI Alignment Fellowship through the Toronto AI Safety Student Initiative — an 8-week program designed to introduce students to AI safety and alignment research.

8-week fellowship

Research & Perspectives

Exploring the nuances of AI safety research methodology and practice.

The Projection Problem: Two Pitfalls in AI Safety Research

An examination of two common pitfalls: treating current LLMs as proto-superintelligent AI, and conflating product safety work with existential risk research. Researchers should be precise about whether they are doing product safety or x-risk research.

Read article

Stay in Touch

YouTube

Videos on things we find interesting in AI and Mathematics.

Equivariant Labs | AI
Equivariant Labs | Mathematics

Newsletter

Weekly updates on AI news, research, advancements, safety, and policy.

LinkedIn Newsletter
Substack