HΩ 2025 Review
What is HΩ?
HΩ (Horizon Omega) is a Canadian non-profit whose mission is to reduce risks from advanced AI through collaboration, research, and education.
What we’ve been doing so far:
Convene technical research, e.g. the Guaranteed Safe AI Seminars, supporting work towards high-assurance safety guarantees
Host cross-disciplinary collaboration, e.g. the AI Safety Unconference (formerly VAISU), fostering participatory and inclusive collaboration
Build community infrastructure, e.g. local event series and a hub to help people share knowledge, orient themselves, and find each other
Share resources openly, e.g. recordings that make safety work easier to access and join
2025 in review
In 2025, HΩ was volunteer-run and focused on maintaining and expanding the following set of projects.
Guaranteed Safe AI Seminars
The seminars aim to accelerate the Towards Guaranteed Safe AI research direction. The series now has 289 subscribers. We had the following talks, with a total of 249 RSVPs:
Engineering Rational Cooperative AI via Inverse Planning and Probabilistic Programming with Tan Zhi Xuan – Assistant Professor at the National University of Singapore & IHPC; PhD from the MIT Probabilistic Computing Project and Computational Cognitive Science lab
Using PDDL Planning to Ensure Safety in LLM-based Agents with Agustín Martinez Suñé – PhD in Computer Science; postdoc at OXCAV, University of Oxford
When AI met AR with Clark Barrett – Director, Stanford Center for Automated Reasoning and co-director, Stanford Center for AI Safety
Safe Learning Under Irreversible Dynamics via Asking for Help with Benjamin Plaut – Postdoc at CHAI studying guaranteed-safe AI
Model-Based Soft Maximization of Suitable Metrics of Long-Term Human Power with Jobst Heitzig – Senior Mathematician & AI Safety Designer
Towards Safe and Hallucination-Free Coding AIs with GasStationManager – Independent Researcher
All event recordings are available on YouTube.
Montréal ecosystem
We launched aisafetymontreal.org in July as an info hub listing AI safety organizations, efforts, and events in Montréal.
We launched the Montréal AI safety, ethics, governance event series in September. Since then, we have run 18 talk/workshop events and one hackathon, with a total of 218 registered participants. We featured local experts: researchers at Mila and McGill, independent researchers, and a social impact initiative founder. We also ran 8 coworking sessions.
Limits to Control workshop
At the request of collaborators, we facilitated the Limits to Control workshop, held June 11–13 at the University of Louisville. Participants included Roman Yampolskiy, Anders Sandberg, and Forrest Landry, with expertise spanning AI safety, control impossibility and uncontainability, systems philosophy, and mathematical logic.
The participants wrote a public statement arguing that as AI systems scale towards full autonomy, reliable “control” (via technical safeguards, oversight, or regulation) eventually becomes impossible in principle, so we must urgently map unknown controllability thresholds and rethink governance to avoid permanently losing human self-determination.
AI Safety Events & Training Substack
We started the AI Safety Events & Training Substack in late 2023 to provide a streamlined way to stay up to date on upcoming AI safety events and training opportunities globally. It grew to 1000+ subscribers. As of the fall, the project continues under the stewardship of the Alignment Ecosystem Development group.
Plans for 2026
We are committed to realizing these projects:
AI Safety Unconference (as a hybrid continuation of VAISU, the Virtual AI Safety Unconference series)
Montréal AI safety, ethics, governance events and aisafetymontreal.org hub
Beyond these, we are considering potential expansions along our mission, depending on funding and capacity:
Canadian AI governance projects, e.g. pushing for the TechStat StatsCan program to cover measurement of AI governance, risk management, and incident management, and hosting coordination meetings for Canadian AI safety players
Expanding the scope of Montréal AI safety and Canadian community building
A technical workgroup pursuing a research direction we are excited about
More write-ups and blog posts
Participate & support
HΩ can do more towards its mission with your involvement.
Come to the events we host. Share your work. Start constructive conversations.
The realization of our projects depends on donations. We secured funding for the GSAI Seminars in 2026 (thank you!), but our other projects remain unfunded. Donations would be used in the following ways (non-exhaustive examples):
AI Safety Unconference: staff to run the event, online tools, building a custom web app to support the event series, support for local sites (venue and food).
Montréal events and community: organizer time, venues, food, coworking sessions
HΩ as an organization: paid staff time, to accomplish more towards the mission
To donate, visit horizonomega.notion.site/donation.
Interested in collaborating with the organization and its projects? Reach out via team@horizonomega.org.
Acknowledgements
Thanks to all participants in our events.
Thanks to all speakers, who came prepared with awesome presentations.
Thanks to Étienne, Louise, Adnan, Pinar, Eda, and Kyle for volunteering.
Thanks to the contributors to the GSAI Seminars crowdfunding, and to all who donated to the Montréal AI safety, ethics, governance events.
Onward to 2026!

