TAICO Tech Week 2025 Meetup @Adaptavist


Table of Contents
Table of Contents
We are officially part of Toronto Tech Week 2025, a weeklong citywide collection of events to connect and celebrate the builders. June 23 → June 27, 2025.
The TAICO team is proud to announce our next meetup at the Adaptavist office in Toronto. Much thanks to Adaptavist for hosting!
NOTE: You must register through the lu.ma link to attend as this event is part of the Toronto Tech Week.
Registration and Event Details
Please note that we are using lu.ma for event registration and you MUST register through that link to attend. Seating is limited so please register early. Thank you and see you there!
Event registration link - lu.ma
Our goal is to bring artificial intelligence and cybersecurity together. To do that, we need to explore what’s happening in AI and what’s happening in cybersecurity, and where the two intersect and collide. We’re also working to understand how we solve problems in these areas, what that looks like, and how Canada can and will participate.
With that in mind, we’re pleased to announce our speaker and the agenda for our next meeting!
Sponsors
Thank you to our food and beverage sponsors this month, Zenity and AutoAlign! And thanks to our host, Adaptavist!
Agenda
- 👋 Welcome and introductions
- 🙏 Thank you to our amazing hosts and sponsors! 🎉
- 🎙️ Speakers
🎤 Speaker #1: Dan Adamson, CEO of AutoAlign
Title:
Agentic AI Safety for Enterprises
Abstract:
Join AutoAlign’s CEO, Dan Adamson, to explore how to keep agentic AI systems safe across a variety of enterprise applications and frameworks.
We’ll look at the additional risks that agentic systems impose, and how traditional AI safety approaches like guardrails are not sufficient. We’ll then look at how an AI supervisor model can help ensure compliance, security and robustness. This session will highlight cutting-edge advancements in Agentic AI and look at how to keep common agentic frameworks safe.
Peek into the future of enterprise Agentic AI with Dan Adamson as he shares how two decades of experience in AI security led to the development of supervisor systems—ensuring agentic AI systems remain safe and reliable.
About Dan Adamson:
Dan Adamson is the CEO of AutoAlign and a serial entrepreneur with a proven track record of inventing, growing teams, releasing products, with a focus on R&D and innovation. He previously co-founded Armilla, developing AI risk transfer solutions, and founded OutsideIQ, deploying AI-based AML and anti-fraud solutions, to be acquired by Exiger.
Prior to founding OutsideIQ, Adamson was a former Microsoft search expert and a technical lead for Bing and the Health Solutions Search teams. He also served as Chief Architect at Medstory, a vertical search start-up that was acquired by Microsoft. Adamson holds several search algorithm and AI patents, in addition to numerous academic awards and holding an M.Sc. from U.C. Berkeley and B.Sc. from McGill.
🎤 Speaker #2: Amir Kiani
Title:
Do Reasoning Models Really Reason?
Abstract:
In this talk, we’ll survey recent research at the intersection of AI and Philosophy to critically assess whether large language models that appear to reason are genuinely engaging in reasoning or merely simulating it. We’ll examine some key philosophical and technical arguments and explore what this means for our understanding of intelligence, agency, and the nature of thought itself.
About Amir Kiani:
Amir Kiani holds a PhD in Philosophy, with a focus on mathematical philosophy and logic, metaphysics, and the philosophy of AI. During the day, he works as a senior analyst at the Canadian Institute for Health Information (CIHI), where he applies analytics and machine learning to large-scale healthcare data. By night, he researches the philosophical foundations and implications of AI—or attends to his cats, depending on which is more demanding.
Amir has delivered several talks on the crossovers of AI and Philosophy, including recent presentations on whether large language models possess minds (at Microsoft’s Vancouver headquarters and the Mindstone AI community in Toronto), and on AI and corporate transparency (at the AI and Humanities Lab at the University of Toronto).
🎤 Speaker #3: Steven Harper, Zenity
Title:
Your Copilot is my insider
Abstract:
This is a shortened version of the RSA Conference Keynote presentation by Zenity CTO and Co-founder, Michael Bargury, and presents the research that he and his research team have done around agent security. A copilot is susceptible to Indirect Prompt injection, allowing for RCE, or remote copilot execution. Similar to remote code execution, remote copilot execution is more dangerous and harder to detect and defend, due to the nature of AI and specifically Agentic AI. It is also relatively easy to create and demonstrate a proof of concept.
About Steven Harper:
Steven Harper (no, not that one) is a veteran in the Internet and Cyber Security industry. He began his career at BBN, the builders of the ARPANET / DARPANET, precursors to the Internet we all know today. Having worked at numerous start ups in technologies ranging from the data center to advanced malware, Identity to DDoS, he is now at Zenity, securing Agentic AI. He is a member in the US of InfraGard, a public Private Partnership with the FBI, tasked with protecting critical infrastructure from both physical and cyber attacks. He has been involved with several high profile cyber incidents and has testified for the FBI and U.S. Government. He resides in Boston Massachusetts with his wife and Labrador Retriever, but loves Canada, and is in Toronto on a regular basis. For the record he is NOT a Bruins fan.
🎤 Speaker #4: Kosseila HD, CloudThrill
Title:
local LLM inference on Cloud managed Kubernetes your path to 100% privacy
Abstract:
In a world accustomed to AI assistants using pricey proprietary models from companies like OpenAI, what if your users require full control over the personal information exchanged with AI models? is there a way ? The answer is yes! Join us to explore how to build a web-based private chatbot powered by Local LLM inference straight out of ollama pods, ensuring 100% data privacy for containerized workloads in Kubernetes clusters powering your favorite chatbot and IDE.
About Kosseila HD:
Founder at CloudThrill, Kosseila has witnessed significant transformations in technology during his 18 years of Consulting across several industries—from on-site data center engineering and server management to the modern era of DevOps, AI inference and cloud-native solutions. His current mission at CloudThrill, is providing expertise in Cloud ops and local LLM deployment , focusing on privacy, control, and cost-effectiveness through open-source models and Kubernetes-deployed inference engines (Ollama/vLLM).
- Lightning Talks - 5 to 10 minutes long
- Curtis Collicutt - Demo of Raillock - Raillock is an open source project that “locks” MCP server tool descriptions with cryptographic signatures. It can be used as a CLI or a Python library. It can be imported into AI Agents that are MCP clients and help them protect themselves from malicious MCP servers and other MCP vulnerabilities.
- You? - Please reach out if you’d like to do a lightning talk or demo
Please reach out to us if you’d like to present at the meetup. We are looking for people to talk about what they are working on, what they are building and learning, and are open to any level of experience and technical depth. Whether you are a beginner or an expert, we want to hear from you! We’re all just out here building and learning.
- 👋 See You There!
Thanks, and we look forward to seeing you at the meetup!