Having Optimism in the Age of AI
Chris, a self-described GRC nerd working at Wistic, delivers a talk that is less about AI technology itself and more about the mindset we should bring to governing it. He opens with heavy disclaimers and memes, setting a casual tone before diving into his central argument: we have solved hard technology problems before, and we will solve this one too.
The Historical Parallel
The strongest portion of the talk walks through technology history. Chris shows ARPANET circa 1969, Margaret Hamilton's handwritten Apollo 11 code, and early personal computers to argue that today's AI challenges are not unprecedented in scope. Cloud computing went through a similar cycle of fear, regulation, and eventual normalization. The message is clear: incremental progress beats paralysis.
The Four Management Responses
Chris outlines four ways organizations respond to AI risk: the fear-based "no AI ever" stance, the reckless "we'll figure out security later" approach, "extreme risk acceptance," and the pragmatic fourth option he advocates: acknowledging the problem, proposing solutions, and taking small steps forward even when "good" is not yet defined. His framing of "extreme risk acceptance" as code for "we'll never talk about this again" lands well.
GRC as a Universal Skill
A recurring theme is that GRC is not a department but a skill everyone in security should develop. Chris argues that speaking the language of risk management makes you more effective regardless of your technical specialty. This is practical advice that resonates, though it could have been delivered more concisely.
The Jurassic Park Rule and Practical Steps
Chris invokes the Jurassic Park principle: ask whether you should do something, not just whether you could. He points to existing frameworks from NIST, ISO, and the EU AI Act as starting points rather than reinventing the wheel. His practical advice boils down to: start somewhere, use existing resources, get comfortable through doing, and expand your comfort zone incrementally.
Who Should Watch
This talk is best suited for GRC professionals and security practitioners who feel overwhelmed by AI governance responsibilities. If you want technical AI security content, look elsewhere. If you need a pep talk about tackling AI policy one step at a time, this delivers.