The theme for 2026 will be “Reimagining Resilience: Empowering Local Communities in a Time of Uncertain Federal Support.” How can state and local actors better prepare for, withstand, recover from, and adapt to increasingly severe natural disasters as federal funds are cut or redirected? Initial focus areas include Helene-impacted regions and Puerto Rico, with possible extensions elsewhere, such as Maui and Los Angeles.
Laboratory5’s Applied Futures Lab will be represented at this event with our partners at Content Evolution. If you are in DC April 12–14, swing by and say hi to Kevin Clark, President of Content Evolution.
The discussion is loud, fast, and often technical. New tools appear every week. Headlines promise transformation, disruption, automation, acceleration.
But the deeper work is quieter.
It lives in how we think.
Over the past few years, I’ve been studying how people interact with intelligent systems. The same technology lands in different hands and produces completely different outcomes. Some people feel overwhelmed. Others find creative momentum. Some freeze. Others experiment.
If you’ve been paying attention to the conversation around AI, you’ve probably noticed something strange.
Artificial intelligence is often described as a technological shift. In reality, it is also reshaping how we think, decide, collaborate, and imagine consequences across time.
When systems can generate ideas, synthesize information, and participate in creative processes, our relationship with knowledge changes. Our relationship with uncertainty changes. Even our relationship with time shifts.
The challenge is not simply learning new tools.
The challenge is learning how to think clearly inside systems that keep evolving.
The Future AI Mindset workshop was designed to address that challenge. It translates the ideas from the book into a structured thinking practice.
This approach is grounded in my work as an applied futurist and in the foresight methods I have helped co-create over the years: Threatcasting, Futurecasting, and Applied Experiential Futures. These methods were developed in environments where imagination must be disciplined and decisions carry real consequences.
Enterprise strategy. Government planning. Defense scenarios. Higher education. Innovation labs.
The goal in those environments is not prediction. It is orientation. Helping people think clearly in the presence of uncertainty.
The workshop brings that same discipline into our relationship with AI.
What We Explore Together
The six-week program is designed as a live thinking lab.
Each week we explore a different dimension of the Future AI Mindset framework, combining discussion, reflection, and practical exercises drawn directly from the book.
Participants will examine questions such as:
How does AI reshape cognition and creativity? What does collaboration look like when some collaborators are intelligent systems? How do we maintain judgment when machines produce convincing outputs? How do we think across time when technological change accelerates?
Instead of rushing toward answers, we practice structured curiosity.
We explore scenarios. We examine assumptions. We learn to hold multiple possible futures at once without collapsing into certainty.
These are skills futures thinkers have practiced for decades, and they are becoming increasingly valuable in AI-integrated environments.
What You Will Build
By the end of the six weeks, participants will leave with more than notes or ideas: a set of practical tools they can continue using.
These include:
A personal Future AI Mindset statement that clarifies how you want to engage with intelligent systems.
An Applied Futures Map that connects present decisions with longer-term possibilities.
A backcasting pathway, working from a future you want toward actions you can take today.
A decision anchor that helps guide choices in moments of uncertainty.
And perhaps most important, a personal thinking practice that can travel with you into your work, teams, and projects.
These artifacts are designed to remain useful long after the workshop ends.
Who This Is For
This workshop is designed for people who sense that AI is not just changing technology. It is changing the terrain we think and act within.
Leaders making strategic decisions without complete information.
Educators preparing students for futures we cannot fully describe yet.
Researchers, designers, and builders working inside emerging systems.
Curious professionals who want to remain intentional rather than reactive.
If you have ever felt the acceleration and thought, I need to be thoughtful about how I navigate this moment, you are exactly the kind of person this workshop was designed for.
The Structure
The program runs for six weeks and meets live once a week.
Start date: March 30, 2026
Session length: One hour each week
Format: Live virtual sessions with recordings available
And if someone comes to mind who might enjoy spending an hour a week thinking about the future together, feel free to share this workshop with them.
The future is not something we simply wait for.
It is something we learn to think into.
Cyndi Coon is a time traveler and rule-bender, nerding out for good using data, science, and curious questions as an Applied Futurist, author, creative, ecosystem builder, facilitator, producer, researcher, storyteller, and publisher for governments, the military, higher education, private partnerships, enterprise, and industry. Cyndi is the Founder and Principal Futurist at Applied Futures Lab, Founder of Laboratory5, Co-founder of Arizona State University’s Threatcasting Lab, and Co-founder at Threatcasting.ai. Cyndi is the author of Future AI Mindset, co-author of Threatcasting (2021) and Futurecasting (2026), and the author of numerous reports, articles, and book chapters. She is also Founder and Publisher at Turkey Hill Press.
If you’ve felt the AI acceleration and thought, “I need to be intentional here,” I wrote this book for you.
The noise around AI is loud. The deeper question is quieter: What cognitive habits allow you to stay steady inside systems that move faster than your instincts?
That question has shaped my work for the past two years. I’ve watched people across sectors engage the same generative systems and walk away with wildly different results. Not because of access. Not because of intelligence. Because of mindset.
This is not a guide to tools. It won’t teach you how to craft the perfect prompt or optimize a workflow. It’s about the orientation required to work wisely inside complex, uncertain systems, including AI.
At the center are eight interacting lenses: collaboration, confidence, positivity, preparedness, balance, specificity, open-mindedness, and patience. On their own, they sound simple. Together, they form a durable posture toward the future.
Collaboration without specificity is just pleasant agreement. Confidence without balance moves fast in the wrong direction. Patience without openness hardens into rigidity.
Used together, they create traction.
The book also includes the practical methods I’ve used for years in applied foresight work: backcasting from a desired future to identify what must be true today, scenario planning to hold multiple plausible futures at once, and Applied Futures Mapping to visually connect present decisions to long-term consequences. There’s even a time-travel visualization exercise you can record in your own voice to reconnect with the future you’re working toward.
This framework isn’t theoretical. I used it to build a curriculum for the U.S. Department of Energy, and I’m currently delivering the program at the National Labs.
Beginning March 30, I’ll also be offering a public six-week online workshop based on the book.
We are in a moment where our tools amplify both brilliance and blind spots at scale. The difference between reacting and shaping what comes next has less to do with technical fluency and more to do with cognitive discipline.
The Future AI Mindset is an invitation to practice that discipline.