What I Learned
AI is no longer a future concept; it's actively shaping how people discover digital products, make decisions, and interact with them. As someone interested in thoughtfully implementing AI in real products, I recently completed UXcel's AI Fundamentals for UX certification to better understand what good UX looks like in an AI-first world.
What stood out most wasn’t the technology itself, but the consistent focus on designing AI around human intuition, trust, and control.
TL;DR
UXcel’s AI Fundamentals for UX certification reinforced that great AI experiences are built on clear mental models, intentional trust-building, and strong user control. AI UX isn’t about flashy features — it’s about transparency, responsibility, and designing systems that feel intuitive and human-centered.
Designing AI Starts with Mental Models
Users don’t think in terms of models or probabilities — they think in expectations. One of the strongest takeaways was how important it is to design AI experiences that align with users’ mental models.
This means:
• Setting clear expectations about what AI can and can’t do
• Choosing AI use cases that genuinely add value
• Avoiding “magic” behavior that feels unpredictable or misleading
When AI behaves in understandable ways, trust has a chance to form.
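To make that concrete: one lightweight way to set expectations is to declare an AI feature's scope as data and render it as plain-language first-run copy. Below is a minimal TypeScript sketch of that pattern. The names (AICapabilityManifest, expectationCopy) and the Smart Reply example are my own illustration, not anything from the course.

```typescript
// Hypothetical shape for declaring an AI feature's scope up front.
interface AICapabilityManifest {
  featureName: string;
  canDo: string[];        // capabilities worth advertising
  cannotDo: string[];     // limitations stated plainly
  confidenceNote: string; // e.g. "Suggestions may be inaccurate"
}

// Render expectation-setting copy for a first-run or onboarding screen.
function expectationCopy(m: AICapabilityManifest): string {
  return [
    `${m.featureName} can: ${m.canDo.join(", ")}.`,
    `It can't: ${m.cannotDo.join(", ")}.`,
    m.confidenceNote,
  ].join("\n");
}

const manifest: AICapabilityManifest = {
  featureName: "Smart Reply",
  canDo: ["suggest short responses based on the current thread"],
  cannotDo: ["send messages on its own", "read other conversations"],
  confidenceNote: "Suggestions are drafts; review before sending.",
};

console.log(expectationCopy(manifest));
```

Treating capabilities as explicit data makes it harder for "magic" behavior to slip in undocumented.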
Trust Is a Designed Experience
Trust in AI doesn’t happen by default — it’s intentionally built. The course emphasized designing interactions that make AI feel explainable and dependable, not mysterious.
Key ideas included:
• Making AI actions visible and understandable
• Providing context for recommendations or outputs
• Designing feedback loops so users can guide or correct the system
Transparency isn’t about exposing technical complexity — it’s about helping users feel informed and confident.
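Here's one way the feedback-loop idea might translate into product code: every suggestion carries a plain-language reason, and the user's response is captured so the system can adapt. This is a rough TypeScript sketch; the types and the recordFeedback helper are hypothetical, not a real API.

```typescript
type Feedback = "accepted" | "rejected" | "edited";

// A recommendation that carries its own context.
interface AISuggestion {
  id: string;
  text: string;
  reason: string; // plain-language context, shown next to the suggestion
}

// In a real product this would feed an analytics or tuning pipeline;
// here we simply record it locally.
const feedbackLog: { suggestionId: string; feedback: Feedback }[] = [];

function recordFeedback(s: AISuggestion, feedback: Feedback): void {
  feedbackLog.push({ suggestionId: s.id, feedback });
}

const suggestion: AISuggestion = {
  id: "sug-42",
  text: "Reorder oat milk?",
  reason: "You've ordered oat milk in each of your last three grocery runs.",
};

// Show the reason alongside the suggestion, then capture the response.
console.log(`${suggestion.text} (${suggestion.reason})`);
recordFeedback(suggestion, "rejected"); // a "no" is signal, not failure
```

The mechanics matter less than the stance: rejection is treated as input the system learns from, not an error state.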
User Control Is Essential
A recurring theme throughout the course was user agency. Responsible AI experiences give users meaningful control over how and when AI is used.
This includes:
• Clear opt-in moments
• Customization and adjustable autonomy
• Safety nets like undo actions and clear boundaries
When users feel in control, AI feels collaborative instead of intrusive.
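As a final illustration, adjustable autonomy and undo can be modeled as a user setting that gates every AI action, plus an undo stack that gives each action an escape hatch. A minimal sketch under those assumptions; AutonomyLevel, runWithUndo, and the email example are invented for illustration.

```typescript
type AutonomyLevel = "off" | "suggest" | "auto";

interface Settings {
  aiAutonomy: AutonomyLevel; // user-adjustable; default to least autonomous
}

interface UndoableAction {
  description: string;
  apply: () => void;
  undo: () => void;
}

const undoStack: UndoableAction[] = [];

function runWithUndo(settings: Settings, action: UndoableAction): void {
  if (settings.aiAutonomy === "off") return; // respect the opt-out
  action.apply();
  undoStack.push(action); // every AI action leaves an escape hatch
}

function undoLast(): void {
  const last = undoStack.pop();
  if (last) last.undo();
}

// Usage: the AI archives an email, and the user can immediately reverse it.
let folder = "inbox";
runWithUndo(
  { aiAutonomy: "suggest" },
  {
    description: "Move newsletter to Archive",
    apply: () => { folder = "archive"; },
    undo: () => { folder = "inbox"; },
  },
);
undoLast(); // folder is back to "inbox"
```

The design choice worth noting is that undo is part of the action's contract, not an afterthought bolted onto the UI.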
A Note on the Instructor
The course is taught by Dr. Slava Polonski, a UX Researcher at Google who has spent years studying how people make decisions in complex, high-stakes moments. He’s also contributed to Google’s People + AI Guidebook and has taught these principles to teams at companies like Spotify, as well as at design conferences around the world. That depth of real-world experience shows throughout the course and grounds the material in practical, human-centered thinking.