Core Principles for Building AI That Works
1.
Put humans first
Human-driven, not technology-driven.
Start with what people actually need, not what your model can do.
If the benefits don’t clearly outweigh foreseeable risks, don’t build it.
2.
Keep people in control
AI should amplify human judgment, creativity, and intentionality, not replace or obscure them.
Users should drive, AI should assist.
Give people the power to constrain, override, and guide the system. They need to understand what's happening and stay in the driver's seat, not become passengers watching their tools make decisions for them.
Offer meaningful controls, confirmations for high-impact actions, easy undo, and well-lit exits. Default to human-in-the-loop for consequential outcomes.
Keep humans in the loop, preserve control, and make the system’s autonomy transparent and understandable.
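As a rough sketch of what "default to human-in-the-loop for consequential outcomes" can look like in practice (the Action and HumanInTheLoopRunner names below are illustrative, not from any particular framework):

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    """A proposed AI action; field names are illustrative only."""
    description: str
    impact: str                      # "low" or "high"
    apply: Callable[[], None]
    undo: Callable[[], None]

class HumanInTheLoopRunner:
    def __init__(self, confirm: Callable[[str], bool]):
        self.confirm = confirm       # e.g. a UI confirmation dialog
        self.history: List[Action] = []

    def run(self, action: Action) -> bool:
        # High-impact actions always wait for explicit human approval.
        if action.impact == "high" and not self.confirm(action.description):
            return False             # the user stays in the driver's seat
        action.apply()
        self.history.append(action)  # keep everything reversible
        return True

    def undo_last(self) -> None:
        # Easy undo: roll back the most recent action.
        if self.history:
            self.history.pop().undo()
```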
3.
Open the black box
Users deserve (and need) explanations they can internalize. The how and why behind AI decisions must be surfaced, in digestible, context‑sensitive form.
Explain how the AI works, what data it uses, and why it produces certain outputs. Make your confidence levels visible. If you're uncertain, say so. If you're wrong, own it.
This builds trust, reduces fear, supports accountability, and helps detect errors or biases.
Explain what’s happening (in plain language), expose confidence/uncertainty, show key factors or evidence where feasible, and teach the user how to get better results.
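One way to make confidence visible and uncertainty actionable, sketched with an assumed 0.6 threshold and invented field names:

```python
from dataclasses import dataclass

@dataclass
class ExplainedAnswer:
    text: str
    confidence: float      # 0.0-1.0, from the model or a calibration layer
    evidence: list[str]    # key factors or sources to show the user

def present(answer: ExplainedAnswer, low_confidence_threshold: float = 0.6) -> str:
    """Render an answer with its uncertainty stated in plain language."""
    lines = [answer.text]
    if answer.confidence < low_confidence_threshold:
        lines.append("I'm not confident about this one; please double-check it.")
    if answer.evidence:
        lines.append("Based on: " + "; ".join(answer.evidence))
    return "\n".join(lines)
```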
4.
Design for the mess
AI outputs will be imperfect, inconsistent, and sometimes flat-out wrong. Build for that reality.
Help users spot errors, provide ways to recover gracefully, and make uncertainty actionable.
Confident-sounding bullshit is still bullshit.
5.
Calibrate through honesty
Set realistic expectations upfront. Underpromise and overdeliver. Show users your limitations before they discover them the hard way.
State clearly what the system can and cannot do, typical failure modes, and when it needs human review. Scope tightly before you scale.
Trust builds slowly through consistent honesty, not through overselling capabilities.
Be upfront about limits, uncertainties, and the “guardrails” in place.
Clearly define who is responsible when things go wrong.
Enable external audits, logging, redress processes, and mechanisms for recourse.
Assume misrecognitions, hallucinations, and edge cases. Own errors, show recovery paths, and log/root-cause them to improve the model and the UX.
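A hedged sketch of that own-the-error loop, where generate and validate stand in for your model call and whatever grounding or schema check you use:

```python
import logging
from typing import Callable, Optional

logger = logging.getLogger("ai_errors")

def answer_with_fallback(
    prompt: str,
    generate: Callable[[str], str],   # the model call (placeholder)
    validate: Callable[[str], bool],  # e.g. a grounding or schema check
    max_attempts: int = 2,
) -> Optional[str]:
    """Try, check, retry once, then fail honestly instead of bluffing."""
    for attempt in range(max_attempts):
        try:
            draft = generate(prompt)
            if validate(draft):
                return draft
            logger.warning("validation failed", extra={"attempt": attempt})
        except Exception:
            logger.exception("generation error")  # keep evidence for root-cause work
    # Honest fallback: admit the failure and hand control back to the user.
    return None
```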
6.
Make models learnable
Help users understand how to work with your AI effectively. Teach them about generative variability, show them good examples, and build on patterns they already know.
Provide guidance on how to best use the AI system.
Learn how the user thinks about the task and the AI's role.
The best AI doesn't require a PhD to use.
7.
Support co-creation, not dictation
AI should collaborate with users, not replace them. Enable iterative refinement, provide meaningful controls, and let people shape outputs to meet their actual needs. The best results come from humans and AI working together.
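One possible shape for that collaboration, sketched with hypothetical draft, revise, and get_feedback callables rather than any specific library:

```python
from typing import Callable

def co_create(
    brief: str,
    draft: Callable[[str], str],          # initial generation (placeholder)
    revise: Callable[[str, str], str],    # (current text, feedback) -> new text
    get_feedback: Callable[[str], str],   # ask the human; empty string means "accept"
    max_rounds: int = 5,
) -> str:
    """The human steers every round; the AI proposes, it never decides."""
    text = draft(brief)
    for _ in range(max_rounds):
        feedback = get_feedback(text)
        if not feedback:    # the user is satisfied
            break
        text = revise(text, feedback)
    return text
```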
8.
Protect what matters
User data belongs to users. Be clear about what you collect and why. Test and monitor for harms—bias, toxicity, misinformation. The potential to cause damage is real, so treat it seriously. Privacy and security aren't optional.
Engineer for robustness, monitored operation, and safe failure. Detect and handle uncertainty; degrade gracefully with honest fallbacks.
Collect the least you need, obtain meaningful consent, protect data end-to-end throughout its lifecycle, and honor data provenance and usage limits.
Embed privacy as a first-class concern from the start (not patched on later).
Let users inspect, control, and delete their data where possible.
Define success and safety metrics before launch (task success, harm rates, false positive/negative rates, user trust), watch them after launch, and tie guardrails to hard thresholds and kill-switches.
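A rough illustration of tying guardrails to hard thresholds and a kill-switch; the metric names and numbers here are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class SafetyThresholds:
    max_harm_rate: float = 0.01            # e.g. rate of flagged outputs
    max_false_positive_rate: float = 0.05
    min_task_success: float = 0.80

def should_trip_kill_switch(metrics: dict[str, float], t: SafetyThresholds) -> bool:
    """Return True when live metrics breach any hard threshold."""
    return (
        metrics.get("harm_rate", 0.0) > t.max_harm_rate
        or metrics.get("false_positive_rate", 0.0) > t.max_false_positive_rate
        or metrics.get("task_success", 1.0) < t.min_task_success
    )

# A monitoring job would call this on each metrics window and disable
# the feature (the kill-switch) whenever it returns True.
```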
9.
Know what not to automate
Some things require human judgment—ethical decisions, emotional understanding, subjective evaluation. Some things have inherent value in the manual process. Just because you can automate something doesn't mean you should.
10.
Design for everyone, with everyone
AI impacts all of us, so all of us should shape it. Build diverse teams. Test with real users. Consider different abilities, backgrounds, and contexts. Fair isn't the default—you have to design for it deliberately.
Be fair and inclusive by design.
Proactively assess and mitigate bias across data, models, UX copy, and outcomes; test with diverse users and contexts.
Design to treat diverse individuals equitably. Guard against reinforcing systemic biases in data, models, or design.
Actively audit, mitigate, and monitor for disparities across demographics, contexts, and use cases.
AI should work for everyone, regardless of ability, background, language, or context.
Intentionally include marginalized voices in design & testing.
Strive for universal usability, not “one-size-fits-all.”
Don’t deepen the privilege gap.
11.
Embrace the variability
Multiple outputs aren't a bug, they're a feature. Help users navigate options, compare alternatives, and curate what works. Give them tools to find the signal in the noise, and orient them to generative variability so they understand the range of variation in AI-generated outputs.
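A small sketch of presenting variability as options to curate rather than a single answer; generate and the ranking heuristic are stand-ins:

```python
from typing import Callable, List

def propose_options(
    prompt: str,
    generate: Callable[[str], str],       # stochastic model call (placeholder)
    n: int = 3,
    score: Callable[[str], float] = len,  # stand-in ranking heuristic
) -> List[str]:
    """Return several distinct candidates, best first, for the user to compare."""
    candidates = {generate(prompt) for _ in range(n)}  # a set drops exact duplicates
    return sorted(candidates, key=score, reverse=True)
```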
12.
Provide context, not just answers
Memory and history matter. Help users maintain continuity across sessions, resurface relevant past context, and carry their work forward. Connect conversations instead of treating every interaction as a fresh start.
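A minimal sketch of carrying context forward between sessions; the JSON file and the naive keyword match are assumptions, not a prescription:

```python
import json
from pathlib import Path
from typing import List

class SessionMemory:
    """Persist lightweight notes between sessions so conversations can connect."""

    def __init__(self, path: str = "session_memory.json"):
        self.path = Path(path)
        self.notes: List[str] = (
            json.loads(self.path.read_text()) if self.path.exists() else []
        )

    def remember(self, note: str) -> None:
        self.notes.append(note)
        self.path.write_text(json.dumps(self.notes, indent=2))

    def relevant_to(self, query: str) -> List[str]:
        # Naive keyword overlap; a real system might use embeddings here.
        words = set(query.lower().split())
        return [n for n in self.notes if words & set(n.lower().split())]
```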