AI tools like Alex are genuinely powerful. That power comes with genuine responsibility. This page isn't a list of rules — it's a framework for thinking about how to use AI well. The workshop is built on these principles, and we think they're worth stating plainly.
## You are responsible for everything you produce
When you submit a report, send an email, file a brief, or publish an article — your name is on it. It doesn't matter whether AI helped draft it. The professional standard doesn't change because the tool changed.
This means you must read, verify, and own everything AI helps you create. "Alex wrote it" is not an acceptable explanation for an error — to your organization, your clients, or yourself.
## Verify before you trust
AI systems can produce confident-sounding text that is factually wrong. This is not a bug that will be patched — it is a structural characteristic of how large language models work. They generate plausible text; accuracy is a byproduct, not a guarantee.
Critical claims — statistics, citations, legal precedents, medical information, financial figures — must be independently verified before use. The more consequential the claim, the more rigorous the verification.
## Protect what is not yours to share
When you type into an AI tool, you are sending data to an external system. Most enterprise AI tools process this on remote servers. Some retain inputs for model improvement. Policies vary and change.
Treat AI tools with the same discretion you would apply to any external communication:
- Personal data — Do not paste names, contact details, health records, or financial information about individuals without authorization.
- Confidential business information — Client data, deal details, personnel matters, unreleased financials, and trade secrets belong inside your organization.
- Material non-public information — Never input MNPI into any AI tool. This is a legal matter, not just a policy one.
- Student data — Educators and students alike should follow their institution's data classification policies.
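As a practical guardrail, some teams run a lightweight screen for obvious identifier formats before text leaves their environment. Here is a minimal sketch of that idea — the patterns and the `screen_prompt` function are illustrative assumptions, not a substitute for real data-loss-prevention tooling or your organization's policy:

```python
import re

# Hypothetical patterns for a few obvious identifier formats.
# Real data-loss-prevention tooling is far more thorough than this.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(text: str) -> list[str]:
    """Return the names of sensitive patterns found in text.

    An empty list means no *obvious* identifiers were detected;
    it does not mean the text is safe to share.
    """
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

hits = screen_prompt("Contact jane.doe@example.com about the Q3 deal.")
# 'email' is flagged, but only a human can judge whether the deal
# details themselves are confidential -- pattern matching cannot.
```

Note what the sketch cannot do: it catches formats, not meaning. Client names, deal terms, and MNPI look like ordinary prose to a regex, which is why discretion remains a human responsibility.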
## Be transparent about AI involvement
Different contexts have different norms — and those norms are still evolving. Academic institutions have explicit policies on AI use. Employers are developing them. Professional associations are debating them.
The default principle is transparency: when AI meaningfully contributed to work you are presenting as your own, acknowledge it. This applies to:
- Academic submissions (check your institution's policy — many now require disclosure)
- Published work and journalism
- Client deliverables where the client has reasonable expectations about your process
- Internal work products where AI involvement would affect how they're evaluated
Using AI as a thinking partner — asking it to critique your draft, stress-test your argument, or help you research a topic — is generally different from using it to generate the work wholesale. The distinction matters.
## Recognize and correct for bias
AI models reflect patterns in their training data — which means they reflect the biases in that data. These biases are not always visible, and the model does not reliably correct for them on its own.
This shows up in concrete ways. AI systems can:
- Represent some groups better than others
- Replicate historical inequities in recommendations
- Produce language that encodes assumptions about gender, race, or ability
- Perform differently across languages, cultures, and demographics
Your responsibility as a practitioner is to:
- Ask who might be harmed or excluded by the output you're creating
- Review AI-generated content actively for implicit assumptions
- Not treat AI output as neutral just because it's machine-generated
- Seek diverse perspectives when AI is being used to make consequential decisions about people
## Keep humans in the loop for consequential decisions
AI can help you think through decisions faster and more thoroughly. It should not replace judgment in decisions that materially affect people's lives, livelihoods, access to resources, or legal rights.
Hiring decisions, performance evaluations, medical treatment, legal representation, financial advice, and similar high-stakes domains require human judgment, professional accountability, and legal compliance — regardless of how capable AI tools become.
## Use AI to grow, not to bypass growth
The most important risk in this workshop is the risk nobody talks about: using AI in a way that atrophies the skills it's supposed to help you apply.
If AI writes your first draft every time, you stop developing a writing voice. If AI structures every argument, you stop building structuring intuition. If AI answers every research question, you stop developing your own depth of knowledge.
The professionals who will thrive with AI are the ones who use it to accelerate and amplify genuine expertise — not to substitute for it. Alex is designed to be a thinking partner: it should make your thinking sharper, not replace it.
## Further Reading
These frameworks and guidelines shaped how we think about responsible AI in this workshop:
- Microsoft Responsible AI Principles — Fairness, reliability, privacy, inclusiveness, transparency, and accountability
- Google Responsible AI Practices — Practical guidance on building and deploying AI responsibly
- NIST AI Risk Management Framework — The US government's voluntary framework for managing AI risk
- EU AI Act Overview — The European regulatory approach to AI safety and rights
- OpenAI Safety — Research and commitments on AI safety from one of the field's key labs