Lesson 4 of 4

When NOT to Use AI

Knowing when to put the tool down is as important as knowing how to use it.

Why This Matters

AI literacy isn't just about using AI well—it's also about knowing when not to use it at all. Some situations call for human judgement, human connection, or human accountability. Using AI inappropriately can damage relationships, breach confidentiality, create legal risk, or simply rob you of learning opportunities. The wisest AI users know its boundaries.

Key Principles

1. Never Share Confidential Information

    Don't paste confidential documents, customer data, proprietary code, or personal information into AI tools. Many AI systems use inputs for training or store conversations. Ask yourself: "Would I be comfortable if this appeared in tomorrow's news?" If not, don't share it.

2. Don't Automate High-Stakes Decisions

    AI shouldn't make decisions that significantly affect people's lives: hiring, firing, medical diagnoses, legal judgements, financial advice. AI can inform these decisions, but a human must make them. You can't outsource accountability to a machine.

3. Don't Use AI for Emotional Situations

Delivering bad news, offering condolences, handling complaints from upset customers, having difficult conversations—these require human empathy. AI-generated sympathy is hollow. People can tell when they're talking to a bot, and it damages trust.

4. Preserve Your Learning Opportunities

    If you're supposed to be learning a skill, using AI to bypass the work defeats the purpose. The struggle is where learning happens. Early in your career especially, doing the hard work yourself builds capabilities AI can't give you. Don't outsource your own development.

5. Respect Ethical Boundaries

    Don't use AI to deceive people—whether that's faking academic work, impersonating others, creating misleading content, or manipulating people. The same ethical principles that apply to your work apply to AI-assisted work. If it would be wrong to do manually, it's wrong to do with AI.

Practice with AI

Use these prompts with ChatGPT, Claude, or any AI assistant to practice this skill:

Practice Prompt:

"I'm going to describe some workplace scenarios. For each one, tell me whether AI assistance is appropriate, inappropriate, or somewhere in between—and why. Ready? Scenario 1: [describe situation]"

Get Feedback:

"What are the most common mistakes people make when using AI at work? What boundaries should I set for myself? Help me create a personal policy for appropriate AI use."

Key Insight

"Just because we can doesn't mean we should."

— Common ethical principle

Books to Explore

  • Co-Intelligence: Living and Working with AI by Ethan Mollick
  • The Alignment Problem by Brian Christian
  • AI Snake Oil by Arvind Narayanan & Sayash Kapoor

AI Literacy Complete!

You've finished all 4 lessons in the AI Literacy topic. You now understand what AI can and can't do, how to prompt effectively, how to review AI work critically, and when human judgement should take precedence.

Congratulations! You've completed the entire Careers Track. You've built a foundation in communication, critical thinking, problem solving, collaboration, self-management, creativity, leadership, commercial awareness, negotiation, networking, and now AI literacy.

These skills compound over time. Keep practising, stay curious, and remember: wisdom isn't knowing everything—it's knowing what you don't know and being willing to learn.