The Cadet
You know AI exists. You have tried it. Now the real question: do you know what you do not know?
Last updated: March 21, 2026
What Defines a Cadet
You know AI exists and you have tried it. That alone puts you ahead of a surprising number of professionals. Maybe you have asked ChatGPT a question, used an AI writing assistant, or experimented with an image generator. You have some awareness that these tools are changing how work gets done.
But right now, you are typing requests the way you would type into a search engine. Short queries. Vague instructions. You hit enter and hope for something useful. Sometimes you get it. Sometimes you do not. The outputs feel hit-or-miss because they are.
You do not yet know what good looks like. That is not a criticism. It is the defining characteristic of this level. When you have never seen a well-crafted AI interaction, you have no frame of reference for what is possible. You cannot improve what you cannot see.
The gap between where you are and where you could be is not technical. It is not about learning to code or understanding neural networks. The gap is self-awareness. Knowing what you know, knowing what you do not know, and being honest about the difference.
That is why self-awareness is the human skill at Level 1. It is the foundation that everything else gets built on. Without it, you skip ahead to advanced techniques and wonder why they do not work. With it, you build a solid base that compounds at every level above.
The Science of Self-Awareness
Self-awareness is not a soft skill. It is the most extensively researched component of emotional intelligence, and its impact on professional performance is measurable.
Daniel Goleman's emotional intelligence model identifies five core domains: self-awareness, self-regulation, motivation, empathy, and social skill. Self-awareness sits at the foundation. It is the skill that enables all the others. You cannot regulate emotions you do not recognize. You cannot empathize with others if you are blind to your own patterns.
Goleman defines self-awareness specifically as "understanding your own emotions and their effects on performance." In the context of AI, this translates directly: understanding your own knowledge gaps and their effects on how you use AI tools.
The critical insight from Goleman's research, and the reason self-awareness is the skill at Level 1, is that emotional intelligence can be learned. It is not a fixed trait you are born with. The Consortium for Research on Emotional Intelligence in Organizations has documented that EI competencies can be developed through deliberate practice, coaching, and structured feedback.
This matters because most people assume AI proficiency is about technical knowledge. It is not. The 7 Levels framework is built on the principle that human skills drive AI outcomes. The first human skill, the one that unlocks everything above it, is the ability to honestly assess where you stand.
The Dunning-Kruger Trap
Here is where it gets uncomfortable. A 2025 study from Aalto University, led by researchers Robin Welsch and Laiana da Silva Fernandes, examined what happens to metacognition when people use AI tools. The study involved approximately 500 participants completing cognitive tasks both with and without AI assistance.
The traditional Dunning-Kruger effect says that low-skilled people overestimate their abilities while high-skilled people underestimate theirs. The Aalto study found that when AI enters the picture, this pattern disappears entirely. It gets replaced by something worse: universal overconfidence.
Every group overestimated their performance when using AI. It did not matter whether they were beginners or experienced users. Performance improved by about 3 points on average, but participants overestimated their improvement by 4 points. The gap between actual and perceived performance widened for everyone.
The root cause is cognitive offloading. When you hand a problem to AI, you stop engaging with it at a deep level. Most participants asked only one question per problem. They accepted the first answer. They did not iterate, push back, or verify. They outsourced not just the work but their judgment about the quality of the work.
This is the trap at Level 1. You use AI, you get a result, and you assume the result is good because it looks polished. AI outputs are fluent. They are grammatically correct, well-structured, and confident in tone. That fluency masks the fact that the content might be wrong, shallow, or completely off-target for your actual need.
The antidote is self-awareness. Not about AI. About yourself. About what you actually asked for versus what you actually needed. About whether you accepted the first answer because it was good or because it sounded good.
Where Most People Are
If you are reading this and thinking "that sounds like me," you are in very large company. The data on AI adoption in 2025 paints a clear picture of where the workforce actually stands.
According to Pew Research Center, only 21% of US workers use AI on the job. That means nearly four out of five workers do not use it at work. Gallup's Q4 2025 data is even more stark: 49% of American workers have never used AI at all. Not at work. Not personally. Never.
Taken together, these surveys suggest that 65% of American workers use AI rarely or not at all. The vast majority of the workforce is at Level 1 or below.
On the organizational side, McKinsey reports that 78% of organizations use AI in at least one business function. That sounds high until you look at the next number: only 38% of those organizations offer any form of AI training to their employees. Companies are buying tools but not teaching people how to use them. They are expecting adoption without investment in development.
This creates a gap that most workers feel but cannot name. You know you should be using AI. Your company probably expects you to. But nobody showed you how to do it well. Nobody gave you a framework for what "good" looks like. You are on your own, which means you are stuck in the pattern of vague queries and hit-or-miss results.
The 7 Levels framework exists to close that gap. And it starts here, at Level 1, with the honest acknowledgment that awareness alone is not proficiency.
Metacognition: Thinking About Your Thinking
The US Department of Education defines metacognition as "the ability to think about your own thinking processes and to plan, monitor, and evaluate your own learning." It is one of the most reliable predictors of academic and professional performance across every field studied.
Research from MIT has shown that individuals with strong metacognitive skills perform better on complex tasks and work more efficiently. They do not just work harder. They work smarter because they can accurately assess what they know, identify what they need to learn, and adjust their approach in real time.
The connection to AI use is direct and measurable. The Aalto University study concluded that "AI literacy alone is not enough; people need platforms that foster metacognition." Knowing about AI tools does not help if you cannot accurately judge how well you are using them. You need the ability to step back from the interaction and ask: did that actually work? Did I get what I needed, or did I get something that looked right on the surface?
Self-awareness about what you know and what you do not know is the skill that enables everything else in the 7 Levels framework. At Level 2, you will learn to give AI structured instructions. At Level 3, you will learn to push back and iterate. At Level 4, you will manage entire conversations as systems. None of those skills work without the foundation of honest self-assessment.
Metacognition is not complicated. It is the habit of pausing after an AI interaction and asking three questions. Did I get what I actually needed? What did I ask for versus what I should have asked for? What would I do differently next time? That habit, practiced consistently, is what moves you from Level 1 to Level 2.
Practical Exercise: The AI Audit
Your First AI Audit
This exercise takes 15 minutes. It will give you a clear picture of where you stand and where your growth edges are.
- Write down three things you have used AI for in the last month. Be specific. Not "writing" but "drafted an email to a client about a delayed project." Not "research" but "asked ChatGPT about competitor pricing in the SaaS market."
- For each one, rate your satisfaction with the result from 1 to 10. Be honest. A 10 means you used the output exactly as the AI gave it and it was perfect. A 1 means you threw it away and started over.
- For any score below 7, write down what you told the AI and what you actually wanted. Look at the gap between the two. Did you give enough context? Did you specify the format, the audience, the tone? Did you tell the AI what success would look like?
- Notice the gap between what you asked for and what you needed. That gap is your growth edge. It is not the AI's fault. The AI responded to what you gave it. The opportunity is in getting better at giving it what it needs to give you what you need.
- Ask someone who uses AI well to show you their process. Not just their prompts. Their full process. Watch how they set up context, how they iterate, how they evaluate outputs. Watch, do not just ask. The difference between hearing about it and seeing it is enormous.
Repeat this audit every two weeks. Track your scores over time. You will see them improve as your self-awareness sharpens.
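If you want to track your audit scores over time without a spreadsheet, the habit above can be sketched as a small script. This is a minimal illustration, not part of the framework: the file name `ai_audit_log.csv`, the field names, and the helper functions are all hypothetical choices made for this example.

```python
# Minimal sketch of a biweekly AI audit log. The CSV file name and field
# layout are illustrative assumptions, not part of the 7 Levels framework.
import csv
from datetime import date
from pathlib import Path
from statistics import mean

LOG = Path("ai_audit_log.csv")
FIELDS = ["date", "task", "score", "asked_vs_needed"]


def record(task: str, score: int, asked_vs_needed: str = "") -> None:
    """Append one audited AI interaction (satisfaction score 1-10)."""
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "task": task,
            "score": score,
            # The gap you noticed between what you asked and what you needed.
            "asked_vs_needed": asked_vs_needed,
        })


def average_by_date() -> dict[str, float]:
    """Average satisfaction score per audit date, to watch the trend."""
    by_date: dict[str, list[int]] = {}
    with LOG.open(newline="") as f:
        for row in csv.DictReader(f):
            by_date.setdefault(row["date"], []).append(int(row["score"]))
    return {d: round(mean(scores), 1) for d, scores in by_date.items()}
```

Used every two weeks, `record()` captures each interaction and `average_by_date()` shows whether your scores are climbing, which is the signal that your self-awareness is sharpening.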
Real-World Examples
Organizations that understand the Cadet level are building structured paths forward. Here is what that looks like in practice.
Colgate-Palmolive requires all employees to complete foundational AI training before they receive access to any AI tools. This is not gatekeeping. It is recognition that awareness without structure leads to the overconfidence trap the Aalto study documented. They teach self-awareness about AI capabilities and limitations before anyone touches a tool.
Deloitte's 2026 Global Human Capital Trends report identifies the AI skills gap as the number one barrier to successful AI adoption. Not the technology gap. The skills gap. Organizations have the tools. They do not have people who know how to use them well. The report emphasizes that training must start with foundational awareness, not advanced techniques.
Gallup's data reinforces this from the employee side. Only 38% of companies offer any AI training at all. That leaves workers at the remaining 62% of companies, many of whom are expected to use AI, with zero structured guidance. They are left to figure it out on their own, which means they stay at Level 1 indefinitely. They are aware. They are not progressing.
The pattern across all of these examples is the same. Most workers are at the awareness level without any structured path for development. They know AI exists. They have tried it. But they have not been given the framework, the training, or the feedback loops that would help them improve. They are Cadets without a training program.
That is what the 7 Levels framework provides. A structured progression path that starts where most people actually are, not where companies wish they were.
What Comes Next
You have the awareness. You have done the audit. You know where your gaps are. Now what?
Your prompts are vague, and your results are inconsistent. That is not a character flaw. It is a skill gap, and it is the most common one at this level. You are talking to AI the way you talk to a search engine: short queries, no context, no structure.
Level 2 is where that changes. The Ensign learns to give AI structured instructions with context, constraints, and format specifications. Your results get better because your inputs get better. The human skill at Level 2 is structured thinking: organizing your thoughts before giving them to AI.
The jump from Level 1 to Level 2 is the biggest mindset shift in the entire framework. You stop treating AI as a search engine and start treating it as a collaborator that needs clear direction. Everything above Level 2 builds on that shift.
Sources
- Goleman, D. (1998). "What Makes a Leader?" Harvard Business Review, 76(6), 93-102. hbr.org
- Consortium for Research on Emotional Intelligence in Organizations. eiconsortium.org
- Welsch, R. & da Silva Fernandes, L. (2025). Study on metacognition and AI-assisted cognitive performance. Aalto University. aalto.fi
- Pew Research Center. (2025). "AI in the Workplace." pewresearch.org
- Gallup. (2025). Q4 2025 Survey on AI Adoption Among US Workers. gallup.com
- McKinsey & Company. (2025). "The State of AI in 2025." mckinsey.com
- US Department of Education. "Metacognition." Teaching and Learning Resources. ed.gov
- MIT Research on Metacognition and Performance. Massachusetts Institute of Technology. mit.edu
- Deloitte. (2026). "Global Human Capital Trends." deloitte.com
Frequently Asked Questions
What is Level 1 of the 7 Levels of AI?
Level 1 is The Cadet, representing the AI Aware stage in the 7 Levels of AI framework developed by Harrison Painter at LaunchReady.ai. At this level, you know AI exists and have tried it, but you lack structured skills for using it effectively. The defining human skill is self-awareness, drawn from Daniel Goleman's emotional intelligence model. Self-awareness means understanding what you know, what you do not know, and where your gaps are. It is the foundation that every other level builds on.
What percentage of workers use AI?
According to Pew Research Center (2025), only 21% of US workers use AI on the job. Gallup's Q4 2025 data shows that 49% of American workers have never used AI at all. Taken together, these surveys suggest that 65% of American workers use AI rarely or not at all. Meanwhile, McKinsey reports that 78% of organizations use AI in at least one function, but only 38% offer any AI training. Most of the workforce is at Level 1 or below.
What is the Dunning-Kruger effect in AI?
A 2025 study from Aalto University found that when people use AI tools, the traditional Dunning-Kruger effect disappears and is replaced by universal overconfidence. All participants overestimated their abilities when using AI, regardless of experience level. Performance improved by about 3 points, but users overestimated their improvement by 4 points. The most striking finding: higher AI literacy correlated with lower metacognitive accuracy. The people who knew the most about AI were the worst at judging their own AI-assisted performance.
How do I improve my AI self-awareness?
Start with metacognition: thinking about your own thinking. Conduct an AI audit by writing down what you have used AI for, rating your satisfaction with the results, and identifying the gap between what you asked for and what you actually needed. Journal your AI interactions. Ask someone who uses AI well to show you their process. Repeat the audit every two weeks and track your scores over time. The goal is not to become an expert overnight. The goal is to develop honest awareness of where you are so you can build from there.
What's Your AI Level?
Take the assessment to find out exactly where you are in the 7 Levels. Then we'll show you what to work on next.