Coach Development Plans: Turning Evaluation Data into Growth
Evaluation Without Follow-Up Is Wasted Effort
You ran the surveys. You collected feedback from athletes, parents, and peers. You generated reports. Now what?
This is where most coaching evaluation programs stall. The Athletic Director invests significant time and energy into collecting data, then files the reports and moves on to the next season. The data sits unused. The coach never sees a clear path from evaluation results to improved performance.
Evaluation is the diagnostic step. The development plan is the treatment. Without it, you've identified symptoms but prescribed nothing. The coach doesn't know what to work on, how to work on it, or how progress will be measured. And by the time the next evaluation rolls around, the opportunity to use this season's data has passed.
This guide covers how to turn evaluation results into focused, practical development plans that actually change coaching behavior.
Start with the Self-Assessment Gap
The most powerful starting point for any development plan is the gap between how the coach sees themselves and how others experience them. This self-assessment gap is built into 360-degree evaluation: the coach rates themselves on the same dimensions that athletes, parents, and peers rate them on.
When a coach rates themselves 4.3 on motivation but athletes rate them 2.7, that 1.6-point gap is a concrete data point. It's not an opinion. It's not a single complaint. It's a measurable difference between self-perception and observed reality.
Gaps in this direction (self-rating higher than observer ratings) represent blind spots. The coach genuinely believes they're performing well in an area where others disagree. These blind spots are the highest-leverage development targets because:
- The coach has room to grow. They're not already at their ceiling in this area.
- The data is hard to dismiss. When 15 athletes independently rate the same dimension lower than the coach expected, it's difficult to write off as one person's opinion.
- Improvement is visible. When the coach works on a blind spot and the next evaluation shows the gap closing, both the coach and the Athletic Director can see measurable progress.
The reverse gap, where observers rate the coach higher than the coach rates themselves, is less common but worth noting. It often indicates a coach who is humble or self-critical. These areas don't need development. They need recognition.
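If you track ratings in a spreadsheet export or similar, the gap calculation above is straightforward to automate. The sketch below is a minimal illustration, not CoachLeap's implementation; the dimension names, ratings, and the 0.5 blind-spot threshold are hypothetical examples.

```python
# Minimal sketch: self-assessment gap = self rating minus mean observer rating.
# A positive gap is a potential blind spot; a negative gap means observers
# rate the coach higher than the coach rates themselves.

def self_assessment_gaps(self_ratings, observer_ratings):
    """Return {dimension: gap}, rounded to 2 decimals, for shared dimensions."""
    gaps = {}
    for dimension, self_score in self_ratings.items():
        scores = observer_ratings.get(dimension, [])
        if not scores:
            continue  # no observer data for this dimension
        observer_mean = sum(scores) / len(scores)
        gaps[dimension] = round(self_score - observer_mean, 2)
    return gaps

# Hypothetical data: one blind spot, one aligned dimension.
self_ratings = {"motivation": 4.3, "practice structure": 3.9}
observer_ratings = {
    "motivation": [2.5, 2.9, 2.7],          # athletes' ratings, mean 2.7
    "practice structure": [4.0, 4.2, 4.1],  # mean 4.1
}

gaps = self_assessment_gaps(self_ratings, observer_ratings)
# motivation: 4.3 - 2.7 = 1.6 (blind spot); practice structure: -0.2
blind_spots = {d: g for d, g in gaps.items() if g > 0.5}
```

Running the numbers from the example above (self 4.3, observer mean 2.7) reproduces the 1.6-point gap, which is exactly the kind of concrete figure to bring into the development conversation.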
For a detailed look at how to evaluate high school coaches and generate the data that feeds development plans, we've published a practical guide.
Identify 1-2 Focus Areas, Not 10
One of the most common mistakes in development planning is trying to address everything at once. A coaching evaluation might surface five or six areas for improvement. The temptation is to list all of them in the development plan and ask the coach to work on everything.
This doesn't work. Coaches, like everyone else, have limited attention and energy for deliberate change. Asking a coach to simultaneously improve their communication, game preparation, practice organization, parent interactions, and motivational approach guarantees that none of these areas receive meaningful focus.
Pick the 1-2 areas where:
- The self-assessment gap is largest. These are the coach's biggest blind spots.
- The impact on athlete experience is highest. Some dimensions affect athletes daily. Others are less visible.
- The coach has the ability and willingness to change. A coach who resists working on communication won't improve their communication, regardless of what the development plan says. Start with an area the coach is willing to engage with.
A focused plan with 1-2 specific areas is dramatically more effective than a comprehensive plan that touches everything. Depth beats breadth in development.
Write Specific, Observable Goals
Vague development goals are indistinguishable from no goals at all. "Improve communication" is not a goal. It's a wish. There's no way to know whether the coach achieved it, because it doesn't define what improvement looks like.
Effective development goals describe specific behaviors that can be observed and measured:
Vague: "Improve communication with parents." Specific: "Send a weekly email to parents every Sunday during the season with practice schedule, game details, and any program updates. Respond to parent emails within 48 hours."
Vague: "Be more organized in practice." Specific: "Prepare and post a written practice plan before each practice. Include specific drill names, durations, and objectives. Share the plan with assistant coaches 30 minutes before practice."
Vague: "Build better team culture." Specific: "Hold a 10-minute team meeting at the start of each week to review goals from the previous week and set goals for the upcoming week. Include at least one athlete-led component per month."
Specific goals are easier to track, easier to discuss in follow-up conversations, and easier to evaluate in the next cycle. They also make the development conversation more productive because both the AD and the coach know exactly what success looks like.
Connect Goals to Framework Dimensions
When your evaluation uses a structured framework like the CAMS framework, development goals map naturally to specific coaching dimensions.
If a coach scores low on the Anchor dimension (stability and structure), the development plan targets behaviors within that dimension: practice organization, routine consistency, clear expectations. The framework provides the category. The goals provide the specific actions.
This connection between framework and development plan creates a coherent narrative:
- The evaluation measures the coach on defined dimensions.
- The results reveal which dimensions need attention.
- The development plan targets specific behaviors within those dimensions.
- The next evaluation measures whether the coach improved on those dimensions.
Without a framework, development plans exist in isolation. With one, they're part of a continuous cycle that connects data to action to measurable progress.
Structure the Development Conversation
The development plan isn't something you hand to a coach. It's something you build together. The conversation where you review evaluation results and set development goals is the most important part of the entire process.
Structure the conversation in four stages:
Stage 1: Share Strengths First
Start with what the coach does well. Show them the dimensions where they scored highest, the positive themes from written feedback, and the areas where their self-assessment aligned with observer ratings.
This isn't just politeness. It establishes that the evaluation captured real, accurate information. When the coach sees their genuine strengths reflected in the data, they're more likely to take the areas for improvement seriously.
Stage 2: Present the Gaps
Show the 1-2 dimensions where the self-assessment gap is largest. Use the actual numbers: "You rated yourself 4.1 on practice structure. Athletes rated you 2.8. That's a 1.3-point gap."
Don't frame it as criticism. Frame it as information. "The data shows a disconnect between how you see your practice structure and how athletes experience it. Let's figure out why."
Stage 3: Explore Root Causes
The gap exists for a reason. Maybe the coach thinks posting a practice plan on the board counts as communication, but athletes want verbal explanations of why they're doing each drill. Maybe the coach prepares thoroughly but delivers instructions in a way that athletes find confusing.
Ask the coach what they think is driving the gap. Their perspective is essential because they're the one who will need to change the behavior. A development plan built on the coach's own understanding of the problem is more likely to succeed than one imposed from outside.
Stage 4: Set Goals Together
Based on the discussion, agree on 1-2 specific, observable goals. Write them down during the meeting. Both the AD and the coach should leave with the same written document.
Include:
- The development area (linked to a specific framework dimension)
- The specific behavioral goal
- How progress will be measured
- A timeline for check-in
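The four elements above are easy to capture in a consistent written record. As one illustrative sketch (field names, coach, and goal text are all hypothetical, not a CoachLeap schema):

```python
# Illustrative record for a written development plan with the four elements
# listed above: framework dimension, behavioral goal, measure, and check-in.
from dataclasses import dataclass, field

@dataclass
class DevelopmentGoal:
    dimension: str   # framework dimension the goal is linked to
    behavior: str    # specific, observable behavioral goal
    measure: str     # how progress will be measured
    checkin: str     # timeline for the check-in conversation

@dataclass
class DevelopmentPlan:
    coach: str
    season: str
    goals: list = field(default_factory=list)  # keep to 1-2 goals

plan = DevelopmentPlan(
    coach="J. Smith",
    season="2025-26",
    goals=[
        DevelopmentGoal(
            dimension="Anchor (stability and structure)",
            behavior="Post a written practice plan before each practice",
            measure="Plans on file for at least 90% of practices",
            checkin="Mid-season meeting, week 6",
        )
    ],
)
```

A shared document or spreadsheet works just as well; the point is that both the AD and the coach leave the meeting with the same four fields written down.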
Track Progress Across Seasons
A development plan without follow-up is just a document. The plan only produces results if you check on it.
Mid-season check-in. Schedule a brief meeting at the midpoint of the next season to discuss how the coach is progressing on their goals. This doesn't need evaluation data. It's a conversation. "How's the weekly parent email going? Any challenges?"
Next evaluation cycle. When you run the next evaluation, compare the new results to the baseline. Did the gap narrow on the targeted dimension? Did observer ratings improve in the specific area? This is the most powerful feedback loop available: concrete data showing whether the development plan produced measurable change.
Multi-season tracking. Over 2-3 evaluation cycles, you can see a coach's trajectory. Are they improving in their development areas? Are new gaps emerging? Has their overall effectiveness trended upward?
This longitudinal view is what separates a development program from a one-time evaluation. Individual evaluation cycles are snapshots. Multiple cycles over time reveal the full picture of coaching growth.
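The cycle-over-cycle comparison described above reduces to two questions: did the self-assessment gap narrow, and did observer ratings rise? A minimal sketch, using hypothetical ratings for two evaluation cycles:

```python
# Sketch of the cross-cycle check: compare the targeted dimension's gap
# at baseline vs. the next evaluation cycle. All numbers are hypothetical.

def gap(self_score, observer_scores):
    """Self rating minus mean observer rating for one dimension."""
    return self_score - sum(observer_scores) / len(observer_scores)

baseline_athletes = [2.8, 2.7, 2.9]   # cycle 1 athlete ratings, mean 2.8
followup_athletes = [3.4, 3.5, 3.6]   # cycle 2 ratings, mean 3.5

baseline_gap = gap(4.1, baseline_athletes)  # coach self-rated 4.1 -> gap 1.3
followup_gap = gap(3.8, followup_athletes)  # coach self-rated 3.8 -> gap 0.3

gap_narrowed = followup_gap < baseline_gap
observers_improved = (
    sum(followup_athletes) / len(followup_athletes)
    > sum(baseline_athletes) / len(baseline_athletes)
)
```

When both checks come back true, you have concrete evidence the development plan moved the needle; when they don't, that result feeds the "When Development Plans Aren't Working" discussion below.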
When Development Plans Aren't Working
Not every development plan produces results. When a coach shows no improvement across multiple evaluation cycles despite targeted goals and support, it's important to recognize that reality.
Possible reasons for stalled development:
The goals were too vague. If you set goals like "improve motivation," the coach may not have known what to change. Revisit the goals and make them more specific.
The coach doesn't agree with the feedback. If a coach fundamentally rejects the evaluation data ("those athletes don't know what they're talking about"), they won't change. This requires a more direct conversation about the purpose of feedback and the coach's willingness to engage.
The support isn't there. Some development areas require resources: coaching clinics, mentorship, observation time with other coaches. If you ask a coach to improve without providing tools for improvement, stalled progress is predictable.
The fit isn't right. After multiple cycles of honest effort and targeted support, if a coach isn't improving in areas that matter for athlete experience, that data informs personnel decisions. This is uncomfortable but important. Evaluation data provides objective documentation for contract decisions when development hasn't produced results.
Document Everything
Every development plan should be documented and stored alongside the evaluation report. Over time, this documentation becomes one of your most valuable administrative tools.
For development purposes, it shows the history: what was identified, what was targeted, what changed. For personnel decisions, it provides evidence of a fair, structured process. For program-level analysis, it reveals whether your coaching staff as a whole is improving over time.
Keep a record of:
- The evaluation report that informed the development plan
- The written development plan with specific goals
- Notes from the development conversation
- Mid-season check-in notes
- The subsequent evaluation report showing progress (or lack thereof)
Getting Started
If you're running evaluations but not creating development plans, you're doing half the work for half the benefit. The evaluation tells you where coaches are. The development plan tells them where to go.
Start simple. After your next evaluation cycle, sit down with each evaluated coach, identify their largest self-assessment gap, and set one specific goal together. Write it down. Check in at mid-season. Compare results the next time you evaluate.
That single loop, from data to plan to follow-up to measurement, is the foundation of a coaching development program. Everything else is refinement.
Want to see CoachLeap in action?
Watch the Demo