Introducing the Skill-ometer


About a mile from where I live, there’s a soccer field. If you were to pass by, you would see elementary-school teams practicing in the traditional way. Coaches set out orange cones, and kids form lines and wait their turn to participate in various drills.

If you saw them, you might think: Seems like a lot of kids are just standing around.

A few blocks away stands a high school. If you were to pass by, you would see a math class. The class operates in the traditional way: students sit silently in their seats as the teacher gives her lecture.

If you saw them, you might think: Seems like a lot of students are zoning out.

You might think those thoughts. But you wouldn’t have a way to objectively measure the effectiveness of their learning. You wouldn’t have a yardstick.

And you should.

If science has taught us anything over the past few years, it’s that all learning spaces are not created equal. High-quality methods of practice are efficient, because they are aligned with the ways our brains actually improve. Ineffective methods are inefficient, because they are aligned with tradition, or emotion, or the teacher’s ego, or what looks good.

There are an infinite number of ways to screw up a learning session. But high-quality practice sessions share a few basic characteristics. Which means that it should be possible to create a simple metric to measure practice effectiveness. And since that yardstick doesn’t seem to exist, I thought I’d take a crack at creating one.

Please say hello to the Skill-ometer, an attempt at measuring practice effectiveness by measuring seven key elements.

Here’s how it works: Score your practice session by responding to each of the following statements on a scale of 1-5: 1 = strongly disagree; 2 = disagree; 3 = neutral; 4 = agree; 5 = strongly agree.

  • Intensity: We gave 100 percent effort and attention.
  • Engagement: We were emotionally immersed in the tasks we took on. We knew what we had to do, and it felt like a game.
  • Practicality: We practiced exactly the skill that we’ll be using later, in the same way that we’ll be using it in “game situations.”
  • Repetitions: We embraced the value of repetitions, especially for the most challenging skills.
  • Clarity: We understood the day’s goal, and where it fit in the larger picture.
  • Reachfulness: We were pushed to spend time on the edge of our abilities, struggling and reaching just past our current competence.
  • Fun: It was hard, but not miserable. There were moments of laughter and surprise.

Scoring: (Out of a maximum 35)

  • 30-35: You are in the elite zone, hanging out with Peyton Manning and Yo-Yo Ma. Keep doing what you’re doing.
  • 25-29: This is a B-plus. You are highly effective, with a few things to work on.
  • 15-24: This is closer to a B-minus. You do a few things well, but have some clear weak spots that need addressing.
  • 7-14: You need to rethink your approach and design. Start by finding those in your field who score higher and study them.
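For readers who like to track sessions over time, the tally above is simple enough to script. Here’s a minimal sketch in Python; the element names, function name, and band labels are my own shorthand for the scorecard, not anything official.

```python
# Skill-ometer tally: seven elements, each rated 1-5, summed to a max of 35.
ELEMENTS = ["intensity", "engagement", "practicality", "repetitions",
            "clarity", "reachfulness", "fun"]

def skillometer(ratings):
    """ratings: dict mapping each of the seven elements to a 1-5 score.

    Returns (total, band), where band is the interpretation tier.
    """
    if set(ratings) != set(ELEMENTS):
        raise ValueError("rate all seven elements, no more, no less")
    if any(not 1 <= r <= 5 for r in ratings.values()):
        raise ValueError("each rating must be between 1 and 5")
    total = sum(ratings.values())
    if total >= 30:
        band = "elite zone"
    elif total >= 25:
        band = "B-plus"
    elif total >= 15:
        band = "B-minus"
    else:
        band = "rethink your approach"
    return total, band

# Example: a solid session rated 4 across the board.
total, band = skillometer(dict.fromkeys(ELEMENTS, 4))
print(total, band)  # 28 B-plus
```

Nothing fancy, but logging the per-element scores rather than just the total makes it easy to spot which of the seven areas is the recurring weak spot.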

Now, this is just a rough first attempt, but it’s interesting that most of these elements are about design and communication — areas that 1) are controlled by the coach and 2) can be planned for in advance.

I think it underlines the fact that the most effective learning sessions don’t depend on what happens in the classroom or on the field, but rather on what happens in the days and hours before, when the teacher or coach is thinking, planning, and communicating.

So here’s my question: what other factors do you think should be included in this metric? What other characteristics mark your most effective learning sessions? I’d love to hear your suggestions and ideas.



11 Responses to “Introducing the Skill-ometer”

  1. Wayne says:

    Feedback: We received (or internalized our own) feedback that helped us improve our performance and think situationally.

  2. Ron says:

    As a coach and teacher, I couldn’t agree more about the poor use of time and facilities. And I love the seven-step Skill-ometer. Going to try it myself.

  3. Will Neumann says:

    This looks like a good start though, like most first attempts, it does need a bit of polishing, as there are some valuable drills and activities that purposely score low on some of the areas of your scorecard. Some examples of this from the world of Brazilian Jiu-Jitsu are on display in this excellent video on the art of slowrolling by Christian Graugart (http://www.youtube.com/watch?v=TlUXO40uhP0). All of the drills in the video would score very low in the Intensity and Practicality areas (and in some sense the Repetition area), although they are likely to peg the Reachfulness and Fun meters — especially the Monkey drill.

    Some of this can be polished by just a bit more verbiage, for example, clarifying that 100% effort is not necessarily 100% physical effort, but 100% effort towards accomplishing the goals of the drill. Or perhaps you might want to consider something like what figure skating does and throw out the highest and lowest scoring areas and judge what’s left. It’ll be interesting to see what other ideas people propose here.

  4. Peter says:

    Dan – Love the idea. Great stuff as always.

    That said, I have to pull out my “variable practice soap box” again to emphasize the huge importance of your practicality concept. Contrasting game-like activities with the connotation of strict repetition, for repetition’s sake, is important. Yes, it is important to try and try again. How that is done is just as important. A traditional “old school” routine like “shoot 100 free throws after practice” or “hit 20 shots with your 5 iron, then 20 with your driver” is nowhere near as effective for skill acquisition/motor learning as interspersing the same number of repetitions randomly within the practice session. I’d like to repeat Dick Schmidt’s words here as mantra: “Repetition WITHOUT Repetitiveness” is a WAY more effective way to learn for performance when it actually matters.

    Oh. And there is evidence of very strong correlation between “practicality” and “fun”. As our friend JK will remind us, nothing teaches the game like, well, the game. Go figure.

    (and soap box put away…carry on)

  5. Heather White says:

    One thing I wonder about this scale is if one category is weighted above the rest or are they all equally valuable? I like the “reachfulness” category, as it promotes a growth mindset. Is there a place for perseverance on this list? What about the quality of the activity?

  6. Jared Mathes says:

    Dan,

    I can use the same practice methods and plan (volleyball) for two different teams I am working with and come up with scores on opposite ends of the spectrum.

    The difference is the kids’ attitude and engagement.

    On one team I coach (club, tryouts, team selection, earning your roster spot), I score a 33. Very game like and kids are on the edge of their ability. They have bought into the method and they show incredible success and improvement while having fun.

    The other team is a school team where there were just enough kids with passing interest to form a full team. None of them earned their spot in the traditional sense. We just took who showed up. They have not bought into the methods. A very un-coachable group. They score an 18.

    So, because I know better, I don’t let that score get me down and send me searching for what I am doing wrong. Otherwise I’d be trying to solve symptoms in my practices when the problem lies outside my scope of influence.

    So, it would be nice to also be able to consider the engagement factor of those you are teaching, and your ability to effect change in that area, before potentially overhauling your teaching methods.

    Sometimes the problem is with the kids, not the teacher. I’m not saying we still shouldn’t try to help them. But I can do everything right and still not have kids engaged like they should be.

  7. David says:

    Hi Dan,

    Zeal: We can’t wait – after some rest – to try what we’ve learned in a real game or even just more practice.

  8. djcoyle says:

    Absolutely fantastic points, Peter — and much appreciated. Keep that soap box handy!
    It’d be great to develop language for the type of variable repetition you describe. “Exploratory reps” maybe?
    Thanks again.

  9. Wes Porter says:

    I offer two additional terms for your consideration – and from the student/player perspective. Related to practicality and repetition, the teacher/coach can also design practice and specific drills to promote the student/player’s “reflection” and “connection.” It is one thing to practice and repeat drills that we teachers/coaches know translate to performance and “game day” – it is another level to have the student/player reflect on the drill and make the connection (themselves) about how the practice translates to better performance during the game.

  10. John says:

    Hi Daniel,

    I love your blog and your book. I think I have some valuable feedback to give you about the Skill-ometer.

    In my opinion your guidelines are too abstract. An example is “We gave 100 percent effort and attention.” The truth is that this is a meaningless statement.

    I think you have done a superb job conveying what hot-beds do in your book. However, if someone really wants to improve their skill exponentially or create their own hot-bed, they have to do a 5-year full-immersion apprenticeship. Unless they live and breathe that deliberate practice culture and see the power of deliberate practice, they will most likely fail to replicate the results of hot-beds.

    I think what you are trying to do by creating a generic way of calculating the value of an activity is noble. However, there are major drawbacks to an “intellectual” understanding of how to grow talent. People that want to excel in whatever they do have to make sacrifices. One of those sacrifices is becoming an apprentice at a talent hot-bed. There is no substitute for that.

  11. Jay says:

    An area that I would like to see researched is how distractions affect performance and group dynamics/compatibility.

    For distractions, this could include basic behavior issues, as well as obscure things like tardiness, kids in inappropriate groups for their skill level, or parents, friends, and teammates.

    From my own coaching background, when coaching a single level of swimmer rather than a broad spectrum, the results are remarkably different, even with the same number of kids. For instance, 30 B-level swimmers in a group will result in 20-22 becoming A level within a season. By contrast, 30 A-, B-, and C-level swimmers practicing together will only result in 8-10 improving to the next level in the same time frame. Teams that limit the quality that comes in, rather than accepting everyone, tend to do much better. Locally, a team of 120 quality swimmers wins state championships consistently over the 400-500 kid teams.
