Do we assess what we teach or do we teach what we assess?

Yesterday an email went around musicnet (the music teachers’ listserv) about an assessment issue: a Year 13 student had presented some excellent work that didn’t quite fit the NZQA requirements. In my opinion, the assessment is totally inappropriate for his instrument.

It got me thinking about our role as teachers: Does the assessment drive what we teach?

Or do we teach what students need to know? What will make them grow? What will make them think? What will make them useful? What will make them love learning?

During the day, someone said: “it’s a necessary evil.”

This is a conversation that secondary teachers have had for a long time, because so much of what we teach is driven by external assessment. I assume it’s a newer thing for primary teachers, where National Standards are creating a lot of tension.

What are your thoughts?

Comments to “Do we assess what we teach or do we teach what we assess?”

  1. Eric Rasmussen 8 August 2011 at 12:59 pm #

    Good measures (sometimes they are tests, and sometimes they are other tools of assessment) need to be part of every curriculum. Period.

    Now, hear me out. Many of these tools are being used very poorly and have given “testing” a very bad name lately. Rightly so. It’s like using a screwdriver to cut a piece of wood sometimes. Bad tool for that purpose. The tests are being used as a stick against schools that need federal funding. That’s driven educators to erase answers so that their schools won’t fail (right, Atlanta? And I’ve seen it first-hand elsewhere too!). SCARY what has happened.

    So, consider the purpose of the test you are giving. If I’m measuring music achievement, I don’t want a multiple-choice test (although you could use one) so much as a performance rating scale covering intonation, rhythmic accuracy, expressive elements, and other factors such as posture and breath. This measure would be administered via video and scored by two music teachers using rating scales (some call them rubrics).

    Why would you do such a thing? Why would you give a multiple-choice test? Why would you want to know a child’s tonal and rhythm aptitude in music using a standardized test? Why would you want to record a child’s music performance?

    TO IMPROVE YOUR INSTRUCTION, that’s why. Good measures are best used, NOT TO GRADE (that’s evaluation), but to find out whether what you are teaching is actually being learned by the students. There are many ways to do this. There are many tools you need to build a house. Do not throw out the screwdriver. You’ll eventually have a screw that needs turning.

    • stevevoisey 8 August 2011 at 1:19 pm #

      You’re absolutely right. We need to check whether what we’re doing has had an effect.
      As you’re a music teacher, I’ll draw it out a little and describe how it works here in New Zealand.
      This is a Year 13 student (17 or 18 years old) in his final year of school. Two separate parts of his Music course are solo performance and group performance. His chosen instrument is bass guitar.
      Yes: there is some repertoire for solo bass – but I think it’s quite an artificial assessment. If his instrument were piano, solo performance would be a vital and valid part of his musical experience.
      Group performance is another deal entirely. It’s a bass player’s bread and butter.
      Many teachers use the assessment framework to construct their lessons. I would like to see teachers construct their lessons, then assess student engagement and learning using the assessment framework.

  2. Stephanie 8 August 2011 at 7:37 pm #

    IIRC Bic Runga or someone equally famous failed bursary music. But the question is, what is the purpose of assessment? Is it of learning, for learning or as learning, or perhaps a combination thereof?

    • stevevoisey 8 August 2011 at 8:35 pm #

      Assessment in New Zealand secondary schools is basically the whole point, according to many students and parents. “Get your ticket so you can get into Uni so you can get a job so you can… oh, I dunno.”

  3. Dr. Rasmussen 13 August 2011 at 6:10 am #

    Your question is about a test that someone fails but who later demonstrates (through a better measure of musicianship, or whatever) that he/she is indeed accomplished in what the test was supposed to be measuring. Does a test actually measure what the author says it does? If not, don’t use it. If so, does that purpose serve the teacher’s needs?

    Also consider: the content may not be right. The questions may not represent musical understanding, but rather musical knowledge (two different things). If the questions are written in words and a student isn’t fully literate, what does that do to a great musician who doesn’t have good command of the language the test is written in? Is there enough time for students of varying needs to complete the items? There are myriad factors to consider. Among the most important are subjective and objective validity. The reliability of the measure is also an important factor. The statistics of the test can be very meaningful in determining whether it has the backbone to be a good measure: the mean, standard deviation, range, mode, median, item discrimination and item difficulty levels, among others (like rater reliability).

    This is probably overkill for this blog, but I stand up for good measures, because to blame tests is like blaming the screwdriver for not being able to cut a piece of wood very well. You are abandoning your students if you don’t use a good measure to help you improve your instruction. (See my previous response in this blog.)

    • stevevoisey 13 August 2011 at 11:00 am #

      Again, I agree. However, would you give a violinist a test on paradiddles? Or comment on the intonation of a pianist?
      I contend that a solo assessment for bass guitar is largely inappropriate.

      • Eric Rasmussen 14 August 2011 at 5:12 am #

        Violinists obviously do not need to know paradiddles unless they’re instrumental music teachers, or composers, or any number of things that are, in this case, less important than having expertise at playing the violin. Intonation of the piano is silly, of course. I would somewhat disagree with you that solo assessment on bass guitar is inappropriate. A bass player should be able to play solo works that are appropriate for his/her level of achievement. Jaco Pastorius practically reinvented the instrument by playing solos. Hear “Tracy” on his self-titled first album, and his solo with Joni Mitchell here on YouTube: http://www.youtube.com/watch?v=7HHjJG7yWBE
        Nothing wrong with having a bassist play a solo. It’s not their bread and butter in the playing world, but they should know how to do it. Right? Express themselves creatively? Improvise? All musicians should strive toward these higher levels of music achievement. That’s my opinion, at least.
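
The test statistics Dr. Rasmussen lists above take very little machinery to compute. As a purely illustrative aside (made-up scores, not an NZQA procedure or anything the commenters actually use), here is a minimal Python sketch: item difficulty as the proportion of correct answers, item discrimination as a simple item-total correlation, and rater reliability as the correlation between two teachers’ independent ratings of the same recorded performances.

```python
# Purely illustrative: invented scores, standard library only
# (statistics.correlation needs Python 3.10+). Not an official procedure.
from statistics import mean, stdev, median, mode, correlation

# Each row is one student's 0/1 scores on a five-item written test.
responses = [
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [1, 1, 1, 1, 0],
    [0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1],
    [1, 0, 0, 1, 0],
]
totals = [sum(row) for row in responses]

# Descriptive statistics of the total scores.
print("mean:", mean(totals))
print("sd:", round(stdev(totals), 2))
print("median:", median(totals))
print("mode:", mode(totals))
print("range:", max(totals) - min(totals))

# Per-item difficulty (proportion correct; higher = easier) and a simple
# discrimination index (correlation of the item with the total score).
for i in range(len(responses[0])):
    item = [row[i] for row in responses]
    print(f"item {i + 1}: difficulty={mean(item):.2f}, "
          f"discrimination={correlation(item, totals):.2f}")

# Rater reliability for a performance rating scale: correlation between
# two teachers' independent ratings of the same recorded performances.
rater_a = [4, 3, 5, 2, 4, 3]
rater_b = [4, 3, 4, 2, 5, 3]
print("inter-rater correlation:", round(correlation(rater_a, rater_b), 2))
```

In practice one would reach for a proper psychometrics package and a corrected item-total correlation, but the sketch shows that the basic numbers behind “a good measure” are not out of reach for a classroom teacher.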

  4. Brandt Schneider (@brandtschneider) 14 August 2011 at 5:23 am #

    I would say that a bass solo is very appropriate compared to a multiple-choice test for math or literature. I think all these tests are artificial constructs.

    But….

    We spend a LOT of time and money teaching these kids. Shouldn’t there be something?

  5. Brandt Schneider (@brandtschneider) 14 August 2011 at 5:28 am #

    You would think the best assessment would be a gig/performance. But then again I’ve sat next to a lot of poor players at a gig. Should they fail? I don’t know their story (beginner? special ed? just broke their finger?).

    My mind spinning a bit…

  6. Douglass Gaking (@gakingmusic) 14 August 2011 at 5:53 pm #

    I assess in-class individual performance with rubrics (1 for singing, 1 for instruments). This year, I am trying to classify songs and activities by difficulty level, so I can document how assessments show students’ preparedness to advance to the next difficulty level. Hopefully this will help me push the kids to expand into more advanced levels of hearing, singing, reading, and creating music. I’m trying to figure out the best way to database all the data so that it can be most useful.

    Administrators these days want to see data-driven instruction. They don’t always expect to see it from a “special area teacher,” but they will be pleasantly surprised if they do. It won’t be long before the state starts forcing us to do it and/or making it a condition for “merit-based pay.” (I hate to say it, but it’s coming.) We need to be proactive about figuring out sound strategies for doing this stuff on our terms before the state lays down the requirements.

    • stevevoisey 16 August 2011 at 10:19 pm #

      “But if you judge a fish on its ability to climb a tree, it will live its whole life believing it is stupid.”

      http://feeds.gawker.com/~r/lifehacker/full/~3/PJjbsRzcWTw/everyone-is-a-genius

      • Eric Rasmussen 17 August 2011 at 4:02 am #

        Steve,
        First, you’re a bit over the top. Second, there’s a fish that does climb trees. Pretty awesome. Maybe some of your fish can climb trees and you’ve never asked them to show you they can. Nothing wrong with having one item on a “test” that only one person a year would get right. That way, you know everybody’s being challenged, and you can teach better to the individual needs of your students. That’s what a master educator does.
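
Returning to Douglass Gaking’s comment above about finding the best way to database his rubric data: one hypothetical shape for such a store is a single table of assessments keyed by student, song, difficulty level, rubric and score. The sketch below is an assumption for illustration only; the schema, the sample names and the “ready to advance” rule are invented, not his actual system.

```python
# Hypothetical sketch of one way to store rubric data: each row records a
# student, a song, the song's difficulty level, which rubric was used, and
# the score awarded. The schema and readiness rule are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE assessments (
        student    TEXT,
        song       TEXT,
        difficulty INTEGER,  -- 1 = easiest level
        rubric     TEXT,     -- 'singing' or 'instrument'
        score      INTEGER   -- e.g. a 1-4 rubric rating
    )
""")

conn.executemany(
    "INSERT INTO assessments VALUES (?, ?, ?, ?, ?)",
    [
        ("Aroha", "Hot Cross Buns", 1, "instrument", 4),
        ("Aroha", "Ode to Joy",     2, "instrument", 3),
        ("Ben",   "Hot Cross Buns", 1, "instrument", 2),
    ],
)

# One possible readiness rule: a student may move past a difficulty level
# once their average rubric score at that level is 3 or better.
rows = conn.execute("""
    SELECT student, difficulty, AVG(score)
    FROM assessments
    GROUP BY student, difficulty
    HAVING AVG(score) >= 3
    ORDER BY student, difficulty
""").fetchall()

for student, level, avg_score in rows:
    print(f"{student} is ready to move beyond level {level} "
          f"(average {avg_score:.1f})")
```

Keeping everything in one flat table makes the questions he raises (average score per student per difficulty level, readiness to advance) answerable with a single GROUP BY query, and it exports cleanly to a spreadsheet if a paper gradebook is the starting point.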
