“Regardless of particular techniques, we assume that all purposeful and effective teachers follow a cycle of plan-revise-teach-assess-reflect-adjust many times.”
(Wiggins & McTighe, 2005, p. 8)
Throughout my education there was a strong emphasis on the importance of assessing and evaluating our instruction and our overall instructional design (i.e., an overarching program or entire curriculum). While I understood the concept of assessment and evaluation, I underestimated its power. Allen (2004) states, “assessment should be done because faculty are professional educators who want to ensure that the learning environments they provide support the development of their students” (p. 6). Now it makes perfect sense: if I develop instruction with a desired outcome in mind, I should assess and evaluate whether or not that outcome is achieved!
Just as it’s important to assess the effectiveness of a course or an entire curriculum, it’s equally important to have a systematic way to assess our interpreting and discourse analysis skills. What I recognize now, after having taught my first courses, is the importance of assessment and evaluation as a foundational tool. My practicum was taught in the community, not at a college or university. I did not have to award grades, and I did not work with students on formal assessment and evaluation; instead, we discussed giving and receiving feedback. As I look back over the course, I clearly see that I missed the opportunity to formally introduce the topics of assessment and evaluation. What’s more, the input I collected from students, in the form of an end-of-course survey, indicated they would have preferred a chance to do more formal assessment and evaluation. Had I introduced assessment and evaluation tools, such as a matrix or rubric, students would have been equipped with a tool for discussing their work throughout the course. I did introduce terminology, and students began incorporating it into their discussions; an assessment tool would have been a beneficial supplement to those discussions.
On the flip side, I experienced firsthand the benefits of ongoing instructional assessment and evaluation. Each of my classes ended with an activity called “Muddiest Point,” which anonymously solicited feedback from students about topics that were still unclear to them. This feedback allowed me to see where students needed additional instruction and helped me identify areas for improvement in my own teaching. It also gave me an opportunity to re-explain a topic so that students were clear and able to move on to the next lesson. Muddiest Point activities, classroom discussions, and student surveys provided me with an excellent framework for assessing and evaluating the effectiveness of my instruction and of student learning.
I also had the pleasure of conducting my first ever research project, in the form of action research. This research allowed me to focus on one aspect of my classroom instruction in order to identify areas for improvement. I chose to focus on classroom activities in an effort to determine which activities students perceived as most effective for their professional growth. Assessing and evaluating students’ perceptions of the various classroom activities provided me with wonderful insight into what was and was not effective. I began my research wary of my skills as a researcher and finished the project hungry for more. The entire action research project was a rewarding process that taught me the importance of systematically assessing and evaluating my practice in order to achieve improvement.
Within this domain, Assessment & Evaluation, you will see evidence of my growing knowledge of how to assess and evaluate a program. You will also see my reflections on how to improve my instructional assessment and evaluation as I move forward in my career as an educator and an interpreter.
References:
Allen, M. J. (2004). Assessing Academic Programs in Higher Education. Bolton, MA: Anker Publishing.
Wiggins, G., & McTighe, J. (2005). Understanding by Design. Upper Saddle River, NJ: Pearson Merrill Prentice Hall.
Assessment and evaluation were a new skill set for me, especially program evaluation. This artifact is a paper outlining the steps for conducting an organizational evaluation of North Carolina’s RID state affiliate chapter (NCRID). The paper was drafted for a course entitled “Program Evaluation and Assessment.” I chose it because I believe it demonstrates my emerging skills in conducting program evaluation.
This course offered me valuable insight into the benefits of conducting assessment and evaluation. At the time I wrote this paper, I was the acting President of NCRID. The Board of Directors and I were in the midst of discussions about how to make systematic changes to our organization. Through the creation of this paper, I was able to work through the various steps needed to conduct an organizational evaluation in order to elicit change for NCRID. Ultimately, the paper became the catalyst for a series of community forums and an in-depth organizational evaluation. The result was a strategic plan that is now the guiding document for continued change within NCRID. (The tenets of the strategic plan are also included within this artifact.)
Through this course, I developed my understanding of the who, what, and why of evaluation. Wiggins and McTighe (2005) discuss the importance of one’s ability to understand a topic and then transfer that understanding to other contexts: “to understand is to be able to wisely and effectively use – transfer – what we know, in context; to apply knowledge and skill effectively, in realistic tasks and settings. To have understood means that we show evidence of being able to transfer what we know” (p. 7). For me, developing an understanding of evaluation through the writing of this paper and the NCRID strategic plan now allows me to transfer that understanding to interpreting and interpreter education.
I know and understand the steps of evaluation, including defining the purpose, identifying evaluative criteria, taking inventory of resources, and synthesizing results. I now recognize my ability to conduct assessments and evaluations and, even more importantly, the resulting benefit of doing so. Ultimately, assessment and evaluation help us as educators identify the worth and/or merit of a particular program, organization, project, or lesson. For interpreters, assessment may be used to identify areas for improvement. As I experienced firsthand with NCRID, evaluation may lead to clearly defined goals for improvement.
I value the lessons learned from this course because I know the skills and knowledge will transfer to my endeavors as an interpreter educator. I now have the resources to systematically assess and evaluate my instructional designs, individual courses, and overarching programs or curricula. These skills will continue to provide me with evidence for ongoing enhancement of my teaching and instructional design.
Artifacts:
Program Evaluation Write Up from “Program Evaluation and Assessment”
NCRID Strategic Plan Goals
References:
Wiggins, G., & McTighe, J. (2005). Understanding by Design. Upper Saddle River, NJ: Pearson Merrill Prentice Hall.
During the Northeastern program I had an opportunity to assess my own work as an interpreter, a skill area where I had minimal previous exposure. These artifacts are my assessments of, and reflections on, a time-shifted meaning transfer (conversational) and two real-time meaning transfers (one ASL to English and the other English to ASL).
Having to view my own meaning transfers was an excellent lesson both in assessing work and in developing empathy for the students who will one day assess theirs! These exercises forced me to consider what was effective (or ineffective) and why. The most challenging part of assessing my work was trying to determine the “how” of the assessment.
The Time-Shifted Meaning Transfer was the first of the three to be assessed. I did not use a rubric and simply based my assessment on my knowledge of meaning transfer. I found this text easier to assess because it involved two live individuals, whereas the Real-Time Meaning Transfers were produced from recorded texts with no live audience. The inclusion of two live participants made the interpretation authentic, and the participants’ responses helped me gauge the effectiveness of the interpretation.
The two Real-Time Meaning Transfers were more difficult for me to assess. I found great benefit in conducting a post-interpretation “write out loud” protocol, completed immediately after filming each interpretation. I did not filter my feelings but simply jotted down my initial thoughts. This proved to be a helpful activity, as it gave me insight into whether or not my initial reflections were accurate. I believe these protocols could prove to be a useful technique in working with students: having access to their own thoughts and feelings could considerably aid students in first getting in tune with their reflection and critical thinking skills while also allowing them to develop the ability to formally assess their work.
In assessing my own Real-Time Meaning Transfer, I relied on the Outcomes Rubric developed by TIEM Online. I admittedly struggled at first and realized approximately halfway through the interpretation that I was not even using the rubric. Essentially, what assisted me was the ability to look for patterns. As we assess our work, we have to look for components of the meaning transfer that present themselves repeatedly; a one-time omission, addition, or errant fingerspelling does not constitute effectiveness or ineffectiveness. Recognizing this and being able to identify patterns in my own work was instrumental in guiding my assessment. In addition, I found it beneficial to view the text as a whole. I should not have been surprised by this, as it reinforces the emphasis on discourse analysis. Ultimately, it was assessing my work from the discourse analysis perspective that offered me the greatest tools. I originally spent too much time trying to isolate individual aspects of my work, which proved to be ineffective. Once I viewed both source and target texts as whole texts, I had much better access to the overall discourse and to the effective and ineffective characteristics of my meaning transfer.
I did “toy” with a rubric of my own, using the skills identified in Marty M. Taylor’s Interpreting Skills series (1993 & 2002), but ultimately returned to the Outcomes Rubric. Both are included as artifacts.
As I begin to incorporate assessment into my teaching repertoire, I recognize the need to introduce students to assessment slowly and from the perspective of discourse analysis. Doing so will not only assist students in developing a better understanding of discourse and linguistic features but also aid them in developing the skills necessary to assess the effectiveness of their own and others’ work. Ultimately, assessing my work was an incredibly powerful tool for identifying both areas for improvement and positive aspects of my work. After conducting my assessments, I had concrete evidence of areas for improvement and was able to focus my attention on those skills. Assessment is a skill I wish to continue to hone in order to become more comfortable with the process and to continually assess and improve my work as an interpreter.
References:
Taylor, M. (2002). Interpreting Skills: American Sign Language to English. Edmonton, Alberta: Interpreting Consolidated.
Taylor, M. (1993). Interpreting Skills: English to American Sign Language. Edmonton, Alberta: Interpreting Consolidated.
My practicum teaching was a unique placement, not within a university or college but out in the community via professional development. I taught a series of courses to my fellow staff interpreters, a few community workshops, and a ten-week series for a small group of advanced interpreters. Without the framework of an established institution, I did not have to hand in grades or formally assess the students. On one hand, this allowed me a lot of freedom in creating the series of lessons. On the other, I did not have strict guidelines or requirements to meet in terms of assessment or evaluation.
For the community workshops, my evaluations were conducted in the form of post-workshop evaluations, which are required by RID in order to award CEUs. I chose to create my own evaluation rather than follow the traditional evaluation form used by RID. These evaluations proved useful in evaluating the effectiveness of my lessons, as evidenced by the changes I implemented in the instructional design (see the Instructional Design Domain, “Evolution of Instruction”).
The other two practicum placements proved to be much more difficult for me. In hindsight, I clearly see that I missed the opportunity to introduce assessment and evaluation to the students/participants. In reflecting on my practice, I recognize that my failure to address this content stemmed from my lack of experience. If I may be honest, I believe I was intimidated by the idea of actually providing evaluation in the form of a “grade” or written comments on students’ work. Therefore, all of my evaluation occurred in the form of verbal feedback during class discussions.
What I found surprising is that students, in their end-of-course evaluations, repeatedly commented that they would have preferred more feedback. While I was hesitant to give it, they wanted it. What an incredibly valuable lesson.
In one class, students from the advanced group recorded themselves producing a Real-Time Meaning Transfer. I offered all students an opportunity to meet with me one-on-one to review their work. Of the eight students, only two chose to meet with me. These two feedback sessions went very well. I used reflective questioning to ask the students their thoughts. While both were expecting, and perhaps wanting, me to be prescriptive, I required them to assess their own work. I did provide some guiding questions to help their assessment, but essentially they identified their own strengths and areas for improvement.
With my advanced class, we spent time in class discussing feedback. Each student expressed how they like to receive feedback and how they do not. While this did not formally address assessment and evaluation, it did start a dialogue about which features of the interpreted work would be addressed when giving feedback.
I believe the best way I could incorporate formal instruction regarding assessment and evaluation in the future is to work with the students to develop a rubric and/or observation sheet: something that the class, as a whole, develops and agrees upon for assessing their own work and others’. We spent at least one full class discussing prosody and prosodic features; this would be an excellent springboard for creating an assessment form. Since the advanced interpreter course focused on formal register, we could have transferred our discussions about “what is formal register” (in English and ASL) into assessment criteria. These criteria could then have been used to provide feedback when students watched each other’s work.
In both series (staff interpreters and community interpreters), we worked through the steps of discourse mapping. Each of the steps, including the concept maps and the identification of salient linguistic features, would have been an excellent source of student work for me to collect, provide feedback on, evaluate for understanding, and then return.
As I become more comfortable with my own ability to assess, I must embrace its benefits in order to improve practice, enhance skill, and promote collegiality. As a class, we continually discussed giving feedback and working as teams, with an emphasis on the work, not the person; yet I failed to truly capitalize on the teaching moment.