Note: Another oldie-but-goody. I wrote it a while back when I was facilitating professional learning in Eagle Pass, Texas for the school district there with colleague James McNamara. One of the funniest moments with Jim McNamara came when I was digging through the trunk of my car and found a brand-new comb. As an expression of gratitude for Jim's mentorship--I had just joined the Education Service Center--I offered it to him with a long, mock-heroic tale expressing my heartfelt gratitude. Jim looked me straight in the eye, and without missing a beat, took the proffered comb and said, "I'll never part with it." You see, Jim was/is bald-headed. I about fell on the ground laughing.
Sitting in the Restaurante Moderno in Piedras Negras one evening after having taught a nine-hour class in Eagle Pass, having supper and sipping a stimulating beverage, James McNamara (Technology Director for a San Antonio school district) leaned back in his wood-frame chair with a red cushion: "When you're evaluating what goes on the Internet, you have to ask how it adds value to what you're looking at." As I bit into a guacamole-laden chalupa from Maria, his statement made me reconsider how I might evaluate Internet-published materials. The Internet Learning Institute, a five-day staff development session focused on facilitating educators' transition from surfer to server, from hunter-gatherer on the 'Net to tele-planter, lay heavily on our minds. The questions we considered over our two hours of discussion included:
- What method of assessment will provide student publishers with feedback on web design and content that has inter-rater reliability according to specific, predetermined categories?
- How can we construct assessment tools that focus educators on establishing a reciprocal evaluative method--centered on the web design of published content--with their students?
Constructing Reciprocal Evaluative Methods
The Internet Learning Institute class, Publishing via the Internet, has a simple premise: To publish is to make one's work known. The advantages of publishing student work have been shown in other publications and are widely recognized, not only for the motivational impact on students' revising and editing their work, but also for the interaction it invites: professionals online can respond directly to students about the work they have published on the web.
Simply put, students are no longer being graded only on their abilities, but also on the relevancy of their work to a wide audience of technology-savvy readers. Evaluating, or extracting value as Jim McNamara puts it, means finding what in the process of the evaluation adds value to the learning process. To do this, educators have to step back and carefully consider how they extract the value of student work. In my writing workshops with students, I know I did.
As a writing teacher, favoring Nancie Atwell's approach as shown in In the Middle, I fostered student writing by having students write about those events relevant to them. In the writing workshops I facilitated as a classroom teacher, students never threw away what they had written, saving every lead or piece in their folders. These pieces were not graded. Students chose what they would write about, and I graded their drafts for specific skills (e.g., active vs. passive voice, if appropriate for that piece of writing).
By the end of the six-week grading period, all students had been graded on the same objectives, but each had achieved the objectives at different times. The pieces they published, graded or not, were eligible to be placed in their portfolio, a folder they decorated for the purpose of showing off at meetings with their parents.
Reciprocal evaluative methods (REMs) mean that students and teachers have to sit down together and decide what each will be accountable for within a certain time. Grading means being aware of where in the writing process each writer was within a piece of writing, and which objectives they were focused on.
Evaluating materials published on the 'Net must involve more than just what we do for print. Not because we're putting materials on the Internet, but because we have a live audience, one that, with a click of the mouse, can evaluate our work. But how does this happen? We had to ask ourselves several questions:
- What criteria must be set?
- What common objectives can our audience agree on?
- How do our students decide what is relevant to our audience?
"Gracias," I said to the waiter as he placed a new coke on the table. With a serious nod, he inclinded his to Jim, asking, "Otra?"
What key questions could we ask that would send us down the right path to formulating a credible response to the questions we had raised?
For us, this agreement implies that students, after negotiation with the teacher, must be aware of what they are learning and why they have set out to publish a project. Students must do three things: 1) be willing to evaluate themselves according to standards they have set for themselves through negotiation with their teacher; 2) assume ownership of their work; and 3) use the time in class to engage in a recursive process of project development.
To ensure reciprocal evaluations, teachers and students must sit down and agree on three things before anything goes up on the web:
- How will the web published work be designed, and how many items off a web design checklist apply to this particular project?
Some sample web design checklists can be found at: http://www.mguhlin.net/techserv/workshops/webdesign/checklists/
- What were the objectives that influenced the content of the materials appearing on the web, and how is one's cognizance of these objectives to be demonstrated?
- Who is the specific audience, and what statistical weight will be given to tele-evaluators that are not members of the target audience?
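The article leaves the mechanics of that last question open. As one possible illustration (not from the original article), the "statistical weight" for tele-evaluators outside the target audience could be a simple down-weighting in a weighted mean; the roles, weight value, and 1-5 rating scale below are all assumptions:

```python
# Hypothetical sketch: down-weight scores from evaluators who are not
# part of the project's target audience. All constants are assumptions.

TARGET_AUDIENCE = {"teacher", "parent"}   # assumed target roles
OUTSIDE_WEIGHT = 0.5                      # assumed weight for non-target raters

def weighted_mean(ratings):
    """Weighted mean of (role, score) pairs, score on a 1-5 scale.

    Target-audience raters count fully (weight 1.0); everyone else
    counts at OUTSIDE_WEIGHT.
    """
    total, weight_sum = 0.0, 0.0
    for role, score in ratings:
        w = 1.0 if role in TARGET_AUDIENCE else OUTSIDE_WEIGHT
        total += w * score
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

ratings = [("teacher", 4), ("parent", 5), ("business person", 2)]
print(weighted_mean(ratings))  # the outside rating of 2 counts half
```

The exact weighting scheme would be part of the teacher-student negotiation; the point is only that "statistical weight" can be made concrete and agreed on in advance.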
Tele-Evaluators and Standard Deviation
One and a half chalupas and a few more stimulating beverages later, Jim and I asked ourselves the question, "What's an easy way for us to set up a web page so that visitors could provide feedback?" We carefully considered the following options:
- Set up a counter on each student's main web page and count the number of hits to a particular page. We immediately discounted this idea, however: anyone could easily set a web browser to visit a page and reload it many times, skewing the count. Nor would a hit count get at the quality or depth of the visit.
- Create a mailto: link on the student web page and request that visitors send an email. This option was discounted as well. Although better than the page counter because visitors could provide more detail, there was no guarantee that visitors would make constructive, evaluative comments on the content, or that the tele-evaluator had the same objectives in mind as the student and the teacher.
- Create a discussion group with a link that posted messages to a specific discussion area, much like an electronic, web-based bulletin board (i.e., a feature of FrontPage 98, although a better program called WebBoard exists, albeit at a higher price). This method would allow tele-evaluators to post their messages, as well as respond to specific content and format questions within an evaluation survey. Such a form would structure the tele-evaluators' evaluations in ways linked directly to student, teacher, and project objectives.
Unfortunately, none of the three methods we considered would work effectively. Each failed because it did not give us the inter-rater reliability critical to online evaluation of web-published materials. Gut feelings and expressions such as "Great work!" provide praise without puissance. What would work, however, was a FileMaker Pro database linked to the web.
The web-linked database option provided us with several advantages, chiefly the capacity to generate statistical information about the replies, particularly inter-rater reliability. Inter-rater reliability, established through standard deviation, could be made possible through the use of an online database. Student web pages would contain a link to the online form. Readers agreeing to offer feedback on the web page would indicate who they are (teacher, administrator, business person, parent, etc.) and then proceed to evaluate student work on the merits of the objectives listed in the online form. The contents of the form would be fed into the database, each section would be scored, and the information would be stored and made available in the database. In this approach, tele-evaluators are made aware of which particular objectives students have targeted in their web-published work prior to using the online form.
Once stored in the database, the tele-evaluator, as well as others, can check the reliability of the assessment data. The information (a portion of which is statistically based) provides students with real-audience feedback focused on their specific learning objectives and on the form and function of their published work, in a way neither possible nor quickly available in print.
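The standard-deviation check described above can be sketched in a few lines. This is an illustrative assumption, not the article's actual FileMaker Pro setup: hypothetical tele-evaluator scores per objective, with a low standard deviation read as rough agreement among raters:

```python
# Hypothetical sketch of the inter-rater agreement check: scores on an
# assumed 1-5 scale, grouped by objective, with the standard deviation
# used as a rough measure of rater agreement.

from statistics import mean, stdev

# Assumed sample data: four tele-evaluators' scores per objective.
scores_by_objective = {
    "content accuracy": [4, 5, 4, 4],
    "web design":       [2, 5, 1, 4],
}

AGREEMENT_THRESHOLD = 1.0  # assumed cutoff for "raters roughly agree"

for objective, scores in scores_by_objective.items():
    sd = stdev(scores)
    verdict = "raters agree" if sd <= AGREEMENT_THRESHOLD else "raters disagree"
    print(f"{objective}: mean={mean(scores):.2f} sd={sd:.2f} -> {verdict}")
```

A tight cluster of scores (low deviation) suggests the feedback is reliable; a wide spread flags an objective where evaluators read the work very differently, which is itself useful information for the student and teacher.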
Only one question remains unanswered, however. What did they put in the soda and guacamole?
Suggested Sites to Visit:
Rubrics and Assessment tools (10/20/96). http://problemposing.e-commerce.com/rubrics-g.htm
Web Design Checklists: http://www.mguhlin.net/techserv/workshops/webdesign/checklists/