Assessment

Assessment, Evaluation, Grading, and Student Products

Introduction

Do we really have to consider assessment, evaluation, and grading when discussing the products students generate as learning opportunities? A strategy we sometimes use when reviewing educational literature on multimedia authoring (web pages, blogs, etc.) is to turn to the index (in books) and look for the words "assessment", "evaluation", and "grading". A discussion of these topics, which we recognize represent different constructs that we will make little effort to differentiate here, can very quickly get messy and idealistic. Perhaps this is why some authors are content to avoid them. We feel it fair to recognize that assessment, evaluation, and grading are processes expected of educators. While not every educational activity must involve these processes, it seems unrealistic that educators can devote much time to learning activities without engaging in them. We assume, in part, that unless writers make an effort to address such topics they will have limited credibility with educators.

Isn't it true that many of us have authored a blog or maintained a wiki for many years, explaining our reason for doing so as a method for personal reflection and learning, and have never received a grade for our efforts?

Sure, this is true, but individuals are motivated by different things, and educators must consider how they might effectively involve all students in productive learning tasks. Our personal theoretical perspective on learning is cognitive, but it is prudent to acknowledge what appear to be truisms from other perspectives. One of the core principles of the behavioral approach is that our behaviors are shaped by the consequences we experience. If there are no experiences the learner interprets as a positive consequence ("interprets" is not exactly a behavioral term, but humor me), behaviors decrease in frequency and eventually cease to be generated. Whether approached from a behavioral or a cognitive perspective, learning behavior must in some way be motivated and must in some way be guided by effective feedback (information). Students adapt to the feedback they receive and make the effort to modify behaviors that are linked to the consequences they experience. The key challenge with blogging, or with any proposed learning activity, is to find ways to emphasize the skills and knowledge we as educators perceive to be most important (Crooks, 1988). Sometimes educators are clear on the learning outcomes they desire but cannot think of ways to provide feedback and consequences related to the entire range of outcomes. Unfortunately, this situation sends mixed and confusing messages to students, who may or may not attend to skills or knowledge that generate no personal consequences.

If online discussion can be considered an activity with many similarities to blogging, we can use some of the research on that environment to make inferences about blogging. MacKinnon (2000) intervened in an online discussion environment and began awarding points based on the characteristics of student posts. Posts that were irrelevant received no points, posts that simply restated information received in the course earned one point, and posts that generated applications for course content or proposed examples fitting principles stated in class earned two points. The addition of the point system shifted the frequency of posts falling in the different categories: restatements of information dominated early sessions, but two-point contributions became more frequent in later sessions. It may not surprise you that students responded in this fashion. Of course, you might claim, students will do what they expect will bring them a higher grade. Well, then, if you want students to go beyond restating the information presented to them, it appears there is something quite concrete that can be done to encourage such behavior.
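To make the mechanics concrete, here is a minimal sketch of how a point scheme of this kind might be recorded and tallied. The category names, point values, students, and posts are illustrative assumptions made for the sketch, not MacKinnon's published coding scheme.

```python
# Illustrative point scheme loosely modeled on the idea in MacKinnon (2000):
# irrelevant posts earn nothing, restatements earn one point, and
# applications/examples earn two. The categories and names below are
# assumptions for this sketch, not the published scheme.
POINTS = {
    "irrelevant": 0,
    "restatement": 1,
    "application": 2,
}

def score_posts(posts):
    """Total the points earned for a list of (student, category) posts."""
    totals = {}
    for student, category in posts:
        totals[student] = totals.get(student, 0) + POINTS.get(category, 0)
    return totals

# Example: two (hypothetical) students, four posts
posts = [
    ("Ana", "restatement"),
    ("Ana", "application"),
    ("Ben", "irrelevant"),
    ("Ben", "application"),
]
print(score_posts(posts))   # {'Ana': 3, 'Ben': 2}
```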

Using a Rubric to Assess and Evaluate Student Work

Evaluation activities potentially influence student behavior in multiple ways (Crooks, 1988). The evaluation process:

  • impacts student motivation,
  • influences student choice of learning activities,
  • helps students monitor their own progress, and
  • certifies the level of student accomplishment for external review.

The challenge for educators is how to design evaluation procedures that realize these potential influences. Too often, the focus is probably placed on the final purpose we list - certifying progress for external review. This is a necessary task for educators, but for students to benefit fully, evaluation methods must also engage the other processes on this list.

The evaluation of student blog activity or of the work of a small group in creating a wiki falls within the category of performance assessment. While performance assessment may rely on a variety of techniques, the general expectation is that the techniques consider a student's ability to perform a task (Office of Technology Assessment, 1992). If answering questions about student knowledge is a goal, evidence of that knowledge must appear in the performance of the task. In this case the task is to create a product. The product and, when visible, the behaviors involved in creating the product (e.g., timeliness of posts, involvement of all participants in creating a wiki) serve as sources of information for the assessment and evaluation processes.

So, our interest is in techniques that can serve the multiple purposes required of evaluation, are suited to performance-based assessment, and can clearly communicate feedback regarding the specific knowledge and skills we want students to emphasize.

An assessment rubric offers an approach suited to these multiple requirements. One concrete way to visualize a rubric is as a grid consisting of performance competencies as rows and levels of quality in achieving these competencies as columns. The cells within this rubric offer spaces for describing what would constitute evidence of a level of a specific competency and the point value (a relative measure of importance) for demonstrating proficiency at that level.

[Figure: rubric format - a grid with competencies as rows and levels of quality as columns]
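For readers who like a concrete artifact, below is a minimal sketch of how such a grid might be represented and scored. The competencies, level labels, descriptors, and point values are invented for illustration and should be replaced with your own.

```python
# A rubric as a grid: competencies are rows, quality levels are columns,
# and each cell holds a descriptor plus a point value. Everything below
# is an illustrative assumption, not a recommended rubric.
RUBRIC = {
    "Post frequency": {
        "Beginning":  ("Posts less than once per week", 1),
        "Developing": ("Posts about once per week", 2),
        "Proficient": ("Posts several times per week", 3),
    },
    "Information quality": {
        "Beginning":  ("Restates course material", 1),
        "Developing": ("Adds examples or applications", 3),
        "Proficient": ("Synthesizes sources and makes a persuasive case", 5),
    },
}

def score(ratings):
    """Sum the points for a dict mapping competency -> level achieved."""
    return sum(RUBRIC[competency][level][1] for competency, level in ratings.items())

# Example evaluation of one (hypothetical) student's blog
ratings = {"Post frequency": "Proficient", "Information quality": "Developing"}
print(score(ratings))   # 6 out of a possible 8
```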

What competencies should I include?

We don't think we should provide a specific answer to this question. The competencies you evaluate will influence how students spend their time and direct their attention, so you should emphasize skills and knowledge that are central to your purpose for having students blog. One strategy you might consider if you are getting started is to do an online search for "blog" and "rubric" or "wiki" and "rubric" to review rubrics created by other teachers. Some sites collect these rubrics, but it is probably not the best idea to apply them without editing. You want to make certain your goals are the core of your rubric.

Suggestions relevant to blogs

It is interesting to consider the different competencies educators look for in evaluating blogs. Here is a list we put together by examining many online examples. Our language is pretty generic as we have attempted to capture themes from similar competencies.

Post/comment frequency - the extent to which the author meets or exceeds the expected frequency for authoring posts and comments.

Post/comment quality of written product - the extent to which the written material avoids errors of spelling or grammar and conveys information or the author's position effectively.

Post/comment information quality - the extent to which the written material makes a persuasive case, is interesting or thought-provoking, creative, or informative, depending on the expected purpose of the blog.

Post/comment synthesis - the extent to which the written material incorporates links to important authoritative sources, quotes them appropriately, and accurately integrates their core ideas.

Responsiveness to peers - posts and comments show awareness of the material generated by peers and synthesize what others have said in an appropriate manner.

Visual appearance - posts make effective use of images and organize text in an attractive manner.

Suggestions relevant to wikis

We tend to think of wikis as online, multimedia versions of collaborative information problem-solving tasks. This perspective offers many convenient connections with a wide variety of content areas and helps in identifying the skills necessary for task completion and available for evaluation. Think of a typical content "report", "theme", or "review" and then consider how expectations would change as a function of cooperative research/authoring and web presentation. Major elements to be evaluated (e.g., collaboration, evidence of information search) are added or deleted depending on their relevance to the specific project and product you have in mind.

Some of the major components of such projects might include:

  • information location and processing,
  • knowledge communication,
  • basic communication skills,
  • collaboration, and
  • multimedia and technical functionality.

In creating a rubric, that is, in generating its behavioral categories and descriptors, the instructor would identify in greater detail the specific skills to be evaluated and the levels of performance the reviewer would differentiate.

For example, more specific components of "multimedia and technical functionality" might include:

  • the number and functionality of internal and external links (a rough automated check is sketched after this list), and
  • the number and appropriateness of images (video, audio elements).
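If the functionality of links is one of the things you plan to verify, part of that check can be automated. The sketch below is one possible approach using only the Python standard library; the placeholder URL and the simple "does it respond" test are assumptions about what counts as a functional link, not a prescribed procedure.

```python
# A rough, illustrative check of link functionality for a wiki page.
# Extracts href targets with a regular expression (adequate for a quick
# audit, not a full HTML parser) and reports which ones respond.
import re
import urllib.request

def check_links(page_url):
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    links = re.findall(r'href="(https?://[^"]+)"', html)
    results = {}
    for link in links:
        try:
            with urllib.request.urlopen(link, timeout=10) as response:
                results[link] = response.status  # 200 suggests a working link
        except (OSError, ValueError) as err:
            results[link] = f"failed: {err}"
    return results

# Example (placeholder URL):
# for link, status in check_links("https://example.org/class-wiki").items():
#     print(status, link)
```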

One of the potential benefits of a wiki is that the final product might be expected to emerge over time and the information within the wiki might be used to evaluate the process of getting to the final product. For example, student groups might be expected to determine who will do what and to share "raw" contributions with each other as the group works toward a final product. This approach offers some advantages in evaluating individual contributions to a collaborative process.

While these (or other) suggestions may get you started, as the expression goes, "the devil is in the details." The biggest challenge is writing clear statements that explain the levels of performance associated with each competency. Clear statements help the student understand which behaviors are important and what good and poor performance look like. The same statements are intended to improve the reliability of the instructor's evaluation.

Resources

There are some great online sites offering tools for creating rubrics (and often rubrics created with these tools). Try the following:

Rubistar - create your own rubrics, examine rubrics created by others.

Rubrics 4 Teachers

 

References

Crooks, T. (1988). The impact of classroom evaluation practices on students. Review of Educational Research, 58, 438-481.

MacKinnon, G. (2000). The dilemma of evaluating electronic discussion groups. Journal of Research on Computing in Education, 33, 125-131.

Office of Technology Assessment (1992). Testing in American schools: Asking the right questions. Washington, DC: Government Printing Office.

