In my last few Vantage Point blog posts I wrote about the significant problems with how employee performance appraisals are done, and proposed some solutions. It is time to wrap up the discussion of appraisals with several important design considerations. To begin, the elephant in the room is the actual performance rating scale that will be used. Do you have 5…3…2…or no levels? The most common choice is 5 levels (with 3 levels the second most common), with labels like: 1=unsatisfactory, 2=needs improvement, 3=meets expectations/fully successful, 4=exceeds expectations/superior, 5=greatly exceeds expectations/outstanding. This appeals to our instinct to manage staff rationally, with tie-ins to such things as compensation/rewards decisions, work assignments/promotions, and disciplinary/corrective actions. You can probably guess from my previous posts that I am very skeptical of such allegedly rational ratings and their use in these areas.
For instance, rating an employee in the “soft” and “hard” areas of performance[i] is quite complex, and unless you have a sophisticated means of measuring each sub-area, such ratings are fraught with subjectivity and inaccuracy. In any case, there is no substitute for well-written comments that give a much better, fuller description of how everything is going. Also, reducing everything to a single number/rating that differentiates between staff members in such gross categories can actually be counterproductive. Employees inevitably dissect and compare their ratings, often seeing them as unfair vis-à-vis their peers. Further, dumping staff into such broad categories (e.g., with only three levels) makes the ratings relatively meaningless.
Thus, for most organizations I advocate having only two levels: needs improvement and fully successful. “Needs improvement” can apply either to a new or recently promoted/transferred employee who is still learning the job, or to one who has one or more performance deficiencies that you will work to correct. OK, you ask, what about identifying the high performers, whom you need to praise and reward with a higher rating label? Well, you do that with ongoing communication and praise as part of effectively managing your staff, and you reward the employee with appropriate actions in the areas of compensation and job assignments/opportunities. Believe me, staff understand and appreciate those actions, and you have avoided the counterproductive, negative consequences of artificially stratifying them into, say, 3 or 5 levels.
A quick word about the forced distribution of employees across the range of rating levels. This is an organization’s mandatory allocation of rating levels to staff by percentages. For example: 5% outstanding; 20% superior; 60% fully successful; 10% needs improvement; and 5% unsatisfactory. The idea is that staff always fall into a sort of bell curve of performance, and that managers will avoid “rating creep” and be forced to deal with underperforming employees (even implementing a stated organizational goal of forcing poor performers out) while clearly identifying the top staff members. Also, this way one can directly tie the allocation of ratings to the budget and compensation plan. There is a lot wrong with all of this, including: basing ratings on budgets instead of actual performance; artificially giving low ratings when you should already have a fully trained, well-performing staff (if you’ve done your job right!); and supervisors still engaging in “rating creep” to avoid mislabeling staff who are doing well. Many corporations that once used forced distribution ratings have recently abandoned them, so this methodology’s heyday may be over.
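To make the mechanics concrete, here is a minimal sketch (in Python) of how a forced distribution assigns labels to a ranked list of employees. The percentages are the example figures above; the function and employee names are hypothetical, invented purely for illustration, and any real system would be considerably messier.

```python
# Minimal sketch of a forced-distribution allocation, assuming employees
# are already ranked from best to worst. Quotas match the example
# percentages in the post; everything else is illustrative only.

RATING_QUOTAS = [
    ("outstanding", 0.05),
    ("superior", 0.20),
    ("fully successful", 0.60),
    ("needs improvement", 0.10),
    ("unsatisfactory", 0.05),
]

def force_distribute(ranked_employees):
    """Assign rating labels to a best-to-worst ranked list by fixed quotas."""
    total = len(ranked_employees)
    ratings = {}
    index = 0
    for label, share in RATING_QUOTAS:
        count = round(total * share)
        for employee in ranked_employees[index:index + count]:
            ratings[employee] = label
        index += count
    # Any leftovers from rounding fall into the lowest bucket.
    for employee in ranked_employees[index:]:
        ratings[employee] = RATING_QUOTAS[-1][0]
    return ratings

print(force_distribute([f"Employee {n}" for n in range(1, 21)]))
```

Notice that the labels are driven entirely by rank position and quota arithmetic, not by whether anyone actually performed poorly; that is precisely the problem described above.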
Now consider 360-degree performance evaluations, where an employee is evaluated not only by the supervisor, but also by peers and subordinates (if any). These can be quite valuable, especially if the employee is working in a team-based group/organization. Such 360-degree evaluations can be used routinely, or more intermittently, as appropriate. Care needs to be taken, however, that these evaluations are done with clear goals and with a methodology that protects needed confidentiality, is directly tied to the individual employee’s situation, and provides the means for the employee to respond appropriately and interact with the persons in the rating group (e.g., to clarify feedback, plan future actions, etc.).
Do you tie ratings to compensation? In a previous post I wrote that “pay for performance” is problematic. But a well-designed appraisal system can and should be a factor in compensation decisions. The key is to make absolutely sure that such decisions are well supported and genuinely differentiate between staff members, even if the actual amounts of compensation are not that large. As one writer has observed, in awarding increased compensation “…the issue isn’t how big the pie is…it is how the pie is sliced.”[ii] In my experience, rewarding staff with non-monetary compensation, such as a set of movie tickets or a day spa pass, is very effective and avoids dollar-to-dollar comparisons.
Finally, what about automating performance evaluation systems? Such systems exist, and many organizations and courts have adopted them. This is fine, BUT one must be careful that any automated system you deploy actually implements what you want to do and fits your organization, rather than forcing your organization to fit the system. I have seen automated systems that force the use of only numerical ratings at both the subsidiary and overall levels, and then take the resulting number (say, on a scale of 1-100) and apply a 5-level label/rating. This may seem appealingly efficient and rational, but by now you know that this would be anathema to me. As with any automation, make sure it serves you, and not the other way around.
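For illustration only, here is a minimal sketch of the kind of score-to-label conversion such systems perform. The weights, cutoffs, and labels are hypothetical assumptions on my part, not taken from any particular product.

```python
# Hypothetical sketch of an automated system that rolls subsidiary scores
# into one overall number and then stamps a 5-level label on it.
# Weights and cutoffs are invented for illustration only.

def overall_score(subsidiary_scores, weights):
    """Weighted average of 0-100 subsidiary scores."""
    return sum(s * w for s, w in zip(subsidiary_scores, weights)) / sum(weights)

def label_for(score):
    """Map a 0-100 overall score onto a 5-level rating label."""
    if score >= 90:
        return "outstanding"
    if score >= 80:
        return "exceeds expectations"
    if score >= 60:
        return "fully successful"
    if score >= 40:
        return "needs improvement"
    return "unsatisfactory"

score = overall_score([85, 72, 90], [0.5, 0.3, 0.2])  # -> 82.1
print(score, label_for(score))                         # -> "exceeds expectations"
```

The arithmetic looks precise, but the precision is largely an illusion layered on top of the same subjective inputs discussed earlier.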
In closing, I’ve appended a picture of an amusing chart of “Performance Standards” that I got as a handout at a workshop many years ago. It’s funny, but it also makes you think about how you really would (not!) be able to justify such distinctions between rating levels.
That’s it for my series of posts on performance appraisals. I hope this has sparked readers to think about what they are doing and perhaps to work to improve how things are done where they work. As always, please submit your comments or questions; I’d love to hear about your experiences and, especially, where you think I’ve left things out or missed the mark!
[i] “Soft” areas include qualitative measures such as attitude, service orientation, teamwork, and communication skills; “hard” areas include specific measures of the quantity and timeliness of individual tasks (e.g., the number of pleadings quality-checked/docketed per hour, and at what accuracy rate).
[ii] Dick Grote, How to Be Good at Performance Appraisals, p. 182.