Short answer: you don't.
In fact, it's potentially worse than that... you don't even know that the person rating a professor is a student at all (a huge assumption to make), let alone that they ever took that professor's class.
Polyratings.com does everything in its power to review questionable postings brought to our attention, but a function of Polyratings' privacy guarantee is lack of authentication and login. This means that a professor could post positive ratings about themselves to their pages, or negative ratings about other professors (both of which have happened in the past).
We have been looking into ways of curbing this practice, but for now, Polyratings users will have to rely on their own judgment to decide which ratings to consider accurate; if you think about it, that's no different from evaluating information you get from any other source.
If you believe a rating comes from a questionable source, please report it.
The standard by which we judge all comments is a simple one: value.
We do not judge comments based upon the words they contain or the way they express an opinion; if a comment is reported as inappropriate, we look at what value it adds to both Polyratings.com and to Cal Poly students in general.
Calling a professor names is not only immature, but does not add value.
Posting anything but a comment (emails, test questions, etc.) about the professor does not add value.
Replying to other comments instead of giving your own opinion on the professor does not add value.
Value to the Cal Poly community is the gold standard by which we rate comments when problems are brought to our attention... if the comment lacks value, it will be deleted.
Polyratings' staff does not have time to read and approve every comment.
As such, we only hear about inappropriate comments after the fact; just because a comment appears does not mean that it's been reviewed and deemed acceptable.
If you see an inappropriate comment, report it; over 90% of the comments reported as inappropriate are either removed or moderated to remove offending material.
As an aside, the ratings engine does support an "approval process" for comments, but as stated earlier, we don't have time to approve every comment, so we don't personally use it.
If we made time to personally edit every student's comments, we'd never have time for our own school work. Think before you post.
Besides, there's the side issue of verifying that the person who's requesting we remove or edit the post is the one who really wrote it, which opens up a whole different can of worms.
As such, any requests to edit or delete comments will be ignored.
Chances are very good that the feature you're thinking of we've already heard about, and have either planned to implement it and haven't had time or have purposefully decided not to implement it for one reason or another.
Having said that, we still like hearing about new features. Since Polyratings is based upon OpenRatings, if you'd like to suggest a feature, you should go over to OpenRatings' Bugzilla, open an account (it only takes a couple of seconds), and file your feature as a bug.
The OpenRatings team will then get back to you about the feasibility and time frame for implementation of your feature.
Despite the fact that this is not a question, we often get comments like this from professors and occasionally from students (if you can believe it) and we'd like to clarify our position on these types of emails.
In a nutshell, you can't sue Polyratings.com. You may think a comment about you is defamatory and libelous, and it may very well be.
But we didn't write the comment. The comment is not ours; it's the property of the student who wrote it. While you're welcome to sue the author (assuming you can find out who they are), you really can't sue Polyratings.com, because we haven't broken any laws (and you wouldn't get any money out of us poor college students anyway).
The Communications Decency Act of 1996 protects Internet service providers (ISPs) and website operators from being sued for original comments made by visitors to the site. And while the Supreme Court struck down the CDA's indecency provisions on other grounds, its immunity provision survives, and courts, in cases involving Yahoo! and AOL, have generally held that ISPs and website operators carry immunity from being sued for content posted by others.
So please... if you find inappropriate content about you on Polyratings.com, notify us. But don't write a scathing email threatening to sue us. For one, it hurts your credibility, because you're threatening something you can't deliver on; for another, it doesn't exactly make us eager to help you, even though over 98% of the time we're notified of inappropriate content, we side with the reporter of the content and not the author.
Even if they are threatening to sue the crap out of us.
A rating of "N/A" for a professor indicates that the Polyratings 2 engine could not give a rating to a professor because of missing information.
Unfortunately, some 1200 evaluations from the original Polyratings only contain comments about the professors; they do not contain the original numerical data that students were asked about professors.
Thus, these ratings show up as "blank" for a professor; if a professor's ratings are all blank, their overall rating is not applicable, or "N/A".
Two reasons for this: one is explained in question 4 below; the other is a symptom of missing data from the original Polyratings.
It is entirely possible that a professor could have some ratings with complete numerical data intact, and some ratings with no numerical data. Because the old Polyratings included the now-missing data in its averages, a professor's overall evaluation may have changed: Polyratings 2 only takes the complete data from the original Polyratings into account when calculating professor averages.
This, unfortunately, could not be avoided. However, in an attempt to be fair to both students and professors, if you find that a professor has a lower rating than they did in the original Polyratings, please email email@example.com, and we will attempt to re-adjust the scores within the system to restore the original ratings. If, on the other hand, a professor's score is higher, we will not correct the ratings.
Within the next point-release of Polyratings 2, we are considering correcting the scores of all professors who lost over half of their ratings in the transfer.
This is an artifact of missing data from the original Polyratings (are you beginning to see a pattern yet?).
The summary at the top of evaluation pages gives the "Cumulative GPA" of a professor, along with the number of evaluations used to calculate that score. Therefore, it is possible that a professor may have ten written evaluations, but only seven actually contributed to the calculation of their scores. In this case, the evaluations page would report their score based upon seven evaluations, not ten.
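The averaging rule described above can be sketched in a few lines of Python. This is only an illustration of the logic, not the actual Polyratings 2 code; the evaluation structure and field names here are our own assumptions.

```python
# Illustrative sketch (not the actual Polyratings 2 code) of how the
# overall rating described above behaves: evaluations whose numerical
# data was lost in the transfer are skipped, and if none remain the
# professor's overall rating is "N/A".

def overall_rating(evaluations):
    """Each evaluation is a dict; 'score' is None when the original
    numerical data is missing (a "blank" rating)."""
    scores = [e["score"] for e in evaluations if e["score"] is not None]
    if not scores:
        return "N/A"  # every rating for this professor is blank
    return round(sum(scores) / len(scores), 2)
```

So a professor with ten written evaluations but only seven carrying numbers would get an average over those seven, exactly as the summary page reports.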
In discussions about ways we might change the presentation of information in Polyratings 2, we decided to go with a "GPA paradigm" when it came to rating professors.
We figured that every student can relate to a "cumulative GPA" and that it would be easier to glance at a professor's "grade" to get a feel for whether or not you would like to enroll in said professor's class.
Therefore, the lowest score a professor can now receive is an F (0), and the highest, an A (or 4.0). As such, overall evaluations are now "cumulative GPAs", with a scale from 0.0 to 4.0.
Finding out which professors are on "academic probation" is left as an exercise for the reader.
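The "GPA paradigm" amounts to mapping a 0.0-4.0 score onto a familiar letter grade. A minimal sketch follows; the letter cutoffs here are our own illustrative assumption, not necessarily the ones Polyratings 2 uses.

```python
# Hypothetical sketch of the "GPA paradigm": a professor's cumulative
# score runs from 0.0 (F) to 4.0 (A), and can be glanced at as a grade.
# The cutoff values below are assumed for illustration only.

def letter_grade(gpa):
    cutoffs = [(3.5, "A"), (2.5, "B"), (1.5, "C"), (0.7, "D")]
    for minimum, grade in cutoffs:
        if gpa >= minimum:
            return grade
    return "F"
```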
We felt it was necessary to answer this question because when offered a keyword search, people will invariably enter... shall we say interesting search terms.
If, by chance, a professor's name comes up when searched with particularly unflattering keywords, we want to deflect the question which will invariably be asked: how do you rate professors based upon keywords submitted in searches?
The technical answer? We implement keyword searches by querying a full-text index which was created on the database table containing the text of the evaluations. This is a built-in feature of the database Polyratings 2 uses; technical specifications for the full-text index can be found here.
If that answer made no sense, here's the shorter version: the database has built-in search capabilities to do "natural language"-based searches. These searches take the word(s) entered by the user and perform a statistical analysis on all of the comments submitted. Comments which include all of the words in the given order are rated higher than comments which only contain a few words out of the keyword list. This is all done by the database's full-text index searching module, and we simply pass the keywords as we receive them on to the searching algorithm's engine.
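The ranking idea described above can be illustrated with a toy example. The real search is performed entirely by the database's full-text index module, not by code like this; the sketch below only mimics the general principle that comments matching more of the submitted keywords rank higher.

```python
# Toy illustration of keyword relevance ranking (NOT the database's
# full-text search module): comments containing more of the submitted
# keywords score higher, and non-matching comments are dropped.

def relevance(comment, keywords):
    words = comment.lower().split()
    hits = sum(1 for kw in keywords if kw.lower() in words)
    return hits / len(keywords)  # fraction of keywords present

def search(comments, keywords):
    ranked = sorted(comments, key=lambda c: relevance(c, keywords),
                    reverse=True)
    return [c for c in ranked if relevance(c, keywords) > 0]
```

In the real system, the keywords a user types are simply handed to the database's search engine, which performs a far more sophisticated statistical analysis than this.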
So, if you're unhappy that a professor's name pops up when searching on any particular term(s), don't complain to us; complain to the Statistics department: they probably invented the algorithm you're unhappy with.
Visit the professor addition station; be prepared to rate the professor you're suggesting we add. We do this because we want to have an initial evaluation for every professor we add (as we're sure you do).
This site has been accessed 2234606 times since 1.9.99
Polyratings.com, Version 4.0.0 © 1998-2012. All rights reserved.
Based on the OpenRatings professor ratings engine