I finished compiling all the individual session speaker ratings from SQLSaturday #107, held in Houston on April 21, 2012, and sent the results to the speakers this past weekend. We used the default form provided on the SQLSaturday Admin site, which had two basic inputs:
1) Expectations: Did Not Meet / Met / Exceeded.
2) Overall Quality of the Presentation: rated 1-5, where 5 = great.
There was also a request to write any other comments on the back of the form. A few people provided constructive criticism, which the speakers can use to improve; many offered positive encouragement; but most wrote nothing.
Anybody have a problem with this form? I do, and I’d like to make it more useful for both organizers and speakers. But how?
We’ve been conditioned since early childhood in school to receive a grade for our performance. Consequently, we tend to provide evaluations with number ratings so that we can come up with an average rating for each speaker. The problem with this type of rating for speakers, in my opinion, is that there are no defined criteria for the attendees (the graders) to use in their evaluation: it is purely subjective, based entirely on each individual’s experiences, which can vary wildly at an event like SQLSaturday.
The “Expectations” rating, to me, is not very useful: it is too general. There are two things that I, personally, am gauging on an “expectations” basis when I attend a session: 1) the content to be delivered, based on the abstract provided; and 2) the quality of the speaker, based on the presenter’s professional credentials. But what if I’m so new to a topic area that I don’t really comprehend what the abstract means (although I think I do), and therefore my expectations of what I’m about to experience are very different from the reality? Or what if I have really high expectations of a well-known speaker and I feel their presentation is just average? How do I handle this in my evaluation?

Based on what I saw in our SQLSaturday evaluations, most people whose expectations were not met also rated the overall quality of the presentation low. But how can that be, when the majority of the other attendees at the same session believed it met or exceeded their expectations and gave a high rating on overall quality? I saw this anomaly several times.
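For illustration only, here is a minimal Python sketch of how compiled ratings might be scanned for that pattern. It assumes the paper forms were keyed into a hypothetical evaluations.csv with session, expectations, and quality columns; the file name, layout, and the two-points-below-average threshold are my inventions for the example, not anything the SQLSaturday Admin site provides.

```python
# Sketch: flag quality ratings that fall well below the session consensus.
# Assumes a hypothetical CSV with columns: session, expectations, quality.
import csv
from collections import defaultdict
from statistics import mean

sessions = defaultdict(list)
with open("evaluations.csv", newline="") as f:
    for row in csv.DictReader(f):
        sessions[row["session"]].append((row["expectations"], int(row["quality"])))

for session, evals in sessions.items():
    avg = mean(quality for _, quality in evals)
    for expectations, quality in evals:
        # The anomaly described above: expectations not met AND a quality
        # score far below what the rest of the room reported.
        if expectations == "Did Not Meet" and quality <= avg - 2:
            print(f"{session}: outlier rating {quality} vs. session average {avg:.1f}")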
One of the goals of SQLSaturday is to grow the speaker base. I don’t know about you, but “grow” to me doesn’t mean just numbers; it means building experience and maturity. How do we get the necessary feedback to speakers so they can improve their presentation skills and better train us?
Speakers, what input would you like from your audience? If you were to redesign this simple evaluation form, what 2 or 3 questions would you ask? Do you derive value from a subjective numeric rating?
SQLSaturday organizers and User Group leaders, how would you like to see speakers evaluated to help you in selecting sessions for your next event?
Please post your comments here.