The Server and Cloud Community Lead Blog

A place to share my thoughts on community and what I am doing inside of Microsoft to drive robust conversations between our Product Development Teams and our customers.

More Background on the SBS Community Survey and answers to your questions


I figured a new blog post would be better than continuing a thread of comments on the previous blog post. Where is the love? The SBS Community Survey is floundering with very few responses. 

Results from previous surveys have been posted on this blog. I probably could have done a better job of talking about the action items I took from last year's survey results. My posts may not have gone to the depth that some of you are looking for, but I have talked about the survey and its purpose on numerous occasions. Here are some old posts that shed some light on what the SBS team got out of previous surveys: "So, what have I learned so far.....", "SBS Community Survey Update and Commitments for the SBS Community Lead", "I have made some preliminary changes to the SBS Community Page", "In response to your feedback, I create a pretty cool little search tool....", and "The Survey is closing down at the end of the month".

I will try to do a better job this year of talking about what action items we decide to take based on this year's feedback. It looks from Viajy's comments like there are issues with the SBSC program. To that point: if you have more feedback you want to provide on the SBSC program, please take the survey again and keep submitting additional feedback. Because this survey is anonymous, there is no limitation on how many times you can stuff the ballot box. Think of this as Dancing with the Stars or American Idol. I promise we will categorize and prioritize the feedback and get it to the people who can effect the most change.

 Answers / Responses to your questions:

Question/ comment: "The survey refers several times to the "Windows SBS Community" without defining what that term means. Is it limited to the SBS Community Resources listed in the survey? Is that a generally accepted definition? It would be interesting to know how many people could even list what the SBS Community Resources are without any hints. How many people would consider PSS to be part of the Community?"

Answer/reply: The Windows SBS Community is whatever you define as the SBS Community. I am not in a position to define what our community is. So, no, it is not limited to the SBS Community Resources listed in the survey, and the list is neither a generally accepted definition nor an attempt at one. It is what I, along with numerous other Community Leads across Microsoft, have defined as categories of community resources that can at least be bucketed, rated, and commented on. Think of them as conversation starters. These areas/resources are typically where I point people on the product team when the community is having a conversation, whether directly with us, with each other, or us with them. TechNet and blogs are more one-way than two-way, but they are generally accepted Web 2.0 apps that allow feedback to flow back to us.

Question/ comment: "- The concept of "value" is somewhat vague. Each resource listed can potentially be of great value or zero value, depending on when and how it is used. Some resources are better for general product and sales information, some are better for ongoing education and training, some are better for troubleshooting, etc. If one of the resources gets higher scores than another, it doesn't necessarily mean that the "losing" resource is no good. - Expanding on that last point, it might have been better if you had said "here is a list of 10 questions that someone working with SBS might have. For each question, how likely would you be to consult each of these community resources in search of an answer?" If, hypothetically speaking, you found that there was a 90% chance that people puzzled by an error message would go to third-party web sites but only a 30% chance that they'd go to the MS KB, then you'd know something was very wrong with the KB."

Answer/reply: Maybe you missed my blog post, but my major focus is to get the product team engaged with the community. The survey is a good indicator/validation engine for where I should direct the product team to focus the most attention. I do not take a low score on a specific community resource as a vote to do away with that resource. I look at the verbatims as well and see if there is a correlation. Sometimes we conclude that the community finds little value in a resource simply because we have not given it enough attention, and that people are upset we have not spent more time keeping the resource up to date.

Applying a rating allows us to look for trends year over year. This is a generally accepted practice. The survey folks, along with a lot of other Community Leads at Microsoft, helped me design the survey and the questions. We intentionally made the questions ambiguous, and we used a very common rating system of 1 to 5 that we use in most other MSFT surveys.

Question/ comment: Your rationale for not changing the questions from year to year makes perfect sense IF the basic survey design is sound. But the basic survey design here is fundamentally questionable because you're not getting a random sample. You're only getting motivated respondents, and you really have no way of knowing which way that will skew the answers. And even if it were a random sample, the collection of the same meaningless data year after year does not make it meaningful. (I'm thinking of question 5 in particular.) 

Answer/reply: I will break up my response to this in two parts.

  1. OK, first, if this statement were true, then the average sat numbers would be skewed year over year: "the basic survey design here is fundamentally questionable because you're not getting a random sample. You're only getting motivated respondents, and you really have no way of knowing which way that will skew the answers." Here are the numbers for the last two years and this year so far: 80% answered with a 4 or 5 two years ago, 82% last year, and 79% this year. Not a huge skew at all. Sample sizes are about the same. We are now at 544 responses (thank you very much, everyone). The percentages are very close to the same if you look at how many responded with a 1, 2, 3, 4, or 5. I am just giving you the percentage-satisfied figures here (see the short sketch after this list for how they are computed).
  2. What are you talking about? "And even if it were a random sample, the collection of the same meaningless data year after year does not make it meaningful. (I'm thinking of question 5 in particular.)"
    Here is question 5: "On a scale of 1 to 5 where 1 is not satisfied and 5 is very satisfied, how satisfied are you with your experience in the Windows SBS Community?" What is wrong with this question? How is it meaningless? This data is not meaningless; the more years of it I have, the more meaningful it becomes, especially since I use the same methods for recruiting people to take the survey (more on that later) and since over the last three years this number has stayed within 3%. That to me is phenomenal. Other questions have had a larger skew, and in all cases thus far the comments have supported the upward or downward trend.
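For anyone who wants to see exactly what I mean by "percentage satisfied," here is a minimal sketch of the calculation. The per-rating tallies below are invented for illustration; only the totals (544 responses, 79% rating a 4 or 5) match the real numbers quoted above.

```python
# Minimal sketch of the "percentage satisfied" calculation.
# The per-rating counts are hypothetical; only the method (share of
# 4s and 5s among all responses) reflects how the survey number is built.

ratings = {1: 31, 2: 36, 3: 47, 4: 198, 5: 232}  # invented tallies

total = sum(ratings.values())            # 544 responses
satisfied = ratings[4] + ratings[5]      # 4s and 5s count as "satisfied"
pct_satisfied = 100 * satisfied / total

print(f"{total} responses, {pct_satisfied:.0f}% answered 4 or 5")
# prints: 544 responses, 79% answered 4 or 5
```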

Question/ comment: In my experience, many SBSers turn to search engines when looking for answers. Those search engines may spit back answers from several different community resources at once. Am I really supposed to remember how many times a search engine brought me to TechNet as opposed to a KB article as opposed to something at www.microsoft.com as opposed to a newsgroup post, etc.? 

Answer/reply: Search engines are one of the top identified Community Resources under questions 15-18. I understand that a lot of people do not make a distinction about which site they finally landed on; what they say is that they use a search engine and RSS feeds to get their information. I make special note of this. This is why I do a lot of work to make sure our team does not spin up a thousand blogs and instead stays with one, so that the information is pooled in one place on the web and search engines surface our blog more often when people are searching for something about our product. Check out what happens to teams who have tens of blogs: they are not easily discovered, and their community suffers because no one can find the answer that they so eloquently posted on their low-traffic blog site.

To your point, "and it may be why your response rate is low. Perhaps there are hundreds or thousands of people who have reviewed the survey questions and have decided that it's not worth answering": that is completely and utterly false. The hit count on the survey is almost an exact match to the number of responses. The problem is that people are simply not going to the survey. So, when you do not encourage people to provide us feedback, that is directly correlated with low responses, not poor survey design.

Question/ comment: I commend you for trying to get feedback on various resources, but why are you using this tool? If I recall correctly, CSS and the monitored newsgroups have their own satisfaction surveys that immediately follow the incident. If I were the CSS manager, I'd put a lot more stock in comments and ratings from people who had just finished receiving support than from people who may or may not have had any interaction with CSS in the last year. (Just because you tell people to base their ratings on the last 12 months doesn't mean they will.) As for gathering data on which MS community resources are preferred by the community, surely you have a more objective way of measuring that. For example, I assume you know exactly how many people attend each SBS webcast, event, chat, etc., and I know that most of these events ask participants for satisfaction ratings at the end. That must be a more valid indicator of popularity and satisfaction than what you're getting from the survey. I also assume that you know how many hits and inbound links are associated with each KB article, TechNet article, MS blog, SBS page, etc. Again, this objective data ought to be given more weight than the 5-point-scale ratings of some self-selected survey respondents.

Answer/reply: We chose to steer away from asking targeted and leading questions. TechNet, CSS with the newsgroups, the webcasts, etc. already have their own survey engines to see if an article hit the mark. What we are trying to find out is whether some people are not even going to TechNet. If someone marks it a 1 or 2 and then says in their comments, "I don't even use TechNet anymore. Every time I have read the content, it has been useless / stale / not detailed enough / etc.," that is useful data to us. It tells us that we are not retaining our customers and we need to do a better job of making better content. I am trying to catch the feedback that TechNet is not getting.

Susan quoted me here and I will put it in again: "Because of the previous surveys, we have made some adjustments in our community engagement. To name a few: we really worked on putting more focus on the Official SBS Blog. Our Sustaining Engineering team runs mini betas with our MVPs before KBs and bug fixes are released via Windows Update. We try to get more involvement in our tech betas from our User Groups. The list does go on."

Do you know why Vista SP1 is not yet available as an automatic update via WSUS? Because of the MVPs participating in our early SE betas and the relationship they have developed with our SE team. WSUS had a bug that caused any CAB file over 500 MB to fail the WinVerifyTrust check, after which WSUS would endlessly retry the download. The fix was created, but it was available only if you called CSS. The SBS MVPs helped us push the WinSE team to ship this fix as a critical fix on WSUS for two months prior to making SP1 available, so that no one would hit this endless download loop. So, actually, this survey has had a direct impact on the way that we engage with our community and ultimately has made life easier, not harder, for the community. SP1 is potentially going to be available via WSUS this month, based on our final assessment of the uptake of the critical fix.
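To make the failure mode concrete, here is a rough sketch of the loop described above. The function and constant names are my own hypothetical stand-ins, not real WSUS internals (which are not public); it only models why a trust-check bug on large CABs became an endless re-download rather than a one-time error.

```python
# Rough illustrative model only: these helpers are hypothetical stand-ins,
# not real WSUS internals. It shows why a trust check that always fails
# on large CABs turns into an endless download loop.

MAX_TRUSTED_SIZE = 500 * 1024 * 1024  # ~500 MB threshold from the bug

def win_verify_trust_ok(cab_size: int) -> bool:
    """Stand-in for the WinVerifyTrust call; under the bug, any CAB
    over ~500 MB fails verification regardless of its real signature."""
    return cab_size <= MAX_TRUSTED_SIZE

def fetch_update(cab_size: int, max_attempts: int = 5) -> bool:
    """WSUS effectively had no attempt cap; max_attempts exists here
    only so this sketch terminates when you run it."""
    for attempt in range(1, max_attempts + 1):
        print(f"attempt {attempt}: downloaded {cab_size} bytes")
        if win_verify_trust_ok(cab_size):
            print("trust check passed, handing off for install")
            return True
        print("trust check failed, discarding CAB and retrying")
    return False

fetch_update(600 * 1024 * 1024)  # any CAB over 500 MB never verifies
```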

Trust me, we do look at all the responses and take action based on the feedback. This is my job. I love my job and I love being a community lead.

 Thanks for all of your feedback,

 Kevin Beares
Community Lead - WSSG

 

Comments
  • Kevin, I'm afraid you've entered the world of "arguing with Schrag." As many people will tell you, I'm like that little lap dog that sinks his teeth into your pants leg and won't let go no matter how hard you try to shake him off.

    "There is [no] limitation on how many times you can stuff the ballot box."

    -- I'll admit that it's hard to imagine anyone trying to answer this survey more than once or twice, but the fact that it's possible for someone to write a script that would create hundreds or thousands of responses automatically makes all of the numerical data suspect. I would do this just to prove a point, but sadly I lack the programming skills.

    "The Windows SBS Community is whatever you define as the SBS Community."

    -- So what is the point of generating an annual satisfaction rating of something that no one has defined or can define? I don't see any reason to ask this question year after year.

    "We intentionally made the questions ambiguous."

    -- I don't think that's a generally accepted best practice in survey design. See the section "Keep your focus" at http://www.ccs.uottawa.ca/webmaster/survey/best-practices.html. See also the "Overall Considerations" on page 6 of http://wp.bitpipe.com/resource/org_1027448665_672/Online_Service.pdf.

    "If this statement were true, than the average sat numbers would be skewed year over year."

    -- If WHICH statement were true? But that's beside the point -- I believe the SAT numbers *are* skewed year over year. When I took the SATs, they had to perform some pretty complicated statistical tricks to convert the raw responses into the 200-800 point scale that we're so familiar with. I know they've changed the scoring since then, but I'm pretty sure that there's no simple relationship between the number of questions answered correctly and the official score.

    "What is wrong with [question 5]?"

    -- 1) As stated above, if there is no common understanding of what the community is, how can there be a meaningful rating of satisfaction?

    2) The fact that the overall satisfaction rating can be virtually unchanged while the individual component ratings swing back and forth is further evidence that the overall satisfaction rating doesn't tell you anything interesting.

    "Search Engines is one of the top identified Community Resources under questions 15 - 18."

    -- How is that possible? At https://connect.microsoft.com/SBSCommunity/Survey/Survey.aspx?SurveyID=6295, search engines are not included as a Community Resource. The choices are:

    * Windows SBS Newsgroups

    * Microsoft owned blogs

    * Third party sites / web forums / blogs

    * Microsoft Product Support Services

    * Microsoft.com Support knowledge base

    * Windows SBS 2003 on Microsoft TechNet (offline and online)

    * Windows SBS Training / Webcasts / Events / Chats

    * The Windows Small Business Server page on www.microsoft.com

    * Windows SBS User Groups

    Maybe search engines appeared as a choice in prior years, but not this year.

    "When you do not encourage people to provide us feedback it is directly coorelated to low responses."

    -- I stated a hypothesis about why the response rate was low, and you were able to disprove the hypothesis with data. That's how this is *supposed* to work. As for my lack of encouragement being related to a low response rate worldwide, my dear man, you give me far too much credit.

    "What we are tying to find out is are some people not even going to TechNet. If somoeone marks it as a 1 or 2 and then says in their comments, "I don't even use TechNet anymore. Every time I have read the content, it has bbeen useless / stale/ not detailed enough/ etc...." this is uesful data to us."

    -- But if you ask people to give you a rating without requiring them to tell you whether they used the resource, you are not getting useful data. Suppose someone gives TechNet a 1 or 2 but says nothing at all in the comments about TechNet. Do you interpret that to mean that they have used TechNet recently and have found it lacking, or that they were so turned off by their experiences with TechNet a year or two ago that they don't bother to look there anymore? And if you don't know how to interpret the answer, how do you know whether to spend your efforts on making further improvements to the content on TechNet or on marketing the improvements you've already made to bring people back into the fold? And if you were going to do both anyway, what's the point of doing the survey?

    "This Survey has has a direct impact on the way that we engage with our community and ultimately has made life easier and not harder for the community."

    -- But the examples you just gave were about interactions with the MVPs and the user groups. You didn't do a worldwide satisfaction survey to find out if there were problems with Vista SP1 and WSUS, nor should you have. Furthermore, any dissatisfaction with how patch releases were handled in the past was dissatisfaction with Microsoft's *products* and *procedures*; it had nothing to do with the *community.*

    Last year you said, "What is amazing is that if you look at the answers to the two questions below and combine the results, over 50% of the respondents listed third-party sites / web forums / blogs as their number one or two resource. All of the MS-owned sites were very low comparatively. Now, this is not to say that the respondents do not value these MS sites and resources, just that they do not rate them as their number one or two resources. I think this is very eye opening."

    -- So once your eyes were open, what did you do about this (if anything)? What could you do? What are the responses telling you? Do these results concern you? Should they? Would you necessarily be doing a better job if Microsoft resources got more number 1's and 2's? Maybe it just means that the kind of information people seek most often is the kind that does not lend itself to the Microsoft resources. You have a lot of data here, but you have no information.

    Two years ago you promised some significant changes to the SBS community page.

    -- Why not ask specifically what people think about the changes to the page? Or ask "what is the URL for the SBS Community Page?" and see how many people know the answer? (I sure don't. And you want some irony? I just Googled [sbs community page] and the first hit was www.microsoft.com/windowsserver2003/sbs/community/default.mspx. What do you get when you go to that link? You guessed it: "We are sorry, the page you requested cannot be found." The number two hit on Google gave me the right URL: http://www.microsoft.com/technet/community/en-us/windowssbs/default.mspx.) I note, by the way, that at the bottom of the page there is a usefulness rating survey. Does anyone ever fill that out? If so, what kind of scores are you getting?

    I don't mean to knock your intentions to get feedback. Feedback is great, and I'm thankful that you're interested in getting our opinions. But if all the good stuff is coming from the open-ended comments, then you should (a) toss the other questions because they are a waste of time and (b) pay as much attention to the open-ended comments that are spread around in blogs, blog comments, Yahoo groups, and numerous other venues throughout the year. A snapshot of comments collected in June does not deserve any special attention.

    Your friendly neighborhood sourpuss,

    David "There He Goes Again" Schrag

  • Hey David,

    I decided to just let you keep your opinions and I will keep mine. It is not that I think your opinions lack merit; I just don't agree with all of them, and that is what makes community so cool. We can all have our own opinions. I am getting what I need from this survey and from our conversation. I have almost 600 responses so far. I will leave the survey open for the rest of the month and then spend July going over the results. I promise to share my conclusions on what we heard from the data.

    I will leave you with these thoughts David.

    From reading your comments, it seems that you put a lot of value in numbers, metrics, etc. I value something different: the human aspect of research. I majored in social science research at the U of Md at College Park. I don't spend a lot of time getting hung up on questions being perfect. I focus on looking at the data and spending the time trying to understand what it may or may not say. I draw conclusions from the data and make adjustments to the way we engage with the community. Whether I get it right or wrong, we may or may not see a change in how people rate things or what they say in their comments. I can only continue to keep looking at the data. If you think that this survey is the only data point, you do not know me or have a clue as to what I do as a community lead. All I get is data, every day. This is just one way to collect a bunch of it in a single time period. What is interesting is the correlation between the feedback in the survey and the thousands of data points I hear all year long.

    The bottom line in my role is simply this: do something, and don't just be complacent with what you have done up till now. So, I will continue to run the survey and make the tweaks that I think are necessary, and as long as the data gives me something to act on, I will act on it.

    Remember, community is not something you can control. It is something that you can merely try to have some influence on. It is based on relationships, not tools and websites. The tools and websites are merely ways to facilitate a conversation with the community and the community with itself.

    I hope that I can have some influence in making the SBS community a better place for people to hang out, to get answers, to develop relationships, improve their technical skill, and hopefully pay it forward.

    Thanks for listening.

    Kevin

  • FWIW, Kevin, I majored in psychology as an undergrad, so I have an appreciation for social science research methods. Also, my wife got her doctoral degree in education with a focus on qualitative research, so I'm not unfamiliar with its benefits (and drawbacks).

    What I DON'T value is numbers, metrics, etc., that have no meaning. You are familiar, I assume, with the concept of false precision. (For those who are not, one example of false precision is saying that a political candidate has a favorability rating of 64.462% ... with a margin of error of +/- 3%.) The problem I have with many of the questions in your survey is related to the problem of false precision. Because of the questions' ambiguity, lack of inter-rater reliability, and other methodological flaws, the responses and subsequent analyses may lead you to believe things that are not "true" -- to the extent that any set of opinions can be said to have "truth."
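    To put a number on that example (the sample size n = 1000 below is my own invented assumption, purely for illustration), the standard large-sample margin of error for a proportion is

    \[
    \mathrm{MoE} \approx z \sqrt{\frac{\hat p\,(1-\hat p)}{n}}
    \approx 1.96 \sqrt{\frac{0.645 \times 0.355}{1000}} \approx 0.03,
    \]

    so with a +/- 3% margin, every digit after the "64" in "64.462%" is pure noise.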

    All I'm saying is that if what you value is qualitative data, then don't bother to collect the quantitative data. And if you do care about quantitative data, then do what it takes to collect it properly.

    And, says my wife, even when doing qualitative research you can't have ambiguous or otherwise flawed questions. Poor questions generate poor data, no matter what kind of data you're collecting.

    David

  • David,

    I think you continue to miss my point about the data. While false precision could be a factor in the survey, that does not necessarily mean it is. This is why the survey is set up to gather both quantitative and qualitative data. If the two do not jibe, the quantitative data is thrown out. If there is a correlation, the data is looked at more closely.

    I would argue that the questions, while slightly ambiguous, are not that ambiguous, and that inter-rater reliability is a bunch of hooey here. Every rater has a different scale, but the data is still important: what I am measuring is people's own perception of how they rank something. If someone ranks something a 1 (No Value), that is valid data. If a given question hovers around a rating of 1 or 2, I look for supporting comments. If there are no comments to support it or to give me something to act on, I can do little about it even if I think the rating is valid, but the number is certainly a good indicator that I should be looking for comments, probably primarily negative ones, about the low-rated resource.

    Seriously, we could go on for weeks on this. We will not agree on this. I think I have said my piece, and you have as well.

    Let’s move on to other topics.

    How about telling me what you think the SBS product team could do to improve your experience in the community? Do you think we should move to using web forums? Do you think we should have two entry points to the same discussion? In other words, you could access the same conversations via either a newsgroup client or a web forum UI in Internet Explorer (or whatever web browser you prefer).

    Send me your comments, ask me to blog about something specific.

    Thanks again for your passion and your feedback,

    Kevin Beares

    Community Lead

    Windows Server Solutions Group (WSSG)

  • Happy to continue the conversation, but this time on my turf: http://davidschrag.com/schlog/334/what-i-want-from-the-sbs-community.

  • Thanks, but no thanks, David. I have run out of steam. I have a lot of data mining in my future and a long-awaited vacation in one week.

    I was hoping that you would take my questions seriously and not answer my questions with questions. If you have some value that you think you can add to the conversation, I am all ears. Until then, have a great summer, and hopefully you are spending some time trying out SBS 2008.

    Take care,

    Kevin
