Important notes
Before I show you the results of the study, let me talk briefly about a core concept of the Method: the categories of features.
This concept builds upon the great work of professors Frederick Herzberg (1959) and Noriaki Kano (1984). It classifies features into four categories, depending on the emotional impact they have on users (a minimal classification sketch follows the list):
- Performance category: the better, the happier. This one is the easiest to visualize: customer satisfaction follows a predictable line depending on the quality of the feature's implementation.
A good example is the power of a car: too little and the driver is frustrated; plenty and they're happy (assuming a typical driver).
- Required category: some features are simply expected by users. If the product doesn't include them, it will be perceived as bad or incomplete; but having them won't earn you any love from your customers.
We expect good brakes on our cars, being able to make calls on our phones, and hot water in our hotels. None of it sparks joy, but the lack of it would definitely hurt our perception of the product or service.
- Attractive category: this groups all the features that cause a positive reaction. Their particularity is that even a poor implementation won't trigger a negative one.
Think of holograms: feeling like you're in Star Wars would be incredible! Even if the tech isn't there yet, even a partial or unstable implementation will get a positive reaction.
- Indifferent category: some features, obviously, leave us indifferent. Their presence or absence makes no clear difference in our relationship with the product. This category can contain bad ideas, but also features the user rarely interacts with directly (such as the underlying cloud infrastructure and technologies).
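To make this concrete, here is a minimal sketch of how a single respondent's answers can be mapped to these categories using the standard Kano evaluation table. The scale labels and table layout are illustrative; note that the full table also produces Reverse and Questionable outcomes, which I left out of the list above for brevity.

```python
# A sketch of the standard Kano evaluation table (labels are illustrative).
# Each respondent answers a "functional" question (how do you feel if the
# feature is present?) and a "dysfunctional" one (how do you feel if it is
# absent?) on the same five-point scale; the answer pair maps to a category.

SCALE = ["like", "expect", "neutral", "tolerate", "dislike"]

# Rows: functional answer. Columns: dysfunctional answer, in SCALE order.
KANO_TABLE = {
    "like":     ["Questionable", "Attractive", "Attractive", "Attractive", "Performance"],
    "expect":   ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Required"],
    "neutral":  ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Required"],
    "tolerate": ["Reverse", "Indifferent", "Indifferent", "Indifferent", "Required"],
    "dislike":  ["Reverse", "Reverse", "Reverse", "Reverse", "Questionable"],
}

def classify(functional: str, dysfunctional: str) -> str:
    """Map one respondent's answer pair to a Kano category."""
    return KANO_TABLE[functional][SCALE.index(dysfunctional)]

print(classify("like", "dislike"))     # Performance: the better, the happier
print(classify("neutral", "dislike"))  # Required: its absence is a problem
print(classify("like", "neutral"))     # Attractive: a nice surprise
```

For each feature, you then tally the categories across all respondents; the most frequent one becomes the feature's dominant category.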
There is a lot more that results from this method, but I'll keep it short here. If you want to chat about it, feel free to hit me up!
Results
Here's what came back after analyzing the survey:
[Chart] Online classrooms (all respondents)
[Chart] Course builder (all respondents)
[Chart] Tutoring platform (all respondents)
So here we are with a bunch of interesting data! It gives a clear overview of the users who answered the survey. At first glance, we can see that even though the online classrooms would be the most expensive and challenging product to implement, they are overall the most attractive to users and the greatest opportunity for the business.
The resulting order of priority is: Online classrooms > Tutoring platform > Course builder.
But it's not over! One big issue to watch with surveys is sampling: making sure the people answering are the right ones to ask and are representative of the target population. Let's run a factor analysis to check whether hidden clusters of respondents are lurking in the sample.
Factor analysis
[Chart] Elbow curve
Bingo! I won't go into too much detail about this graph, but do you see the elbow the curve makes at 2 clusters? It means that if we assume two different groups are mixed in our sample, we can separate them and get a much more homogeneous distribution of answers within each group.
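For reference, here is a minimal sketch of how such an elbow curve can be produced, using k-means inertia from scikit-learn. The feature matrix of encoded survey answers and the random stand-in data are assumptions for illustration, not the study's actual pipeline.

```python
# A minimal sketch of an elbow curve via k-means (scikit-learn).
# X stands in for the encoded survey answers: one row per respondent,
# one column per encoded answer. Real data replaces the random sample.
import matplotlib.pyplot as plt
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two latent groups of respondents, so the elbow shows up at k = 2.
X = np.vstack([rng.normal(0, 1, (40, 6)), rng.normal(3, 1, (40, 6))])

ks = range(1, 9)
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in ks]

plt.plot(list(ks), inertias, marker="o")
plt.xlabel("Number of clusters (k)")
plt.ylabel("Inertia (within-cluster sum of squares)")
plt.title("Elbow curve")
plt.show()
```

The "elbow" is the value of k after which adding more clusters stops reducing the inertia by much; here, that's 2.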
You might already have guessed it, and another question of the survey confirms it: the divide is caused by a simple factor, whether the respondent is a student or a teacher.
It's not often that simple to figure out the hidden factor behind such a disparity. But you can always contact the group with the most exciting answers to figure out the common thread between them!
Anyway, let's do the analysis again but with students and teachers separated.
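As a sketch of what that re-analysis looks like in practice, here is one way to split the respondents and recount the dominant categories per segment with pandas. The column names and the tiny inline dataset are made up for illustration, not the survey's real answers.

```python
# A sketch of per-segment Kano breakdowns with pandas. The "role" column
# and one dominant-category column per product are assumed; the four rows
# of inline data are placeholders.
import pandas as pd

answers = pd.DataFrame({
    "role":              ["teacher", "teacher", "student", "student"],
    "online_classrooms": ["Attractive", "Performance", "Performance", "Attractive"],
    "course_builder":    ["Attractive", "Attractive", "Indifferent", "Indifferent"],
    "tutoring_platform": ["Indifferent", "Indifferent", "Performance", "Performance"],
})

for product in ["online_classrooms", "course_builder", "tutoring_platform"]:
    # Share of each Kano category within each segment (rows sum to 1).
    breakdown = (answers.groupby("role")[product]
                        .value_counts(normalize=True)
                        .unstack(fill_value=0))
    print(f"--- {product} ---", breakdown, sep="\n", end="\n\n")
```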
[Chart] Online classrooms (teachers)
[Chart] Online classrooms (students)
[Chart] Course builder (teachers)
[Chart] Course builder (students)
[Chart] Tutoring platform (teachers)
[Chart] Tutoring platform (students)
We now have a very different overview of what's going on: a quite dramatic split in the perception of each product depending on who is answering. Here are the key takeaways:
- The online classrooms are still among the most interesting products: even if teachers seem to care more than students (who would have thought), both groups express a clear interest in them.
- Most teachers surveyed seem to want a course builder. The dominant perception falls under the Attractive category, meaning that even a proof of concept or a bare-bones MVP (Minimum Viable Product) would get a positive response.
- Most students who answered the survey would use a tutoring platform. The difference with the previous point resides in the dominant category of perception: Performance. It means a poorly implemented version would be perceived as low value, but a genuinely good service would get very positive reactions.
Conclusion
The great divide in perception between the two groups clearly shows that going for either the course builder or the tutoring platform would mean focusing on only one segment of the current users.
Luckily for us, the online classrooms not only have the most consistent results across all respondents, they are also what would create the most opportunities for the startup. That said, since a proper implementation is required to have a positive impact, it will be costly, and the tech team will have to overcome real technical hurdles to make it a seamless experience.
Of the two other products, the tutoring platform would require a strong focus to meet the students' expectations. Not being able to leverage the current teacher user base would also be a net loss when kick-starting a new product.
On the other hand, the course builder could be an interesting option both in engineering and in marketing terms: teachers would love any tool that helps them. I wouldn't recommend it as the startup's new focus, but it could be a good side project to bring in teacher prospects.
Want to work with us?
Test your ideas, get actionable data from your users and plan your next move.