Weekly usability studies
Each Friday, Colorado plans to find at least one person who is willing to perform a usability study using the think-aloud protocol. Colorado will post a brief summary of the results below.
For each user, Colorado provides a brief description of Metacademy before turning the user loose on metacademy.org. In the description, he presents the problem of manually/implicitly forming a dependency graph of concepts when trying to learn a new concept on Wikipedia, and states that Metacademy explicitly builds these types of graphs for the user and provides hand-picked resources to help the user learn the content.
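As a rough picture of what "explicitly builds these types of graphs" means, here is a minimal sketch of such a prerequisite graph; the field names and example concepts are illustrative only, not Metacademy's actual data model.

```typescript
// Illustrative only: a concept and its prerequisite edges
// (not Metacademy's actual data model).
interface Concept {
  id: string;
  title: string;
  summary: string;
  prerequisites: string[]; // ids of concepts that should be learned first
  resources: string[];     // hand-picked learning resources (titles, urls, ...)
}

// A tiny hypothetical slice of such a graph, keyed by concept id.
const graph: Record<string, Concept> = {
  logistic_regression: {
    id: "logistic_regression",
    title: "Logistic regression",
    summary: "A linear model for binary classification.",
    prerequisites: ["linear_regression", "maximum_likelihood"],
    resources: ["..."],
  },
  linear_regression: {
    id: "linear_regression",
    title: "Linear regression",
    summary: "Fitting a linear function to data by least squares.",
    prerequisites: ["linear_algebra"],
    resources: ["..."],
  },
  // ...remaining concepts omitted
};
```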
- I've noticed that users initially tend to interact with metacademy much like Wikipedia: starting at the concept of interest and then working backwards by clicking the prerequisite concepts (some of the users open the prereq concepts in a new tab). I find that they tend to change this interaction pattern once they begin learning some of metacademy's features, i.e. the learning plan and the graph view.
- how do we want users to interpret the see-also section?
- will users notice on-hover content, i.e. learning times?
- users tend to click on the deactivated hide/show learned concepts buttons -- maybe we should change the appearance of these or provide a help box ("this button works after checking off concepts you've learned") if they click a deactivated button (a rough sketch of such a help box appears below, after the focus list)
- [Colorado] I think we should focus on:
- focus on improving the experience for non-logged-in users, i.e. saving learned/starred concepts; most metacademy users aren't going to create an account
- figure out how to explain/justify the validity of the various concepts/resources (should we include an author list for each concept?)
- inform the user of the resource type (video|text|course|etc.)
- improve check/star clickability and color changes
- figure out how users can navigate from a concept to subsequent concepts
- figure out search results for broad concepts (roadmaps may help here)
- fix the "all text selected" problem
- make the resource title less prominent, and instead emphasize the resource location links
- perhaps we should figure out how to make the resources more central to the learning display (emphasize that this isn't Wikipedia)
- [Colorado] Perhaps we should have a set list of questions we ask the user at the end of each session
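As a concrete starting point for the deactivated-button issue noted above, here is a rough sketch of the suggested help box; the element id, class name, and wording are hypothetical rather than Metacademy's actual markup.

```typescript
// Sketch: show a short help box when a user clicks a hide/show-learned button
// that is still deactivated. The id, class names, and copy are hypothetical.
const hideLearnedBtn = document.querySelector<HTMLButtonElement>("#hide-learned");

if (hideLearnedBtn) {
  hideLearnedBtn.addEventListener("click", (event) => {
    // Only intervene while the button is in its deactivated state.
    if (!hideLearnedBtn.classList.contains("deactivated")) return;
    event.preventDefault();

    const tip = document.createElement("div");
    tip.className = "help-box";
    tip.textContent = "This button works after you check off concepts you've learned.";
    hideLearnedBtn.insertAdjacentElement("afterend", tip);

    // Auto-dismiss so the hint doesn't clutter the page.
    window.setTimeout(() => tip.remove(), 4000);
  });
}
```

Giving the deactivated state a visibly different style (e.g. lower opacity and a different cursor) would probably help on its own as well.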
Colorado performed a usability test with 1 user. The user was a male 4th year undergraduate student in CS. The user does research in applied Bayesian statistics and was somewhat familiar with machine learning concepts.
- Wasn't sure what to search for on the main screen (20-second pause while thinking), then searched for support vector machines, a topic he said he didn't "know much about"
- The user immediately focused on the summary for SVM (he said that he wanted to get the gist of the concept first)
- The user opened the prereq concept convex optimization by right-clicking it and opening it in a new tab ("let's see, I don't know what this is") [when asked why he opened it in a new tab, he said that he uses this technique on Wikipedia since it's easy to lose track of the initial article]
- the user continued to open unfamiliar prerequisite concepts in new tabs (he would usually read about half of the summary for a concept and then jump down to the prereqs)
- eventually he noticed the learning plan on the left side ("that's so fricken cool"), and he largely stopped opening concepts in new tabs [this seemed to trigger an internal paradigm shift]
- he said that the summary appears to be a "short and condensed bit of essential information" [later on, I asked if he was trying to learn the concepts from the summary rather than consulting the resources (which he didn't open), and he replied "yeah, I guess that was what I was doing, I figured the resources gave the in-depth information, like the proof of a theorem"; he also said that if the summary started to sound like gibberish, he would jump down and scan the prereqs for any unknown topics]
- he said that he viewed the "see-also" section much like Amazon's "people who bought this product also bought...", as in, it was a recommendation for related concepts
- The user did not notice the on-hover learning times ("I usually don't go to websites and think, I wonder what happens if I hover over stuff")
- The user did not click on the explore view---though later on he said that he noticed the toolbox in the upper right hand corner but became engrossed in the presented content and forgot to check out what the buttons do.
- The user did not seem to understand the hide/show learned concepts buttons at first (he tried clicking the deactivated buttons, as other users have done, a bit confused why nothing was happening). He eventually figured out the relationship with the checkmarks
- The checked/starred checkmarks/stars did not appear noticeably different from the hovered checkmarks/stars (Chrome on Ubuntu), and the user had a hard time telling when he had checked/starred a concept
- When asked "what do you think signing up for an account would do?" he replied "perhaps let me edit content"
- The user clicked the Roadmaps List and quickly scanned through the Bayesian Machine Learning roadmap---he didn't spend much time on the page: just quickly scanned through the concepts, clicking on a few that "didn't sound familiar"
- When asked whether he would rather have a "show/hide" learned concepts toggle (instead of two buttons), he said that it would make the UI a bit more intuitive, but at the same time, having concepts automatically disappear after clicking them would be very confusing
- At the end of the session, the user said that he felt metacademy's interface made "a lot of sense, once I figured out how to use it. But at first I wanted to use it like wikipedia."
- The user liked the core/supplementary and free/paid resources separation
- The user liked the roadmaps idea and planned to "look through the bayesian roadmap more thoroughly"
Colorado performed a usability test with 1 user. The user was a male graduate student in biology and a native French speaker (he also spoke fluent English). The user had "heard of machine learning," but had never had a class on the subject. Here's a brief summary of the results:
- The user searched for "logistic regression" and clicked on the logistic regression link
- He had no trouble finding the graph view, though he was initially confused by the ordering in the list view, and said that we should order the concepts alphabetically rather than randomly
- He later realized the ordering was a topological sort and suggested changing the "learning list" title to "learning plan" or "lesson plan" (I already made this change; I also think it's more intuitive). A sketch of this ordering appears at the end of this section.
- The user thought the graph view was an "incredibly helpful way to see the content" and preferred it to the list view. This is in contrast with the NLP student, who thought the list view was much more helpful than the graph view.
- The user suggested placing the graph view next to the content display in the list view, because the graph and list views felt disjoint, like you're "going to a different page" (NB: the user had a very large monitor)
- The user had a very large monitor, and a lot of the text appeared fairly small on his screen; the full concept list looked pathetically small
- The user thought the graph should have the simpler concepts at the top because the list view had the simpler concepts at the top (after playing with the system for a while--around 20 minutes--he said that he felt comfortable with the orientation of the graph view)
- The user encountered the "all title text selected problem" in the graph view (all users have encountered this problem, I think)
- The user suggested having the exploration graph lie on top of the list view, with the graph view at 98% opacity, i.e. to alleviate the feeling that you're going to a different page
- The user found the checkmark functionality, and seemed to enjoy the cause-effect relationship of checking various concepts.
- The user did not find/use the hide/show learned buttons (I pointed out the hide/show learned buttons to him at the end of our session)
- He suggested also buying the domain metaacademy, since he kept typing metaacademy when entering the URL in various tabs (I've shown the site to some friends that have done the same thing)
- At the end, the user said that he hadn't thought to question the credibility of the provided content (this is in contrast to users more familiar with machine learning)
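Since the "learning plan is a topological sort" observation keeps coming up, here is a minimal sketch of what that ordering means, reusing the illustrative graph shape from the sketch near the top of this page (this is not Metacademy's actual code).

```typescript
// Sketch: producing a "learning plan" ordering from prerequisite edges via a
// depth-first topological sort. Illustrative only.
type PrereqGraph = Record<string, { prerequisites: string[] }>;

function learningPlan(graph: PrereqGraph, target: string): string[] {
  const ordered: string[] = [];
  const visited = new Set<string>();

  function visit(id: string): void {
    if (visited.has(id)) return;
    visited.add(id);
    // Visit prerequisites first so they land earlier in the plan
    // (assumes the prerequisite graph is acyclic).
    for (const prereq of graph[id]?.prerequisites ?? []) {
      visit(prereq);
    }
    ordered.push(id);
  }

  visit(target);
  return ordered; // simplest prerequisites first, target concept last
}
```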
Colorado performed this week's usability study with two Berkeley graduate students (one focused on NLP and the other on computer graphics). Colorado provided a brief verbal description of metacademy (described as an apt-get for knowledge) and then observed the students interacting with metacademy for roughly 15 minutes, starting from metacademy.org's landing page. Both users used Chrome on their high-performance Mac laptops. Here are the summary points:
- Noticed the GP reference in the main search box and said "ah, I can see who you're orienting this towards," and then read the footer on the main page and said, "ah, right"
- searched for "hierarchical Dirichlet process" -- the search returned the HDP page; clicked on HDP
- scanned the summary and asked "where did this text come from, why should I trust this author?" (I believe Roger brought up this point previously)
- clicked the disabled hide/show learned concepts button; nothing happened
- clicked the graph view button -- big scary graph
- said that the graph was essentially too big/complicated to be helpful, especially on more complex topics (he mentioned later that it might provide a nice initial visualization, but probably wouldn't have much of a purpose when actually trying to learn the concepts)
- started at the top of the list and systematically checked off the concepts he knew
- found the checks easily in the list, but had a few misclicks, and at first he thought he couldn't unclick the checks once they were clicked (NOTE: creating clickable margins around the check/star might help)
- This user checked out the resources and liked the exact location references
- this user did not explore the star buttons
- tons of clicking to mark the concepts he'd learned (at the end of the session, I asked if a "linear algebra" course would be helpful to mark a bunch of concepts at once; he was skeptical of the idea since linear algebra courses vary so much, but he was receptive to the idea of clicking a "linear algebra" course and then seeing a fully checked list of concepts that he could curate)
- navigated to a different page and then back to the original page and was a bit perturbed to see his checked concepts had disappeared
- the checkmark doesn't visibly change when toggling from checked to unchecked (the color change wasn't very noticeable on his laptop)
- question: does checking a shortcut concept also check the main concept?
- thought it would be cool to group the concepts by topic in the learning view (e.g. work through probability theory and then linear algebra)
- thought color brackets or tags to separate the different concept categories might be interesting
- When examining resources, thought it would be nice to know in advance whether a resource is a video, PDF, or textbook
- question when scanning the HDP page: is this everything on metacademy or is this just the HDP?
- the user used the back button to navigate between views
- searched for a broad concept, "measure theory," and was disappointed by the results
- confused by the dashed lines in the explore view around the shortcut nodes
- encountered the "all text selected problem," where all of the titles become selected in the explore view and it's hard to unselect the text
- initial question -- only for machine learning?
- searched for "logistic regression" (I mentioned this concept while explaining metacademy)
- was a little disoriented by the learning view
- starting from the HDP he climbed the dependency structure by clicking on the links he didn't know in the prereqs section
- found the graph view easily
- likes the quick summary in the explore view
- "oh that's nice" when noticing the resources
- confused by star vs check buttons, finally decided "oh those must have something to do with a logged in account"
- tried clicking the greyed out clear/show learned buttons, not sure what they do
- reading about kernels, he asked "what's a kernel?" and wasn't able to answer this question from the summary text of "kernel trick" and nearby concepts -- seemed frustrated -- confused about whether a kernel is an inner product or a specific linear subspace. Clicked on the Coursera link and quickly clicked away -- "oh I have to sign up for an entire course" (didn't notice the note mentioning that he could click the preview button)
- the user mentioned that he's largely viewing this resource like Wikipedia and wanted to be able to learn the concepts without going to external resources
- the user liked the graph/list layout and display; he complimented the color scheme, presentation, and ample use of whitespace
- searched for "neural network" and was confused as to why there wasn't an entry for "neural network"
- scrolled through the full concept list and clicked on QR decomposition -- wanted to know what he could learn given that he knew QR decomposition; wanted to see what depends on QR decomposition (would be nice to be able to do this from the graph view -- a sketch of the underlying query appears after this section's notes)
- thinks that a border around the list icon in the explore view (on the nodes) would make it more obvious that it's a list
- thought a light gradient in the learning view list could indicate a progression of the concepts
- clicked the root concept and then clicked hide and was confused as to why everything disappeared
Notes: the user mentioned that dynamic graph generation/manipulation is an open problem in the field and removing/adding nodes and keeping fluidity in the display is a really hard problem (he was skeptical that we could improve much on our current graph generation technique)
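For the "what depends on QR decomposition?" request above, the query is just a reverse lookup over the prerequisite edges; a minimal sketch, again using the illustrative graph shape from the earlier sketches rather than Metacademy's actual API:

```typescript
// Sketch: answering "what depends on concept X?" by scanning the prerequisite
// edges in reverse. Illustrative only.
type PrereqGraph = Record<string, { prerequisites: string[] }>;

function dependents(graph: PrereqGraph, conceptId: string): string[] {
  return Object.keys(graph).filter((id) =>
    graph[id].prerequisites.includes(conceptId)
  );
}

// e.g. dependents(graph, "qr_decomposition") would list every concept that
// names QR decomposition as a prerequisite -- the "what can I learn next?" view.
```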
This initial usability study was conducted with 3 different AI-oriented PhD students. Here's a summary of their input:
- The search functionality is a bit confusing; all three users said some form of "what should I search for?" and then typed in: machine learning (14 responses, but a seemingly random hodgepodge of topics that use the phrase "machine learning"), deep learning (0 responses), and bayesian regression (3 good responses)
- [Roger] I guess it makes sense that people start by searching for broad subject areas, since that's how we're used to learning about things. Once we have roadmap creation set up, we can create roadmaps for general topics like "practical machine learning" or "Bayesian machine learning," and include these in the search results.
- [Colorado] We might also want to provide links to "courses," which would be unordered lists of concepts. This might be more natural for searches like "machine learning"
- None of the users seemed to notice the new "change visualization" arrow pointing to the explore/learn transition during the first second of viewing the learning list
- None of the users found the explore view on their own (I pointed it out to them after they scrolled around the learning view for a while and thought they had exhausted all of the implemented functionality, often providing feedback like "it would be great if we could visualize the unrolled dependency structure explicitly, in addition to this list")
- None of the users found the clear/show learned buttons (again, I pointed them out after the users clicked around for a while and thought it would be a good idea if they could remove concepts they already knew)
- None of the users clicked the green checkmarks when exploring the concepts in the learning view, though they did click the checkmarks in the explore view
- one user thought clicking on nodes indicated that she had learned the concept, since the checkmark stayed visible; this user was annoyed by the way the node summaries kept hiding other nodes
- The learning list is confusing; it's not obvious that it provides a topological sort of the dependency structure in the explore view: 2/3 users weren't sure whether the learning and explore views provided the same content. The one user who recognized that they contained the same concepts was exploring logistic regression (a simple concept).
- [Roger] An alternative interface would be a sidebar on the left with the list of concepts. People would be more used to thinking of sidebars as ordered lists, and it would also solve the problem of people not noticing that the dependencies were there.
- [Colorado] Yes! I think this would be much more natural. I was personally a bit dissatisfied watching the users slowly scroll all the way to the top of the learning list (it took a long time for some of the more advanced topics), especially once they started expanding topics. Should we keep all concepts loaded in the list view and provide a sidebar as quick navigation, or should we use the sidebar as the central navigation tool, where clicking on a title loads that concept into the main viewing port, i.e. only one concept is viewable at a time? I think the latter option would be more natural.
- [Roger] I agree -- use the sidebar as the main navigation, and only show one concept at a time. The current version could still work well for the mobile site, though.
- 2/3 users encountered shortcut nodes in the explore view and did not understand what the dashed edges meant
- None of the users recognized the explore-to-learn-view transition arrows provided in the hover-summary
- One user thought it would be nice if the resources indicated whether the user had previously visited that resource (i.e. the specific location or the general resource)
- [Roger] Should we do this as hovertext, or should it be displayed on the page itself? If it's the latter, what should the format be? I've been thinking of showing not only which concepts are already learned, but also how many steps it takes to get there given what the user already knows. (E.g. "You have already learned this concept" or "6 steps to learn this concept".)
- [Colorado] I think placing a small icon before/after the visited resources would look reasonable, maybe a little checkmark... Informing the user of how many steps they are from learning the concept is a good idea. But, for the time being (and probably the foreseeable future), the majority of our users aren't registered users, so this will typically be the total number of prereqs the concept has. So I would opt for making this information very subtle, e.g. a small number in the corner of the list view with explanatory on-hover text.
- It appeared that nobody read "Read/watch one starred resource, and go to any of the others for additional clarification." At the end of the session, no one knew the difference between starred/unstarred resources.
- [Roger] Would they notice the directions if they're actually using the page to learn about something?
- [Colorado] I don't know. We can leave it as-is for the moment, and readdress this issue if we find other users are confused by the star/bullet difference.
- one user initially thought it was confusing that clicking on the title of the resource did not take him directly to the resource, but once he found the "location" link, he seemed satisfied
- [Roger] Yeah, it probably is a bit confusing to have the main resource links be so prominent. They should probably still be there, but maybe there's a less conspicuous way to display them.
- [Colorado] Perhaps a little CSS-fu can soften the effect.
- One user said he would like to use this for a class, especially if it had a "review mode."
- [Roger] One idea I've been thinking about is including a spaced repetition feature. It would automatically add cards corresponding to everything you've marked as learned. This would mostly be for simple things like definitions and statements of theorems. (At one point, I attempted to use spaced repetition to learn more complex things like proofs, but never found a good way to do it.)
- [Colorado] I totally agree. I've been using Anki for various subjects since January. While not perfect, my retention of various programming languages, math tricks, and machine learning concepts has certainly improved. This will take some finesse to address properly, so I vote for holding off on this idea for now. (P.S. we discussed this idea previously -- see the first entry under Goals and Ideas#ideas; it's probably a good sign that it keeps reemerging)
- One user mentioned that the clear/show learned buttons are in a confusing place, since traditionally, the header has site-wide navigation/operations.
- One user didn't like that the back button didn't take them back to the previous view
- Two users mentioned that they would like to use the arrow keys to navigate the explore view
- One user, who had an awkward scroll wheel, wanted zoom-in/zoom-out functionality embedded into the explore view
- there's a bug that 2 users encountered that causes the main application to "scroll down" and hide the header; I think this has to do with the application being bigger than the page itself -- I'll look into this
- [Colorado] fixed
- one user accidentally selected all titles in the explore view and was unable to unselect them by clicking on the background of the application (this has happened to me before) -- I'll see if I can fix this
- each user immediately zoomed out once they encountered the explore view (perhaps we should start a bit more zoomed out)
- [Roger] I second the part about starting more zoomed out.
- [Colorado] I agree, setting a static zoom-level is trivial, but trying to e.g. include the entire graph will be a bit trickier
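Following up on the zoom thread: a rough sketch of how an initial fit-to-viewport zoom could be computed from node positions, assuming the explore view applies a single scale/translate transform to the graph; all names here are placeholders rather than Metacademy's actual code.

```typescript
// Sketch: pick an initial scale and translation so the whole graph fits in the
// viewport instead of starting zoomed in. Assumes node positions are known.
interface Point {
  x: number;
  y: number;
}

function fitToViewport(
  nodePositions: Point[],
  viewportWidth: number,
  viewportHeight: number,
  padding = 40
): { scale: number; translateX: number; translateY: number } {
  const xs = nodePositions.map((p) => p.x);
  const ys = nodePositions.map((p) => p.y);
  const minX = Math.min(...xs);
  const maxX = Math.max(...xs);
  const minY = Math.min(...ys);
  const maxY = Math.max(...ys);

  // Scale down (never up past 1:1) so the padded bounding box fits.
  const graphWidth = maxX - minX + 2 * padding;
  const graphHeight = maxY - minY + 2 * padding;
  const scale = Math.min(viewportWidth / graphWidth, viewportHeight / graphHeight, 1);

  // Translate so the bounding-box center lands at the viewport center.
  const translateX = (viewportWidth - scale * (minX + maxX)) / 2;
  const translateY = (viewportHeight - scale * (minY + maxY)) / 2;
  return { scale, translateX, translateY };
}
```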