Last Friday (December 19) was the final Mume13 session. This post wraps up the course for our group.
Final result
You can find the final visualization operational right here (and still on the dev page). All code and data (JSONs) needed to run it locally are available as a zip, right here. This zip contains all versions, from 0.1 to 3.0. The file called ‘VisualizationV3_0_final.html’ is the final version.
Our final paper can be found right here (pdf).
Grading the other groups
We were asked to divide 20 points over the other groups. Below is a short explanation of the results. We would like to state that we think all groups did a pretty good job, but the grading system forced differences of 2 points rather than 1. For the record, we don’t think that any group was 1.5 times better than the others :)
Mume13 (PeopleFlow): 6
Nice overall result, and a very interesting topic! The colors weren’t that good-looking, but that isn’t a big deal. The choice to show in- and outgoing flows with the same chord was confusing to some of us.
Mumepeg (residences): 6
In our eyes the best result and also the most useful application. Some minor things may still need fixing, like scalability or the accidental zooming when scrolling down. We discussed giving you 7 instead of 6, but that would have meant taking a point away from another group, and since yours was the only group consisting of four students, we didn’t.
Pestad (Music): 4
We were amazed by the progress this group made, given that they were behind a week ago. The demo was by far the best/funniest :) However, as you might have noticed after the presentation, there was a lot of feedback from the students, which wasn’t the case for most of the other groups. For most groups, this feedback came 3 or 4 weeks ago and was used to improve the visualization. This group hasn’t had the chance to incorporate the feedback from the students in their visualization, and as a consequence had a less iterated result, which explains the lower score.
Mumemamoth (League of Legends): 4
Decent result, looking and working pretty well. The lack of a structured user study was the main reason for the lower score. Furthermore, looking at the visualization as it is now, most of the screen doesn’t really show information, except for the star diagram, and we didn’t really like the star diagram, because it looks pretty cluttered during champion selection. We agree with the comments in class that there were other possibilities, and we feel they weren’t really explored.
Happy new year :)
ToPiJa