Wrap-up

Last Friday (December 19) was the final mume13 session. This post wraps up the course for our group.

Final result

You can find the final visualization up and running right here (and still on the dev page). All code and data (JSONs) needed to run it locally are available as a zip, right here. This zip contains all versions, from 0.1 to 3.0. The file called ‘VisualizationV3_0_final.html’ is the final version.

Our final paper can be found right here (pdf).

Grading the other groups

We were asked to divide 20 points over the other groups. We’ll give a short explanation of the results. We would like to state that we think all groups did a pretty good job, but the grading system automatically creates differences of 2 points rather than 1. For the record, we don’t think that any group was 1.5 times better than the others 🙂

Mume13 (PeopleFlow): 6

Nice overall result, and a very interesting topic! The colors weren’t that good looking, but that isn’t a big deal. The choice to show in- and outgoing flows with the same chord was confusing to some of us.

Mumepeg (residences): 6

In our eyes the best result, and also the most useful application. Maybe some minor things still need fixing, like the scalability or the accidental zooming when scrolling down. We discussed giving you 7 instead of 6, but that would have meant taking a point away from another group, and since yours was the only group consisting of four students, we didn’t.

Pestad (Music): 4

We were amazed by the progress this group made, given that they were behind a week ago. The demo was by far the best/funniest 🙂 However, as you might have noticed after the presentation, there was a lot of feedback from the students, which wasn’t the case for most of the other groups. For most groups, this feedback came 3 or 4 weeks ago and was used to improve the visualization. This group hasn’t had the chance to incorporate the students’ feedback into their visualization, and as a consequence had a less iterated result, which explains the lower score.

Mumemamoth (League of Legends): 4

Decent result, looking and working pretty well. The lack of a structured user study was the main reason for the lower score. Furthermore, looking at the visualization as it is now, most of the screen doesn’t really show information, except for the star diagram, and we didn’t really like the star diagram, because it looks pretty cluttered during champion selection. We agree with the comments in class that there were other possibilities, and we feel they weren’t really explored.

Happy new year 🙂

ToPiJa

Visualization update

As we stated in class a couple of times, we were still looking to add some features/information to our visualization. So, in the last week, we’ve added something new.

You can find the newest version right here, or on the dev page.

What we’ve added is a way to indicate the number of times a team changes its coach. We’ve done this by adding red circles to the teams. Here’s an example:

[Screenshot: a team with two red dots below it]

This means that this team changed its coach twice during that season. A change can happen anywhere between the end of the previous season and the end of the season the circles appear in. In other words: no dot next to a team means that, at the moment of that ranking, it had the same coach as at the previous ranking. If there is one dot (or more), the team changed its coach before the end of the season. This can be before the season starts (e.g. in July), in the middle of the season, and so on…

If you hover over a dot, you see the names of the coaches (both the one going out and the one coming in):

[Screenshot: hovering over a dot shows the outgoing and incoming coach]

In case of more than one dot, the changes are ordered chronologically. So the coach that was in office when the final ranking was made is the one coming in at the lowest dot.
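For those curious about the wiring: here is a minimal D3-style sketch of what drawing the dots and the tooltip could look like. This is not our exact code; the data shape, the positions and the #tooltip element are assumptions for illustration.

    // Hedged sketch (D3 v3-era API). The data shape is an assumption:
    var entry = {
      team: "KRC Genk",
      coachChanges: [ { out: "Coach A", in: "Coach B" },   // earliest change
                      { out: "Coach B", in: "Coach C" } ]  // latest change
    };
    var teamX = 120, teamY = 80;  // assumed pixel position of the team in the chart

    // One red dot per coach change, stacked below the team.
    var dots = d3.select("svg").selectAll(".coach-change")
        .data(entry.coachChanges)
      .enter().append("circle")
        .attr("class", "coach-change")
        .attr("r", 4)
        .style("fill", "red")
        .attr("cx", teamX)
        .attr("cy", function (d, i) { return teamY + 10 * (i + 1); });

    // On hover, show the outgoing and incoming coach in a (hypothetical) #tooltip div.
    dots.on("mouseover", function (d) {
          d3.select("#tooltip")
            .style("display", "block")
            .text(d.out + " out, " + d.in + " in");
        })
        .on("mouseout", function () {
          d3.select("#tooltip").style("display", "none");
        });

Because the coachChanges array is ordered chronologically, the last element (the lowest dot) automatically shows the coach who was in office at the final ranking.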

Also, if we had an extra two months, we would work out the zooming in on seasons in a better way, and in that case we would also place the coach changes at their actual dates instead of at a fixed position.

What do you think about this addition?

Paper update

Given the feedback we got from prof. Duval last week, we updated and expanded our paper. You can find the new version right here.

Some things we know we still need to do:

  • update the abstract
  • include some more screenshots of our own visualization
  • add the section on the software design
  • improve the conclusions of the evaluation, and write a final conclusion
  • add version 3.0
  • add something about the source of our data

We know that this last part is quite important, but it seems important to us to write the final conclusion on the course together as a group, so we will probably discuss that tomorrow 🙂

Let us know what you think!

Pj

Evaluation status update

In an earlier post, we described how we were going to evaluate our visualization.

At the moment, we have performed our first experiment with 10 people, and we screen-captured the test for 8 of them. I did a first round of interpretation, and here are some possible conclusions. These are not final, as we still have some analyzing to do.

  • Obviously, the accuracy of the answers increased (whenever a right answer was possible) when the subjects used our visualization. For the questions without a 100% correct answer (for instance: how would you compare teams A and B?), there is still a big difference between the answers. Intuitively, most people think Genk is better than Gent, and they change their opinion after using our visualization. This shows the relevance of our visualization, as most people have prejudices based on the last few years.
  • Every test subject used the sidebar for selecting some teams, and in 7 out of 8 screen captures, the sidebar was the preferred method of selecting a team. This is probably because the list is in alphabetical order, and a lot of people don’t know the logos by heart. Hence, the sidebar was a very useful addition to our visualization, as it accelerates insight generation. Even when a team is searched for a second time, the logos are rarely used. We assume that the name of the team is easier to match to the text in the sidebar.
  • A lot of people used clicking (selecting a team), even when the particular question was about one team only. In that case, hovering would suffice to figure out the answer. However, the majority of test subjects worked in a somewhat sequential manner: they select the necessary teams first, and then they start reasoning about what they see. A lot of people intuitively try to move the mouse back to a neutral position before the reasoning part. A problem with our visualization is that, because of the possibility to hover, there is a shortage of neutral positions. For these test subjects, the hovering actually slowed down the process of generating a hypothesis. In order to improve this, a solution might be to deactivate the hovering in the sidebar (see the sketch below). This way, people will be able to select a team in the sidebar and then start reasoning about it, without worrying about the mouse.
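To make that last suggestion concrete, here is a hedged sketch of what deactivating the sidebar hover could look like, assuming D3-style event binding and a hypothetical #sidebar container with one element per team:

    // Passing null as the listener removes the existing handler,
    // so resting the mouse on the sidebar no longer highlights a team.
    // The selector is hypothetical.
    d3.selectAll("#sidebar .team")
        .on("mouseover", null)
        .on("mouseout", null);
    // Clicking a sidebar entry still selects the team, so the sidebar
    // becomes a 'neutral' resting place for the mouse.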

For the second evaluation, we haven’t had many answers yet. At the moment of writing, we’ve received about 12 useful answers. Given that the evaluation mainly consists of an open question, it is impossible to draw decent conclusions, or to see trends, based on 12 very different answers. One thing that does seem to recur is that people go looking for insights about teams they care about, both in the positive and in the negative sense: we got an answer stating “It’s good that Leuven is going up”, but also one stating “Anderlecht is the best team in the league, but they’re still a bunch of *”. Believe me when I say that the ‘*’ wasn’t a positive word.

We’re not yet sure how to handle the second evaluation. Maybe we can settle for the limited number of decent answers and have a look at those? If, however, these answers do not suffice, we will have to start talking to people in person…

Let us know what you think!

Pj

Evaluation v2

Last week, we already had quite a good idea of what we wanted to test with our visualization: the way it affects reasoning and hypothesis generation.

We are going to (try to) do three things:

  • The first test is a controlled experiment, with two questionnaires of 20 questions. The two questionnaires are identical, so the test users have to fill in the same questions twice: first without the visualization, and then with the visualization. Comparing the answers will tell us something about the usefulness of our visualization (a small scoring sketch follows after this list). We hope that a lot of people get the questions wrong the first time, and that our visualization provides genuinely new information. Furthermore, we will capture the actions of the user, to gain insight into which features help people find information. We will know how useful the sidebar, hovering, scrolling, etc. are, and for what kinds of questions they are used. We are not making the questions public, so we can keep running the experiment. We hope to perform this test with 5 to 10 test subjects.
  • The second test is a more open test, and we will just share it on the internet, to get as many people to answer as possible. We would like to find out what kind of information people get out of the visualization when they explore it on their own. This is rather hard, because we can’t really oblige people to do it properly. We made a Google Form for this, which has three pages. On the first page, we ask three questions about the Belgian first division. On the second page, we ask the same questions again, but test subjects are allowed to use our visualization. These first two pages are meant to give the subjects a flavor of what kind of prejudices exist about the first division, and how our visualization proves them wrong. The third page is the most important one, where we ask the subjects to list some of their own prejudices and check them against the visualization. Obviously, not everyone knows a whole lot about football, so the subjects are also allowed to just give some random facts they learned from using it, like “Anderlecht looks like a good team to me”. Any information learned from exploring/playing around with our visualization can be listed here.
  • The third test is somewhat of a back-up, in case the previous one doesn’t really work out. We would then try the same experiment in a more controlled setting, where one of us guides the subjects through it. To boost the results, we will probably try to do this with small groups of subjects at the same time, so they can help each other find interesting facts. We hope to do this with at least 5 groups of 2 or 3 subjects, if the test turns out to be necessary.
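As an aside on the first test: once the answers are digitized, scoring the paired questionnaires is straightforward. A hedged sketch, where the answer layout and the example data are made up for illustration:

    // Fraction of correct answers on one questionnaire run.
    function accuracy(answers, key) {
      var correct = 0;
      for (var i = 0; i < key.length; i++) {
        if (answers[i] === key[i]) correct++;
      }
      return correct / key.length;
    }

    // Hypothetical subject: the same three questions, answered twice.
    var answerKey  = ["Gent", "Anderlecht", "15"];  // made-up 'correct' answers
    var withoutVis = ["Genk", "Anderlecht", "12"];  // first run, from memory
    var withVis    = ["Gent", "Anderlecht", "15"];  // second run, using the visualization

    var gain = accuracy(withVis, answerKey) - accuracy(withoutVis, answerKey);
    console.log("accuracy gain: " + Math.round(gain * 100) + " percentage points"); // 67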

Basically, it all comes down to this: “Please tell us what YOU learned when using our visualization”

So, what did you learn? Fill in the form, or leave it in the comments! Sharing our form would be very much appreciated!

Paper: Beta version

You can find a new version of our paper here.

We know that we still have to spend some more time on the paper to deliver better quality, but we spent most of our time developing the gamma version of the visualisation and making the screencast. We will focus more on the paper in the coming weeks.

Screencast

We’ve made a short video about our visualization.

It shows three things:

  • how to use our visualization
  • a look at some teams that were relegated in the last few seasons: how did that happen to them?
  • a comparison between the performances of Anderlecht and Club Brugge over the last fifteen years

Now, you’ll quickly notice that we chose to use Mac OS X’s Daniel voice for the voice-over. We do know that this doesn’t sound very natural (and the upload to YouTube didn’t help at all), but at least his pronunciation is decent 😉 On top of that, it was quite funny to use the guy.

Pj

Evaluation

Following up on the post reviewing the paper by Lam et al., we were asked to design our own evaluation. As you might have read, there were three evaluation scenarios that were relevant to us. One of them was from the “process” group, the other two were part of the “visualization” group. Prof. Duval asked us to ignore the latter, so that leaves us with one scenario: the VDAR (visual data analysis and reasoning) scenario.

However, this doesn’t mean that you can no longer give feedback/comments on the usability of our visualization. We intend to keep posting new versions online, hoping to get this kind of feedback. This way, we will keep the “informal evaluation” going, as it is called in the paper by Lam et al.

We also intend to perform a proper evaluation of the VDAR type. This scenario focuses on knowledge and hypothesis generation. The goal of our evaluation would be to discover in what way our visualization helps people draw conclusions about the last few years of Belgian football. Given that our visualization is intended for casual users, the methods described as ‘case studies’ wouldn’t make much sense for us, as they mostly aim at (long-term) specialist behavior. A controlled experiment, probably combined with a questionnaire, seems the best fit in our case.

Since we don’t really have experience with designing such an experiment, we are not quite sure yet what tasks and questions the experiment will consist of. Since the deadline for this week is still a couple of days away, we still have time to ask for your suggestions. If anyone has good ideas for questions to put in such a VDAR questionnaire, please leave them in the comments 🙂

Update: After discussing the issue together, we came up with the following: we’d have the test person fill out a questionnaire before performing the test (thanks to Niels for that one). This questionnaire would consist of football-oriented questions, like ‘Where would you rank team X for their overall performance over the last 15 seasons?’, ‘Who do you think is the best team of the past 15 years?’ or ‘How many years did team X play in the first division?’. After filling out the complete questionnaire, we would ask them to fill out the same thing a second time, but with our visualization to help them. Obviously, we would hope that they’d perform much better. We would also try to observe how long the test subject needs for every question (a small timing sketch follows below). This way, we would be able to see how our visualization affects the generation of hypotheses, and what kind of knowledge can be easily extracted from it.
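If the questionnaire were administered in the browser, the per-question timing could be logged with something like the sketch below. The hooks are hypothetical; we might just as well use a stopwatch.

    // Hedged sketch: record how long a subject spends on each question.
    var timings = [];
    var started = null;

    function startQuestion() {
      started = Date.now(); // milliseconds since epoch
    }

    function finishQuestion(id) {
      timings.push({ question: id, seconds: (Date.now() - started) / 1000 });
    }

    // usage: startQuestion(); /* subject answers question 1 */ finishQuestion(1);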

Pj

Final Beta-version: 2.1

Over the last two weeks, we spent some more time finalizing our previous beta version. We fixed some things that were mentioned in class two weeks ago and some other minor things, so that our visualisation looks cleaner.

You can find our new beta version and an explanation of what things we fixed on the development page.

This visualisation should be testable on every pc, which is why we shared it with people in the wild. We did this through Facebook, Twitter and Google+. By using well-known hashtags and handles such as #sporza, @sportingtelenet, #extratime, etc., we would like to get as much feedback as possible from sports people. We also mailed some newspapers to ask whether they could use our visualisation. We hope to get some reactions from that too. Some organisations or companies that we would like to reach are:

We also added Google Analytics to track how many people actually visited our visualisation. It’s also interesting to know which nationalities we can reach.
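For reference, adding Google Analytics comes down to pasting the standard analytics.js bootstrap snippet into the page; the tracking ID below is a placeholder, not ours.

    // Standard Google Analytics (analytics.js) bootstrap snippet.
    (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
    (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
    m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
    })(window,document,'script','//www.google-analytics.com/analytics.js','ga');

    ga('create', 'UA-XXXXXXXX-1', 'auto'); // placeholder tracking ID
    ga('send', 'pageview');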

Still, if you happen to know some (professional) sports/football people (players, coaches, commentators, journalists) who would like to help us out by trying out the visualisation, don’t hesitate: contact us right now.

However, this version is still a beta version. We plan to first evaluate the feedback we get on it. We will also extend the visualisation with zooming in on the details of a season. More on this will follow soon on this blog 🙂