Audio-Recordings of Interviews: Indispensable or Nice Supplement?

Reading the literature on interviewing, almost everybody recommends audio recording [1], and some recommend additional video as well [2][3]. Nevertheless, I wondered how important these recordings really are – especially considering that some people never learn to value a user-centered design process if they find their first steps too tedious.

So: is it worth it? (Considering that you might have little time or money.)

The answer is »it depends« (as usual), but I wanted to get some insight into what it depends on. Sadly, I could not find the key to the rocket science building, so all you get are my experiences from several small case studies.

What I did

I did several interviews as part of three different projects. In each interview I took notes and recorded the audio. After every interview I complemented the notes from memory as soon as possible: I filled in gaps and extended bullet points into more verbose descriptions of what was explained to me or what I observed. Then I transferred the information into a text file.

I also listened to the recordings and wrote down all connections, statements and explanations from the audio in a text file (thus, no word-by-word transcript). Having my in-interview notes, my from-memory notes and what I got from going through the recordings, I could compare the results of each step.

What I found

Going through the audio made a difference – but not as big a one as I had assumed. What it added were mostly minor details. Among the five interviews there was one in which I relied heavily on the recordings. In one interview I had seemingly conflicting statements; I was able to understand and clarify them by listening to the recordings.

The main points were already in the written notes and/or their complements.

Conclusion

Overall, the notes and their completion after the interview already provide a usable basis for user research, even if no audio is recorded. However, if it is possible, you should record nevertheless: it can happen that your notes are not useful (as happened to me in one of the interviews), and you may need to review some of the recording to resolve conflicting statements or to get a better understanding.

Open Source Social Research

I rely mainly on open source software. I would use proprietary software as well, but I think open source has some advantages:

  • being able to just install the software on several computers
  • being able to share data with a team easily
  • easing the entry: nobody is enthusiastic about trying something that comes with a hefty price tag

However, while there is open source software for almost anything, it gets a bit sparse when it comes to software for qualitative methods. Here is what I use:

Easytranscript: A small program for transcribing audio. Playback can be controlled by shortcuts. Especially useful are the timestamps that can be set in the text: clicking on such a part of the text jumps to the corresponding part of the audio file.
Before using Easytranscript I used VLC together with some editor. When doing this, it is best to use global shortcuts in VLC (e.g. an F-key for start/stop and jump back) and word autocompletion in the editor.

RQDA: An R-based program for coding texts and retrieving the text parts that have been coded. It has a simple GUI, although it does not totally conform to GUI standards. The installation requires R and GTK+ (the installation is described here – it needs some dependencies, but no manual configuration is required).
RQDA only takes plain text and not the .rtf written by Easytranscript, but copy and paste via a text editor can resolve the problem.
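If opening and re-saving files in an editor gets tedious, a small script can do the conversion in bulk. The following is a rough sketch in Python; the regex-based approach only copes with simple documents like transcript files, and a real converter should use a proper RTF parser:

```python
import re

def rtf_to_plain(rtf: str) -> str:
    """Very rough RTF-to-plain-text conversion.

    Only handles simple documents (like transcript files);
    use a proper RTF parser for anything more complex.
    """
    # turn paragraph marks into newlines
    text = re.sub(r"\\pard?\b", "\n", rtf)
    # drop remaining control words like \rtf1, \ansi, \fs20
    text = re.sub(r"\\[a-zA-Z]+-?\d*\s?", "", text)
    # drop group braces, then tidy up whitespace per line
    text = text.replace("{", "").replace("}", "")
    return "\n".join(line.strip() for line in text.splitlines()).strip()

sample = r"{\rtf1\ansi\deff0 Interviewer: Hello!\par Participant: Hi.}"
print(rtf_to_plain(sample))
# Interviewer: Hello!
# Participant: Hi.
```

For a whole folder of transcripts, one would loop over the `.rtf` files and write the result to `.txt` files that RQDA can import.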
An alternative to RQDA might be CATMA  (http://www.catma.de/) which runs on every computer that has Java installed. I have not tried it yet.

Open Office Writer: Serves me well for writing reports, consent forms and whatever else I want to print on paper.
Together with Zotero it is quite good for writing scientific texts as well. Many Linux users will opt for LaTeX instead, though it is harder to get into.

little hack

When shopping in my local supermarket I always wondered about one thing at the checkout: the cashier kept pushing one of those separation bars back, along the direction of movement of the conveyor belt. It seemed tedious: after a few movements of the belt, the separation bar was at the belt's end and was pushed back again.

So I asked why. It turned out it is a little hack: the conveyor belt stops automatically when something passes a light barrier.

Probably this is implemented to make moving the belt easier (no need to find a good point to stop); it also prevents items from accidentally being pushed onto the scanner.

However, the light barrier is not very reliable. To make sure the barrier is triggered, the cashiers use the plastic separation bar to block it. Once it has reached and blocked the light barrier, it is pushed back again.

A nice example of how people overcome insufficient technology with a simple but effective solution.


Brainstorming: it sucks (says science)

Brainstorming is a rather well-known method for generating ideas: the participants generate as many ideas as possible without judging them. Since Osborn introduced it in his work »Applied Imagination«, it has been in use and is rather popular [1]; it is also part of the design thinking approach [2].

But already a few years after the technique was introduced, empirical tests cast doubt on its usefulness for generating ideas: neither in terms of quality nor quantity could it compete with ideas generated by single individuals [3].

This has been attributed by other researchers to several reasons (all from [4]):

  • In a group, just one person can talk while the others have to wait (supported by [5])
  • Fear of evaluation (despite brainstorming’s credo »defer judgement«) (not supported by [5])
  • Group members don’t actually inspire each other; instead, a group is prone to discuss more of the same.

Nevertheless, methods for creating alternative ideas are needed – at least, research on prototyping strongly suggests that generating and testing multiple alternatives improves designs [6]. So what are viable alternatives to brainstorming? There are a lot of creativity techniques out there and I sadly can’t tell which is THE BEST. However, I have made good experiences with the design studio method (which is explained by Todd Warfel here). It is a process that uses sketching, individual idea generation and group feedback as its foundations; my experiences so far have been very good. As the process is less known than brainstorming and admittedly has a more complex structure, I made a little web app (at the moment only in German; available on GitHub too; you can compile it into a PhoneGap app) which hopefully guides a newbie through the process. Have fun being creative!

References:

[1] Herring, S.R., Jones, B.R., Bailey, B.P.: Idea Generation Techniques among Creative Professionals. 42nd Hawaii International Conference on System Sciences (HICSS), pp. 1–10 (2009).
[2] Use our methods, http://dschool.stanford.edu/use-our-methods/
[3] Taylor, D., Berry, P., Block, C.: Does Group Participation When Using Brainstorming Facilitate or Inhibit Creative Thinking? Adm. Sci. Q. 3, 23–47 (1958).

[4] Lamm, H., Trommsdorff, G.: Group versus individual performance on tasks requiring ideational proficiency (brainstorming): A review. Eur. J. Soc. Psychol. 3, 361–388 (1973).
[5] Diehl, M., Stroebe, W.: Productivity loss in brainstorming groups: Toward the solution of a riddle. J. Pers. Soc. Psychol. 53, 497–509 (1987).
[6] Dow, S.P., Heddleston, K., Klemmer, S.R.: The Efficacy of Prototyping Under Time Constraints. Proceedings of the Seventh ACM Conference on Creativity and Cognition. pp. 165–174. ACM, New York, NY, USA (2009).

learning with videos

I love to use (and to contribute to) free educational resources. It was and is a big trend to use videos as a ›modern‹ way to get information across, and there are quite a lot of these online. For some time now you have been able to download lectures from several (high-class) universities; later on we got MOOCs like those on Coursera and Udacity.

As much as I still like these resources – after being overly motivated at first, I came to use the video lectures very little. For example, I tried to learn some more about data analysis, but quickly switched to using just the slides and the example data.

Why? Because some of the material I already knew, and it was annoying to hear it again, so I needed to find where what-I-know ends. Most of the material I did not know, so I needed to find where what-I-don't-know starts. Not too difficult. But as soon as I came to a concept that was new, interesting, but hard for me to understand, I needed to pause – otherwise I would hear the lecturer going on about something else. The constant speed of a video does not match the naturally rather variable speed of learning.

That's not surprising, but a good reason for me to prefer (online) books with images, simulations or even short clips (like the classes from CMU or the online stat book). No need to pause, easy to skim.

However, I do not think videos are bad in general, even if they are used as the main way to get knowledge across. I love Khan Academy. The videos are very short, so I can immediately choose what I need to know (along the lines of: »I got that log-something here in the equation. What does that mean again?«). Thus I only get relevant content, which develops step by step, so I get what I need. Because it is so focused, I seldom had the feeling that it developed too fast.

So in brief: for lecture-style information with multiple concepts of varying complexity, I think a text with accompanying material suits – at least my style of learning – far better than a video. For very focused, bite-sized learning, short videos shine.

security and usability: message encryption

Recently I began to do a bit of research on security and usability in order to turn it into a little project for students to work on. I was well aware that the most secure system is not effective if people can't use it, and that security (let's say: a very long password) and human preferences (a rather short password, if any at all) often conflict. However, I was amazed how big the problems are.

First I looked at message encryption. It seemed the most likely scenario: write something that only you and a particular other person should see in plaintext. The best-known tool for achieving this is PGP. During the recent revelations about the NSA’s practices (time of writing: October '13), crypto-parties were flourishing all over the place, and teaching people how to use PGP was seemingly the way out of the trouble. But as far as research and my personal experience are concerned, it leads to another problem: that of using PGP. In a classic study on PGP it took the participants quite a long time to get encryption working, and several people broke the security by sending their private, secret key by accident. Security guru Bruce Schneier says: »[My tips for online security] are not things the average person can use. […] Basically, the average user is screwed.«

I started to look at an alternative method for message encryption: Off-the-Record (OTR) instant messaging. The protocol is designed with usability in mind (and there is a little paper on the topic). You don’t need to manage keys yourself, authentication works via answering a question, and the like. However, I still ran into problems: to negotiate keys, both of you need to be online, so just sending an encrypted message into the blue does not work. And telling somebody »encryption works, but let’s authenticate« results in a »WTF?« on chat (if you get your non-nerd friends to play the game that far).

And despite some background knowledge, I myself am still confused about keys and their management – and that is just the part which has some interaction with the users: which keys may be exchanged, which may never be exchanged and which are… whatever, it is complicated, and it is no wonder that the mental model and the actual system diverge. (You get in contact with keys even in OTR if you can’t use the authentication-by-shared-answer. [added])

What probably works fairly well is locking data via a password and unlocking it (though I have no empirical study on the subject). The user’s likely mental model of a locked box matches the system's fairly well. And that box is not something mysterious but a file – something most users will know. If they send it, the password needs to be shared via a second channel, but that’s it. For sending many messages, however, it is not very convenient.
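To make the locked-box idea concrete: the core of such a scheme is deriving an encryption key from the shared password. Here is a minimal sketch using only Python's standard library; it stops at key derivation – actually locking the file would additionally need a real cipher (e.g. AES from a crypto library), so treat this as an illustration, not a working encryption tool:

```python
import hashlib
import hmac
import os

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a 32-byte key from a password with PBKDF2-HMAC-SHA256.

    The derived key (never the raw password) would then be fed to a
    real cipher such as AES. The salt is not secret and is stored
    alongside the locked file.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)

# both sides derive the same key from the shared password + salt
assert hmac.compare_digest(key, derive_key("correct horse battery staple", salt))
```

The nice property from a usability point of view: the user only ever sees "file + password", which matches the locked-box mental model; keys, salts and iteration counts stay invisible machinery.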

So encrypting messages is rather difficult, and even a relatively usability-aware protocol like OTR is noticeably less easy to use than plain text. To say that this is how it is and that one should RTFM is no solution. In general, because then every unusable thing could be justified that way. And more specifically, because if only those who have something to hide encrypt, they are easily spotted; and if one side can use encryption but it seems too much for the other, both will stay without.

Update 05.11.13: Spelling and typos fixed; key management in OTR clarified; link to the definition of a crypto-party added.

from research to requirements: user need tables

There is a lot written about qualitative user studies, but in my eyes there is little material out there which helps a practitioner apply the methods. Even the more practical works assume a large team and lots of time and resources. So the interested designer or researcher may decide to skip formal user studies. But I think there is a need for doing them in order to have a valid foundation for discussing possible features and how they must play together.

Because of the difficulties a practitioner may face when attempting user studies, I was pretty happy when I recently found the research of S. Kujala, which deals (among other fields) with methods for user studies. It uses multiple case studies, in which several methods and the involvement of industrial partners are used to determine suitable ways of gathering and analysing needs and of generating requirements for the product to be created.

In the suggested method, data is gathered as usual in interviews and observations, with a focus on predefined critical topics (e.g. context of work, existing tools, etc.). In addition, two other complementing methods are suggested:

  • Think-aloud usage of a (possibly imaginary or prop-like) product
  • The "interactive feature conceptualization": items and processes the user mentions in the interview are written on sticky notes; the user is then asked to arrange the notes in a system meaningful to him or her.

The interesting point for me is how the data is analysed. A kind of pre-analysis already exists: the categories from the main interview and observation. However, this structure was not very practical, so Kujala et al. developed an interesting tool called the "User Needs Table". (I hope my depiction is coherent with the authors' suggestions; this is a description of my understanding of it.)

It is aimed at bridging the gap between user studies and user requirements. The table has two columns: one for the tasks of a sequence, one for the problems and possibilities associated with each task. In Kujala’s work they especially serve the construction of use cases, which can be derived easily, since sequence and possible problems are linked together in the user need table.

I found the suggestions and the research design very interesting and useful. The author does neither an exclusively theoretical analysis of methods nor a quantitative comparison of »treatment groups« using different methods (whether the latter would be at all practical may be questioned). Instead, the research is based on several case studies building upon each other. This, I find, is a very useful approach to developing empirically based and practical methods – and I have rarely seen such an approach. Most literature I have read on qualitative methods is rather theoretical, so such an empirical and practical approach should be praised.

What remains unsolved for me is a better and more practical transition from data to analysis. As far as I understood, the user needs tables are created after the first report has been written. The predefinition of a focus is an important step, but I'd love to have an analysis method beyond that.

Nevertheless, check out Kujala’s papers; they are really interesting and illuminating and contain useful information on why the methods are designed the way they are.

Example user need table for a file upload to a learning management system (created for illustrative purposes, so it is not based on a 'real' project of mine)

Task sequence, with the problems and possibilities associated with each step:

Step 1: Selecting an “area” to which the upload belongs (e.g. study group)
  • Problem: The difference between the private file collection and the public files is frequently misunderstood.
  • Problem: For teachers who manage only one course, it is seemingly useless work.

Step 2: Choosing a file
  • Problem: The folder structures on the web portal and on the local computer may not be the same; disorientation can occur.
  • Problem: Without instant feedback, users may upload files twice.

Step 3: … (etc.)
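To make the table's structure concrete, here is a little sketch of how such a user need table could be represented in code, e.g. to print use-case skeletons from it. The data structure and all names are my own illustration, not taken from Kujala's papers:

```python
# A user need table as a simple data structure: each step of the task
# sequence is linked to the problems/possibilities observed for it.
# (Structure and names are my own illustration, not from Kujala's work.)
upload_table = [
    {"step": "Selecting an area the upload belongs to (e.g. study group)",
     "issues": ["Private vs. public file collection is frequently misunderstood",
                "Seemingly useless work for teachers with only one course"]},
    {"step": "Choosing a file",
     "issues": ["Folder structures on portal and local computer differ; disorientation",
                "Without instant feedback, users may upload files twice"]},
]

# Derive a use-case skeleton: the main flow comes from the steps,
# exception notes come from the linked problems.
for i, row in enumerate(upload_table, start=1):
    print(f"{i}. {row['step']}")
    for issue in row["issues"]:
        print(f"   ! {issue}")
```

Because each problem stays attached to its step, the exceptional flows of a use case can be read off directly, which is exactly the link between sequence and problems that the table is meant to provide.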

Promises and reality: file exchange as popular E-Learning technology

E-Learning promises a lot. And seemingly it can live up to its promises – contrary to many other "future topics" there is a lot of empirical research about what to do and what to avoid to make it work.

There is a rather wide range of possibilities for using E-Learning: recorded lectures for »flipping the classroom«, (self-)assessment tests, digital workspaces, and more…

Recently I evaluated some interviews on the use of E-Learning technology in teaching practice. It turned out that, despite all the possibilities, E-Learning is mainly sharing files – other tools and technologies were rarely used.
Nobody would doubt that sharing documents, links, code etc. can be an important part of teaching. But it seems surprising that all the other neat possibilities are rarely used, although they could probably improve the quality and efficiency of learning and teaching. So why are they not used? And why, in contrast, is the exchange of files so popular?
I don't have any well-tested conclusions, but some educated guesses, ahem, theories.

1) Adapting to E-Learning is hard…

That good tools are out there does not mean that they are used. How many of us have thought about changing something in our lives and stuck with some "bad habit" nevertheless? And while quitting smoking or doing more sports has clear benefits, the gains (and losses) of E-Learning are not that clear to professors (who are experts in their field, not experts in E-Learning). And creating online self-assessment tests or recording a lecture and publishing it is new for most of the teaching staff.

2) …but if you can continue what you do anyway, it is far easier.

In contrast to many other ways of using technology for learning, exchanging files is already known. Not everybody uses the same way: mails, USB sticks, Dropbox, the learning management system’s sharing function… This leads to several logins and possible chaos – but in contrast to many other possibilities of using technology for teaching, file exchange is actually used.
Exchanging files has another property that makes people adapt easily: they can use the applications they already know. No matter how you created the content – as long as you can create a file, you can share it. So if people want to share information, they can create it the way they are used to, or just use the files they have already created for other purposes.

So in brief: in the use of technology for teaching, sharing files is more popular than other methods because it requires little effort to adapt to – almost everybody already creates files and shares them in some way.

confusing: line charts for values in categories

Reading papers and reports, I often see diagrams used to visualize values of different categories – e.g. the average hours students of different subjects spend on university work per week. It seems rather intuitive to me to use a bar chart: one category (e.g. subject), one value (e.g. 34 hours), and one bar height to visualize the value.

However, I often see line graphs being used for that purpose.


same data; two visualizations

In the example above, the line graph suggests that there are students who study partly Computer Science, partly Media Arts, and that these students work roughly 35 hours per week. One could also assume some kind of order or continuous value on the x-axis, as one is used to from diagrams that put time on the x-axis.

So I see no reason to use line graphs for values-by-category visualizations. They only confuse and mislead the reader.
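For illustration, here is a sketch of plotting the same category data both ways, using Python and matplotlib; the subjects and numbers are invented for the example:

```python
# Bar vs. line chart for the same categorical data (hours are made up).
# A bar chart keeps the categories discrete; a line chart wrongly
# suggests intermediate values between the subjects.
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

subjects = ["Computer Science", "Media Arts", "Psychology"]
hours = [38, 33, 35]

fig, (ax_bar, ax_line) = plt.subplots(1, 2, figsize=(8, 3))
ax_bar.bar(subjects, hours)
ax_bar.set_title("bar: discrete categories")
ax_line.plot(subjects, hours, marker="o")
ax_line.set_title("line: implies a continuum")
for ax in (ax_bar, ax_line):
    ax.set_ylabel("hours per week")
fig.tight_layout()
fig.savefig("category_charts.png")
```

The line between "Computer Science" and "Media Arts" passes through points that correspond to no student at all – which is exactly the misleading suggestion described above.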

Usability improvements unleashed…

Easing interaction by using visual interface elements instead of a syntax that needs to be learned and remembered is a fairly common principle ("recognition rather than recall"). It is a major step forward if the interface is (hopefully) self-explanatory instead of having to read the ***** manual – first for learning and thereafter for remembering, if you have not interacted with the system for a while.

However, recently an improvement that took away the burden of the syntax brought other problems in exchange. And it was not the nerds who were complaining – actually, quite the opposite.

To improve our faculty's MediaWiki, I did research on common problems students have when using it. After observing and questioning several users, a distinct pattern emerged: editing in MediaWiki syntax was a major problem, especially when it came to code that triggered functions (making a link, using a picture – in contrast to merely visual changes like making a word italic).

I was happy to see that the current wiki editor has dialogs for inserting a picture. They are fairly simple, with no visual selection from a media library. But instead of remembering the syntax, one can enter the file name and caption and generate the link to the picture from that. So my thought went along these lines: the user often forgets the syntax (as my tests strongly suggested), so I would like to remove the need for it. The dialog shows the possibilities of the syntax; you just put in your values, click OK, and there it is, without the hassle of recalling the syntax.

The dialog which is *not* for drag'n'drop-uploading

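Under the hood, such a dialog does little more than assemble the MediaWiki image syntax from the two values. A sketch of the idea (the function name is my own invention; the generated markup is standard MediaWiki image syntax):

```python
def image_markup(filename: str, caption: str = "", thumb: bool = True) -> str:
    """Build MediaWiki image syntax from a file name and caption --
    the job the insert-picture dialog does for the user."""
    parts = [f"File:{filename}"]
    if thumb:
        parts.append("thumb")
    if caption:
        parts.append(caption)
    return "[[" + "|".join(parts) + "]]"

print(image_markup("Campus_map.png", "Map of the campus"))
# [[File:Campus_map.png|thumb|Map of the campus]]
```

The point of the dialog is exactly this mapping: the user supplies recognizable values, the system recalls the syntax.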

So I did some tests. It turned out the dialog had side effects: the simple interface that was meant to suggest putting in two values and clicking OK to get a correct link generated was thought to have more capabilities. Seemingly, the dialog suggested other things too, namely uploading pictures by drag and drop – which did not work at all and left the users wondering.

Seemingly, Flickr and Facebook have already established drag and drop fairly well (brief remark: it is not very practical actually, imho, as you need to call up and rearrange two windows). And they combine upload and image use, making these two steps one.

So my idea of adding visible, interactive representations of (existing) functions was not wrong. But they were mistaken for being far more, and that they did not work this way caused the trouble. So the overall usability may not have been improved at all – one problem was put in place of another. But as the idea still makes sense, I'll redesign the dialog to make its capabilities clearer and to anticipate its use as an upload dialog.

Teaching Material Usability Tests

Testing Interview Analysis

As I found out in the research on last term's class, some things were easy and enjoyed by the students, while some were difficult, like analysing interviews. So I tried out the way I explain interview analysis: I recorded a short interview and let people analyse it after explaining it to them. I just wanted to see whether it was used as I expected, so I did not have a control group or the like; it was more design testing than scientific testing. Nevertheless, I was quite happy when people were able to grasp the basic concept and apply it. All were able to listen through the interviews, extract meaningful pieces and connect those in a way that revealed some data-based insights for them.

...and the Workbook

In the class I use a self-written workbook to remind the students of important things when preparing, doing and analysing user research. It is not a long document, so I asked people to read through it while telling me what they think. This is basically a think-aloud usability test. Others may call it live proofreading ;-) Like usual proofreading it was incredibly useful: I wondered how some sentences ended up being written in such a difficult-to-get way. Also, different people pointed out different problems, so it was good to have several of them look at it. To sum it up: I applied usability methods to teaching materials, and it turned out to be quite similar to what happens when you test software.

Qualitative Meta-Research for Education

During the last months I was designing a class on Human Centered Design. I try to practice what I preach, so the class is designed with human-centered methods itself. (Pretty meta, hmm?)

So I started the project by doing interviews with students of a former HCD class. I also incorporated students' self-documentation and my observations during that class. I ended up with a large amount of data that needed to be organized and chose to create an affinity diagram. Such a diagram is created by printing every piece of data that can meaningfully stand on its own onto a little note. Then the notes are sorted by the common topics they concern. When a topic is identified, it gets a kind of heading on a note in a different color. This is pretty helpful, but takes a lot of time, energy and space. Anyhow, I now hope that I have a clearer view of the former participants' activities, motivations and problems.


Part of the affinity-diagram

It turned out that data analysis was a problem. Few students managed to get something out of the interview data that made sense to them – except for what was directly accessible from the interviews themselves. This gives me some room for improvement, though I am not surprised. On many occasions I have seen that upfront user research is hard to get across, not to mention hard to do for a newcomer. In addition, even "rapid" approaches described in books require a team with some experienced members and quite some resources. On the other hand, if this kind of research is described in brief, it is usually on one page, and such suggestions often skip analysis.

I hope that using techniques like role-playing, giving well crafted examples and explaining how one can organize findings will result in an improvement here.

Prototyping, on the other hand, seemed to be a highlight among the class activities. I was told in the interviews that it was a lot of fun. So possibly I will try to get the students to do prototypes earlier in the process. Less pondering, more creation! I hope this can also relieve a common designers' problem that I observed as well: being fixated on one's own design.

Eric Reiss’ Web Dogma – German Translation

Eric Reiss' Web Dogma consists of ten rules for web design – with the claim of being independent of fashions and technological developments. I translated the English version into German; here are the ten rules (given in English):

  1. Anything that exists merely to serve the internal politics of the site owner must be eliminated.
  2. Anything that exists merely to satisfy the ego of the designer must be eliminated.
  3. Anything that is irrelevant in the context of the page must be eliminated.
  4. Any feature or technique that restricts the visitor's free navigation must be reworked or eliminated.
  5. Any interactive element whose meaning is not clear to the user must be reworked or eliminated.
  6. No software other than the browser itself should be required to display the page correctly.
  7. Content must be readable first, printable second, and downloadable third.
  8. Usability should never be sacrificed to the rules of style guides.
  9. No visitor should be forced to register or disclose personal data unless it would otherwise be impossible for the site owner to provide a service or complete a transaction.
  10. Break these rules before they do anything cruel.*

* shamelessly stolen from George Orwell's "Rules for Writers"

Translated from: http://www.fatdux.com/how/our-web-dogma/

canvas and no brush

Imagine you are a painter and a salesman tells you that you should get rid of your tools in order to have more space in your studio. "Your workspace is not cluttered by all the other things like brushes and colors. What matters is the painting, not the tools!"

This seems silly, but it is analogous to what is popular in GUI design nowadays. Freeing the space for what you want to see or create seems reasonable. But people's goal is not to just see something, but to see what is interesting and to create great things. In offering the functionality to do that, a visible GUI is superior in terms of efficiency and learnability to many alternatives like hidden elements, edit modes or gestures.

But with the rise of touch-driven mobile devices, the design decisions that were made for those devices are spreading. Screen space is very, very scarce there, so designers tried to eliminate as much GUI as possible. And since mobile touchscreen devices and applications have the reputation of being instantly easy to use, we apply their designs to other domains: on touchscreens – regardless of their size – we invent complex multitouch gestures, and on big displays we get rid of scrollbars.

GUI for a good reason: common WIMP Interface (Free Software; ImageSource)

But these designs rest on a conception of the GUI as something that merely takes away space, with actions that could just as well be hidden behind modes or gestures. What software should do is enable us to reach our goals. We want to read the right content, and we want to create content efficiently, without learning gestures and the hiding places of functions. The conventional select-command GUI paradigm does a pretty good job at this, even compared to other established techniques. But whether you use the trusty old WIMP paradigm or alternative approaches: don't hide the tools!

UPDATE (7.4.2011):
The Unity design team came up with a solution for scrollbars that saves screen real estate and still offers visibility. I strongly recommend having a look at their design!