…some more libre ethnography videos under CC licenses. This means you can use them for your teaching and even cut them, (re-)dub them, or remix them with other videos for comparison – whatever you like, as long as you credit the creator and link to the license.
What is it like to be 16?
How do people choose the meat they buy?
A presentation of in-context research.
»Audiovisual ethnography on systematic precarisation of labour migrants in Munich« – a longer film (~30 min); it's mostly in German.
German, Turkish (?)
The Craft of Surgery
A tailor and a surgeon talk about the similarities of surgery and tailoring.
A colleague and I organized a small barcamp-like meetup for qualitative researchers at the Bauhaus University Weimar. We met on the 21st of May 2015 at the neudeli, the Bauhaus University's startup incubator (»Gründerwerkstatt«).
We held several sessions run by participants. I’ll give a brief summary here.
We started off with theory and philosophy of qualitative research. Actually, our planning session kind of transitioned into it, so we used the opportunity and made this our first session after the planning.
The main topic was doing the ›right‹ research. There are several slightly different approaches to showing that you do your research ›right‹ – and, naturally, different views of what ›right‹ means anyway. We brought in our own approaches and discussed bringing in (or keeping out) personal views and interpretations, what one should state about oneself and one's context, and the virtues of describing messy realities or showing possible, plausible interpretations of experiences.
We talked about »defending the qualitative approach«, too. The most common scenario is defending it against critique from quantitative researchers. However, the case we discussed was defending it against deconstructivist, postmodern critique (so that's for those who have already crafted the perfect defence against critique from quantitative researchers and feel bored now). After this rather abstract session we moved to a practical topic every researcher touches nowadays: software. And literature (for some reason – I don't know what the connection was).
So, here is a list:
Software for Data Analysis
- MaxQDA (around 100€ for students), which has a nice interface
- ATLAS.ti (around 100€ for two years)
- RQDA – though we thought that it might not be the right choice for big projects and/or for those who are neither into computers nor into R. But: RQDA is Open Source.
For editing audio recordings – cutting, or enhancing the quality – participants used the Open Source tool Audacity.
Adding notes and highlights to PDFs is possible via PDF-XChange Viewer (freeware). It includes a very useful OCR function to recognize text in PDFs created from scans (OCR-ing scanned PDFs enables you to mark up their text, too, and to find the PDFs via your computer's full-text search).
Collecting ideas was done via the Open Source tools Zettelkasten and Freemind, the freemium web service Evernote, or Microsoft OneNote. However, pen and paper might be preferable anyway for creative work.
LibreOffice, OpenOffice (both Open Source) or MS Office were popular tools for writing the thesis. Some might try the Open Source classic LaTeX as well, though this is probably too much learning effort as long as your work does not contain math (which looks pretty in LaTeX).
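To illustrate why math looks pretty in LaTeX: a display formula like the arithmetic mean takes a single line of markup (a generic example, not one from the sessions):

```latex
% A display formula: the arithmetic mean of n values
\[
  \bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i
\]
```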
The mentioned books were:
- Successful Qualitative Research by Braun and Clarke
- The Discovery of Grounded Theory, the classic by Glaser and Strauss (who afterwards each developed their own interpretations of the approach)
- I should have mentioned "Shane the Lone Ethnographer" by Galman – a comic on ethnography.
- Pandora’s Hope by Bruno Latour
After jointly collecting so many tools, we did a writing workshop (with pen and paper!). We first created a mind map of topics and then wrote for 10 minutes – without ever stopping, even if the writing seemed stupid: just continue to write. After the 10 minutes we marked the parts of our own text we liked, read them to each other and got feedback on:
1) What they liked,
2) What they would like to know more about,
3) Which color the text would have.
Thanks to everybody who participated for making this a very enjoyable event!
A collection of free-as-in-freedom resources for user research.
Each resource has a brief description, a source and the license. However, to be sure, you should look up the license at the source (that is what counts).
User Research Templates and Examples
- Idno User Research
A nice collection of possible questions, guidelines and introductory text
- Usability.gov Templates
Lots of templates like reports, release forms, screeners…
License: Public Domain
- System Usability Scale
The System Usability Scale (SUS) is a well-evaluated, time-proven and easy-to-use short questionnaire for determining the usability of a product and comparing it to other designs.
License: Public Domain
Source English: http://www.usability.gov/how-to-and-tools/methods/system-usability-scale.html
Source German: https://experience.sap.com/skillup/system-usability-scale-jetzt-auch-auf-deutsch/
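As a side note, SUS scoring is simple enough to automate. Here is a minimal sketch of the standard scoring rule (odd-numbered items contribute rating − 1, even-numbered items contribute 5 − rating, the sum is multiplied by 2.5 to yield a 0–100 score):

```python
def sus_score(answers):
    """Compute the System Usability Scale score (0-100).

    `answers` are the ten item ratings on the 1-5 agreement scale,
    in questionnaire order (item 1 first).
    """
    if len(answers) != 10 or not all(1 <= a <= 5 for a in answers):
        raise ValueError("SUS needs ten ratings between 1 and 5")
    contributions = [
        # index 0, 2, ... holds the odd-numbered items (1, 3, ...)
        a - 1 if i % 2 == 0 else 5 - a
        for i, a in enumerate(answers)
    ]
    return sum(contributions) * 2.5

# A respondent fully agreeing with all positive (odd) items and
# fully disagreeing with all negative (even) ones scores 100:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```

Remember that SUS scores are not percentages; they are usually interpreted against benchmark averages.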
General Resources with section(s) on User Research
- Usability in Free Software
A guide to improving the usability of (open source) software – including sections on recruiting and research. Superpower: it can add a 5th essential freedom to Open Source software: »The freedom to use the program effectively, efficiently and satisfactorily«
License: Creative Commons BY-SA
User Need Research/Ethnography
- Beginner's Guide to Finding User Needs
Disclaimer: I wrote this, so I am not impartial here. I hope it helps nevertheless.
A Beginner’s Guide to Finding User Needs is a book (~100p equivalent) on how to interview and observe future users, how to analyze data and how to report it.
License: Creative Commons BY
- How to do a research interview
Since conducting research interviews relies on the »right« behavior, a video is a great way to learn how to do it. This video shows what to avoid and how it can be done better.
License: Creative Commons BY-NC-SA
- The Challenges of Small Business Owners
A visual summary of an ethnographic study, showing participants, their statements and observations. A nice example of what ethnographic research can deal with and which data matters.
License: Creative Commons BY 3.0
- Nookie tour - Commonwealth July 2013
An interview/observation of a man walking us through the shell construction which is going to be his house, describing what he wants it to be like when it's finished. This, too, is a nice example of what ethnographic research can deal with and which data matters.
License: Creative Commons BY 3.0
If you know of further resources, please share them in the comments. I'd be happy about any open resources; however, this list is specifically concerned with user research (not interaction design, wireframes or design theory), so links on that topic would be particularly great.
Changes: 26.4.2015 – added Usability in Free Software
TL;DR: beginners in user need research have trouble dealing with the openness of possible outcomes in research.
User need research is used for discovering the motivations, activities and problems of potential users. For this, the researcher often has a conversation (»Interview«) with participants and asks them questions about their experiences:
Researcher: »Can you describe how you use your smartphone when you are on the go?« or even: »Can you describe how you spend your time in transit?«
Asking closed questions
Asking such questions seems to be tough for beginners. Usually, the designers and programmers I taught struggled with this way of asking. A typical first try sounds more like »Do you play games when you use your smartphone on the go?«. This question is closed: its answer is predictably either »yes« or »no«. In contrast, an open question like the initial ones can lead to an infinite variety of answers.
Grouping by surface similarity
When the gathered data is analyzed, other problems emerge: beginners seem to prefer grouping data by surface similarity – the same words being mentioned, the same objects being referred to. It is much harder to group by possible concepts and meanings. But doing so leads to principles like »Being on the go is a time-out for me«, which are very useful in design.
Embracing openness is hard
In general, it seems to be hard for beginners to embrace situations in which the outcome is open or ambiguous: the answer to an open question may be almost anything; possible results when analyzing concepts may be in flux and change after additional data is acquired – the result is not right or wrong but »just« improving.
Why is embracing the openness in user research hard? Two possible reasons:
1. Not knowing what will happen
If the outcome of asking questions and analyzing is open, it is hard to (mentally) prepare for what might happen.
This may be particularly tough for designers and developers. The result they aim for is an implementation at the end. What they find out in research may or may not require (big) changes to what they have in mind. And throwing away work does not feel great. As a proponent of Human Centered Design you may argue that it is just wrong to think of implementations while still researching needs. But it is a known behavior (Sharp, 2013, see this post), and overcoming it is probably hard.
2. The one right answer
In quantitative (statistical) research you put up a hypothesis and show that you tried to falsify it. If you don't succeed, the hypothesis is corroborated. There ought to be a clear-cut criterion, like p<0.05: if the value is more, you reject your hypothesis; if it is less, you keep it. You as a researcher should not influence the outcome.
In qualitative, need-finding research, this is different. Instead of avoiding any individual influence by the researcher, the researcher's interpretations play a vital role. And instead of a one-shot approve-or-disapprove, you may iterate and refine ideas, hypotheses and their rejection or corroboration over the course of your studies. It is fine to be surprised. In a quantitative study, wondering why something happened can be a sign that you are in trouble. In qualitative research, it is the sign that you are onto something.
The idea of a research method that is empirical but not deductive is hard to grasp, since it is contrary to the idea of research many beginners bring along: research ought to be precise, »right« and quantifiable. Integrating another paradigm into this view of research is not easy.
Human Centered need-finding research poses difficulties for beginners. One could blame a »wrong« attitude, education or prejudices, but doing so will not help to improve future designs. Understanding the problems makes it possible to address them in a more productive way.
- Hein, Serge F. "'I Don't Like Ambiguity': An Exploration of Students' Experiences During a Qualitative Methods Course." Alberta Journal of Educational Research 50.1 (2004): 22-38.
- Sharp, Helen, et al. "A protocol study of novice interaction design behaviour in Botswana: solution-driven interaction design." Proceedings of the 27th International BCS Human Computer Interaction Conference. British Computer Society, 2013.
In my posts on Visual Interviews and Visual analysis I had only a brief section on analysis. It was fairly conventional: search for patterns and list them as text.
However, I thought that it might be easier and more accessible (for you and your peers) to keep the analysis in a visual form, too.
The process is a bit like creating personas: You gather your data, analyze it and create a prototypical representation.
The starting point is still having the visual descriptions you created with your participants.
The first step is to compare the diagrams and annotate them. For this, you can copy the diagrams and write on the copies. If you'd like to keep it digital, you can use any application which can add text over images. Look for similar or interrelated information, mark it and write down your thoughts. If you asked questions and made additional notes, use these too to supplement the information in the diagrams.
Creating a visual summary
Create a visual summary of the data using information which is consistent or similar across your participants' diagrams and/or your other notes.
The following diagram condenses the information of the diagrams above:
The diagram is based on things that appeared in at least two of the participants' diagrams. Nevertheless, don't hesitate to use information which is not in the diagrams: conversations or other diagrams (e.g. workflows) can be used as well to create this overview.
The »right« result?
There is not one »ideal« representation for a visual summary of your participants' diagrams and mappings. It should be data-based, but this does not mean that two people will create exactly the same visual summary out of the same data.
This is obvious in this example, in which I summarized the typical experiences of a shift (ca. 2h) of duty in the students' café:
Time-wise it was clear that the shift changes were friction points. They happen at the beginning and the end of the shift. It was clear as well that most participants were happy when customers came. However, the point in time when friends come by, when a nice song is played or when an annoying customer arrives is not fixed – it can happen at any time. So I included these at some plausible point, though they could happen at another time, too.
A plausible, but not a that's-how-it-always-is representation
I suppose it is recognizable in the time diagram above that some things are fixed and some others may just happen. But using the above diagram, someone could indeed suppose that the nice song is always played in the middle of the shift. Just like personas, the visual summaries need to be interpreted with common sense. They are a plausible example of how an experience may be.
Enjoy the method – if you have any examples or want to share experiences, write a comment below!
⚠ Exploratory Research ahead
This is a little study that explores designers' attitudes to steps in the design process.
The idea was based on the observation that some steps are part of almost any design process while others are left out, so I wanted some data on this. In addition, there were two hypotheses: design steps which are less fun are done less often, and design steps in which the designer does not feel in control are less fun.
From books and some interviews I created a list of design steps, and from it a survey. For each of the design steps I asked:
- how frequently the participant does the step in design (ranging from »never« to »in every project«)
- how much fun the activities for this step in the design process are for the participant.
- how much control the participant feels to have in this step of the design process.
These attributes were rated on a 7-step scale. Higher values indicated more experienced fun, more perceived control and doing the activity more often. In a little text before the survey questions, I explained what I understood as »(not) having fun« and »(not) being in control«.
To find out how core activities of design fare compared to research activities, three designers coded the steps as being: research, design-itself, or something else¹. I averaged the ratings of the steps which were coded »design« or »research« by all three designers.
Design-itself activities are done more often, seem more controllable and are more fun to do. One idea following from this is to use research methods that feel more productive and controllable: co-design and research-by-mapping (mapping emotions and social life, describing workflows) may be ways to do this.
There is a significant correlation between control, fun and frequency – so if designers felt they were in control, they had (on average) more fun, and if they had more fun, they did the step more frequently.
Since the research was not experimental, we cannot draw conclusions about causation here. This would need further experiments – though, drawing from interviews I did with graphic designers and participants of a human centered design class, it seems to be a promising direction.
One interpretation that can be drawn is this: in design-itself tasks, designers feel in control of the situation and that the outcome mainly depends on their own actions, while in research tasks the outcome is not certain (you can ensure a scientifically sound experiment or the like, but the outcome is open).
As I said, the research was not experimental and explored only correlations. In addition, there are some shortcomings:
- The data is based on the answers of only 20 people, about half of them graphic designers and half product designers.
- The order of the questions was not randomized
- The questions were ordered by design process step, asking for the three attributes of a design task (control, fun, frequency) within each step. If someone chose the same rating in each of these subsequent questions (out of boredom or to finish quicker), there would be a seemingly perfect correlation.
If I continued the research, I would thus:
- get a bigger sample
- get a more diverse sample
- order by attribute as the main category first, then ask for that attribute for each step; the order of the steps should be randomized.
If you are interested in doing research on the topic too, I’d be greatly interested in cooperating.
¹ »something else« was often described as »communication«.
In a previous blogpost I demonstrated some methods for mapping social relations as well as emotions/liked-and-less-liked parts of a process. In this blogpost, I will show some ideas on documenting processes with your participants by writing step-by-step instructions together.
Here is what I came up with:
Step 1: List tasks
First we need an overview of what the participant does. It would be ideal to collect these activities³ together with the participant by observing him/her; however, this is not always possible, and even if we did, let's say, a few hours of observation, we still might miss important activities (like the setup when starting work or the cleanup routine when ending it).
My first idea was to ask what the participant does and to write it down with the participant.
However, this was a bit too general in my eyes. To make it more concrete, I also asked for things that the participant was doing just now and some minutes ago. This makes it somewhat more reliable, since it is likely that recent activities are remembered. In addition, these remembered recent activities have actually been done, in (possible) contrast to activities in general, which may be written down because they should have been done.
To ease this for me, I created a little template:
You see that there is a second column, asking for “effects if the activity is not done”. This can help to focus on the (seemingly) most relevant activities first.
Before creating the list, I show an example of how the result may look. The example is concerned with a very different domain (to avoid biasing the participant), so people working in a café got a »How I repair a computer« example from someone who volunteers in a hackerspace repairing computers (hmm, that someone is me).
Step 2: Describing the activities
After having an overview of activities, I choose the one that appears most relevant and ask the participant to document this activity with me. I frame it as possible instructions which could be used to teach somebody else the activity. I show an example for this, too.
I provide a template with an »activity name« and a »notes/sketches« column, and fields for basic info like the name of the activity, the trigger, the result, the risks/problems and the motivations for doing the activity.
When documenting the process, the participants only write down the general steps (I had hoped they would include some sketches, hints etc., too). After they are satisfied, I go through their list and ask questions:
You wrote “Distributing team work” – how do you do this?
This point “Let the milk cool down” – what happens if you skip this?
In addition, I may ask for a demonstration:
Can you show me that coffee-container-thingy you need to clean?
…How is that thing attached to the coffee machine?
I add the information I asked for as notes and sketches, either in another color or by creating references to another note sheet using (Latin) numbers.
The Latin numbers link to tasks the participant wrote on the template sheet. We ran out of space, so we used another sheet.
At the end we created a documentation of the work process on one or two pages.
The data is mainly written text (as opposed to audio, video or the researcher's memories), so no transcription is required. However, I strongly recommend reviewing your notes afterwards, adding possibly forgotten data from memory and rewriting hard-to-read words so that the document is useful at some future point.
I still think that an interview/observation gives more opportunities to experienced researchers. However, the method described above may provide beginners with some structure while still leaving room for their own questions and ideas. A motivational advantage might be that an actual artifact is created (the sheet of paper on which the process is documented) instead of just having collected »more data«.
Both in German. Write me a comment if you need a template in English. Update: the linked templates are now in English.
- Tudor, Leslie Gayle, et al. "A participatory design technique for high-level task analysis, critique, and redesign: The CARD method." Proceedings of the Human Factors and Ergonomics Society Annual Meeting. Vol. 37. No. 4. SAGE Publications, 1993.
- Herrmann, Thomas, et al. "Semistructured models are surprisingly useful for user-centered design." Designing Cooperative Systems (Coop 2000) (2000): 159-174.
- »Activity« is used here as in everyday language; it does (most likely) not exactly match the definition of an »activity« in activity theory
Text and images are licensed under a Creative Commons BY 4.0 License
I think that teaching-by-examples is an awesome and empowering method: instead of a universal principle given by some authority, a way to solve a problem is suggested.
Example for an Example (meta, yey!):
While the interface design principle »visibility of the system status« may be great advice for interface designers, the statement can be interpreted in many ways (what actually is this »status«? How can it be shown?). But by showing examples of the principle in action, it becomes clear what is meant.
The solution seen in the example can be applied to one's own work. Thus the learner has a sense of achievement (since one can easily make one's own example-derived solution work). If the suggested method works, the learner can start to experiment, find other or better solutions and compare them to the method taught in the example.
In particular in design – where much of the knowledge future designers should acquire is tacit and thus can't just be learned by heart from books – teaching by example makes much sense and is already practised. However, this is often done by dissecting and discussing great, finished designs: a particular chair, a building etc. This has advantages over only discussing abstract principles of good design, but for teaching how to design it has a major shortcoming: great designs don't just fall from the sky. They are created in a process (as messy as such a process might be), and such a process can't be (easily) reconstructed by looking at the final product.
To learn »how« to design, it makes sense to show how other people design, not just what in particular they came up with.
Building upon the »show system status« example above: it is not only interesting to see an example of the principle itself but also of the context in which such knowledge may be important. Let's say an interface is tested and a certain function is not found – in that case it could be worth a try to check whether the system's status is visible.
While finished designs are shared in great numbers and on several platforms, work-in-progress examples are rare. Often it is only stated that this-and-that process works great; in-progress designs, visual examples, solutions developed in parallel or discarded ideas remain hidden.
There are some reasons why we have few examples in design which show work in progress and explain the decisions and actions involved in moving through the design process:
1) The Genius Designer
Design is sometimes framed like art and presented as a mythical process consisting mainly of the incubation of ingenious thought – thus there is no process to present, at least not one from which non-geniuses may learn.
2) Portfolio culture and perfectionism
Caring for every detail is sometimes said to be the hallmark of great design. But work in progress is work whose details are not perfect yet.
3) Investments and Benefits
While sharing a great example may benefit many, the benefits for the one who shares it are small: a design in progress does not look great, and it does not make a great page in the portfolio.
However, depending on the area one works in, these obstacles may be overcome: partly by a culture in which designers are no longer associated with supernatural genius powers and always-perfect work; partly by employers who care about awareness of the process, reflection on one's work process and peer-to-peer education among designers.
I think that the advantages of sharing examples are too big, and the available material too scarce, to ignore the unused potential in sharing more of our designs and thus educating each other in a way which is instructive as well as empowering for learners.
A beginner-friendly method for user need research
While interviews and observations are popular methods for data gathering, they can be hard to handle for beginners: asking open questions, avoiding influences and managing the flow is difficult. Graphical templates on which the research participant sketches or writes can help researcher and participant alike by providing a scaffold for the process.
The basic process
One example would be graphing good and bad phases of an experience over time – like this:
The template is just a sheet of paper with the axes and their labels (and possibly a miniature example of a finished graph).
The researcher needs to introduce the process to the participant:
I am interested in the activities and experiences you like or don’t like and how they line up.
Could you draw a graph of how you felt during the project and write down which activities were connected to these feelings? A finished diagram can look like this [shows example]. While you draw, just explain to me what you draw so I can understand it better.
Or, more abstractly:
- State your interest
- Explain how to use the diagram
- Ask for explanations during the drawing
While the diagrams themselves are an important outcome, don't forget to note the participant's utterances and/or to make an audio recording.
Make it suit your needs
There are many different forms of mappings which you can use for user research.
Just like in the example above you can ask your participants to draw the course of their feelings over a specific time and to annotate this graph.
Ask to map the connections and tasks of people the participant works with.
Note that the diagram does not just include individuals the participant directly worked with; books are mentioned, too. If participants ask whether they may include something (e.g. books instead of only persons), encourage them to do it and use the additional data.
Ask to draw and annotate a diagram of the workflow: What tasks need to be done? Why? How are decisions made?
If possible, ask for demonstrations of the activities while they are added.
… your own creations
It is a great idea to create your own template if existing ones do not suit your needs. Just keep in mind that it should be easy to fill out, and test it at least by using it yourself – or, ideally, in a pre-research session.
I think that this method has several advantages for beginners in user research, since the template provides some predefined structure:
- Participant and researcher alike feel more secure: it is rather clear what can happen and what is expected
- Asking about processes and motivations can be tough for beginners; the template can help to learn about these.
In addition, there are some advantages because of the graphical nature of the research:
- It is easy to point to data like »you wrote/said/drew… this – what does it mean?«
- The data can be analyzed graphically by comparing patterns.
Annotate the diagrams
Annotate the mappings directly after the research session, when your memory is still fresh. Supplement the drawings from your memory, add utterances which you remember and rewrite annotations which are unclear. If you don't, you will miss out on some data, and the diagram will be hard to understand once you can't decipher the participant's handwriting.
Search for patterns
To find patterns across participants, put all diagrams of the same kind side by side. See if there are similarities or contrasting patterns; find recurring data as well as unique events.
Example Analysis on the diagrams above (click to enlarge)
- In all but one diagram, the onset of a project seems to be a good experience.
- In three of the five diagrams there is a significant decline after the project's first, motivated phase. The reasons are: seemingly unsolvable problems, the project not going according to plan, and unhappy clients. A commonality is that the named reasons for the bad mood are seemingly out of the participants' control.
- For all participants the project ends well, and four of them seem to be very happy at the end: finishing seems to be a good experience.
Reasons for being happy:
- project onset
- the design works as hoped or expected
- project end
Reasons for being unhappy:
- »unsolvable problems«
- client does not like the design
- self critique
- insufficient starting material
I hope that the examples of possible diagrams and their analysis give you an idea of how to use diagrams for your own research. If you do, write a comment about your experiences.
Edit (2.1.2015): The text, photos and illustrations may be used according to the Creative Commons BY 4.0 license.
My Beginner’s Guide to Finding User Needs is online.
The text is aimed at students and professionals who want to learn about qualitative user research. It is written in a hands-on manner and the described methods factor in that you might neither have a big team nor a big budget.
If you want to suggest improvements, you can write me (dittrich.c.jan AT gmail.com) or file an issue on github.
Update 2.11.'14: typo fixed.
I have a love/hate relationship with design method frameworks – those diagrams and flows showing what happens after which step in a design process: »Human Centered Design« or »Design Thinking«, to name two.
Design method frameworks provide an overview of the process and give suggestions of how design can (or should) be approached. This includes researching user needs before creating solutions and testing prototypes before investing lots of time. They might be particularly useful for beginners who can use them as a guide for their design work.
But these models may not match what designers actually do: having a model of how a process can or should be is one thing, putting it into practice is another. Thus it would be interesting to know whether these suggestions are followed, and if not, why.
To find this out, one could observe designers and protocol their actions. Such »protocol studies« are an established method in the field of empirical design research. Helen Sharp and her co-authors used this method to study novice interaction designers. All students were taught »the definition of usability and user experience goals, exploring the problem space and challenging any underlying assumptions before starting to produce a solution«. Despite this, the findings suggest that »Participants immediately suggest a solution when they are given a design problem.«
The study is not big, but it fits the findings concerning other creative and problem-solving activities: mechanical engineering, circuit design, architecture etc. In these fields, studies suggest that design does not happen in clear-cut, sequential steps. In particular, the problems which are solved are often seen as »moving targets«: the problem to be solved is often only preliminarily defined, and it does not depend solely on the external requirements (client x needs y) but co-evolves with the solutions. So design is highly opportunistic, and it is opportunistic not just in areas where something should look stylish but in seemingly »hard« areas like mechanical engineering or programming.
The usefulness of models is thus debatable: they may help (novice) designers to include all necessary steps in a meaningful order, so that they don’t get stuck (e.g. in deciding which colors to use before they know whether they want to design a magazine or a website), but on the other hand they may wrongly suggest that the steps in design are not interwoven. In addition, following a model may add mental bookkeeping costs (e.g. we are now in the prototyping phase with part x but part y is still… so we need…).
Design method frameworks can help (novice) designers to give their project a meaningful structure. But to be more useful they should also consider the opportunistic and intertwined kind of work that seems to be part of most design practice.
- Helen Sharp, Nicole Lotz, Richard Blyth, Mark Woodroffe, Dino Rajah, and Turugare Ranganai. 2013. A protocol study of novice interaction design behaviour in Botswana: solution-driven interaction design. In Proceedings of the 27th International BCS Human Computer Interaction Conference (BCS-HCI '13), Steve Love, Kate Hone, and Tom McEwan (Eds.). British Computer Society, Swinton, UK, Article 18, 10 pages.
- Willemien Visser. 1990. More or less following a plan during design: opportunistic deviations in specification. Int. J. Man-Mach. Stud. 33, 3 (August 1990), 247-278. DOI: 10.1016/S0020-7373(05)80119-1
- Nigel Cross. Design Thinking: Understanding How Designers Think and Work. Bloomsbury Academic, New York, 2011. ISBN 978-1-847-88846-4.
- Added References on 28.10.2014
A major problem of prototyping in code is that the prototype may be mistaken for a (nearly) finished product and that the focus shifts from the interactions towards colors and fonts.
Some mockup tools (Pencil, Balsamiq) provide sketchy stencils to create mockups which look unfinished. I tried to recreate a similar style which can be applied to Bootstrap designs as a theme, including styles for buttons and widgets.
UPDATES: 21.10.14: Added Freifunk-Example
Reading the literature on interviewing, almost everybody recommends audio recording as well as additional video. Nevertheless, I wondered how important these recordings really are, especially considering that some people never learn to value a user-centered design process if they find their first steps too tedious.
So: is it worth it? (considering that you might have little time/money)
The answer is »it depends« (as usual), but I wanted to get some insight into what it depends on. Sadly, I could not find the key to the rocket science building, so all you get are my experiences from several small case studies.
What I did
I did several interviews as part of three different projects. In each interview I took notes and recorded the audio. After each interview I complemented the notes from memory as soon as possible: I filled in gaps and extended bullet points into more verbose descriptions of what was explained to me or what I observed. Then I transferred the information into a text file.
I also listened to the recording and wrote all coherent statements and explanations from the audio into a text file (thus, no word-by-word transcript). Having my in-interview notes, my from-memory notes, and what I got by going through the recordings, I could compare the results of each step.
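If the notes from each step live in plain text files, a quick line diff shows what a later step actually added. A minimal sketch in Python (the function name and the line-based granularity are my own assumptions for illustration, not part of the workflow described above):

```python
import difflib

def added_by_audio(notes_text, audio_text):
    """Lines that appear in the notes made from the audio recording
    but not in the earlier in-interview notes: a rough way to see
    what re-listening actually contributed."""
    diff = difflib.unified_diff(
        notes_text.splitlines(), audio_text.splitlines(), lineterm="")
    # keep only added lines, skipping the '+++' file header
    return [l[1:] for l in diff
            if l.startswith("+") and not l.startswith("+++")]
```

For real notes you would compare at the level of statements rather than raw lines, but even this crude diff makes the added details visible.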
What I found
Going through the audio made a difference, but not as big a one as I assumed. What it added were mostly minor details. Among the five interviews there was one in which I relied heavily on the recordings. In one of the interviews I had seemingly conflicting statements; I was able to understand and clarify them by listening to the recordings.
The main points were already in the written notes and/or their complements.
Overall, the notes and their completion after the interview already provide a usable basis for user research, even if no audio was recorded. However, if it is possible, you should record nevertheless: it can happen that your notes are not useful (as happened to me in one of the interviews), and you may need to review some of the recording to resolve conflicting statements or get a better understanding.
I rely mainly on open source software. I would use proprietary software as well, but I think open source has some advantages:
- being able to just install the software on several computers
- being able to share data with a team easily
- easing the entry: nobody is enthusiastic about trying something that comes with a hefty price tag
However, while there is open source software for almost anything, it gets a bit sparse in regard to software for qualitative methods. Here is what I use:
Easytranscript: A small application for transcribing audio. Playback can be controlled by shortcuts. Especially useful are the timestamps it sets in the text: clicking on a timestamped part of the text jumps to the corresponding part of the audio file.
Before using Easytranscript I used VLC together with an editor. When doing this, it is best to use global shortcuts in VLC (e.g. some F-key for start/stop and jump back) and word autocompletion in the editor.
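Timestamps like these are also easy to work with in your own scripts. A minimal sketch in Python, assuming a [hh:mm:ss] stamp format (the actual format your transcription tool writes may differ, so adjust the pattern):

```python
import re

# assumed timestamp format: [hh:mm:ss]
STAMP = re.compile(r"\[(\d{2}):(\d{2}):(\d{2})\]")

def stamp_to_seconds(line):
    """Return the audio offset (in seconds) of the first timestamp
    found in a transcript line, or None if there is none."""
    m = STAMP.search(line)
    if not m:
        return None
    h, mi, s = (int(g) for g in m.groups())
    return h * 3600 + mi * 60 + s
```

With offsets in seconds you can, for example, jump a media player to the quoted passage or sort coded quotes by their position in the interview.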
RQDA: An R-based application for coding texts and retrieving text parts which have been coded. It has a simple GUI, although it’s not totally conforming to standards. The installation requires R and GTK+ (the installation is described here – it needs some dependencies, but no manual configuration is required).
RQDA only takes plain text, not the .rtf files written by Easytranscript, but copy and paste via a text editor resolves the problem.
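If copy and paste gets tedious for many transcripts, stripping the RTF markup can be scripted. A very rough sketch in Python, good only for simple transcript files; the regex-based approach is a deliberate simplification, and complex RTF (tables, embedded images, escaped characters) needs a real converter:

```python
import re

def rtf_to_text(rtf):
    """Crude RTF-to-plain-text conversion: removes control words
    (like \\par or \\fs24) and group braces. Sufficient for simple
    transcript files, not a general RTF parser."""
    text = re.sub(r"\\[a-z]+-?\d* ?", "", rtf)  # control words
    text = re.sub(r"[{}]", "", text)            # group braces
    return text.strip()
```

Run over each .rtf transcript, this yields plain text files that RQDA can import directly.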
An alternative to RQDA might be CATMA (http://www.catma.de/) which runs on every computer that has Java installed. I have not tried it yet.
OpenOffice Writer: Serves me well for writing reports, consent forms and whatever else I want to print on paper.
Together with Zotero it is quite good for writing scientific texts as well. Many Linux users will opt for LaTeX instead, though it is harder to get into.