About children, their psychological problems, their parents and their therapists.

OPEN – reality check: first prototype test with researchers

We conducted a medium-fidelity working prototype test of OPEN with two psychiatrists and two other professionals, all in the role of researchers. The good news: each saw the value of OPEN, and all immediately proposed ways of using it in their practice.

Two important learnings:

  • researchers are skeptical about the ‘findability’ of relevant OPEN research activities
  • a single population of volunteer respondents, even with sophisticated tools for selecting sub-populations, will not be adequate for many researchers

We’ve now uncovered the real obstacles hiding inside the problem.

In the previous four posts, I described the concept of OPEN and our design research using projective cards. I described how we’re focusing on emotion as the key to getting the online tools right. And I talked about our vision of the role of discovery, re-framing and the strength of weak ties.

How do I know it’s there? How do I know it’s useful?

To paraphrase one respondent: “You don’t need to convince me of the value of qualitative research. But even if research done in OPEN has produced valuable results, I don’t see how I could discover them if the studies were done with entirely different populations, and in a different context than my own research.”

Suppose, for example, that the results of an OPEN investigation into the perception of teachers contained something of great relevance for a researcher investigating the empathic capacity of young people in treatment during a specific period. How would that researcher become aware of the OPEN investigation’s existence and relevance?

OPEN is designed to let owners freely build profiles and themes and link them to research in such a way that a connection like this can easily be made. This will have to be done with a combination of smart bottom-up metadata and human beings with the necessary expertise. They will have to build semantic and editorial guidelines, and proactively ask researchers to identify, rate and describe the relevance of seemingly unrelated results. In other words, this is where technology ends and content strategy begins.
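To make this concrete, here is a rough sketch (in Python, with invented names; this is an illustration, not OPEN’s actual data model) of how bottom-up themes and expert-curated relevance links could combine to surface related research:

```python
from dataclasses import dataclass, field

@dataclass
class ResearchActivity:
    """One OPEN research activity, tagged bottom-up by its owner."""
    activity_id: str
    title: str
    themes: set[str] = field(default_factory=set)  # free-form, owner-built themes

@dataclass
class RelevanceLink:
    """An expert-curated link between two activities, with a rating and a
    short rationale written according to the editorial guidelines."""
    source_id: str
    target_id: str
    rating: int      # e.g. 1 (tangential) to 5 (directly relevant)
    rationale: str   # why a seemingly unrelated result matters here

def related_activities(activity: ResearchActivity,
                       catalogue: list[ResearchActivity],
                       curated: list[RelevanceLink]) -> list[ResearchActivity]:
    """Surface candidates via shared themes (bottom-up metadata), then add
    anything an expert has explicitly linked to this activity."""
    by_id = {a.activity_id: a for a in catalogue}
    hits = {a.activity_id for a in catalogue
            if a.activity_id != activity.activity_id and a.themes & activity.themes}
    hits |= {link.target_id for link in curated
             if link.source_id == activity.activity_id}
    return [by_id[i] for i in hits if i in by_id]

# With a curated link in place, the researcher studying empathic capacity
# would find the 'perception of teachers' study even though the two
# activities share no themes at all.
```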

We were right to wait to build the thematic linking and profiling tools until we had seen more of these real ‘scenarios in action’. We now have a much better idea of how to go about it.

Several kinds of respondents

From one respondent, we learned that researchers are more bound to specific time-frames and populations of respondents than we thought. So it will have to be possible to ‘import’ a population while respecting specific constraints (including institutions’ varying requirements for recruiting respondents). In practice, this means creating several different types of respondent, ranging from volunteers who don’t mind being approached by any and all researchers, to those who are limited to a single researcher and activity. Respondents will also have to be offered ways of changing their status, if they want to.
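To make the idea tangible, here’s a minimal sketch (again in Python, with invented statuses and names; a thought experiment, not a finished design) of respondent types and the check a researcher’s ‘import’ would have to pass:

```python
from dataclasses import dataclass
from enum import Enum

class Availability(Enum):
    """How widely a respondent may be approached; respondents can
    change this status themselves at any time."""
    ANY_RESEARCHER = "any"            # open volunteer
    OWN_INSTITUTION = "institution"   # bound by institutional recruitment rules
    SINGLE_ACTIVITY = "single"        # one researcher and one activity only

@dataclass
class Respondent:
    respondent_id: str
    availability: Availability
    institution: str | None = None      # set when bound to an institution
    bound_activity: str | None = None   # set when bound to one activity

def can_approach(r: Respondent, researcher_institution: str, activity_id: str) -> bool:
    """May a researcher 'import' this respondent into a new activity,
    given the constraints the respondent has chosen?"""
    if r.availability is Availability.ANY_RESEARCHER:
        return True
    if r.availability is Availability.OWN_INSTITUTION:
        return r.institution == researcher_institution
    return r.bound_activity == activity_id  # SINGLE_ACTIVITY
```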

Rough edges, but insight

We generally test with more visually refined prototypes. This one was medium-fidelity (most of the functionality was actually up and running, not just mocked up). It was an interesting experience, because it was easy to see past the obvious usability glitches and focus on the goals and concerns of the four respondents.

The next tests will be with OPEN’s research respondents. In addition to basic usability, we hope to find out whether our idea of how they use mobile and desktop media is correct.

About James Boekbinder

interaction designer | filmmaker | self-educated system thinker | first US citizen on the payroll of Lenfilm | researcher | in love with partner from Russia | content strategist | in love with Amsterdam | educator of young designers at Rotterdam University of Applied Science | in love with cat from Greece
