Highlights from UXRConf 2023

Key insights from one of the most important UXR conferences of the year.

Beverly Vaz
UX Collective

--

One of the most important UX research conferences of the year took place this past week. The conference was held remotely over three days (June 7–9), with the first day dedicated to demos of various tools and technologies used in industry. Over the next two days, we heard from some amazing speakers on a variety of topics. The speakers came from varied professional backgrounds and included industry leaders and key research groups in the field.

UXRConf logo
UXRConf was held from June 7–9 this year (Image Credit: UXRConf 2023)

Democratizing Research: Take 2

The first session of the day was by Lauren Ruben, a Staff Researcher at Slack. She has over 10 years of experience in user research, is passionate about research education, and has built research programs at Airbnb and Slack.

Her talk focused on enabling product and design partners at organizations to conduct their own research and how we, as researchers, can go about achieving this.

Lauren told us that, as the sole researcher on her team, she had to make difficult decisions about which projects to take on, prioritizing the most strategic ones. As a result, the surplus demand for evaluative research went unanswered. I’m sure this resonates with many of us, given the competing priorities and short timelines of many research requests.

The solution to this dilemma is research democratization. Democratizing research involves providing the scaffolding that enables research partners to conduct their own research. However, her first attempt at creating such a program at Slack did not produce the results she hoped for, so she took a step back and tried again. Fortunately, she was successful the second time around. In this talk, she shared her nuggets of wisdom and walked us through the key ingredients of a successful research democratization program.

Key Principles:

  • 🚧Establish guardrails: Identify the research demand and outcome you want to build for.
    In Lauren’s case, it was actionability. Having this chalked out enabled her to decide which methods and audience to focus the program on. In her previous attempt, the program took a broad approach to educating people on research methods, and people got lost in the ambiguity of research. Scoping down the methods enforced the actionability of the program. It’s also important to ask yourself who your program should best serve, to decide on the audience you want to build for.
  • 📝Tailored Education: Teach only what your audience needs to be effective.
    The second iteration of her program resulted in a two-hour training session (compared to the earlier day-long workshop). This time she made the program less abstract and tailored to participants by incorporating real research questions used at Slack, grounding it in real life and making it more actionable.
  • 📑Documentation & Process: Having good documentation and a streamlined process helps with the self-serve aspect of a program, ensuring that the output of research is consistent.
    To achieve this, she created templates and guidelines for conducting and sharing research, which held people accountable, standardized the process, and legitimized the program. Scoping down the research methods made creating templates feasible, which wasn’t possible in the broad approach of the first iteration of the program.
  • 🔍Role of researchers: Define the role researchers play in the program.
    In Lauren’s case, there is a codified support system that program participants can rely on to seek help as they go about conducting their own research. Moreover, all researchers at Slack contribute to the program.

Rogue research is probably happening around you… we in research get a choice: do we want to… turn a blind eye to what might be happening as a result, or do we want to teach safe research and empower those around us? The choice is yours.

Metadata for Mega Impact

This talk was presented by Hannah Barbosa, the Head of Research Operations at AWS. Hannah has previously held various leadership positions while working on the Amazon web app.

Hannah spoke of the importance of metadata on projects and how it helps track metrics and the impact of user research in an organization.

Hannah recalled a time when she needed to win stakeholder buy-in on the impact and validity of research. After a year of effort, during which she made sure the stakeholders were involved in the research process, she got them on board as research advocates. However, that year had converted just two stakeholders, and she collaborated with many more.

This got her thinking about more efficient ways to make a case for research. Leveraging metadata on research projects to tell a bigger-picture story about research is one such way.

Key Highlights:

A table to keep track of various project data points across all projects
Keep track of metadata across projects using a table shared across the research team (Image Credit: Beverly Vaz)
  • 📝Keep track of various project data points across projects.
    The project data points worth tracking are unique to each organization. However, if stakeholders have run into issues while discussing research, metrics that would help them in those situations make a great addition!
  • 📈Analyze your metadata to uncover insights
    The metadata from your projects is like your research data — analyze it for trends across projects. Are you reliant on a few methodologies? Why have other methods not been explored? How much time is spent on a project?
    These gaps can then be used to make a case to management. Moreover, putting forth requests in this manner helps communicate with leadership in a way they understand best — through data.
    Keeping track of the amount of time taken across projects will also help with crafting research plans.
  • 📢Socialize your metadata
    Spread awareness about your metadata by talking about it in research readouts. Don’t wait to nail it perfectly before you start sharing your insights.
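The trend analysis Hannah describes needs nothing more than a shared spreadsheet, but here is a minimal Python sketch of the idea. All field names and project records below are made up for illustration; the point is simply that once metadata is tabular, questions like “are we over-reliant on a few methods?” and “how long does a typical project take?” become one-liners:

```python
from collections import Counter
from statistics import mean

# Hypothetical project metadata records -- the fields you track
# will vary by organization (these names/values are illustrative).
projects = [
    {"name": "Onboarding study", "method": "usability test", "weeks": 3},
    {"name": "Pricing survey", "method": "survey", "weeks": 2},
    {"name": "Nav tree test", "method": "usability test", "weeks": 4},
    {"name": "Churn interviews", "method": "interview", "weeks": 6},
]

# Are we reliant on a few methodologies?
method_counts = Counter(p["method"] for p in projects)

# How much time is spent on a project? Useful when crafting research plans.
avg_weeks = mean(p["weeks"] for p in projects)

print(method_counts.most_common(1))  # -> [('usability test', 2)]
print(avg_weeks)                     # -> 3.75
```

The same tallies, shared in a research readout, are exactly the kind of data-shaped argument the talk suggests leadership responds to best.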

We see metadata as an investment in our current and future selves.

Roadmapping your way to research success

Paige Bennett spoke to us on creating effective research roadmaps that elevate career success on research teams. Paige is a Senior Research Manager at Affirm and has over 15 years of research experience, having previously worked at both Dropbox and Medium.

Her talk explored the inputs a research roadmap should have for it to be user-centered and influential. She provided us with 3 key principles and various activities supporting each principle to help flesh out an impactful roadmap.

Key Principles:

An overview of a roadmap focused around company initiatives
Plan your roadmap keeping key company initiatives in mind (Image Credit: Screenshot of Paige Bennett’s presentation)
  • 🕒Timing: This principle revolves around being aware of what planning looks like at your organization and what timeline they follow.
    Activities:
    Become aware of the constraints you have to deal with such as the planning cadence of your organization. Use company initiatives as well as input from pod level leads to plan the direction of your roadmap.
  • 👥Team: This principle involves being mindful about the current capabilities and future goals of your team.
    Activities:
    Ask your team members to note their current strengths and growth areas. When planning out your roadmap, pick projects that leverage their existing strengths and provide them with opportunities to grow in areas they want to.
  • 💰Transformation: This principle focuses on knowing the current product investments of the company and what future investment opportunities exist.
    Activities:
    The activities for this principle would involve learning about the major areas the company is investing in and gauging future investment work. This principle also involves aligning with research partners, getting to know the questions they have and the business decisions those questions impact.

Once you have your research partners’ questions, you can look for overlap among them, which helps scope the projects you need to plan and the resources they would require, such as cross-team collaboration. You can also categorize the questions that emerged (as primary, secondary or tertiary) to help focus your team’s research efforts.

Once you have completed your roadmap, you can check to see if the projects you planned out make use of research methods from a variety of areas (like exploratory, strategic, tactical and operational).

Paige also suggested checking that completed and implemented roadmaps have the five major qualities of a successful one: collaborative, strategic, sophisticated, influential and user-centered.

What I loved about this approach was that it took company objectives into consideration, as research roadmaps should, but it built the projects around the strengths and needs of the team members, ensuring the growth of people who put the work into these projects.

Building a Values-Aligned Research Career

The last talk for the day was by Heather Breslow. Heather is the Head of UX Research for Google’s Firebase and has a background in behavioral psychology.

Her talk focused on identifying our values and baking them into the research career paths we create for ourselves. In the recent past, Heather experienced an unfortunate life event that forced her to put her life under review. Through this talk, she shed light on how we can identify the values important to us and use them to reframe what growth and success mean to us.

A sign saying “Difficult roads lead to beautiful destinations”
Heather Breslow’s talk focused on how incorporating values into our career paths benefits all (Photo by Nik on Unsplash)

Key Takeaways:

  • 🌱Values are what keep us grounded. They help us understand our motivations, decide what we do and shape how we do it. Moreover, facing our difficult emotions rather than ignoring them can help us understand what our values are.
  • ⚖️Making value-based decisions is liberating as it helps you define your success and find your center.
  • 🫱🏽‍🫲🏽Collaboration enables better work to be produced. It relies on a sense of community, where you can talk about your successes and challenges in ways that make you feel safe and seen. Having this as part of our practice further helps us as researchers advocate for our customers.

Heather mentioned how she uses values as a guiding principle in the teams she manages and how it is reflected in the way her team collaborates with others. I personally admire this approach as I believe it helps us as researchers to be more empathetic to our end-users and in the ways we build products that cater to them.

Day 2 (June 9th) saw talks on a diverse range of topics, from the impact of AI on the field of UX research to improving data collection on research projects. Here’s a recap of some of the sessions I attended on the final day:

How AI models will change UX Research

Savina Hawkins, a senior UX researcher specializing in artificial intelligence, kicked off the first session of the final day of the conference with a talk that outlined how new developments in AI will impact the field of UX research. Savina has over 10 years of experience in both applied and academic research and 7 years of experience specializing in artificial intelligence.

Here are some of the key highlights from her talk:

ChatGPT user interface
AI will soon become a skilled colleague you can work with at the workplace (Photo by Jonathan Kemper on Unsplash)
  • 🧠AI will become our smartest colleague.
    Given the capabilities of the latest AI models, it’s likely that they will keep growing in linguistic intelligence in the near future and permeate our workspaces.
  • ⚠️However, these models can’t do everything.
    We have already heard of instances where large language models (LLMs) have produced factually incorrect and biased output. This happens because these models produce statistically likely text based on prior input, and at their current level they aren’t reliable sources of information.
  • ⚡They will impact tasks centered around language input and output.
    This will impact our workflows as UX researchers, but not our jobs, as these models become mainstream in our workplaces. How might this manifest? For one, it will become easier to create deliverables as the task of wordsmithing is passed on to such tools, allowing us to focus more on collaborative decision making.
  • 🔮Constitutional AI might be the promise of a harm-free, pro-social future.
    Constitutional AI is a set of ethical principles used to train chatbots so they are more socially and emotionally aware. Once trained, this could result in AI agents becoming mediators, thus enabling human dialog.
  • 💡LLMs will begin to write API queries on their own.
    APIs are a standardized way for different pieces of software to communicate with each other. This capability could mean that all tools will soon be accessible through AI agents, making the agent a single resource to rely on rather than using multiple tools to explore data.
    What this would also imply is that these agents could run data queries for us researchers, automating the manual effort required in analysis. They might also be able to take in more inputs into analysis than what we are currently constrained to.
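To make the API point concrete, here is a toy Python sketch of what “an agent writing an API query” could look like. The hard part, translating a researcher’s natural-language question into a structured query, is the piece an LLM would supply; it is faked here with a keyword lookup, and every endpoint and parameter name is invented for illustration:

```python
from urllib.parse import urlencode

def question_to_query(question: str) -> dict:
    """Stand-in for the LLM step: map a natural-language question to a
    structured query. A real agent would have a language model do this;
    the endpoint and parameter names here are hypothetical."""
    if "drop-off" in question:
        return {"endpoint": "/analytics/funnels", "params": {"step": "signup"}}
    return {"endpoint": "/search", "params": {"q": question}}

def build_api_call(base_url: str, question: str) -> str:
    """Turn the structured query into the URL the agent would send."""
    query = question_to_query(question)
    return f"{base_url}{query['endpoint']}?{urlencode(query['params'])}"

url = build_api_call("https://api.example.com", "Where is the signup drop-off?")
print(url)  # -> https://api.example.com/analytics/funnels?step=signup
```

The researcher asks a question; the agent picks the tool, builds the query and fetches the data — the manual-analysis automation the talk anticipates.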

My takeaway from this talk was that the use of AI in the workplace is inevitable. Rather than seeing it as a threat to our job functions, it is more important to see it as a tool that can be leveraged to improve the work we do.

If history has taught us anything, it’s that resisting technological advancement is ultimately pointless. And if we don’t take a seat at the table and start weighing in on the design of these tools, we could lose the ability to advocate for the designs that actually (em)power us to do our best work.

Garbage In, Garbage Out? Getting Good At Data Collection

Rachel Ceasar talked to us about reflecting on our data collection processes and made a case for using a scientific method approach to how we go about conducting UX research. Rachel is a UX researcher at the Culture of Health + Tech Consulting and is an Assistant Professor at the University of Southern California’s Keck School of Medicine.

She walked us through some of the elements we should try baking into our process and things to be mindful about as we go about planning and carrying out research. She contrasted the typical UX research process against the scientific method and pointed out how the scientific method allows room for error. She further drove the point home by telling us about how using the scientific method in one of her projects helped her team with finding the appropriate sources of information during data collection.

Key Highlights:

  • 🚨Errors are possible at every stage of the research process.
    Breaking down the research process into multiple steps and reflecting on each step can help you mitigate possible errors.
    Once you have come up with your research questions, reflect on whether you have the right ones. When creating your protocol, ask yourself whether your team has the life experiences of the target population, or whether it would be better to bring someone in to consult on your work. Are you asking the right questions, in the right way? Are you excluding anyone through the methods you plan to use to collect data? Are you protecting your participants?
Steps in the scientific method
The steps in the scientific method (Image Credit: ThoughtCo.; https://www.thoughtco.com/steps-of-the-scientific-method-p2-606045)
  • 🪴The scientific method allows room for error.
    Another way to obtain good results is to give yourself room to make errors. Are such opportunities present in your process? Rachel pointed out that the typical UX research process assumes biases are not present and doesn’t allow room for error. The scientific method, however, gives us the opportunity to make errors and improve on them.
  • 🎯Aim to minimize biases throughout the process.
    Does your process include steps to reflect on and refine your biases? Rachel mentioned that her team uses existing frameworks, such as a social identity map, to identify the biases they bring to their work. Leveraging existing tools can help improve the quality of the data you work with.

This talk resonated with things I heard throughout my HCI degree — test and iterate! It made me realize that these are principles you learn about in the design thinking process, and if you come from a background like mine, you are always encouraged to create that low-fidelity prototype and test it before you create something more solid. However, when it comes to the research process, there isn’t a well-defined counterpart to this, and in the industry setting, especially with tight timelines, it’s likely that we don’t plan for opportunities to make errors. In this situation, the suggestion for adopting the scientific method in UX research does feel like a valuable lesson and one that I will try incorporating in my future research projects.

It’s ok to get it wrong and to create opportunities for that.

Researching Personality in Automation

Lauren Stern, Director of Global Insights at iRobot, talked to us about researching personality in automation. Lauren has held various research positions throughout her career at iRobot. In this talk, she spoke about the importance of measuring social perception in automation and whether we’re attending to the right emotional experiences given the context users are in.

She demonstrated that designing for the personality people create for their robots has business impact. However, most products that leverage autonomy are not designed this way, even though these products are fostering more social thoughts and experiences for us. Below are some of the key highlights from her talk.

Key highlights:

Stages in social perception research in robotics
The various stages in researching social perception in robotics (Image Credit: Screenshot of Lauren Stern’s presentation)
  • 📏Measuring social perception in robotics means studying a robot’s social role in its interactions with humans, and users’ reactions to it based on the robot’s attributes. Incorporating this into UX research can have business impact, as it leverages the fact that people tend to give their robots a personality (like a name or nickname), making for a more intuitive experience.
  • 🔍Discovery: In this stage, you are thinking about experiences that don’t exist yet and can thus heavily rely on your imagination. Conducting activities that are not based in technology, such as puppetry (where the robot is a puppet), can help elicit more feedback from end users interacting with it at this stage.
  • 📃Definition: At this point, you are outlining requirements that define a product’s direction. Ask yourself, what should users feel when they interact with your product? This can help come up with personality characteristics of the product that function as requirements for it.
  • 🧠Model Development & Training: This stage involves model refinement through the ML training process. Incorporating user feedback at this stage can improve confidence in a model, and one way to do so is by collaborating with your data team to get this feedback. A potential activity to use here is device perspective taking, where a participant behaves as a device, letting you know about the ways in which they expect it to react.
  • ✏️Design Iteration: As you iterate on your designs at this stage, incorporate social perception feedback into the usability testing you perform. You can do this by asking participants to compare between design ideas using criteria like ‘helpfulness’.
  • 📢Ongoing feedback: As you continue to refine your product, test it periodically against the personality characteristics you came up with earlier, and ensure you see improvements against those metrics.

I really liked learning about this framework, as it focuses on building intuitive human–robot interactions by keeping the human at the center of how the product is designed and baking that into the way research is conducted for such products.

It was wonderful to be a part of this conference and to listen to people’s thoughts on such a wide array of topics, see where our industry is headed and meet more people from the community. It’s definitely served as a lot of food for thought for me!
