I originally wrote this on RemoteBase. You can now read it here because RemoteBase is no longer live.

Recently I redesigned the filters on RemoteBase to improve the user experience. In the process, I was able to see how our cognitive bias towards the status quo might be preventing us from improving our users’ experience.

In this post, I would like to share my story of coming up with the new design, and relate it to the endowment effect to show why we sometimes choose not to improve our user experience, and how we can break free of that mindset.

The Original Design

The purpose of the filters on RemoteBase is simple: allow users to filter remote companies and jobs using the criteria that are important to them. Yet, the filters were not completely true to that simple purpose, because they did not easily accommodate the common use case of the majority of RemoteBase users.

an animated gif showing the original design
The original filters on RemoteBase

As you can see in the picture above, the filters were a collection of clickable buttons and text inputs. As users clicked the filters or changed the input values, the results at the bottom would automatically update.

To view more filters, users could click the “MORE FILTERS” button at the bottom right. The full set of filters was revealed only after expanding twice.

Redesigning the Filters

From the beginning of RemoteBase, some things about the filters bothered me. I sensed that I could do something to improve them, but I was not sure what.

The important features did not seem easily accessible. For instance, many users of RemoteBase, including me, are developers. It is important for us to be able to search the remote companies and jobs by the technology we want to use.

But what features could be considered ‘important?’ Without statistics illustrating the usage pattern, no sensible answer could be given to such a question.

With no answer in mind, I was discouraged from implementing a new filter UI. The possible gain from shipping a new version did not seem to justify the possible loss of shipping a worse one. So I decided to stick with the original design.

Measuring the Usage Pattern

Even though the filters were functional, a series of disappointing user test sessions distressed me more and more. My complacency about the status quo turned into an eager curiosity: what could I do to improve the filters? I decided to answer the golden question of which features were important to my users.

I started anonymously recording which filters were being used and which were not. It took me no more than an hour to implement the measurement. Here is how I did it:

Whenever a user clicks a filter, RemoteBase searches for the companies that match the current state of the filters. Technically speaking, the following Redux action creator, fetchCompanies, is dispatched every time a filter is clicked.

export function fetchCompanies(client = defaultClient, isServer) {
  return (dispatch, getState) => {
    dispatch(startLoadingCompanies());

    const { companyFilters: { sort, selection, search } } = getState();
    const query = getQuery(selection, search);
    const params = { ...query, sort };

    client.get('/companies', { params })
      .then(() => {
        // update the result
      })
      .then(() => {
        if (!isServer) {
          client.post('query_stat', { data: { ...selection, ...search } });
        }
      })
      .catch(e => console.log('Error fetching companies', e.stack));
  };
}

I added a bit of code to fetchCompanies to send the search data to the server. It can be found in the second .then call, where the client posts the current state of the filters to the /query_stat endpoint. The current state is represented by the JavaScript objects selection and search.

selection maps each filter name to a boolean value, set to true if the filter is selected and false otherwise. search holds the freely typed user inputs.

// selection
{
  is_hiring: false,
  official: false,
  has_retreats: false,
  vc_funded: false,
  bootstrapped: false,
  ...
}

// search
{
  name: '',
  ...
}

These objects were already used internally by the app, so there was little extra coding to do; I could simply post them to an API endpoint.

Once the /query_stat endpoint received this data, it simply saved it into MongoDB. MongoDB was a perfect fit here because I needed to dump unstructured data into storage. I did not care about the schema; I could always figure out how to use the data later.
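The server code is not shown here, but a minimal sketch of such an endpoint, assuming Express and the official MongoDB Node.js driver, might look like the following. The database name, URL, and port are illustrative; the filters collection name matches the shell session further down.

const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
app.use(express.json());

// Connect once at startup; the database name and URL are illustrative.
MongoClient.connect('mongodb://localhost:27017').then((mongo) => {
  const filters = mongo.db('remotebase').collection('filters');

  // Dump the posted filter state as-is: no schema, no validation.
  app.post('/query_stat', (req, res) => {
    filters.insertOne(req.body)
      .then(() => res.sendStatus(204))
      .catch(() => res.sendStatus(500));
  });

  app.listen(3000);
});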

After two weeks, I was able to accumulate about 14,000 search patterns.

> db.filters.count()
14489
> db.filters.find()
{ "_id" : ObjectId("57cf7879eaef3cfd6c650d8f"), "hiring_region" : "Worldwide", "vc_funded" : false, "collaboration_methods" : { "Trello" : false, "Jira" : false, "Github" : false, "Dropbox" : false, "Google Drive" : false }, "name" : "", "has_retreats" : false, "is_agency" : false, "technology" : "", "is_hiring" : true, "is_standalone" : false, "asynchronous_collaboration" : false, "official" : false, "team_size" : { "gte25lte50" : false, "gt50" : false, "lt10" : false, "gte10lte25" : false }, "communication_methods" : { "Skype" : false, "FlowDock" : false, "Slack" : false, "HipChat" : false, "Email" : false }, "bootstrapped" : false }
...

It was time to extract some information from the data. I wrote a simple script to eliminate inactive filters from the result set, which gave me a nice JSON array representing the filters that were active at the time of each search.

[
  {
    "is_hiring" : true
  },
  {
    "technology" : "Salesforce",
    "is_hiring" : true
  },
  {
    "is_hiring" : true
  },
  ...
]
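The elimination script itself is not shown here, but a minimal sketch in Node, assuming the same filters collection, could look like this. keepActive, the "active" rule (true or a non-empty string), and the flattening of nested groups are illustrative; analytics.json is the file the plotting script below reads.

const fs = require('fs');
const { MongoClient } = require('mongodb');

// A filter counts as "active" when it is true or a non-empty string.
// Nested groups such as team_size are flattened into the same object.
function keepActive(doc) {
  const active = {};
  for (const [key, val] of Object.entries(doc)) {
    if (key === '_id') continue;
    if (val === true || (typeof val === 'string' && val !== '')) {
      active[key] = val;
    } else if (val && typeof val === 'object') {
      Object.assign(active, keepActive(val));
    }
  }
  return active;
}

MongoClient.connect('mongodb://localhost:27017').then(async (mongo) => {
  const docs = await mongo.db('remotebase').collection('filters').find().toArray();
  const sessions = docs.map(keepActive);
  fs.writeFileSync('analytics.json', JSON.stringify(sessions, null, 2));
  await mongo.close();
});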

I thought a bar graph would be a nice way to make sense of that array. So I wrote another script:

import json
import matplotlib.pyplot as plt

count = {}

def increment_count(key):
    if key in count:
        count[key] = count[key] + 1
    else:
        count[key] = 1

def main():
    with open('analytics.json') as f:
        data = json.load(f)

    # session represents a single search
    for session in data:
        if session is None:
            continue

        # count how many searches used each filter
        for key in session:
            increment_count(key)

    plt.bar(range(len(count)), count.values(), align='center')
    plt.xticks(range(len(count)), count.keys())
    plt.show()


if __name__ == '__main__':
    main()

And the script gave me a graph:

A bar graph showing the usage statistics
The graph shows that users used some filters far more than others.

From Data to UI

The ‘Hiring’, ‘Technologies’, and ‘Team size’ filters were used the most, while the other filters were mostly ignored. I could also tell that the ‘Technologies’ filter was important to my users, because many searches included it even though it was not shown by default.

Based on this observation, I dropped most of the infrequently used filters and condensed the rest into a single UI component. This way, users can see and use all the filters without having to expand the list.

An animated gif showing the new filter
The new filter
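The component code is beyond the scope of this post, but assuming a React front end, the condensed version might boil down to a single component along these lines (the component and prop names are illustrative):

import React from 'react';

// A single, always-visible filter bar: just the filters the data
// justified keeping, with no “MORE FILTERS” expansion step.
function FilterBar({ filters, onToggle, onType }) {
  return (
    <div className="filter-bar">
      <button
        className={filters.is_hiring ? 'active' : ''}
        onClick={() => onToggle('is_hiring')}
      >
        Hiring
      </button>
      <input
        placeholder="Technology"
        value={filters.technology}
        onChange={(e) => onType('technology', e.target.value)}
      />
      {/* Team size options would follow the same toggle pattern. */}
    </div>
  );
}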

After deploying the new version, I carefully observed the audience statistics on Google Analytics for the next few days. The new design did not negatively affect the numbers, so I decided to scrap the original design in favor of the new one.

Endowment Effect

While thinking back on the process of redesigning the filters, I cannot help but think of the endowment effect.

In behavioral economics, the endowment effect suggests that people tend to value something more just because they own it. Even when we might be better off switching, we are sometimes too busy avoiding possible losses to capitalize on potential gains.

I succumbed to this status quo bias by being reluctant to change the original filter design. Although I knew the filters were inefficient, I did not have enough information and was afraid of breaking whatever was working. I started looking for ways to improve the design only after my disappointment with it, fueled by the user test sessions, grew too strong.

My initial reluctance and subsequent motivation for change illustrate a possible cause of the endowment effect in user experience design, and a way to overcome a bias that can stall the evolution of a product.

The Cause

We might be biased to keep the status quo in user experience design because we do not have enough data to motivate us. The lack of data means either that we do not know something is broken, or that we know something can be done but are not motivated enough to take risks and invest effort in improving the design.

Our minds tend to be loss averse: we hate losses more than we like gains. In user experience design, such loss aversion can cause us to settle for a suboptimal design instead of trying to improve it.

A Solution

To avoid falling victim to this bias, I think we can begin by collecting two kinds of data: empirical and statistical. These data will motivate us to overcome the endowment effect and start improving our user experience.

Empirical data indicates that there might be room for improvement. It is a subtle signal that something is inefficient. In the case of RemoteBase, the empirical data came from the user test sessions. Its accumulation gave me the impression that something was broken, and finally motivated me to take action.

Statistical data can be used to improve the design in a scientific manner. It helps us objectively assess the potential gain from a redesign and overcome our instinctive aversion to potential losses.

Like empirical data, statistical data motivates us to take action, by providing a clear argument for why and how we should improve our users’ experience. For instance, the bar graph of active filters on RemoteBase made a clear case for the redesign, and I was able to follow its direction to get the work done.

Conclusion

Even when some interfaces are functional, there is often no good reason why they are the way they are. Some user interfaces are likely the way they are because they have always been that way. And they have always been that way, because they have always been so, ad infinitum.

Such a circular explanation leads nowhere. But I feel we sometimes choose to put up with it because we do not have the data to show us why and how to take the next step and improve the user experience.

Yet as we begin collecting the data, both empirical and statistical, we might find enough motivation to overcome our natural inclination to ascribe more value to the current design just because we have it.