Buy-A-Feature is a great tool for gauging your end-users' interest in a set of features and understanding how they make trade-offs between them. It's effective at providing a high-level prioritization based on your customers' preferences, perceived value, and expectations. There are many versions of this tool, and many ways it can be used.
This method helps you understand which of a set of competing features customers value most. It's especially useful when the business has a list of features, alternative options, or product attributes and wants to validate with customers which options they perceive as most valuable.
Buy-A-Feature can be employed as a relatively quick method to evaluate a large set of features with multiple people in a limited amount of time. Customers "buy" the features they would like in the next release using virtual money you give them.
Luke Hohmann provides a great description of it in the book Innovation Games.
In-person, team-based format
This is useful when you can work with a group of end-users or stakeholders together in the same room. In its simplest form, it's composed of a few steps:
Preparation: Start with a list of features (that you have prepared upfront). Assign each feature a "price": a number that represents the development effort, time, or cost required to build that feature.
Step 1: Give each participant $100 (or 100 Euro, or whatever currency you prefer). This is not real money, but a virtual currency. Think of it as Monopoly money.
Step 2: Ask your customers to buy as many, or as few, features as they like. They can spread the money across multiple features, or put it all on one. They can also pool resources to buy features that would be too expensive for just one person.
Step 3: Once finished, identify the features that were purchased (those that received at least enough money to cover their "price") and those that were rejected.
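If you track each participant's spending on a worksheet, tallying the results is simple arithmetic: sum the money each feature received and compare it to its price. Below is a minimal sketch of that tally in Python; the feature names, prices, and contribution amounts are made-up examples, not part of the method itself.

```python
# Minimal sketch of tallying a Buy-A-Feature session.
# Feature names, prices, and contributions are illustrative only.

prices = {
    "Export to PDF": 60,
    "Dark mode": 30,
    "Single sign-on": 150,  # costs more than one participant's $100 budget
}

# Each participant's allocation of their $100 budget.
contributions = [
    {"Export to PDF": 60, "Single sign-on": 40},
    {"Single sign-on": 80, "Dark mode": 20},
    {"Single sign-on": 30, "Export to PDF": 70},
]

# Sum what everyone spent on each feature.
totals = {name: 0 for name in prices}
for person in contributions:
    for feature, amount in person.items():
        totals[feature] += amount

# A feature is "purchased" if the pooled money covers its price.
for feature, price in prices.items():
    status = "purchased" if totals[feature] >= price else "rejected"
    print(f"{feature}: {totals[feature]} / {price} -> {status}")
```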
TIPS
– Vary the prices; not every feature should cost the same.
– The total cost of all features should be higher than all the money you give your customers combined, so that not every feature can be purchased, forcing prioritization. As a ballpark, assume your users can only buy one-third to one-half of all features (see the sketch after this list).
– You can force collaboration by having some features cost more than what a single user has to spend, so that customers need to pool their money together to buy them.
– The optimum group size is 4 to 8 people to ensure collaboration. If you have more than 10 people, split them into two groups.
– The list of priorities you get tells you what customers value, not necessarily what you should build next. You may still need to weigh other factors, such as feasibility and effort, before proceeding to development.
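Getting the prices right before the session matters more than it looks. Here is a minimal sketch of a pre-session sanity check based on the pricing tips above; the feature prices, group size, and per-person budget are illustrative assumptions, and the one-third to one-half ballpark is applied to total cost as a rough proxy.

```python
# Minimal sketch of a pre-session pricing sanity check.
# The prices, group size, and budget below are illustrative assumptions.

prices = [100, 150, 200, 250, 300, 350]  # one "price" per feature
participants = 6
budget_per_person = 100

total_budget = participants * budget_per_person  # 600
total_price = sum(prices)                        # 1350

# Using total cost as a rough proxy for the "one-third to one-half of
# all features" ballpark: the group should only be able to afford a
# fraction of the catalogue, which forces trade-offs.
affordable_fraction = total_budget / total_price
print(f"Group budget {total_budget} covers ~{affordable_fraction:.0%} of total cost")
assert 1/3 <= affordable_fraction <= 1/2, "Adjust prices or budgets"

# At least one feature should cost more than any single participant can
# spend, so buying it requires pooling money.
assert max(prices) > budget_per_person, "Price at least one feature above $100"
```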
Online format
If you cannot get your users together in a room, you can create an alternative version of the tool using online surveys. List all the features and add a box next to each one where participants enter an amount. Invite your users to invest their virtual money in as many or as few features as they like, making sure the total invested by each participant does not exceed $100.
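Most survey tools won't enforce the $100 cap for you, so it's worth validating responses before analyzing them. A minimal sketch follows, assuming you've exported each response as a mapping of feature to amount; the feature names and numbers are invented for illustration.

```python
# Minimal sketch of validating exported survey responses.
# Feature names and amounts are illustrative assumptions.

budget = 100

# One dict per respondent: feature name -> amount they entered.
responses = [
    {"Export to PDF": 50, "Dark mode": 50},
    {"Single sign-on": 70, "Export to PDF": 40},  # over budget: 110
]

for i, answer in enumerate(responses, start=1):
    spent = sum(answer.values())
    if spent > budget:
        print(f"Respondent {i} spent {spent}, over the {budget} limit; "
              "follow up or discard the response")
```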
Be careful, though, because this method has several limitations:
– Since this is done online, you don’t get direct feedback or the opportunity to discuss alternative solutions that your users may come up with
– There can be inherent bias depending on how each item is described, or on how each participant understands it
Despite the limitations, this can be an easy and quick way to get a sense of what your users care about. You can then take it from there for further investigation.
Ranking features for two competing sets of users at Capital One
At Capital One we used this method to learn about priorities for a new tool we were planning. The end-users were the bankers in our retail locations, so we invited a few of them to provide their input. We also invited some of their executive management. We had purposely selected these two different groups because we expected their priorities to diverge, as each group had different objectives and incentives, and the tool highlighted these differences.
In fact, the preferences the two groups expressed for the features they wanted in the new tool were largely in competition. Bankers wanted features that made their job easier and helped them serve customers better. Management wanted features that streamlined processes and improved efficiency.
We built the Buy-a-Feature tool using an online survey system, and then asked everyone to indicate how much money they'd be willing to spend on each feature, up to a maximum of $100 total.
We discarded the features that had not received much investment and focused on those that had been selected by either group. We then checked each feature for feasibility and effort, and ranked them. Some features had received very high approval from the users but also required a very large effort to build.
We decided to focus on the features that had received high approval and required relatively small effort to build (a high Value-over-Effort ranking). These made our MVP list; the rest were prioritized for future releases.
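A Value-over-Effort ranking can be as simple as dividing the money a feature attracted by its estimated effort and sorting in descending order. The sketch below illustrates the idea with invented feature names, investment totals, and effort estimates; it is not the actual Capital One data.

```python
# Minimal sketch of a Value-over-Effort ranking.
# Names, "value" (money invested), and effort estimates are illustrative.

features = [
    {"name": "Quick customer lookup", "value": 180, "effort": 3},
    {"name": "Automated reporting",   "value": 150, "effort": 8},
    {"name": "Guided onboarding",     "value": 90,  "effort": 2},
]

# Rank by value divided by effort; highest ratio first.
ranked = sorted(features, key=lambda f: f["value"] / f["effort"], reverse=True)

for f in ranked:
    print(f"{f['name']}: value/effort = {f['value'] / f['effort']:.1f}")
```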