This is a series of articles about how to think about a BI tool evaluation and selection, and how to create a weighted scorecard to guide your decision.
Functional Requirements cover the capabilities a tool either has or lacks; put another way, it's a categorization of what you'd like your users to be able to do. This is likely the category of requirements that your Business Users are most interested in.
- Advanced Analytics
- Augmented Analytics
- Dashboarding and Data Visualization
- Data Management
- Data Querying
- Embedded Analytics
- Geospatial Visualizations and Analysis
- Internet of Things (IoT) Analytics
- Mobile BI
Technical Requirements is a category of capabilities about how the tools are deployed, accessed, and secured. This is likely the category of requirements that your IT and Engineering Teams are most interested in.
Vendor Qualification Requirements cover the stability, viability and ease of working with a tool vendor. This is likely the category of requirements that your Finance Team is most interested in.
Why you should trust us
We've spent years gathering and refining our knowledge, traveling the world to learn about new products, trends, and new versions of tools we already know. It's a lot to keep up with, and although there's plenty we don't know, we know enough that we could build a company that does nothing but product research and reviews. We'd rather actually build stuff.
"Those who can't do [consult]" so since we actually can help with tool selection and implementation, we’ll keep our pencils sharp and our sleeves rolled up so we can keep helping companies to get the analytics they deserve.
- We scour to find the best categories and criteria. Some are openly available, while others sit behind "email walls". We add anything we feel is missing, and most importantly, we listen to the community to see what still needs to be added.
- We fill out the information on tools and vendors based on our experience as developers and technologists, not product marketers. (They have their place, but we like brass tacks.)
- With this in hand, we reach out to the vendors to fill in any remaining blanks.
- We scrutinize their responses for "marketing fluff", such as vendors claiming live data connections or real-time data when the feature doesn't actually exist, doesn't function well at scale, or technically works but delivers a poor user experience.
- We build a Weighted Scorecard. Each evaluation is documented to include at least the Evaluation Date, Evaluator, Version number, Product name (since some companies have a suite of products), and Priority / Importance.
- We finalize the scorecard by multiplying out the results as Rating x Priority to get an absolute score.
- We review the results with your team and determine next steps with demos, testing, or a pilot project with your actual data.
Our Scorecard Methodology
Each feature is rated on a scale of 0-3:
0 - Does not apply, or cannot do it
1 - Does it, but not very well
2 - Does it well and/or is an expensive add-on product
3 - Best example in the market
Each rating is then multiplied by a weight reflecting how critical that feature is for your team and use case.
Our default is to use every other number in the Fibonacci sequence ((0), 1, 1, 2, 3, 5, 8…), since the difference between "Nice to have" and "Want to have" is significant, and the difference between "Want to have" and "Mission critical" is even more significant.
0 - Not applicable = x0 points
1 - Nice to have = x1 point
2 - Want to have = x3 points
3 - Mission critical = x8 points
We finalize the scorecard by multiplying out the results as Priority x Rating to get an absolute score. These scores can be compared against other products, but should not be evaluated as, say, 475/650 points, which would imply the tool is 73% "good". We feel this framing is faulty and may unfairly advantage or disadvantage a tool, since many of the potential features may not be relevant to you. It's much better to compare absolute scores across the products in your comparison set and see what stands out.
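The scoring math above can be sketched in a few lines of code. This is a minimal illustration, not our actual tooling; the feature ratings and priorities below are hypothetical examples.

```python
# Priority level -> weight multiplier (every other number in the Fibonacci sequence)
WEIGHTS = {0: 0, 1: 1, 2: 3, 3: 8}

def score_product(evaluations):
    """Sum Rating x Weight(Priority) across all evaluated features."""
    return sum(rating * WEIGHTS[priority] for rating, priority in evaluations)

# (rating 0-3, priority 0-3) per feature for two hypothetical tools
tool_a = [(3, 3), (2, 2), (1, 1), (0, 2)]
tool_b = [(2, 3), (3, 2), (2, 1), (3, 2)]

print(score_product(tool_a))  # 3*8 + 2*3 + 1*1 + 0*3 = 31
print(score_product(tool_b))  # 2*8 + 3*3 + 2*1 + 3*3 = 36
```

Note that the two absolute scores (31 vs 36) are only meaningful relative to each other, not as a percentage of some maximum.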
“All models are wrong, but some are useful.”
At the end of the day, you will be making a decision that is largely subjective. Saying that a feature is “nice to have” vs “want to have” or that a tool does something “but not very well” are all subjective inputs. However, if after the exercise you see that one vendor/tool scores significantly lower than the others, that may be a good reason to drop it from your short list. But if two tools score similarly, we advise to not just select the highest scoring tool. Rather, go back and look at why and where each of those tools scored as they did and make a decision that’s right for your use case.
A note about money
From time to time we may refer to how a particular tool in the market works, but we don't get paid by any of these vendors to get included in any of this. In fact, it costs us a lot of time and money to compile this research.
So… Why are we giving this away for free?
Because if we can hop on a call for a few minutes and help you, or if you can review these materials and go on to successfully implement analytics at your company, then you don't need Mashey. If you feel like you could use some help, give us a shout.
Enjoy the series, and please, reach out to us if you feel like something is missing or misrepresented. We consider this a living set of information that we will come back to and update frequently.
Need help with your BI tool selection? Book a call with us and we’ll see if we can help. If we can point you in the right direction with a short phone call, great! If you’d like to hire us to do an evaluation and selection for you, contact us and we’ll make a selection together.