Avoiding the pitfalls of dashboard data

Richard Tregaskes
Those who rely on dashboards must have absolute trust in the integrity of the data populating them, as well as in the dashboard designers to display that data accurately. Unfortunately, well-meaning designers and the people who populate dashboards with data often make mistakes that compromise this integrity, undermining the dashboard's role as a single source of truth for its users.

Well-designed dashboards are an excellent tool for providing an overview of complex, interrelated data. As users drill down and interrogate the data, the visual presentation of that information informs their decision making.

Inspired by the statistician Nate Silver and his book The Signal and the Noise: Why So Many Predictions Fail—but Some Don’t, I would like to share a few thoughts on how manipulating data, both what goes into a dashboard and how it is displayed, can distort the picture a dashboard presents.

The first area of manipulation is the “fitting” of data. This occurs when dashboard creators or sponsors hold preconceived views of the trend the data should follow. Whether a straight line, a curve or something more complex, they may try to fit that trend to the data points to reinforce their preconceptions. If the dashboard is then used to make predictions, those predictions reflect the individual’s agenda rather than what the data shows.

Closely tied to fitting data is discarding data because it does not fit your models. Every data point collected in a reproducible way is valid, even if the current environment differs from the one in which it was collected. In his book, Nate Silver shares an example from World War II, when the US Navy examined aircraft that returned with battle damage to determine where they should be made more resilient. The analysts only had data from aircraft that were damaged yet survived. How could that data tell them where aircraft needed upgrades? It could not, at least not directly: the aircraft hit in the truly vulnerable areas never came back, so the damage visible on the survivors marked the areas that could absorb hits, telling the analysts where upgrades were not required.
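To see how fitting a preconceived trend misleads predictions, consider a minimal sketch in Python (assuming NumPy is available; the figures are synthetic, not project data). A high-degree polynomial can be forced through historical points almost perfectly, yet its forecasts diverge wildly compared with a simpler model:

```python
import numpy as np

# Twelve months of an illustrative metric with noise (synthetic values).
rng = np.random.default_rng(0)
months = np.arange(12)
actuals = 100 + 3 * months + rng.normal(0, 4, size=12)

# A simple linear fit versus a high-degree polynomial forced through
# the historical points to match a preconceived shape.
linear = np.polynomial.Polynomial.fit(months, actuals, deg=1)
overfit = np.polynomial.Polynomial.fit(months, actuals, deg=9)

# Both describe the historical window; extrapolated six months out,
# the over-fitted curve diverges while the linear fit stays plausible.
future = np.arange(12, 18)
print("linear forecast:  ", np.round(linear(future), 1))
print("over-fit forecast:", np.round(overfit(future), 1))
```

The over-fitted curve "explains" every historical wiggle, which is exactly why it fails the moment it is asked to predict anything.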

Paradigm shifts can also distort data. If you relied on pre-iPhone data to design your website, then ten years later, with mobile users in the majority, you would be poorly positioned to present information in a way most users could access. A number of earlier cellphones offered data services and even internet browsing, but the iPhone was the paradigm shift that drove the surge in mobile data consumption and forced website designers to account for mobile audiences. Another paradigm shift that may affect your forecasting data is the move from purely on-site construction to offsite, prefabricated and modularized elements that are assembled on site. This shifts most quality control (QC) into controlled, factory-like conditions and reduces the number of workers on a construction site, which affects costs and may improve safety statistics as well. When data spanning such a shift is presented to decision-makers, the dashboard should identify the shift so they understand the differing bases and so that any extrapolation accounts for it. After all, how many parabolic growth curves are just the start of an s-curve? And how do you forecast something as disruptive as a pandemic?
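The s-curve question lends itself to a short sketch of its own (again in Python, this time assuming SciPy; the adoption numbers are synthetic). Fitting an exponential to the early phase of what is really a logistic curve produces a forecast that overshoots by orders of magnitude:

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic adoption data that truly follows an s-curve (logistic),
# of which we only observe the early, seemingly exponential phase.
def logistic(t, L, k, t0):
    return L / (1 + np.exp(-k * (t - t0)))

t_all = np.arange(0, 30)
truth = logistic(t_all, L=1000, k=0.4, t0=15)

# Fit an exponential to the first 10 observations only.
def exponential(t, a, b):
    return a * np.exp(b * t)

t_obs = t_all[:10]
popt, _ = curve_fit(exponential, t_obs, truth[:10], p0=(5, 0.3))

# At t=25 the exponential extrapolation vastly overshoots the s-curve.
print("exponential forecast at t=25:", round(float(exponential(25, *popt))))
print("actual s-curve value at t=25:", round(float(truth[25])))
```

Both models describe the observed window equally well; only the assumption about what comes next differs, which is why the dashboard should make that assumption visible.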

Studies show that crowd predictions tend to be more accurate than those of an individual forecaster. So, when a forecast falls too far outside the crowd, some dashboard designers or data owners may adjust it toward the consensus. This is known as rational bias, and it occurs more frequently when the forecaster’s identity is known: they want recognition if they are right and to avoid embarrassment if they are wrong. If manual forecast adjustments are made, document the source and the reason for each one so users can judge how the adjustment may affect their decision making. The only corrections that should be made as a matter of course are fixes to systemic data input errors, such as a decimal point in the wrong place or a typographical error.
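Input errors of that kind can be caught systematically rather than by eye. As a minimal sketch, using only Python's standard library (the cost figures and threshold are illustrative), a robust modified z-score flags suspect entries such as a misplaced decimal point for human review rather than silently correcting them:

```python
import statistics

def flag_outliers(values, threshold=3.5):
    """Flag entries far from the median using the modified z-score
    (based on the median absolute deviation), a robust check for
    input errors such as misplaced decimal points."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    if mad == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - med) / mad) > threshold]

# A misplaced decimal (1047.0 entered as 104.70) stands out clearly.
costs = [1032.0, 1047.0, 104.70, 1051.0, 1029.0, 1044.0]
print(flag_outliers(costs))  # -> [2]
```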

Never rely on dashboard data blindly. Users should be able to trust that the data is accurate, but they should still verify it and question any outlying information that might inform a decision. The rigorous quality assurance process Faithful+Gould employs when developing dashboards ensures the accuracy of the data loaded and displayed, helping us avoid these pitfalls.

With more than 25 years’ experience managing complex multi-disciplinary projects and programs in the USA, Europe and Asia, Richard Tregaskes is a senior project manager in Faithful+Gould’s Consult Group, leading the Public Private Partnership practice for North America. Richard is a Chartered Engineer and Fellow of the Institution of Engineering and Technology and holds a bachelor’s degree in Electrical Engineering from the University of Southampton, an MBA from the Open University and certifications from the New York Institute of Finance and FEMA.
