
How to Successfully Use Quality Assurance Reviews for Visualizations

When building new visualizations, or changing existing ones, a peer review of the work can help ensure quality standards are maintained. I’ve created a number of these processes over time, and the first challenge is how to identify the right tests. Then, you want to make the process efficient enough that it doesn’t slow down the team’s responsiveness to business needs.

A plan to successfully implement quality assurance tests for visualizations includes 1) defining the scope and reviewers, 2) creating the checklist, 3) establishing the process, and 4) monitoring the data.

Quality Assurance (QA) vs Quality Control (QC)

To get started, let’s cover some terminology, since it can vary across organizations. I use Quality Assurance to mean reviewing work in-line, before it is released to the user. Conversely, Quality Control is a test performed after the visualization is already completed and in use. I’ve seen Quality Control tests performed either internally or by an external, independent group.

Here, my focus is on the QA review because it is the most important one for the BI team. First, we should always try to catch issues with a new visualization before the user does. In addition, there can be data issues the user cannot see, which creates a risk of incorrect insights being visualized. There are of course cases where a QC test also makes sense. For example, some teams may only have stale subsets of data in a development environment to work with. Consequently, the visualized data and dashboard performance can only be fully vetted once signed off and in production. That’s a great topic for a future post though!

Why Are QA Tests Important?

The obvious answer here is that we are human and even the best data visualization expert can make mistakes! We don’t want those mistakes to impact the insights which the business is leveraging to make key decisions. Having a second pair of eyes on development work is a best practice to catch those inevitable mistakes when they occur.

You will find other, less obvious benefits as well over time. For example, sometimes the data changes without our knowledge, impacting a dashboard that is running in production. The QA review can pick up issues in the visualization unrelated to the new development work being performed. Another benefit is knowledge sharing among team members. Reviewing someone else’s work provides exposure to new techniques the reviewer can learn from. Conversely, the reviewer might have a suggestion to improve the developer’s work.

Define Scope for QA

Not every new dashboard or change to a dashboard should go through your QA review process. The review itself can quickly become a burden to the team if performed on literally everything they do. Imagine updating the heading grammar on a low-impact dashboard, and then having to send that through QA review! The challenge, then, becomes identifying which reports should always receive a QA review.

I recommend solving for this by looking at the criticality of the dashboard. Here are a few considerations when identifying which dashboards should have a QA review:

  1. Is there a legal, financial, or reputational risk if the dashboard is wrong?
  2. Is it being used by your Audit department?
  3. Is it highly time sensitive?
  4. Is it being used by the C-Suite or other company executives?
  5. Is it being sent out to key external partners?
  6. Is it supporting a high-risk process?

Level up by including a field in your Report Inventory identifying those reports which are subject to a QA review. In this way you can validate that the QA reviews are happening as expected for the reports in scope.
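
As a minimal sketch of that idea, assuming a hypothetical rpt.report_inventory table and a qa.review_log table where completed reviews are recorded (all names below are illustrative, SQL Server-style syntax):

```sql
-- Flag in-scope reports in the Report Inventory, then cross-check that
-- flagged reports actually received a QA review. Hypothetical names.
ALTER TABLE rpt.report_inventory
    ADD qa_review_required BIT NOT NULL DEFAULT 0;

-- Flagged reports with no completed review on record
SELECT i.report_name
FROM rpt.report_inventory AS i
LEFT JOIN qa.review_log AS r
       ON r.report_name = i.report_name
WHERE i.qa_review_required = 1
  AND r.report_name IS NULL;
```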

Identify Your QA Reviewers

The strategy here is to have the same people who build visualizations QA each other’s work. You will also want to avoid fixed developer-reviewer pairs, because not all reviewers have the same skill sets: something one reviewer catches could be missed by another with less experience in that area. Treat the review process as another learning opportunity, where issues are found and the developer learns by correcting them.

Select a QA Checklist Format

This might seem like an arbitrary decision; after all, it’s the actual checks being performed that matter most. However, the format will directly impact efficiency, ease of maintenance, and your ability to aggregate review results. Here are a few common options:

  • Excel – The easiest method is to put your checklist into Excel. However, the completed checklists will be very difficult to merge for analysis later, especially as the form changes over time.
  • SharePoint/Web Form – A stand-alone form is moderately easy to create and efficient to use. It also allows you to download the review data for analysis.
  • Intake Integration – The more complicated, but preferred, method is to integrate the QA checklist into your request intake platform. For example, when the request is in QA Review, an on-screen form expands where the review can be completed. This allows you to create rules as a control to ensure the review is completed, and it stores everything in one place.

Create the QA Checklist

This is one of the hardest parts of the process, but hopefully it is a one-time effort! I say “hopefully” because, for one reason or another, I’ve ended up creating many versions over the years. The main driver is typically changing teams: a new team usually means a different reporting environment and structure, so the checklist has to change as well. For example, prepping your data in a SQL database requires different QA checks than prepping it in an Alteryx workflow.

Because the actual tests can vary, I’ll cover the key areas you will want to focus on from a high level.

1) The Data

Having clean data is critical to the business users making decisions off your insights. As such, we want to focus on three very important tests on the data sets feeding your visualization (a SQL sketch of each follows the list):

  • Uniqueness – Ensure there are no duplicates in your dataset. In SQL we do this by placing a unique index on the dataset using the fields that represent uniqueness. This gives us a simple pass/fail test. Level up by requiring analysts to use a unique index on data sets for ALL reports.
  • Accuracy – This refers to the values which appear in the fields, and ensuring they are as expected. A simple example of this is a text field where you expect five values to appear. In SQL, we do a group by and validate that those are the only five values present. Any new values or spelling variations of existing values will be immediately apparent.
  • Completeness – Are you capturing all the records that you intended to? This test looks at the filters being applied in your data set to ensure they are correct. For example, you might need to exclude all open records in a pipeline view by looking at a status field. If the filter is a WHERE clause in SQL, then this check would confirm all the correct statuses were included.
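
Here is a minimal sketch of these three checks in SQL. The table, fields, and status values are hypothetical stand-ins (SQL Server-style syntax), not from any actual dashboard; adapt them to your own environment.

```sql
-- Uniqueness: a unique index fails the data load outright if duplicates
-- ever appear, giving a simple pass/fail test.
CREATE UNIQUE INDEX ux_loan_pipeline
    ON rpt.loan_pipeline (loan_id, snapshot_date);

-- Accuracy: confirm only the expected values appear in a text field.
SELECT loan_status, COUNT(*) AS record_count
FROM rpt.loan_pipeline
GROUP BY loan_status
ORDER BY loan_status;  -- new values or spelling variations stand out immediately

-- Completeness: verify the filter excludes exactly the intended records,
-- e.g. open records that should not appear in a closed-pipeline view.
SELECT COUNT(*) AS excluded_open_records
FROM rpt.loan_pipeline
WHERE loan_status IN ('Open', 'In Review');  -- statuses the view should exclude
```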

2) The Visualization

These tests are related to how the data is visualized for the business to use. The goal is to ensure that the underlying data is visualized correctly. Your data might be pristine, but if the dashboard is displaying it incorrectly, then there is still a risk of the business drawing incorrect conclusions.

  • Filter Settings – Similar to the completeness check on the data, this test looks at the visualizations and confirms that the data intended to be included is present. For example, filters could be applied via dropdowns or calculated fields created in the BI software.
  • Data Refresh – Ensure that the method set up to refresh the data in the visualization is configured correctly. It seems like a simple thing, but unless the developer displays a data “as of” date on the dashboard, it can be difficult to know whether the data is stale.
  • Formatting (Optional) – Some teams have a very specific design for their dashboards. In those cases, it may be helpful to have the reviewer confirm that those standards are in place. This can be things like the font used, color palette, header/footer design, etc. These checks are typically most important on new visualizations and can also be handled outside of the QA review process if desired.

3) The Infrastructure (Optional)

These tests are important, but not necessarily tied to a QA review. You can perform them with the QA review, only when a new visualization is created, or as a periodic (annual or semi-annual) review that isn’t linked to a change in the dashboard. Here are examples of things to review in this section:

  • Naming Conventions – Are data sources, connections, folders, files, etc. all named correctly? I like to weave the name of the dashboard into all the related areas so that it is easier to manage in the future when change requests come in or there are file cleanup initiatives.
  • Metadata – Check that all the metadata is correct and current, such as field definitions, report inventory values, analyst contact info, etc.
  • Version Control – If you use a version control platform like Git, are all the files stored correctly?

Establish the QA Review Process

So, you have your checklist, you know who will be doing the reviews and you know which dashboards are in scope. Now it is time to create a process that reliably and efficiently moves the work from the developer to the reviewer and back. The best way to do this is by leveraging your request intake process. If you don’t have one, then you should start thinking about setting one up!

Use Request Type to identify requests for new reports or changes to existing reports. Then, extend Request Status with values such as “QA Review”, “QA Review Complete – Fail”, and “QA Review Complete – Pass”. Finally, add fields for the reviewer’s name and the completion date.
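
As a rough illustration, here is how those fields might look if your intake tracker were a simple SQL table; real intake platforms would model these as custom fields instead, and all names below are hypothetical.

```sql
-- Hypothetical intake-tracking table illustrating the fields described above.
CREATE TABLE intake.report_request (
    request_id     INT PRIMARY KEY,
    request_type   VARCHAR(30),   -- e.g. 'New Report', 'Change Request'
    request_status VARCHAR(40),   -- e.g. 'QA Review', 'QA Review Complete - Fail',
                                  --      'QA Review Complete - Pass'
    qa_reviewer    VARCHAR(100),  -- name of the peer reviewer
    qa_completed   DATE           -- date the review was finished
);
```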

I also like to conduct the QA review prior to giving the updated dashboard to the business to review. This is mainly because I want to catch any development issues before the requester does. It is also common for the requester to come back with more enhancement requests (scope creep), resulting in more QA reviews. However, subsequent reviews are typically faster since the QA reviewer is already familiar with the prior changes.

Monitoring Your Review Trends

As data analysts, we can certainly understand how tracking the data behind a process can lead to new insights and business decisions. The QA review process is really no different, and mature BI teams might be interested in knowing (example queries follow the list):

  • Does one analyst fail significantly more reviews than the rest?
  • What is the most common checklist item that fails during review?
  • How much time is it taking on average to go through the review process?
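
To make these concrete, here are example queries against a hypothetical qa.review_log table with one row per checklist item reviewed (SQL Server-style syntax; all table and column names are illustrative):

```sql
-- Fail rate by analyst: does one analyst fail significantly more reviews?
SELECT analyst,
       AVG(CASE WHEN result = 'Fail' THEN 1.0 ELSE 0.0 END) AS fail_rate
FROM qa.review_log
GROUP BY analyst
ORDER BY fail_rate DESC;

-- Most common checklist items that fail during review
SELECT TOP 5 checklist_item, COUNT(*) AS failures
FROM qa.review_log
WHERE result = 'Fail'
GROUP BY checklist_item
ORDER BY failures DESC;

-- Average time to complete the review process, in days
SELECT AVG(DATEDIFF(day, started_at, completed_at)) AS avg_review_days
FROM qa.review_log;
```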

Gathering data to answer these questions can help you provide targeted training, modify the checklist, or streamline the review process. All of these efforts will ultimately level up your team’s skills and efficiency.

I feel like this is a long post, yet I’ve barely scratched the surface on this one! In the future I’ll look to provide a deeper dive into the checklist itself using real world examples. Until then, please feel free to add your comments and questions, I’d love to get your feedback!

About The Author

Brian Barnes

Passionate data visualization professional focused on Business Intelligence Analytics within Document Services for Bank of America. A well-seasoned and results-oriented leader with over 12 years of experience bridging the gap between business and technology to provide solutions in data, analytics and reporting for a highly complex and highly regulated organization.