

This is the third part of a series of articles about effective analytics implementation. The first part, “The Five Faces of Analytics,” explores the roles you need on your team. The second, “Pluck the Low-hanging Fruit,” helps you discover and prioritize projects.

An analytics project is a house of cards. Each project is really three: a change management project stacked on top of a software development project stacked on top of a research project.

So. How does one prevent the whole thing from collapsing under its own weight? To start with, we’re going to assume you have the right team. You have also picked the right project. This article is going to talk about the right process, particularly for the “research project” portion of the endeavor (the bottom card).

The problem is, there is no right process. No single template will guarantee success on every analytics project. Instead, successful projects have common elements. These elements are the glue that will hold your house of cards together.

Diagnose Before You Prescribe

Before you ever fire up your computer, spend some time exploring the business problem. Many projects fail because you successfully answer the wrong question.

For example, we developed a tool to optimize school bus routes (and thus reduce the number of buses needed). It worked great: it reduced bus needs by about 5 percent and cut student ride times by an average of 2 minutes. We were pumped.

But it turns out, the planners weren't just interested in bussing costs and average ride times.

They wanted to reduce the number of complaints they fielded from angry parents. Our new routes meant some students would be riding less, but a few would be riding more.

In some cases, substantially more. Unfortunately, every student with a longer commute is a potential angry parent phone call. Our solution reduced buses, but increased parent complaints.

The project was unsuccessful because we failed to understand the nuances of the business problem. We didn’t fully diagnose the condition.

You can mitigate this risk by talking more. Identify and interview the stakeholders early and often. Ask them what success would look like to them. Ask them what “partial success” (i.e., failure) would look like. Mine them for hypotheses and analytical direction. Often, their gut has a sense of what’s really going on. You’ll save a ton of time by leveraging their expertise.

Build the Dials

We all want to move the dial. But to do so, we actually need a dial to move. If your goal is to improve satisfaction, decrease costs, or increase profitability, then you need to be tracking these things before you start intervening. And you need to track them in a way that allows you to isolate your impact.

Early in your project, you should be creating a baseline report that shows the metrics of interest. You should revisit these metrics throughout the project, and ideally, use them to demonstrate that your analysis and associated interventions are working.
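As a minimal sketch of the idea (the metric names and numbers below are invented for illustration), a baseline report can be as simple as a snapshot of your metrics taken before any intervention, plus a percent-change comparison afterward:

```python
def percent_change(baseline, current):
    """Percent change of each tracked metric relative to the baseline snapshot."""
    return {m: 100.0 * (current[m] - baseline[m]) / baseline[m] for m in baseline}

# Snapshot taken before any intervention (numbers are illustrative).
baseline = {"buses": 100, "avg_ride_min": 24.0, "complaints_per_week": 12}

# The same metrics, measured after the new routes went live.
current = {"buses": 95, "avg_ride_min": 22.0, "complaints_per_week": 19}

print(percent_change(baseline, current))
```

Revisiting this snapshot regularly makes it obvious when a metric you care about (like complaints) is moving the wrong way.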

In our bus example, we were missing a dial. Fewer buses required: check. Shorter average ride times for students: check. Minimal parental complaints…wait, I don’t see that on my control panel.

With more thought and foresight early in the project, we could have identified a way to measure or predict this (the change in each student’s ride time, for instance) and incorporated it into our model.
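In hindsight, one hypothetical way to encode that missing dial is a penalty term in the routing objective for every minute a student’s ride gets longer. The function name, cost figures, and weights below are all invented for illustration:

```python
def plan_cost(bus_count, old_ride_min, new_ride_min,
              cost_per_bus=1000.0, complaint_weight=50.0):
    """Score a candidate routing plan; lower is better.

    old_ride_min / new_ride_min map student id -> ride time in minutes.
    Only *increases* in ride time are penalized, since shorter rides
    don't generate angry phone calls.
    """
    complaint_risk = sum(max(0.0, new_ride_min[s] - old_ride_min[s])
                         for s in old_ride_min)
    return bus_count * cost_per_bus + complaint_weight * complaint_risk
```

Under these illustrative weights, a plan that saves one bus (1000 saved) but adds 30 minutes of extra ride time across students (1500 of complaint risk) would lose to the status quo.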

Don’t Reinvent the Wheel

Once you understand the business problem and you have a way to measure it, it’s time to convert that problem into an analytical approach. But before you do, look at what other approaches have been used, and to what degree of success.

Look at what academics have done. Look at what practitioners have done. Consulting companies love to brag about how they are solving tough problems. Read Interfaces. Read McKinsey Quarterly. Search the Google.

Pick the Simple Model

Often, more than one approach will work. Maybe machine learning, optimization, and simulation will all get you an answer. Which of these is the most accurate? It’s hard to say. And really, it rarely matters. The value of going from a good analytical approach to the best analytical approach is nothing compared to going from gut feel to some kind of analytics.

With this in mind, you should choose the simplest approach that works. It’s easier to explain to stakeholders, and you’ll probably finish the work much more quickly. You can always add complexity later on. But you should wait until it’s been adopted before you do that.

Stakeholders will be reluctant to base decisions on something they don’t understand. Nobody likes a black box.

Plan for Interaction

Additionally, you want the approach that will allow a decision-maker to play with the answer. They will almost always want to conduct their own “what-if” analyses. So whatever your model, you should be able to change the inputs (the date, the location, the assumptions) and instantly see the new results. This will get you thinking about building a tool for exploration as opposed to a single right answer.

When we deliver a project, we create something we call “the sandbox”. It starts with the right answer, but allows a decision-maker to make adjustments. She can then test her assumptions, and familiarize herself with the model. Done right, this leads to that magical state where the decision-maker believes the whole thing was her idea to begin with.
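A minimal sketch of the sandbox idea, using an invented one-line profit model: freeze the assumptions in one place so the decision-maker can clone the recommended scenario, tweak a single input, and instantly see the new result.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Scenario:
    demand: float = 1000.0   # units per day (illustrative assumption)
    price: float = 4.0       # revenue per unit
    unit_cost: float = 2.5   # cost per unit

def daily_profit(s: Scenario) -> float:
    return s.demand * (s.price - s.unit_cost)

base = Scenario()                    # the model's recommended answer
what_if = replace(base, price=4.5)   # "what if we charged a bit more?"
print(daily_profit(base), daily_profit(what_if))
```

Because the scenario is immutable, every what-if is a fresh copy of the original answer, so the decision-maker can experiment freely without corrupting the baseline.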

Explore With Your Eyes, Not Math

Okay, so you’ve identified your problem and you’ve decided on an analytical approach. You have some data. What should you do first?

Explore the data using visualization. This exploration is for two purposes: to understand how the data reflects the business, and to identify the type and magnitude of data errors. (There will always be errors in your data, and it’s better to find them early.)

Look at your data elements through time, by category, or by location. Then examine the relationships between your data elements, looking for correlations. If a correlation looks fishy, look closer. (For example, customer birth month shouldn’t be correlated with customer profitability.)

Don’t rely on summary statistics like average, correlation, or standard deviation. As Anscombe demonstrated in his famous quartet, relying on summaries can lead you astray. This is why old school Excel pivot tables and Tableau are so useful. They allow you to visually inspect your data in hundreds of ways, very quickly.

At a minimum, plot the frequency distribution of each variable and the relationship between each variable pair. Take note of outliers or irregularities, and follow up on them. Then, revisit your business problem. Are you still on the right track?
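Anscombe’s point is easy to demonstrate. The first two sets of his quartet (reproduced below) have essentially identical means and correlations, yet a plot instantly shows that one is a noisy line and the other a perfect parabola:

```python
from statistics import mean

# Anscombe's quartet, sets I and II (same x values for both).
x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]

def pearson(a, b):
    """Pearson correlation coefficient, computed from first principles."""
    ma, mb = mean(a), mean(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    var_a = sum((u - ma) ** 2 for u in a)
    var_b = sum((v - mb) ** 2 for v in b)
    return cov / (var_a * var_b) ** 0.5

# Identical summaries, wildly different shapes.
print(mean(y1), mean(y2))              # both ~7.50
print(pearson(x, y1), pearson(x, y2))  # both ~0.82
```

Plot x against y1 and then against y2 (in Excel, Tableau, or anything else) and the difference the summary statistics hide is unmistakable.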

Iterate and Communicate

At this point, the process can branch in many directions. Each modeling exercise has its own rhythm and needs to be managed accordingly. Nevertheless, a few things will improve your odds of success. First, iterate on your model. Build simple versions first and add functionality later. You’ll find the errors you missed in the exploration, and you’ll get a better feel for the data.

Next, share your interim findings with your stakeholders. You should be talking to them every week. They’ll suddenly remember changes in policy that impact your data and model. They’ll provide direction that they couldn’t otherwise. They’ll prevent you from wandering down the wrong path for months on end.

Gluing It Down

So there you have it, a set of guidelines that will improve your odds of success. You’re answering the right question, you’re measuring your progress, you’re exploring your data, and you’re working on the right kind of model. You’re also interacting with decision-makers and stakeholders throughout the endeavor. The bottom card in your stack is now glued tightly to the table, and you’re ready to tackle the software development and change management parts of your project.

We hope you’ve enjoyed our three-part series, and we hope you’ll keep visiting us!
