True data-driven marketing is still “just a dream” for many marketers rather than a reality. Under this vision, systems mine data autonomously and present fresh, actionable insights at your desk in the morning.

For about 99% of marketers, this may sound too good to be true, and in all candor it usually is.

But it is important to recognize that the intelligent application of mathematics and statistics, and the creation of purpose-specific algorithms, have been quietly creating value for years now. Yet the typical marketer still struggles to find enough time to get the mail out, or to execute well-thought-out website marketing experiments against a control. (see “Analytics Isn’t Reporting”)

So there have never been more skeptics of the legitimate power of the intelligent application of data, even as C-suite expectations of a data strategy that creates competitive advantage grow. Sound like your experience, industry, or career? Sure it does.

But as investment grinds higher and competition grows, progress continues to be made.

The “Amazon” of data is, of course, Amazon.

You may know that Amazon.com elected to release to the public some of the technology they use internally to make recommendations and determine what you’d be likely to buy, and when. They took that same toolset and published it on Amazon Web Services. “Pretty neat,” you might say…

 

[Infographic: Mike Ferranti]

Since we get so many questions about how Amazon does it, and how all this actually works, we’ll break down the AWS Machine Learning and Prediction toolset so that qualified organizations have an idea of what’s possible.

For the purposes of this article a “qualified organization” is one that has development talent, experience with data, and at least a basic working knowledge of statistical methods. Of course, experience developing models is very helpful as well.

We call these “requirements” because Amazon’s tools, and every tool like them (Google has a similar toolset on the Google Cloud Platform), require significant programming to use. They also carry a learning curve for inexperienced developers and for organizations that haven’t developed competencies in structuring and transforming their data into a form these tools can readily ingest and work with.

What AWS Tools Do

AWS offers a “Machine Learning” and “Prediction” toolset. These are two related components. Machine Learning is used to ingest large amounts of data and identify patterns in that data. A typical example is extracting promotional history and responses, and using them to identify which customers are most likely to respond to a marketing promotion or offer.
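To make that concrete, here is a minimal sketch of what working with the toolset looks like through boto3, Amazon’s Python SDK. The S3 path, schema file, IDs, and record fields are hypothetical placeholders, and in practice you would wait for the data source, model, and endpoint to finish building before requesting predictions.

```python
import boto3

# A minimal sketch of the Amazon Machine Learning workflow. The S3 path,
# schema file, IDs, and record fields below are hypothetical placeholders.
ml = boto3.client('machinelearning')

# Point the service at training data in S3 (e.g., promotion history with a
# binary "responded" column) and let it compute statistics on the file.
ml.create_data_source_from_s3(
    DataSourceId='promo-history-train',
    DataSpec={
        'DataLocationS3': 's3://my-bucket/promo_history.csv',
        'DataSchema': open('promo_history.schema.json').read(),
    },
    ComputeStatistics=True,
)

# Train a binary classification model on that data source.
ml.create_ml_model(
    MLModelId='promo-response-model',
    MLModelType='BINARY',
    TrainingDataSourceId='promo-history-train',
)

# Once the model has finished building, expose a real-time endpoint and
# score an individual customer record.
endpoint = ml.create_realtime_endpoint(MLModelId='promo-response-model')
prediction = ml.predict(
    MLModelId='promo-response-model',
    Record={'recency_days': '14', 'offers_sent': '6', 'offers_redeemed': '2'},
    PredictEndpoint=endpoint['RealtimeEndpointInfo']['EndpointUrl'],
)
print(prediction['Prediction'])
```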

When Should You Use Machine Learning & Prediction?
Generally speaking, machine learning works best when a simple, logic-based algorithm doesn’t work, or doesn’t work consistently. Simple (or even complex) logic defines a set of rules or requirements that determine the decision the algorithm makes. This is also called a deterministic, or rule-based, approach.

If there are a lot of variables, say hundreds or more, you can’t realistically develop “brute force” rules that cover every scenario you’d need to create value. You may determine a buyer’s favorite color with a simple rule that says if the majority of their purchases are red, then they like red. But each purchase is influenced by more than just color: there is style, season, price, category of product, material, size, and discount, to name a few. As the permutations of these variables grow more complex, a simple deterministic, rule-based approach breaks down and makes predictions that don’t work more and more of the time.
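To illustrate, here is a toy example of the rule-based approach in plain Python, with hypothetical purchase records: the single-attribute “favorite color” rule is trivial to write, but every additional attribute multiplies the cases the hand-written logic must cover.

```python
from collections import Counter

# Hypothetical purchase history for one customer.
purchases = [
    {'color': 'red',  'category': 'dress', 'season': 'spring', 'discounted': True},
    {'color': 'red',  'category': 'shoes', 'season': 'fall',   'discounted': False},
    {'color': 'blue', 'category': 'coat',  'season': 'winter', 'discounted': True},
]

def favorite_color(history):
    """Deterministic rule: the most frequently purchased color is the 'favorite'."""
    colors = Counter(p['color'] for p in history)
    return colors.most_common(1)[0][0]

print(favorite_color(purchases))  # 'red'

# Now try to hand-code the equivalent rule for "will respond to a 20%-off
# spring dress promotion": each added variable (style, season, price, size,
# discount) multiplies the scenarios the rules must cover, which is where a
# learned model takes over from hand-written logic.
```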

If and when business rules begin to collide with one another and discrepancies require more rules to manage these logical collisions, Machine Learning can help sort through your data in ways rule-based algorithms cannot.

 

“In short, you can’t realistically create or code all the permutations and business logic cost effectively.”

 

If your data set is very large and the diversity of variables is high, any “brute force” approach is destined to fail. Running through a set of rules on a sample of a few thousand cases may still work, but what if you have millions of raw records? That can happen even without a multi-million-record customer file, since we may be looking at the colors and other attributes of items purchased over a period of years. Machine learning helps make the task scalable, and when you’re using Amazon’s computing power to do it, scale becomes the easy part.

Here’s An Overview of How The “Prediction Process” Works

So here’s an executive-level overview of how we use Machine Learning, and how it works if you build your solution on top of AWS or Google’s developer APIs.

  1. Problem Definition – Begin with The End in Mind
    Here’s the step too many really don’t get right. If you’re going to venture into Machine Learning with AWS, or anywhere else, first you must define the core problems or opportunities you wish to pursue. You’ll have to do so in terms of what you can observe (through your data) and the “answer” a model is expected to predict.
  2. Data Preparation
    Your data is going to go into a “training algorithm,” where the tools will identify patterns that will ultimately be used to predict the answers you’re looking for on a like dataset. Look at your data before it goes in. Be curious. Do some logical testing on it. If it doesn’t pass the common-sense “sniff test,” odds are very good it won’t add up later either.
  3. Transformation
    Input variables and the answers you seek from models, also called the “target,” are rarely tidy enough to train an effective predictive model as-is. So you have some heavy lifting to do to get the data into new variables, “transforming” it into more prediction-friendly inputs. For example, you may have a set of transactions a customer had with your brand, but you need to summarize that into a count of transactions for that customer and an average time between purchases. These two new fields will be more predictive and useful (the first sketch after this list shows one way to do this). A command of logic and statistics helps make these calls, as does experience.
  4. Implement a Learning Algorithm
    Your input variables have to be fed into an algorithm that can sort through and find patterns in your data, also called a “learning algorithm.” These algorithms are specialized to help establish models (statistical relationships) and to evaluate the quality of those models on data that was held out from model building.
  5. Run The Model
    We generate predictions against a new or holdout sample in the same format, from the same source of data. You can’t run the predictive model on the same sample you used to build it (the second sketch after this list walks through this). This begins the iterative process…
  6. …Then Do It Again
    As with any process where you’re engineering new outcomes for the first time, this one is generally iterative. It’s usually not realistic to expect a killer result on the first pass. You’ll likely massage inputs and training methods a number of times before the output starts looking good. More on what a good output looks like in a future column; for now, you need to know that the first product won’t likely be the final product.
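Here is a minimal sketch of the transformation step (step 3), assuming a hypothetical raw transaction table with one row per purchase; it rolls the purchases up to a count of transactions and an average time between purchases for each customer.

```python
import pandas as pd

# Hypothetical raw data: one row per purchase.
transactions = pd.DataFrame({
    'customer_id': [1, 1, 1, 2, 2],
    'order_date': pd.to_datetime(
        ['2015-01-05', '2015-02-20', '2015-04-01', '2015-03-10', '2015-03-25']),
})

def summarize(group):
    # Days between consecutive purchases for one customer.
    days_between = group['order_date'].sort_values().diff().dt.days
    return pd.Series({
        'transaction_count': len(group),
        'avg_days_between_purchases': days_between.mean(),
    })

# Roll the raw purchases up to one row per customer, as described in step 3.
features = transactions.groupby('customer_id').apply(summarize)
print(features)
```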

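And here is a generic stand-in for steps 4 through 6, using scikit-learn rather than the AWS service itself: transformed inputs go into a learning algorithm, a holdout sample is set aside, and the model is scored only on data it never saw during training. The column names and the “responded” target are hypothetical.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical customer-level features (e.g., the output of the
# transformation step) plus the target we want to predict.
customers = pd.DataFrame({
    'transaction_count':          [12, 3, 7, 1, 9, 2, 15, 4],
    'avg_days_between_purchases': [20, 90, 35, 180, 28, 120, 15, 75],
    'responded':                  [1, 0, 1, 0, 1, 0, 1, 0],
})
X = customers[['transaction_count', 'avg_days_between_purchases']]
y = customers['responded']

# Step 4: hold out a sample, then fit a learning algorithm on the rest.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)
model = LogisticRegression().fit(X_train, y_train)

# Step 5: generate predictions against the holdout sample only.
scores = model.predict_proba(X_holdout)[:, 1]
print('holdout AUC:', roc_auc_score(y_holdout, scores))

# Step 6: iterate -- adjust inputs and training choices, then re-evaluate.
```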
 

The Bottom Line: “Easier” Still Isn’t Quite “Easy” for the Average Marketing Organization

While Amazon and Google may run some of the easiest websites to use, and have made tremendous contributions to the proliferation of data science by providing structure and programming tools to develop new capabilities with, using AWS for Machine Learning and Prediction is not for the creative marketer, or even the “traditional” web marketer.

There is also a rising category of upstarts in data-driven and database marketing apps that add intelligence to the process and can provide marketers with a significant head start in advancing their marketing intelligence.

Data science requires a combination of technical, mathematics/statistics, and marketing/business skills. This combination is in great demand the world over, so it’s not easy to hire top contributors to implement all of this. But for organizations with the programming bench, or experienced external business partners, tools like AWS and Google Cloud Platform can provide a substantial leap forward in using data to make superior decisions.

Remember, the outputs of the predictive process don’t have to be “right” 100% of the time, and they won’t be. They only need to make the numbers break in your favor enough to have a material impact on your revenue and profit now, and over time.
After all, that’s really what the data science discipline is all about.