[ This article originally appeared in the Summer 2019 edition of OTT Executive Magazine. ]
AI is everywhere, permeating our lives throughout the day: from the mobile devices we’re constantly tapping and swiping to more subtle uses, like that “customer service agent” you may be chatting with on your favorite website. There is no shortage of industries that AI has the potential to impact, through increased efficiency, reduced error, and even increased creativity by freeing up people’s time. In fact, according to Indeed, the number of job postings mentioning “AI” or “machine learning” increased by 100% from 2015 to 2018, and AI and ML skills dominated the fastest-growing jobs of 2018.
But while AI has already transformed many areas of our lives, including the workplace, it currently still needs the human touch to be useful. On social media, for instance, AI cannot always easily judge the tone or intent of an interaction between individuals, and sometimes a conversation between friends can be flagged and taken down. And while Facebook, Twitter, and others are wrestling with this issue and taking steps to improve the policing of bullying, hate speech, violence, etc., the software still has a ways to go, failing at least one out of five times.
Arriving at “Intelligence”
It is helpful to understand the subtle differences between artificial intelligence, machine learning, and deep learning, and how each applies to streaming video. A set of Matryoshka, or nesting Russian dolls, is a good visual representation of how they relate to each other: artificial intelligence is the largest doll in the set, followed by machine learning, with deep learning as the smallest doll inside. (Fig. 1) In other words, all machine learning is AI, but not all AI is machine learning.
At its most basic, AI mimics the cognitive functions of humans to achieve a goal that can be explicitly defined or induced. It most often makes use of algorithms: sets of instructions that a computer can execute to achieve a goal efficiently. Simpler algorithms can be stacked, and more complex ones can even write and execute simpler algorithms of their own. Artificial intelligence has become an integral component of movie development, predicting opening-weekend box office revenues and long-term gross sales, and creating targeting profiles for marketing and creative efforts.
Machine learning is a subset of AI. It describes the way that computers can learn from data to make predictions. Because data, not humans, drive the learning process, machine learning models can change and adapt without a human to modify computer code. Machine learning models can have a variety of uses, such as recommending a show that a user might like, classifying whether an image is a cat or not, detecting fraudulent credit card transactions, or parsing speech.
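To make "learning from data" concrete, here is a minimal sketch (not from the article, and with made-up numbers) of a model that learns a churn predictor from examples rather than hand-written rules: a toy logistic regression fit by gradient descent, where more viewing hours mean a subscriber is less likely to leave.

```python
import math

# Hypothetical training data: hours watched last month -> churned (1) or stayed (0).
hours = [0.5, 1.0, 2.0, 8.0, 12.0, 20.0]
churned = [1, 1, 1, 0, 0, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Learn a weight and bias from the data by stochastic gradient descent on log loss.
w, b, lr = 0.0, 0.0, 0.05
for _ in range(2000):
    for x, y in zip(hours, churned):
        p = sigmoid(w * x + b)   # current predicted churn probability
        w -= lr * (p - y) * x    # nudge parameters to reduce the error
        b -= lr * (p - y)

def churn_risk(hours_watched):
    return sigmoid(w * hours_watched + b)

print(churn_risk(1.0))   # light viewer: high predicted churn risk
print(churn_risk(15.0))  # heavy viewer: low predicted churn risk
```

Notice that no human wrote an "if hours < 5" rule; the threshold emerged from the data, which is what lets such a model adapt as the data changes.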
Diving another layer in, we reach deep learning, a subset of machine learning. A deep learning architecture contains multiple layers, each of which learns about patterns in the data. Models can be trained via “backpropagation” — that is, recognizing when the model has made a mistake, and correcting it. The most typical example of a deep learning algorithm is a neural network with many hidden layers. Because of the huge amounts of data and computational power required, deep learning has increased in popularity with the rise of cloud computing. An example of an application of deep learning is in image processing. This can be incredibly useful in cleaning up imagery or footage automatically without human intervention.
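The "recognize a mistake and correct it" loop above can be sketched in a few lines. This is an illustrative toy (XOR, a classic problem a single linear layer cannot solve), not a production deep learning system: a network with one hidden layer, where backpropagation pushes each prediction error backward through the layers to adjust the weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the output is 1 only when the two inputs differ.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 4 units with sigmoid activations.
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
losses = []
for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(((out - y) ** 2).mean()))
    # Backpropagation: send the error back through each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The hidden layer is what makes this "deep": each layer learns intermediate patterns, and stacking more of them (at the cost of data and compute) is what powers applications like automatic image cleanup.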
Can AI Revolutionize Video?
As AI technology continues to improve, does it have the power to revolutionize streaming video? According to Adobe, it does, “driving more intelligent production, delivery, and engagement — and a better experience for both enterprise brands and their customers.”
Let the Machines Do the Work
Personalization, production, and data analytics are some of the areas where a video business can harness the capabilities of AI at scale. But the problem we see is that most AI in the streaming video space is a solution looking for a problem: interesting, sometimes informative, but rarely actionable for video businesses. A lot of this is a result of the ‘black box’ nature of machine learning: feed the data in, let the models determine what is important, and an answer spits out the other side.
Secure your Investment
Investment in an AI-powered product like the Customer Happiness Index (CHI®) requires a tangible ROI to make sure the time, effort, and money needed to bring it into service are really worth it. Here are three considerations to help you make a more informed decision and meet your objectives for analyzing the data, gleaning insights, and taking action.
Define the actions you want to take as a result of the machine learning model completing its analysis. As the model works through the dataset, determining which elements most influence how it sorts the data, do you know what you will do with the output once you have it? Data is just data; it’s what you do with it that matters. In the case of an OTT video business, this means taking save actions on the video subscribers the machine learning model determines are at risk of churning out of their subscription, whether voluntarily or involuntarily. A properly trained model will determine which elements are the most important and rank them accordingly.
Consider a model’s interpretability when choosing your machine learning algorithms. Without being able to understand why your model flagged a user as at-risk for churning, you can’t easily make a targeted intervention. Models that are easy to interpret, like decision trees or logistic regression, may not perform as well as black-box models like neural networks and gradient boosted trees. If the accuracy boost from more complex models is necessary, you can apply an “interpretability engine” at the end of the machine learning pipeline. At Wicket Labs, we “translate” the output of our black box models using more interpretable machine learning models. These estimate the impact of a customer’s behavior on churn risk at the individual level. We convert this data into primary reason codes, such as “no recent viewing activity”, to explain why the model made its decision. In addition, we identify which behaviors are the best targets for decreasing a user’s churn risk. This makes it simple to take data-driven actions for customer retention.
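One way to picture the surrogate-model idea is the sketch below. It is a simplified illustration of the general technique, not Wicket Labs' actual pipeline, and the feature names are hypothetical: an opaque scorer stands in for the black-box model, a simple linear model is fit to its outputs, and the largest contribution for a given subscriber becomes a reason code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical behavioral features per subscriber (illustrative schema only).
FEATURES = ["no recent viewing activity", "failed payments", "support tickets"]
X = rng.normal(0, 1, (500, 3))

# Stand-in "black box": some opaque nonlinear churn-risk scorer.
def black_box(X):
    z = 1.5 * X[:, 0] + 0.8 * X[:, 1] ** 3 + 0.3 * np.tanh(X[:, 2])
    return 1.0 / (1.0 + np.exp(-z))

risk = black_box(X)

# Interpretable surrogate: a least-squares linear fit to the black box's scores.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, risk, rcond=None)
weights = coef[:3]

def primary_reason(x):
    # The feature contributing most to this subscriber's surrogate risk score.
    contributions = weights * x
    return FEATURES[int(np.argmax(contributions))]

# A subscriber who hasn't watched anything in a long while:
subscriber = np.array([2.5, 0.0, 0.1])
print(primary_reason(subscriber))
```

The surrogate will never match the black box exactly; the trade is a little fidelity for a reason code a retention team can actually act on.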
Skip the AI when a correlation will do. Machine learning is inherently expensive: hiring the right data scientists and developers to write the algorithms, process the data, and compute and interpret the results all adds up. Humans have an evolved neocortex that allows for massively parallel pattern recognition, something that still gives us an edge over the machines in certain instances. There are many cases where a correlation analysis can lead you to a confident action or decision without the time and expense of applying AI.
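For a sense of how cheap that alternative can be: a Pearson correlation fits in a dozen lines with no models, training, or specialized hires. The numbers below are made up for illustration, showing viewing hours correlating negatively with churn.

```python
import math

# Hypothetical data: monthly viewing hours, and whether the subscriber churned.
hours   = [1, 2, 3, 5, 8, 10, 14, 20]
churned = [1, 1, 1, 0, 1, 0, 0, 0]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(hours, churned), 2))  # -> -0.71
```

A strong negative correlation like this ("people who watch more leave less") can justify an engagement campaign on its own, no data science team required.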
The team at Wicket Labs believes CHI meets these considerations. In addition to revealing the happiest customers in a subscription service, it offers Save Actions to take for each at-risk customer segment, based on easy-to-understand reason codes. Subscriber churn is a complex problem for video services: we have found over 60 interrelated factors, or features in machine learning parlance, that have causal relationships with a customer leaving a subscription video service, sometimes intentionally, sometimes not. CHI was developed to identify users that fit these patterns, and although the problem is very complex, it has a clear payoff: it reduces churn and increases a key indicator of the health of a subscription-based business, audience lifetime value.