
How Spotify uses ML to create the future of personalization


December 2, 2021
Published by Spotify Engineering


Machine learning drives personalization at Spotify. We may have a single platform with 381 million users, but it would be more accurate to say there are 381 million individual versions of Spotify, each with its own home page, playlists, and recommendations. With a library of over 70 million tracks, how do our ML models actually make these decisions?

Well, Spotify’s VP of Personalization, Oskar Stål, recently gave a talk at TransformX, a summit for ML and AI leaders, to discuss exactly this. Read on for a glimpse of how ML and reinforcement learning help us make music and podcast recommendations, and be sure to check out Oskar’s presentation here (or below!) to hear more about the future of ML at Spotify.

How do we use ML?

It starts with data. At the most basic level, all kinds of user information – playlists, listening history, interactions with Spotify’s UI, and so on – are fed into our ML models, with trust and responsibility in mind. Nearly half a trillion events are processed every day, and the more data our models ingest, the better they become at learning relationships between different artists, songs, podcasts, and playlists.
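The talk doesn’t go into model internals, but the core idea – learning relationships between tracks from large volumes of listening events – can be illustrated with a tiny item-based collaborative-filtering sketch. The sessions, track names, and similarity measure below are hypothetical and stand in for whatever Spotify’s real pipeline does.

```python
import numpy as np

# Toy listening sessions (hypothetical data, standing in for real listening events).
sessions = [
    ["dance_track_a", "dance_track_b", "pop_track_c"],
    ["dance_track_a", "dance_track_b"],
    ["acoustic_track_x", "acoustic_track_y", "pop_track_c"],
    ["acoustic_track_x", "acoustic_track_y"],
]

# Index every track seen in the event stream.
tracks = sorted({t for s in sessions for t in s})
idx = {t: i for i, t in enumerate(tracks)}

# Build a track-by-session matrix: 1 if the track was played in that session.
plays = np.zeros((len(tracks), len(sessions)))
for j, session in enumerate(sessions):
    for t in session:
        plays[idx[t], j] = 1.0

def similarity(t1: str, t2: str) -> float:
    """Cosine similarity between two tracks' listening patterns."""
    a, b = plays[idx[t1]], plays[idx[t2]]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Tracks played in the same sessions look related; tracks that never co-occur do not.
print(similarity("dance_track_a", "dance_track_b"))     # 1.0
print(similarity("dance_track_a", "acoustic_track_x"))  # 0.0
```

The more events flow in, the denser that play matrix becomes and the more reliable the learned relationships get – which is the intuition behind “the more data, the smarter the models.”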

But our ML models go further, incorporating other factors into their decision-making. What time of day is it? Is this playlist for working out or chilling out? Are you on mobile or desktop? By deploying these ML models across Spotify’s infrastructure, we’ve been able to offer increasingly intelligent, specialized recommendations that Oskar says can “serve even narrower tastes.”
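As a rough illustration of how context can feed into a recommendation, here is a sketch that blends a base affinity score with a few context signals. The feature names, weights, and candidate playlists are invented for illustration, not Spotify’s actual features or model.

```python
import numpy as np

# Hypothetical context features for a single recommendation request.
def context_features(hour: int, platform: str, intent: str) -> np.ndarray:
    return np.array([
        1.0 if 5 <= hour < 12 else 0.0,       # morning
        1.0 if 18 <= hour < 24 else 0.0,      # evening
        1.0 if platform == "mobile" else 0.0,
        1.0 if intent == "workout" else 0.0,
        1.0 if intent == "chill" else 0.0,
    ])

# Candidates with a base affinity score (e.g. from collaborative filtering)
# and a per-item weight vector over the context features.
candidates = {
    "high_energy_mix":  (0.6, np.array([ 0.2, -0.1, 0.1,  0.8, -0.5])),
    "evening_acoustic": (0.5, np.array([-0.3,  0.6, 0.0, -0.4,  0.7])),
}

def rank(hour: int, platform: str, intent: str) -> list[tuple[str, float]]:
    """Re-rank candidates by base affinity plus a context-dependent bonus."""
    ctx = context_features(hour, platform, intent)
    scored = {name: base + w @ ctx for name, (base, w) in candidates.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# A 7 a.m. mobile workout request favors the high-energy mix;
# a 9 p.m. chill request favors the acoustic playlist.
print(rank(7, "mobile", "workout"))
print(rank(21, "mobile", "chill"))
```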

But we’re not just looking for instant gratification for our users. We want to give listeners a great lifetime audio experience and be with them every step of the way. And that brings us to what we’re working on now.

Reinforcement learning and the future

Reinforcement learning, or RL, is a type of ML in which a model acts within its environment in an effort to maximize an ultimate, long-term reward, whatever that reward may be. In our case, that reward is our users’ long-term satisfaction with Spotify. RL is not about short-term fixes. It’s playing the long game.
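In RL terms, “the long game” is usually captured by a discounted sum of future rewards: a discount factor close to 1 makes the model care about satisfaction far into the future, not just the next click. The trajectories, discount factor, and numbers below are made up purely to illustrate that objective.

```python
def discounted_return(rewards: list[float], gamma: float = 0.99) -> float:
    """Sum of rewards discounted by gamma per step; gamma near 1 values the long term."""
    return sum(r * gamma**t for t, r in enumerate(rewards))

# Two hypothetical satisfaction trajectories over 30 sessions (not real data):
# quick clicks that fade, versus slower-building but sustained satisfaction.
instant_gratification = [1.0] * 3 + [0.1] * 27
long_term_satisfaction = [0.4] * 5 + [0.9] * 25

print(discounted_return(instant_gratification))   # ≈ 5.3
print(discounted_return(long_term_satisfaction))  # ≈ 21.0
```

Under this kind of objective, the trajectory that keeps a listener satisfied for weeks wins, even if it looks worse in the first few sessions.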

In a general sense, our RL model tries to predict how satisfied users will be with their current experience and nudges them toward more fulfilling content in their audio diet so that they’re happier with the service over time. In other words, instead of handing users the “empty calories” of content that only satisfies in the moment, RL pushes them toward a more sustainable, varied, and fulfilling content diet that can last a lifetime.
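One simple way to picture that nudge is a re-ranking step that trades off immediate engagement against predicted long-term satisfaction. The candidate names, scores, and blend weight below are invented and do not reflect how Spotify actually combines these signals.

```python
# Two hypothetical model outputs per item: the predicted chance of an immediate
# click/stream, and a predicted long-term satisfaction value (e.g. from an RL model).
candidates = {
    "clickbait_single":   {"immediate": 0.9, "long_term": 0.2},
    "new_genre_playlist": {"immediate": 0.5, "long_term": 0.8},
    "favorite_album":     {"immediate": 0.8, "long_term": 0.6},
}

def blended_score(scores: dict[str, float], long_term_weight: float = 0.7) -> float:
    """Trade off instant engagement against predicted lasting satisfaction."""
    return (1 - long_term_weight) * scores["immediate"] + long_term_weight * scores["long_term"]

ranking = sorted(candidates, key=lambda name: blended_score(candidates[name]), reverse=True)
print(ranking)  # the "empty calories" item drops below the more fulfilling picks
```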

That could mean playing a new dance track we think might fit a user’s current mood, or it could mean suggesting something quieter and more subdued to help them study. Guessing what a user will want 10 minutes from now, a day from now, or a week from now means creating a ton of simulations and running our RL models against those simulations to make them smarter – like a computer playing chess against itself to get better at the game.
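The chess analogy can be made concrete with a toy training loop: a made-up user simulator whose satisfaction drops when the same kind of content is repeated back to back, and a small tabular Q-learning agent that learns a policy against it. None of this reflects Spotify’s actual simulators or algorithms; it only shows the shape of “train against a simulation” in miniature.

```python
import random

random.seed(0)

# A toy simulated user: enjoys both familiar hits and discoveries, but
# satisfaction drops when the same kind of content is repeated back to back.
ACTIONS = ["familiar_hit", "new_discovery", "podcast"]
BASE_REWARD = {"familiar_hit": 0.8, "new_discovery": 0.6, "podcast": 0.5}

def simulate_user(prev_action: str, action: str) -> float:
    repetition_penalty = 0.5 if action == prev_action else 0.0
    return BASE_REWARD[action] - repetition_penalty

# Tabular Q-learning against the simulator: state = what we recommended last.
q = {(s, a): 0.0 for s in ACTIONS for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

state = "familiar_hit"
for _ in range(20000):
    if random.random() < epsilon:                      # explore
        action = random.choice(ACTIONS)
    else:                                              # exploit the current estimate
        action = max(ACTIONS, key=lambda a: q[(state, a)])
    reward = simulate_user(state, action)
    best_next = max(q[(action, a)] for a in ACTIONS)
    q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    state = action                                     # the next state is what we just played

# The learned policy alternates content types rather than repeating the single best item.
for s in ACTIONS:
    print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))
```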

With ML and RL, we strive to create a more holistic audio experience by focusing on recommendations that ensure long-term satisfaction and enjoyment. Our approach to personalization doesn’t only benefit listeners: better, more satisfying recommendations help artists too, exposing their work to wider audiences that are more likely to enjoy it. After all, there’s a reason 16 billion artist discoveries happen on our platform every month. And the best is still to come.

Tags: machine learning




