Information Retrieval (IR) systems for search and recommendations often rely on Learning-to-Rank (LTR) models to prioritize relevant items for user queries. These models depend heavily on user interaction features, such as clicks and engagement data, which are highly effective for ranking. However, this reliance presents significant challenges. Interaction data can be noisy and sparse, especially for newer or less popular items, leading to cold start problems in which these items are ranked poorly and receive little exposure. Actively exploring such items can mitigate cold start, but naive exploration risks hurting key business metrics and user trust.
Existing methods for cold start in recommendation systems rely on heuristics to boost item rankings or use side information to compensate for the lack of interaction data. Non-stationary distribution shifts, in turn, are typically handled through periodic model retraining, which is costly and unstable because data quality varies between retraining cycles. Bayesian modeling offers a principled alternative for handling the dynamic nature of user interaction features, allowing estimates to be updated in real time as new data is observed. However, Bayesian methods are computationally intensive, since exact estimation of the posterior distribution is intractable. Moreover, recent advances in variational inference with neural networks have not yet been applied to address cold start and non-stationarity simultaneously in recommendation systems at scale.
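To make the "real-time updates" point concrete, here is a minimal sketch, assuming a standard conjugate Beta-Bernoulli model for an item's click-through rate (a textbook choice for illustration, not necessarily the paper's exact formulation). New clicks and impressions update the posterior in closed form, with no retraining.

```python
# Illustrative Beta-Bernoulli posterior for an item's click-through rate (CTR).
# This is a hypothetical sketch, not code from the BayesCNS paper.
from dataclasses import dataclass

@dataclass
class BetaPosterior:
    alpha: float = 1.0  # prior pseudo-count of clicks
    beta: float = 1.0   # prior pseudo-count of non-clicks

    def update(self, clicks: int, impressions: int) -> None:
        # Conjugate update: the posterior stays Beta, so it can be refreshed
        # online as each batch of interaction data arrives.
        self.alpha += clicks
        self.beta += impressions - clicks

    @property
    def mean_ctr(self) -> float:
        return self.alpha / (self.alpha + self.beta)

item_ctr = BetaPosterior()
item_ctr.update(clicks=3, impressions=40)  # new interaction data observed
print(round(item_ctr.mean_ctr, 3))
```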
To this end, researchers from Apple have proposed BayesCNS, a unified Bayesian approach that holistically addresses cold start and non-stationarity in search systems at scale. The method is formulated as a Bayesian online learning problem and uses an empirical Bayes framework to learn expressive prior distributions over user-item interactions from contextual features. The approach interfaces with a ranker model, enabling ranker-guided online learning that efficiently explores relevant items based on contextual information. The efficacy of BayesCNS is demonstrated through comprehensive offline and online experiments, including an A/B test that shows a 10.60% improvement in overall new item interactions and a 1.05% increase in overall success rate compared to the baseline.
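The empirical Bayes idea, learning interaction priors from contextual item features, can be illustrated with a small hypothetical network that maps item features to Beta prior parameters. The architecture below is an assumption made for illustration only, not Apple's actual model.

```python
# Illustrative only: a toy network mapping contextual item features to the
# parameters (alpha, beta) of a Beta prior over interaction rates.
import torch
import torch.nn as nn

class PriorNet(nn.Module):
    def __init__(self, feature_dim: int, hidden_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(feature_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # outputs raw (alpha, beta)
        )

    def forward(self, item_features: torch.Tensor) -> torch.Tensor:
        # Softplus keeps the Beta parameters strictly positive.
        return nn.functional.softplus(self.backbone(item_features)) + 1e-4

prior_net = PriorNet(feature_dim=16)
alpha_beta = prior_net(torch.randn(8, 16))  # priors for 8 cold-start items
print(alpha_beta.shape)  # torch.Size([8, 2])
```

In such a setup, a cold-start item with no interactions still receives an informative prior from its features, which can then be updated online as feedback arrives.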
BayesCNS uses a Thompson sampling algorithm for online learning under non-stationarity, continuously updating previous estimates and learning from new data to maximize cumulative reward. It is evaluated on three diverse benchmark datasets for cold start in recommender systems: CiteULike, LastFM, and XING, which cover user preferences for scientific articles, music artists, and job recommendations, respectively. BayesCNS is compared against five state-of-the-art cold start recommendation algorithms: KNN, LinMap, NLinMap, DropoutNet, and Heater. These baselines rely on different techniques, such as nearest-neighbor search, linear transformations, deep neural networks, dropout, and mixtures of experts, to generate recommendations and mitigate cold start. A simplified Thompson sampling loop is sketched below.
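This sketch uses synthetic feedback and hypothetical variable names rather than the production BayesCNS implementation: interaction rates are sampled from each item's current posterior, items are ranked by the sampled values, and the posteriors are updated with the observed feedback.

```python
# Hedged sketch of Thompson sampling for ranking with Beta posteriors.
import numpy as np

rng = np.random.default_rng(0)
n_items = 5
alpha = np.ones(n_items)  # could instead be initialized from learned priors
beta = np.ones(n_items)

for step in range(100):
    sampled_ctr = rng.beta(alpha, beta)      # explore via posterior samples
    ranking = np.argsort(-sampled_ctr)       # rank items by sampled rate
    shown = ranking[:2]                      # surface the top-2 items
    clicks = rng.random(len(shown)) < 0.1    # stand-in for real user feedback
    alpha[shown] += clicks                   # posterior update: clicks
    beta[shown] += ~clicks                   # posterior update: non-clicks

print(np.round(alpha / (alpha + beta), 3))   # posterior mean CTR estimates
```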
The performance of BayesCNS is evaluated using Recall@k, Precision@k, and NDCG@k for k values of 20, 50, and 100. Results show that BayesCNS performs competitively against other state-of-the-art methods across all datasets. In addition, an online A/B test introduced millions of new items, amounting to 22.81% of the original item index size. The test ran for one month and compared BayesCNS with a baseline that introduced new items without accounting for cold start and non-stationary effects. BayesCNS consistently outperformed the baseline, showing statistically significant improvements in success rate and new item surface rate across most cohorts.
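For reference, the reported ranking metrics can be computed as follows for a single user. These are standard definitions of Recall@k and NDCG@k with binary relevance, not code from the paper, and the actual evaluation averages over all test users.

```python
# Standard Recall@k and NDCG@k (binary relevance) for one user's ranked list.
import numpy as np

def recall_at_k(ranked_items, relevant_items, k):
    hits = len(set(ranked_items[:k]) & set(relevant_items))
    return hits / len(relevant_items)

def ndcg_at_k(ranked_items, relevant_items, k):
    # Discounted gain for each relevant item, normalized by the ideal ordering.
    gains = [1.0 / np.log2(i + 2) for i, item in enumerate(ranked_items[:k])
             if item in relevant_items]
    ideal = sum(1.0 / np.log2(i + 2) for i in range(min(k, len(relevant_items))))
    return sum(gains) / ideal if ideal > 0 else 0.0

ranked = [3, 7, 1, 9, 4]   # items ordered by predicted score
relevant = {1, 4, 8}       # held-out items the user actually interacted with
print(recall_at_k(ranked, relevant, k=5), round(ndcg_at_k(ranked, relevant, k=5), 3))
```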
In conclusion, researchers from Apple have introduced BayesCNS, a Bayesian online learning approach that effectively addresses cold start and non-stationarity in large-scale search systems. The method predicts prior user-item interaction distributions from contextual item features, using a novel deep neural network parameterization that learns expressive priors while enabling efficient posterior updates. Its efficacy has been demonstrated through comprehensive evaluation showing significant improvements in key metrics such as click-through rates, new item impression rates, and overall user success. These findings highlight the potential of BayesCNS to improve the performance of search and recommendation systems in dynamic, real-world environments.
Check out the Paper. All credit for this research goes to the researchers of this project.
Sajjad Ansari is a final-year undergraduate from IIT Kharagpur. As a tech enthusiast, he delves into the practical applications of AI with a focus on understanding the impact of AI technologies and their real-world implications. He aims to articulate complex AI concepts in a clear and accessible manner.