Xue Xia, Software Engineer, Homefeed Ranking; Neng Gu, Software Engineer, Content & User Understanding; Dhruvil Deven Badani, Engineering Manager, Homefeed Ranking; Andrew Zhai, Software Engineer, Advanced Technologies Group
In this blog post, we will demonstrate how we improved Pinterest Homefeed engagement volume from a machine learning model design perspective: by leveraging realtime user action features in the Homefeed recommender system.
The homepage of Pinterest is one of the most important surfaces for pinners to discover inspirational ideas, and it contributes a large fraction of overall user engagement. The pins shown in the top positions on the Homefeed need to be personalized to create an engaging pinner experience. We retrieve a small fraction of the large volume of pins created on Pinterest as Homefeed candidate pins, according to user interest, followed boards, etc. To present the most relevant content to pinners, we then use a Homefeed ranking model (aka the Pinnability model) to rank the retrieved candidates by accurately predicting their personalized relevance to given users. Therefore, the Homefeed ranking model plays an important role in improving the pinner experience. Pinnability is a state-of-the-art neural network model that consumes pin signals, user signals, context signals, etc. and predicts the user's action given a pin. The high-level architecture is shown in Figure 3.
The Pinnability model has been using pretrained user embeddings to model a user's interest and preference. For example, we use PinnerFormer (PinnerSAGE V3), a static, offline-learned user representation that captures a user's long-term interest by leveraging their past interaction history on Pinterest.
However, there are still some aspects that pretrained embeddings like PinnerSAGE do not cover, and we can fill in the gap by using a realtime user action sequence feature:
- Model pinners' short-term interest: PinnerSAGE is trained using thousands of user actions over a long time horizon, so it mostly captures long-term interest. In contrast, the realtime user action sequence models short-term user interest and is complementary to the PinnerSAGE embedding.
- More responsive: Unlike static features, realtime signals are able to respond faster. This is especially helpful for new, casual, and resurrected users who do not have much past engagement.
- End-to-end optimization for the recommendation model objective: We use the user action sequence feature as a direct input to the recommendation model and optimize directly for the model objectives. Unlike with PinnerSAGE, we can attend the candidate pin features with each individual sequence action for more flexibility.
In order to give pinners realtime feedback on their recent actions and improve the user experience on Homefeed, we propose to incorporate the realtime user action sequence signal into the recommendation model.
A stable, low-latency, realtime feature pipeline supports a robust online recommendation system. We serve the latest 100 user actions as a sequence, populated with pin embeddings and other metadata. The overall architecture can be segmented into event-time and request-time processing, as shown in Figure 2.
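As a rough illustration, the served sequence feature can be thought of as a per-user record like the sketch below. The field names, types, and structure here are assumptions for illustration only, not the production schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UserActionEvent:
    # One engagement event in the realtime sequence (illustrative fields).
    pin_embedding: List[float]   # GraphSage embedding of the engaged pin
    action_type: int             # e.g., repin, click, hide (enum id)
    timestamp_ms: int            # when the engagement happened

@dataclass
class RealtimeUserSequence:
    # Latest 100 user actions, most recent first, served at request time.
    user_id: int
    events: List[UserActionEvent]
```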
To minimize application downtime and signal failure, we invested in the following areas (a sketch of the validation and leakage checks follows the list below):
ML side
- Strict feature/schema validation
- Handling of delayed-arrival events to prevent data leakage
- Itemized action monitoring for data shift over time
Ops side
- Stats monitoring on core job health, latency/throughput, etc.
- Comprehensive on-call coverage for minimal application downtime
- Event recovery strategy
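A minimal sketch of the kind of schema validation and delayed-event handling mentioned above, reusing the illustrative record sketched earlier (this is an assumption of how such checks could look, not the production code):

```python
def validate_and_filter(sequence, request_time_ms, max_len=100, emb_dim=256):
    """Enforce the expected schema and drop events whose timestamps are at or
    after the request time, so late-arriving events cannot leak future data."""
    valid_events = []
    for event in sequence.events[:max_len]:
        # Schema checks: embedding dimension and a known action type id.
        if len(event.pin_embedding) != emb_dim:
            continue
        if event.action_type < 0:
            continue
        # Data-leakage check: only keep events strictly before the request.
        if event.timestamp_ms >= request_time_ms:
            continue
        valid_events.append(event)
    return valid_events
```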
We generated the following features for the Homefeed recommender model:
Figure 3 is an overview of our Homefeed ranking model. The model consumes a <user, pin> pair and predicts the action that the user takes on the candidate pin. The input to the Pinnability model consists of signals of various types, including pinner signals, user signals, pin signals, and context signals. We now add a new realtime user sequence signal input and use a sequence processing module to process the sequence features. With all the features transformed, we feed them to an MLP layer with multiple action heads to predict the user action on the candidate pin.
Recent literature has used transformers for recommendation tasks. Some works model the recommendation problem as a sequence prediction task, where the model's input is (S1, S2, …, SL-1) and its expected output is a 'shifted' version of the same sequence: (S2, S3, …, SL). To keep the current Pinnability architecture, we only adopt the encoder part of these models.
To construct the transformer input, we utilized three important realtime user sequence features:
- Engaged pin embedding: pin embeddings (learned GraphSage embeddings) for the most recent 100 engaged pins in the user's history
- Action type: the type of engagement in the user action sequence (e.g., repin, click, hide)
- Timestamp: the timestamp of each engagement in the user's history
We also use the candidate pin embedding to perform early fusion with the above realtime user sequence features.
As illustrated in Figure 3, to construct the input of the sequence transformer module, we stack [candidate_pin_emb, action_emb, engaged_pin_emb] into a matrix. The early fusion of candidate pin and user sequence proved to be important according to online and offline experiments. We also apply a random time window mask on entries in the sequence whose actions were taken within one day of the request time. The random time window mask makes the model less responsive to very recent actions and avoids a drop in diversity. We then feed the result into a transformer encoder. For the initial experiment, we only use one transformer encoder layer. The output of the transformer encoder is a matrix of shape [seq_len, hidden_dim]. We then flatten the output into a vector and feed it, along with all other features, to MLP layers to predict multi-head user actions.
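A minimal PyTorch sketch of this v1.0 module is shown below. The dimensions, module names, and the simple random-drop interpretation of the time window mask are assumptions for illustration, not the production implementation.

```python
import torch
import torch.nn as nn

class UserSequenceTransformerV1(nn.Module):
    """Sketch of the v1.0 sequence module: early-fuse the candidate pin with
    each sequence entry, apply a random time-window mask on recent actions,
    encode with one transformer encoder layer, then flatten for the MLP."""

    def __init__(self, pin_dim=256, action_vocab=16, action_dim=32, hidden_dim=64):
        super().__init__()
        self.action_emb = nn.Embedding(action_vocab, action_dim)
        # Project each stacked [candidate_pin, action, engaged_pin] row into
        # the transformer hidden dimension.
        self.input_proj = nn.Linear(pin_dim + action_dim + pin_dim, hidden_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=1)

    def forward(self, candidate_pin_emb, engaged_pin_emb, action_type,
                action_age_sec, mask_prob=0.5):
        # candidate_pin_emb: [B, pin_dim]; engaged_pin_emb: [B, L, pin_dim]
        # action_type: [B, L] int ids; action_age_sec: [B, L] seconds before request
        B, L, _ = engaged_pin_emb.shape
        cand = candidate_pin_emb.unsqueeze(1).expand(-1, L, -1)
        act = self.action_emb(action_type)
        x = self.input_proj(torch.cat([cand, act, engaged_pin_emb], dim=-1))

        # Random time-window mask (assumed form): during training, randomly
        # drop entries that happened within one day of the request time.
        if self.training:
            within_one_day = action_age_sec < 24 * 3600
            drop = torch.rand(B, L, device=x.device) < mask_prob
            key_padding_mask = within_one_day & drop
        else:
            key_padding_mask = None

        out = self.encoder(x, src_key_padding_mask=key_padding_mask)  # [B, L, H]
        return out.flatten(start_dim=1)  # [B, L * H], joined with other features
```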
In our second iteration of the user sequence module (v1.1), we made some tuning on top of the v1.0 architecture. We increased the number of transformer encoder layers and compressed the transformer output. Instead of flattening the full output matrix, we only took the first 10 output tokens, concatenated them with a max-pooling token, and flattened them into a vector of length (10 + 1) * hidden_dim. The first 10 output tokens capture the user's most recent interests, and the max-pooling token represents the user's longer-term preference. Because the output dimension became much smaller, it is affordable to apply an explicit feature-crossing layer with the DCN v2 architecture on the full feature set, as previously illustrated in Fig. 2.
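As a rough sketch of the v1.1 output compression described above (function and argument names are assumed for illustration):

```python
import torch

def compress_transformer_output(encoder_out: torch.Tensor, num_recent: int = 10):
    """Keep the first `num_recent` output tokens (most recent actions) plus a
    max-pooling token over the whole sequence, then flatten for the MLP/DCN."""
    # encoder_out: [B, seq_len, hidden_dim], sequence ordered most-recent-first
    recent_tokens = encoder_out[:, :num_recent, :]               # [B, 10, H]
    max_pool_token = encoder_out.max(dim=1).values.unsqueeze(1)  # [B, 1, H]
    compact = torch.cat([recent_tokens, max_pool_token], dim=1)  # [B, 11, H]
    return compact.flatten(start_dim=1)                          # [B, (10 + 1) * H]
```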
Challenge 1: Engagement Rate Decay
Through online experiments, we observed that user engagement metrics gradually decayed in the group receiving the realtime action sequence treatment. Figure 6 demonstrates that, for the same model architecture, if we do not retrain the model, the engagement gain is much smaller than if we retrain it on fresh data.
Our hypothesis is that a model with realtime features is quite time sensitive and requires frequent retraining. To verify this hypothesis, we retrained both the control model (without the realtime user action feature) and the treatment model (with the realtime user action feature) at the same time and compared the effect of retraining on each. As shown in Figure 6, retraining benefits the treatment model much more than the control model.
Therefore, to address the engagement decay issue, we retrain the realtime sequence model twice per week. With this in place, the engagement rate has become much more stable.
Challenge 2: Serving a Large Model at Organic Scale
With the transformer module introduced into the recommender model, the complexity increased significantly. Before this work, Pinterest had been serving the Homefeed ranking model on CPU clusters. Our model increases CPU latency by more than 20x. We therefore migrated the ranking model to GPU serving and are able to keep latency neutral at the same cost.
On Pinterest, one of the most important user actions is repin, or save. Repin is one of the key indicators of user engagement on the platform. Therefore, we approximate the user engagement level with repin volume and use repin volume to evaluate model performance.
Offline Evaluation
We performed offline evaluation on different models that process the realtime user sequence feature. Specifically, we tried the following architectures (two of them are sketched in code after the list):
- Average Pooling: the simplest architecture, in which we use the average of the pin embeddings in the user sequence to represent the user's short-term interest
- Convolutional Neural Network (CNN): uses a CNN to encode the sequence of pin embeddings. A CNN is well suited to capturing dependencies across local information
- Recurrent Neural Network (RNN): uses an RNN to encode the sequence of pin embeddings. Compared to a CNN, an RNN better captures longer-term dependencies
- Long Short-Term Memory (LSTM): uses LSTM, a more sophisticated version of RNN that captures long-term dependencies even better than a vanilla RNN by using memory cells and gating
- Vanilla Transformer: encodes only the pin embedding sequence directly using a transformer encoder module
- Improved Transformer v1.0: the improved transformer architecture as illustrated in Figure 4
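For illustration, here is a hedged sketch of the average pooling baseline and the vanilla transformer encoder. Dimensions and module names are assumed rather than taken from the production code.

```python
import torch
import torch.nn as nn

class AveragePoolingEncoder(nn.Module):
    """Represents short-term interest as the mean of engaged pin embeddings."""
    def forward(self, engaged_pin_emb):          # [B, L, pin_dim]
        return engaged_pin_emb.mean(dim=1)       # [B, pin_dim]

class VanillaTransformerEncoder(nn.Module):
    """Encodes only the pin embedding sequence with a transformer encoder."""
    def __init__(self, pin_dim=256, hidden_dim=64, nhead=4, num_layers=1):
        super().__init__()
        self.proj = nn.Linear(pin_dim, hidden_dim)
        layer = nn.TransformerEncoderLayer(d_model=hidden_dim, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, engaged_pin_emb):               # [B, L, pin_dim]
        out = self.encoder(self.proj(engaged_pin_emb))  # [B, L, hidden_dim]
        return out.flatten(start_dim=1)               # [B, L * hidden_dim]
```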
For the Homefeed surface specifically, two of the most important metrics are HIT@3 for repin and hide prediction. For repin, we try to improve repin HIT@3; for hide, the goal is to decrease hide HIT@3.
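A small sketch of a HIT@k style metric follows; the exact production definition is not given in the post, so the formulation below is an assumption.

```python
import numpy as np

def hit_at_k(scores, labels, k=3):
    """For each request, check whether at least one positively labeled pin
    (e.g., repinned or hidden) appears among the top-k ranked candidates."""
    hits = []
    for request_scores, request_labels in zip(scores, labels):
        top_k = np.argsort(request_scores)[::-1][:k]   # indices of top-k pins
        hits.append(int(np.any(np.asarray(request_labels)[top_k] == 1)))
    return float(np.mean(hits))

# Example: one request with 5 candidates; the repinned pin is ranked 2nd.
print(hit_at_k(scores=[[0.9, 0.8, 0.1, 0.3, 0.2]],
               labels=[[0, 1, 0, 0, 0]], k=3))  # -> 1.0
```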
The offline results show that even the vanilla transformer, using only pin embeddings, already performs better than the other architectures. The improved transformer architecture showed very strong offline results: +8.87% offline repin and a -13.49% drop in hides. The gains of improved transformer v1.0 over the vanilla transformer came from several aspects:
- Using action embeddings: this helps the model distinguish positive and negative engagement
- Early fusion of candidate pin and user sequence: this contributes the majority of the engagement gain, according to online and offline experiments
- Random time window mask: this helps with diversity
Online Evaluation
We then conducted an online A/B experiment on 1.5% of the total traffic with the improved transformer model v1.0. During the online experiment, we observed that repin volume for overall users increased by 6%. We define the set of new, casual, and resurrected users as non-core users, and we observed that the repin volume gain for non-core users reached 11%. Aligning with the offline evaluation, hide volume decreased by 10%.
Recently, we tried transformer model v1.1 as illustrated in Figure 4, and we achieved an additional 5% repin gain on top of the v1.0 model. Hide volume remains neutral relative to v1.0.
Production Metrics (Full Traffic)
We want to call out an interesting observation: the online experiment underestimates the power of the realtime user action sequence. We observed a larger gain after we rolled the model out as the production Homefeed ranking model to full traffic. This is because of the learning effect of a positive feedback loop:
- As users see a more responsive Homefeed, they tend to engage with more relevant content, and their behavior changes (for example, more clicks or repins)
- With this behavior change, the realtime user sequence that logs their behavior also shifts; for example, there are more repin actions in the sequence. We then generate training data with this shifted user sequence feature.
- As we retrain the Homefeed ranking model on this shifted dataset, there is a positive compounding effect that makes the retrained model more powerful, which in turn yields a higher engagement rate. This then loops back to step 1.
The actual Homefeed repin volume increase that we observed after shipping this model to production is larger than the online experiment results. However, we cannot disclose the exact number in this blog.
Our work using realtime user action signals in Pinterest's Homefeed recommender system has greatly improved Homefeed relevance. The transformer architecture turned out to work best among the sequence modeling approaches we tried. There were various challenges along the way that were non-trivial to address. We discovered that retraining the model with the realtime sequence feature is essential to sustain user engagement, and that GPU serving is indispensable for large-scale, complex models.
It is exciting to see the large gain from this work, but what is even more exciting is that we know there is still much more room to improve. To continue improving the pinner experience, we will work on the following aspects:
- Feature improvement: We plan to develop a more fine-grained realtime sequence signal that includes more action types and action metadata.
- GPU serving optimization: This is the first use case of serving large models on GPU clusters at organic scale. We plan to improve GPU serving usability and performance.
- Model iteration: We will continue iterating on the model so that we fully utilize the realtime signal.
- Adoption on other surfaces: We will try similar ideas on other surfaces: Related Pins, notifications, search, etc.
This work is a result of collaboration across multiple teams at Pinterest. Many thanks to the following people who contributed to this project:
- GPU serving optimization: Po-Wei Wang, Pong Eksombatchai, Nazanin Farahpour, Zhiyuan Zhang, Saurabh Joshi, Li Tang
- Technical support on ML: Nikil Pancha
- Signal generation and serving: Yitong Zhou
- Fast controllability distribution convergence: Ludek Cigler
To learn more about engineering at Pinterest, check out the rest of our Engineering Blog and visit our Pinterest Labs site. To explore life at Pinterest, visit our Careers page.