deepctr.models.multitask.ple module

Author:

Mincai Lai, laimc@shanghaitech.edu.cn

Weichen Shen, weichenswc@163.com

Reference:
[1] Tang H, Liu J, Zhao M, et al. Progressive Layered Extraction (PLE): A Novel Multi-Task Learning (MTL) Model for Personalized Recommendations[C]//Fourteenth ACM Conference on Recommender Systems. 2020. (https://dl.acm.org/doi/10.1145/3383313.3412236)
deepctr.models.multitask.ple.PLE(dnn_feature_columns, shared_expert_num=1, specific_expert_num=1, num_levels=2, expert_dnn_hidden_units=(256, ), tower_dnn_hidden_units=(64, ), gate_dnn_hidden_units=(), l2_reg_embedding=1e-05, l2_reg_dnn=0, seed=1024, dnn_dropout=0, dnn_activation='relu', dnn_use_bn=False, task_types=('binary', 'binary'), task_names=('ctr', 'ctcvr'))

Instantiates the multi-level Customized Gate Control (CGC) of the Progressive Layered Extraction architecture.

Parameters:
  • dnn_feature_columns – An iterable containing all the features used by the deep part of the model.
  • shared_expert_num – integer, number of task-shared experts.
  • specific_expert_num – integer, number of task-specific experts.
  • num_levels – integer, number of CGC levels.
  • expert_dnn_hidden_units – list of positive integers or empty list, the layer number and units in each layer of the expert DNN.
  • tower_dnn_hidden_units – list of positive integers or empty list, the layer number and units in each layer of the task-specific (tower) DNN.
  • gate_dnn_hidden_units – list of positive integers or empty list, the layer number and units in each layer of the gate DNN.
  • l2_reg_embedding – float. L2 regularizer strength applied to embedding vector.
  • l2_reg_dnn – float. L2 regularizer strength applied to DNN.
  • seed – integer, to use as random seed.
  • dnn_dropout – float in [0,1), the probability we will drop out a given DNN coordinate.
  • dnn_activation – Activation function to use in DNN.
  • dnn_use_bn – bool. Whether to use BatchNormalization before activation in the DNN.
  • task_types – list of str, indicating the loss of each task: "binary" for binary log loss, "regression" for regression loss, e.g. ['binary', 'regression'].
  • task_names – list of str, indicating the prediction target of each task.
Returns:

a Keras model instance.
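
Example:

A minimal usage sketch. The feature columns, field names, sizes, and synthetic data below are illustrative assumptions, not part of the PLE API; it assumes PLE is importable from deepctr.models and that SparseFeat, DenseFeat, and get_feature_names are available from deepctr.feature_column.

    import numpy as np
    from deepctr.feature_column import SparseFeat, DenseFeat, get_feature_names
    from deepctr.models import PLE

    # Hypothetical feature columns for the deep part of the model.
    feature_columns = [
        SparseFeat('user_id', vocabulary_size=1000, embedding_dim=8),
        SparseFeat('item_id', vocabulary_size=5000, embedding_dim=8),
        DenseFeat('price', 1),
    ]
    feature_names = get_feature_names(feature_columns)

    # Synthetic inputs and two binary targets (ctr, ctcvr) for illustration.
    n = 256
    model_input = {
        'user_id': np.random.randint(0, 1000, n),
        'item_id': np.random.randint(0, 5000, n),
        'price': np.random.random(n),
    }
    y_ctr = np.random.randint(0, 2, n)
    y_ctcvr = np.random.randint(0, 2, n)

    # Two CGC levels, one shared and one task-specific expert per task.
    model = PLE(feature_columns, shared_expert_num=1, specific_expert_num=1,
                num_levels=2, expert_dnn_hidden_units=(256,),
                tower_dnn_hidden_units=(64,),
                task_types=('binary', 'binary'), task_names=('ctr', 'ctcvr'))

    model.compile('adam', loss=['binary_crossentropy', 'binary_crossentropy'],
                  metrics=['AUC'])
    model.fit(model_input, [y_ctr, y_ctcvr], batch_size=64, epochs=1)

The model returns one output per entry in task_names, so the loss list passed to compile and the target list passed to fit follow the same order as task_names.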