deepctr.models.difm module
- Author:
- zanshuxun, zanshuxun@aliyun.com
- Reference:
- [1] Lu W, Yu Y, Chang Y, et al. A Dual Input-aware Factorization Machine for CTR Prediction[C]//IJCAI. 2020: 3139-3145. (https://www.ijcai.org/Proceedings/2020/0434.pdf)
deepctr.models.difm.DIFM(linear_feature_columns, dnn_feature_columns, att_embedding_size=8, att_head_num=8, att_res=True, dnn_hidden_units=(256, 128, 64), l2_reg_linear=1e-05, l2_reg_embedding=1e-05, l2_reg_dnn=0, seed=1024, dnn_dropout=0, dnn_activation='relu', dnn_use_bn=False, task='binary')

Instantiates the DIFM Network architecture.
Parameters: - linear_feature_columns – An iterable containing all the features used by the linear part of the model.
- dnn_feature_columns – An iterable containing all the features used by the deep part of the model.
- att_embedding_size – int. The embedding size in the multi-head self-attention network.
- att_head_num – int. The number of heads in the multi-head self-attention network.
- att_res – bool. Whether or not to use standard residual connections before output.
- dnn_hidden_units – list of positive integers (or an empty list), the number of layers and units in each layer of the DNN.
- l2_reg_linear – float. L2 regularizer strength applied to the linear part.
- l2_reg_embedding – float. L2 regularizer strength applied to embedding vectors.
- l2_reg_dnn – float. L2 regularizer strength applied to the DNN.
- seed – int. Value to use as random seed.
- dnn_dropout – float in [0, 1), the probability of dropping a given DNN coordinate.
- dnn_activation – Activation function to use in the DNN.
- dnn_use_bn – bool. Whether to use BatchNormalization before activation in the DNN.
- task – str, "binary" for binary log loss or "regression" for regression loss.
Returns: A Keras model instance.