deepctr.models.deepfefm module

Author:
Harshit Pande
Reference:

[1] Field-Embedded Factorization Machines for Click-through Rate Prediction (https://arxiv.org/pdf/2009.09931.pdf)

This module also supports all the possible ablation studies for reproducibility.

deepctr.models.deepfefm.DeepFEFM(linear_feature_columns, dnn_feature_columns, use_fefm=True, dnn_hidden_units=(256, 128, 64), l2_reg_linear=1e-05, l2_reg_embedding_feat=1e-05, l2_reg_embedding_field=1e-05, l2_reg_dnn=0, seed=1024, dnn_dropout=0.0, exclude_feature_embed_in_dnn=False, use_linear=True, use_fefm_embed_in_dnn=True, dnn_activation='relu', dnn_use_bn=False, task='binary')[source]

Instantiates the DeepFEFM network architecture or the shallow FEFM architecture (ablation studies supported).

Parameters:
  • linear_feature_columns – An iterable containing all the features used by the linear part of the model.
  • dnn_feature_columns – An iterable containing all the features used by the deep part of the model.
  • use_fefm – bool, whether to use the FEFM logit (doesn’t affect FEFM embeddings in the DNN; controls only the use of the final FEFM logit)
  • dnn_hidden_units – list of positive int or empty list, the layer number and units in each layer of the DNN
  • l2_reg_linear – float. L2 regularizer strength applied to linear part
  • l2_reg_embedding_feat – float. L2 regularizer strength applied to embedding vector of features
  • l2_reg_embedding_field – float. L2 regularizer strength applied to field embeddings
  • l2_reg_dnn – float. L2 regularizer strength applied to DNN
  • seed – integer, to use as random seed.
  • dnn_dropout – float in [0,1), the probability we will drop out a given DNN coordinate.
  • exclude_feature_embed_in_dnn – bool, used in ablation studies to remove feature embeddings from the DNN
  • use_linear – bool, whether to use the linear logit (used in ablation studies)
  • use_fefm_embed_in_dnn – bool, True if FEFM interaction embeddings are to be used in the DNN (set False for ablation)
  • dnn_activation – Activation function to use in DNN
  • dnn_use_bn – bool. Whether to use BatchNormalization before activation in the DNN
  • task – str, "binary" for binary logloss or "regression" for regression loss
Returns:

A Keras model instance.
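
A minimal usage sketch (not part of the original documentation), assuming the standard DeepCTR feature-column API (SparseFeat, DenseFeat, get_feature_names); the feature names, vocabulary sizes, and toy data below are illustrative only:

import numpy as np
from deepctr.feature_column import SparseFeat, DenseFeat, get_feature_names
from deepctr.models import DeepFEFM

# Illustrative features: two sparse (categorical) columns and one dense column.
sparse_features = ['user_id', 'item_id']
dense_features = ['price']

feature_columns = [SparseFeat(feat, vocabulary_size=1000, embedding_dim=16)
                   for feat in sparse_features]
feature_columns += [DenseFeat(feat, 1) for feat in dense_features]

# In this sketch the same columns feed both the linear and the deep parts.
linear_feature_columns = feature_columns
dnn_feature_columns = feature_columns

model = DeepFEFM(linear_feature_columns, dnn_feature_columns, task='binary')
model.compile('adam', 'binary_crossentropy', metrics=['binary_crossentropy'])

# Toy inputs keyed by feature name, as DeepCTR models expect.
feature_names = get_feature_names(linear_feature_columns + dnn_feature_columns)
n = 64
data = {name: np.random.randint(0, 1000, n) for name in sparse_features}
data['price'] = np.random.random(n)
labels = np.random.randint(0, 2, n)

model.fit({name: data[name] for name in feature_names}, labels,
          batch_size=32, epochs=1, verbose=0)

The ablation flags documented above (use_fefm, use_linear, use_fefm_embed_in_dnn, exclude_feature_embed_in_dnn, and an empty dnn_hidden_units) can be toggled on the same call to reproduce the shallow FEFM and other ablation variants.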