deepctr.estimator.models.fwfm module

Author:
Weichen Shen, weichenswc@163.com; Harshit Pande
Reference:
[1] Field-weighted Factorization Machines for Click-Through Rate Prediction in Display Advertising (https://arxiv.org/pdf/1806.03514.pdf)
deepctr.estimator.models.fwfm.FwFMEstimator(linear_feature_columns, dnn_feature_columns, dnn_hidden_units=(256, 128, 64), l2_reg_linear=1e-05, l2_reg_embedding=1e-05, l2_reg_field_strength=1e-05, l2_reg_dnn=0, seed=1024, dnn_dropout=0, dnn_activation='relu', dnn_use_bn=False, task='binary', model_dir=None, config=None, linear_optimizer='Ftrl', dnn_optimizer='Adagrad', training_chief_hooks=None)[source]

Instantiates the DeepFwFM Network architecture.
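For orientation, a sketch of the model based on reference [1] (not taken from the docstring): for features x_1, ..., x_m with embeddings v_i, FwFM scores

    \hat{y}(x) = w_0 + \sum_{i=1}^{m} x_i w_i
               + \sum_{i=1}^{m} \sum_{j=i+1}^{m} x_i x_j \langle v_i, v_j \rangle \, r_{F(i),F(j)}

where F(i) denotes the field of feature i and r_{F(i),F(j)} is a learnable scalar weight for that field pair; these r values are the "field pair strength parameters" regularized by l2_reg_field_strength. In the DeepFwFM variant instantiated here, a DNN over the embeddings, configured by dnn_hidden_units, is combined with this interaction term.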

Parameters:
  • linear_feature_columns – An iterable containing all the features used by the linear part of the model.
  • dnn_feature_columns – An iterable containing all the features used by the deep part of the model.
  • dnn_hidden_units – list of positive integers, or an empty list if no DNN is wanted; the number of layers and the units in each layer of the DNN.
  • l2_reg_linear – float. L2 regularizer strength applied to the linear part.
  • l2_reg_field_strength – float. L2 regularizer strength applied to the field pair strength parameters.
  • l2_reg_embedding – float. L2 regularizer strength applied to embedding vectors.
  • l2_reg_dnn – float. L2 regularizer strength applied to the DNN.
  • seed – integer, to use as random seed.
  • dnn_dropout – float in [0, 1), the probability of dropping out a given DNN coordinate.
  • dnn_activation – Activation function to use in the DNN.
  • dnn_use_bn – bool. Whether to use BatchNormalization before activation in the DNN.
  • task – str, "binary" for binary logloss or "regression" for regression loss.
  • model_dir – Directory to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
  • config – tf.RunConfig object to configure the runtime settings.
  • linear_optimizer – An instance of tf.Optimizer used to apply gradients to the linear part of the model. Defaults to FTRL optimizer.
  • dnn_optimizer – An instance of tf.Optimizer used to apply gradients to the deep part of the model. Defaults to Adagrad optimizer.
  • training_chief_hooks – Iterable of tf.train.SessionRunHook objects to run on the chief worker during training.
Returns:

A TensorFlow Estimator instance.
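
Below is a minimal usage sketch, not part of the API reference. The feature names, vocabulary sizes, toy data, and hyper-parameters are illustrative assumptions; only FwFMEstimator and standard tf.feature_column / tf.data / tf.estimator calls are taken as given, and which TensorFlow versions this runs on depends on DeepCTR's Estimator support.

    import numpy as np
    import tensorflow as tf

    from deepctr.estimator.models.fwfm import FwFMEstimator

    # Toy data: one sparse id column "C1" and one dense column "I1" (illustrative).
    data = {
        "C1": np.array([0, 1, 2, 3, 0, 1, 2, 3], dtype=np.int64),
        "I1": np.array([0.1, 0.4, 0.3, 0.8, 0.2, 0.9, 0.5, 0.7], dtype=np.float32),
    }
    labels = np.array([0, 1, 0, 1, 0, 1, 0, 1], dtype=np.float32)

    # The linear part takes raw categorical/numeric columns; the deep part
    # needs embedding columns for the sparse features.
    c1 = tf.feature_column.categorical_column_with_identity("C1", num_buckets=4)
    i1 = tf.feature_column.numeric_column("I1")
    linear_feature_columns = [c1, i1]
    dnn_feature_columns = [tf.feature_column.embedding_column(c1, dimension=4), i1]

    def input_fn():
        # Standard Estimator input_fn built on tf.data.
        ds = tf.data.Dataset.from_tensor_slices((data, labels))
        return ds.shuffle(8).batch(4)

    model = FwFMEstimator(linear_feature_columns, dnn_feature_columns,
                          dnn_hidden_units=(64, 32), task="binary")
    model.train(input_fn)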