deepctr.estimator.models.autoint module

Author:
Weichen Shen, weichenswc@163.com
Reference:
[1] Song W, Shi C, Xiao Z, et al. AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks[J]. arXiv preprint arXiv:1810.11921, 2018. (https://arxiv.org/abs/1810.11921)
deepctr.estimator.models.autoint.AutoIntEstimator(linear_feature_columns, dnn_feature_columns, att_layer_num=3, att_embedding_size=8, att_head_num=2, att_res=True, dnn_hidden_units=(256, 256), dnn_activation='relu', l2_reg_linear=1e-05, l2_reg_embedding=1e-05, l2_reg_dnn=0, dnn_use_bn=False, dnn_dropout=0, seed=1024, task='binary', model_dir=None, config=None, linear_optimizer='Ftrl', dnn_optimizer='Adagrad', training_chief_hooks=None)

Instantiates the AutoInt Network architecture.

Parameters:
  • linear_feature_columns – An iterable containing all the features used by the linear part of the model.
  • dnn_feature_columns – An iterable containing all the features used by the deep part of the model.
  • att_layer_num – int. The number of InteractingLayer layers to use.
  • att_embedding_size – int. The embedding size in the multi-head self-attention network.
  • att_head_num – int. The number of heads in the multi-head self-attention network.
  • att_res – bool. Whether to use standard residual connections before the output.
  • dnn_hidden_units – list of positive int, or an empty list; the number of layers and the units in each layer of the DNN.
  • dnn_activation – Activation function to use in the DNN.
  • l2_reg_linear – float. L2 regularizer strength applied to the linear part.
  • l2_reg_embedding – float. L2 regularizer strength applied to the embedding vectors.
  • l2_reg_dnn – float. L2 regularizer strength applied to the DNN.
  • dnn_use_bn – bool. Whether to use BatchNormalization before activation in the DNN.
  • dnn_dropout – float in [0,1), the probability of dropping out a given DNN unit.
  • seed – int, to use as the random seed.
  • task – str, "binary" for binary logloss or "regression" for regression loss
  • model_dir – Directory in which to save model parameters, graph, etc. This can also be used to load checkpoints from the directory into an estimator to continue training a previously saved model.
  • config – tf.estimator.RunConfig object to configure the runtime settings.
  • linear_optimizer – An instance of tf.train.Optimizer used to apply gradients to the linear part of the model. Defaults to the FTRL optimizer.
  • dnn_optimizer – An instance of tf.train.Optimizer used to apply gradients to the deep part of the model. Defaults to the Adagrad optimizer.
  • training_chief_hooks – Iterable of tf.train.SessionRunHook objects to run on the chief worker during training.
Returns:

A TensorFlow Estimator instance.
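
Example (a minimal usage sketch; the feature names, vocabulary sizes, toy data and input_fn below are illustrative placeholders, not part of the library, and the columns follow the tf.feature_column style used by DeepCTR's Estimator API)::

    import numpy as np
    import pandas as pd
    import tensorflow as tf

    from deepctr.estimator.models.autoint import AutoIntEstimator

    # Toy data with one sparse feature ('C1') and one dense feature ('I1').
    data = pd.DataFrame({'C1': np.random.randint(0, 10, size=256),
                         'I1': np.random.rand(256).astype('float32'),
                         'label': np.random.randint(0, 2, size=256)})

    # Raw categorical/numeric columns feed the linear part; embedding columns feed the deep part.
    c1 = tf.feature_column.categorical_column_with_identity('C1', num_buckets=10)
    linear_feature_columns = [c1, tf.feature_column.numeric_column('I1')]
    dnn_feature_columns = [tf.feature_column.embedding_column(c1, dimension=8),
                           tf.feature_column.numeric_column('I1')]

    model = AutoIntEstimator(linear_feature_columns, dnn_feature_columns,
                             att_layer_num=3, att_head_num=2, task='binary')

    def train_input_fn():
        # Standard tf.data input_fn for tf.estimator training.
        features = {'C1': data['C1'].values, 'I1': data['I1'].values}
        ds = tf.data.Dataset.from_tensor_slices((features, data['label'].values))
        return ds.shuffle(256).batch(64)

    model.train(train_input_fn)

Since the return value is a standard TensorFlow Estimator, evaluation and prediction follow the usual tf.estimator interface (model.evaluate, model.predict).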