deepctr.layers module¶

class
deepctr.layers.
AFMLayer
(attention_factor=4, l2_reg_w=0, keep_prob=1.0, seed=1024, **kwargs)[source]¶ Attentional Factorization Machine models pairwise (order-2) feature interactions without linear term and bias.
 Input shape
 A list of 3D tensors with shape: (batch_size, 1, embedding_size).
 Output shape
 2D tensor with shape: (batch_size, 1).
 Arguments
 attention_factor : Positive integer, dimensionality of the attention network output space.
 l2_reg_w : float between 0 and 1. L2 regularizer strength applied to the attention network.
 keep_prob : float between 0 and 1. Fraction of the attention net output units to keep.
 seed : A Python integer to use as random seed.
 References
 [Attentional Factorization Machines: Learning the Weight of Feature Interactions via Attention Networks](https://arxiv.org/pdf/1708.04617.pdf)
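The AFM forward pass can be illustrated with a minimal NumPy sketch: element-wise products of all field pairs, attention scores from a one-layer ReLU net, and a softmax-weighted pooling projected to a scalar. The weight names (W, b, h, p) are hypothetical; dropout (keep_prob) and regularization are omitted. This is not the library implementation.

```python
import numpy as np

def afm_sketch(embeds, W, b, h, p):
    """Hypothetical sketch of the AFM forward pass (not the library code).

    embeds : list of (batch_size, embedding_size) field embeddings
    W, b   : attention net weights, shapes (embedding_size, attention_factor)
             and (attention_factor,)
    h      : attention projection vector, shape (attention_factor,)
    p      : output projection vector, shape (embedding_size,)
    """
    n = len(embeds)
    # Element-wise products of all field pairs: (batch, n*(n-1)/2, E)
    pairs = np.stack([embeds[i] * embeds[j]
                      for i in range(n) for j in range(i + 1, n)], axis=1)
    # Attention scores via a one-layer ReLU net, then softmax over the pairs
    scores = np.maximum(pairs @ W + b, 0.0) @ h        # (batch, n_pairs)
    scores -= scores.max(axis=1, keepdims=True)        # numerical stability
    att = np.exp(scores)
    att /= att.sum(axis=1, keepdims=True)
    # Attention-weighted sum of pair vectors, projected to a scalar per sample
    pooled = (att[..., None] * pairs).sum(axis=1)      # (batch, E)
    return pooled @ p[:, None]                         # (batch, 1)
```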

call
(inputs, **kwargs)[source]¶ This is where the layer’s logic lives.
 Arguments:
 inputs: Input tensor, or list/tuple of input tensors. **kwargs: Additional keyword arguments.
 Returns:
 A tensor or list/tuple of tensors.

get_config
()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Container (one layer of abstraction above).
 Returns:
 Python dictionary.

class
deepctr.layers.
BiInteractionPooling
(**kwargs)[source]¶ Bi-Interaction Layer used in Neural FM, which compresses the pairwise element-wise product of features into one single vector.
 Input shape
 A 3D tensor with shape: (batch_size, field_size, embedding_size).
 Output shape
 3D tensor with shape: (batch_size, 1, embedding_size).
 References
 [He X, Chua T S. Neural factorization machines for sparse predictive analytics[C]//Proceedings of the 40th International ACM SIGIR conference on Research and Development in Information Retrieval. ACM, 2017: 355-364.](http://arxiv.org/abs/1708.05027)
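Bi-interaction pooling admits a well-known linear-time identity: the sum of all pairwise element-wise products equals half of (square-of-sum minus sum-of-squares). A minimal NumPy sketch (an illustration, not the library code):

```python
import numpy as np

def bi_interaction_pooling(x):
    """Compress pairwise element-wise products of field embeddings.

    x : (batch_size, field_size, embedding_size)
    returns (batch_size, 1, embedding_size), using the identity
    sum_{i<j} x_i * x_j = 0.5 * ((sum_i x_i)^2 - sum_i x_i^2).
    """
    square_of_sum = np.sum(x, axis=1, keepdims=True) ** 2
    sum_of_square = np.sum(x ** 2, axis=1, keepdims=True)
    return 0.5 * (square_of_sum - sum_of_square)
```

The identity avoids materializing all field_size*(field_size-1)/2 pairs explicitly.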

class
deepctr.layers.
CIN
(layer_size=(128, 128), activation='relu', split_half=True, seed=1024, **kwargs)[source]¶ Compressed Interaction Network used in xDeepFM. This implementation is adapted from code that the author of the paper published at https://github.com/Leavingseason/xDeepFM.
 Input shape
 3D tensor with shape: (batch_size, field_size, embedding_size).
 Output shape
 2D tensor with shape: (batch_size, featuremap_num), where featuremap_num = sum(self.layer_size[:-1]) // 2 + self.layer_size[-1] if split_half=True, else sum(layer_size).
 Arguments
 layer_size : list of int. Feature maps in each layer.
 activation : activation function used on feature maps.
 split_half : bool. If set to True, half of the feature maps in each hidden layer will connect to the output unit.
 seed : A Python integer to use as random seed.
 References
 [Lian J, Zhou X, Zhang F, et al. xDeepFM: Combining Explicit and Implicit Feature Interactions for Recommender Systems[J]. arXiv preprint arXiv:1803.05170, 2018.](https://arxiv.org/pdf/1803.05170.pdf)
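The output-shape formula above can be checked with a tiny helper (a sketch following the documented formula, not library code): with split_half=True, only half of each hidden layer's feature maps, except the last layer's, feed the output unit.

```python
def featuremap_num(layer_size=(128, 128), split_half=True):
    # With split_half=True, half of each hidden layer's feature maps
    # (except the last layer's) connect to the output unit; the rest
    # are passed on to the next CIN layer.
    if split_half:
        return sum(layer_size[:-1]) // 2 + layer_size[-1]
    return sum(layer_size)
```

For the default layer_size=(128, 128), this gives 128 // 2 + 128 = 192 with split_half=True, and 256 otherwise.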

call
(inputs, **kwargs)[source]¶ This is where the layer’s logic lives.
 Arguments:
 inputs: Input tensor, or list/tuple of input tensors. **kwargs: Additional keyword arguments.
 Returns:
 A tensor or list/tuple of tensors.

get_config
()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Container (one layer of abstraction above).
 Returns:
 Python dictionary.

class
deepctr.layers.
CrossNet
(layer_num=2, l2_reg=0, seed=1024, **kwargs)[source]¶ The Cross Network part of the Deep & Cross Network model, which learns both low- and high-degree cross features.
 Input shape
 2D tensor with shape: (batch_size, units).
 Output shape
 2D tensor with shape: (batch_size, units).
 Arguments
 layer_num: Positive integer, the cross layer number
 l2_reg: float between 0 and 1. L2 regularizer strength applied to the kernel weights matrix
 seed: A Python integer to use as random seed.
 References
 [Wang R, Fu B, Fu G, et al. Deep & cross network for ad click predictions[C]//Proceedings of the ADKDD‘17. ACM, 2017: 12.](https://arxiv.org/abs/1708.05123)
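Each cross layer applies the recurrence x_{l+1} = x_0 * (x_l . w_l) + b_l + x_l from the DCN paper. A hedged NumPy sketch (parameter names are hypothetical; this is not the library implementation):

```python
import numpy as np

def cross_net(x0, kernels, biases):
    """Sketch of the cross layers: x_{l+1} = x0 * (x_l . w_l) + b_l + x_l.

    x0 : (batch_size, units) input
    kernels, biases : layer_num vectors of shape (units,) each
    """
    x = x0
    for w, b in zip(kernels, biases):
        xl_w = x @ w                      # scalar interaction term per sample
        x = x0 * xl_w[:, None] + b + x    # explicit feature crossing + residual
    return x
```

The residual term means that with all kernels and biases at zero, the layer reduces to the identity.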

call
(inputs, **kwargs)[source]¶ This is where the layer’s logic lives.
 Arguments:
 inputs: Input tensor, or list/tuple of input tensors. **kwargs: Additional keyword arguments.
 Returns:
 A tensor or list/tuple of tensors.

get_config
()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Container (one layer of abstraction above).
 Returns:
 Python dictionary.

class
deepctr.layers.
FM
(**kwargs)[source]¶ Factorization Machine models pairwise (order-2) feature interactions without linear term and bias.
 Input shape
 3D tensor with shape: (batch_size, field_size, embedding_size).
 Output shape
 2D tensor with shape: (batch_size, 1).
 References
 [Factorization Machines](https://www.csie.ntu.edu.tw/~b97053/paper/Rendle2010FM.pdf)
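The FM second-order term over embeddings can be sketched in NumPy using Rendle's O(n*k) identity (an illustration, not the library code):

```python
import numpy as np

def fm_sketch(x):
    """Order-2 FM interactions over embeddings, via the identity
    sum_{i<j} <v_i, v_j> = 0.5 * sum_k ((sum_i v_ik)^2 - sum_i v_ik^2).

    x : (batch_size, field_size, embedding_size)
    returns (batch_size, 1)
    """
    square_of_sum = np.sum(x, axis=1) ** 2     # (batch, embedding_size)
    sum_of_square = np.sum(x ** 2, axis=1)     # (batch, embedding_size)
    return 0.5 * np.sum(square_of_sum - sum_of_square, axis=1, keepdims=True)
```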

class
deepctr.layers.
InnerProductLayer
(reduce_sum=True, **kwargs)[source]¶ InnerProduct Layer used in PNN that computes the element-wise product or inner product between feature vectors.
 Input shape
 A list of N 3D tensors with shape: (batch_size, 1, embedding_size).
 Output shape
 3D tensor with shape: (batch_size, N*(N-1)/2, 1) if reduce_sum is used, or 3D tensor with shape: (batch_size, N*(N-1)/2, embedding_size) if reduce_sum is not used.
 Arguments
 reduce_sum: bool. Whether to return the inner product or the element-wise product.
 References
 [Qu Y, Cai H, Ren K, et al. Product-based neural networks for user response prediction[C]//Data Mining (ICDM), 2016 IEEE 16th International Conference on. IEEE, 2016: 1149-1154.](https://arxiv.org/pdf/1611.00144.pdf)
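A minimal NumPy sketch of the pairwise products and the effect of reduce_sum on the output shape (an illustration, not the library code):

```python
import numpy as np

def inner_product_layer(embeds, reduce_sum=True):
    """embeds : list of N arrays of shape (batch_size, 1, embedding_size)."""
    n = len(embeds)
    # Element-wise product for every unordered field pair: N*(N-1)/2 pairs
    prods = [embeds[i] * embeds[j] for i in range(n) for j in range(i + 1, n)]
    out = np.concatenate(prods, axis=1)            # (batch, N*(N-1)/2, E)
    if reduce_sum:
        # Summing over the embedding axis turns each pair into an inner product
        out = out.sum(axis=2, keepdims=True)       # (batch, N*(N-1)/2, 1)
    return out
```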

call
(inputs, **kwargs)[source]¶ This is where the layer’s logic lives.
 Arguments:
 inputs: Input tensor, or list/tuple of input tensors. **kwargs: Additional keyword arguments.
 Returns:
 A tensor or list/tuple of tensors.

get_config
()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Container (one layer of abstraction above).
 Returns:
 Python dictionary.

class
deepctr.layers.
InteractingLayer
(att_embedding_size=8, head_num=2, use_res=True, seed=1024, **kwargs)[source]¶ A layer used in AutoInt that models the correlations between different feature fields by a multi-head self-attention mechanism.
 Input shape
 A 3D tensor with shape: (batch_size, field_size, embedding_size).
 Output shape
 3D tensor with shape: (batch_size, field_size, att_embedding_size * head_num).
 Arguments
 att_embedding_size: int. The embedding size in the multi-head self-attention network.
 head_num: int. The number of heads in the multi-head self-attention network.
 use_res: bool. Whether to use standard residual connections before output.
 seed: A Python integer to use as random seed.
 References
 [Song W, Shi C, Xiao Z, et al. AutoInt: Automatic Feature Interaction Learning via Self-Attentive Neural Networks[J]. arXiv preprint arXiv:1810.11921, 2018.](https://arxiv.org/abs/1810.11921)
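A single attention head over feature fields can be sketched as below; the actual layer concatenates head_num such heads, hence the att_embedding_size * head_num output dimension. The projection names (Wq, Wk, Wv, Wres) are hypothetical, and this sketch is not the library implementation:

```python
import numpy as np

def interacting_layer(x, Wq, Wk, Wv, Wres, use_res=True):
    """One self-attention head over feature fields.

    x : (batch_size, field_size, embedding_size)
    Wq, Wk, Wv, Wres : (embedding_size, att_embedding_size) projections
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv          # (batch, field_size, att_size)
    scores = q @ k.transpose(0, 2, 1)         # field-to-field attention scores
    scores -= scores.max(axis=-1, keepdims=True)
    att = np.exp(scores)
    att /= att.sum(axis=-1, keepdims=True)    # softmax over fields
    out = att @ v                             # (batch, field_size, att_size)
    if use_res:
        out = out + x @ Wres                  # residual connection before output
    return np.maximum(out, 0.0)               # ReLU
```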

call
(inputs, **kwargs)[source]¶ This is where the layer’s logic lives.
 Arguments:
 inputs: Input tensor, or list/tuple of input tensors. **kwargs: Additional keyword arguments.
 Returns:
 A tensor or list/tuple of tensors.

get_config
()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Container (one layer of abstraction above).
 Returns:
 Python dictionary.

class
deepctr.layers.
LocalActivationUnit
(hidden_size=(64, 32), activation='sigmoid', l2_reg=0, keep_prob=1, use_bn=False, seed=1024, **kwargs)[source]¶ The LocalActivationUnit used in DIN with which the representation of user interests varies adaptively given different candidate items.
 Input shape
 A list of two 3D tensors with shape: (batch_size, 1, embedding_size) and (batch_size, T, embedding_size).
 Output shape
 3D tensor with shape: (batch_size, T, 1).
 Arguments
 hidden_size: list of positive integers, the attention net layer number and units in each layer.
 activation: Activation function to use in the attention net.
 l2_reg: float between 0 and 1. L2 regularizer strength applied to the kernel weights matrix of the attention net.
 keep_prob: float between 0 and 1. Fraction of the attention net units to keep.
 use_bn: bool. Whether to use BatchNormalization before activation in the attention net.
 seed: A Python integer to use as random seed.
 References
 [Zhou G, Zhu X, Song C, et al. Deep interest network for click-through rate prediction[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2018: 1059-1068.](https://arxiv.org/pdf/1706.06978.pdf)
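The DIN attention unit scores each behavior embedding against the candidate item. A hedged sketch with a single hidden layer (the real layer stacks hidden_size layers; the feature construction below, concatenating the pair with their difference and product, follows the DIN paper but parameter names are hypothetical):

```python
import numpy as np

def local_activation_unit(query, keys, W, b, w_out):
    """query : (batch, 1, embedding_size) candidate item embedding
    keys  : (batch, T, embedding_size) user behavior embeddings
    W, b, w_out : weights of a one-hidden-layer attention net
    returns (batch, T, 1) attention scores
    """
    T = keys.shape[1]
    q = np.repeat(query, T, axis=1)                              # (batch, T, E)
    # Feed the pair plus their difference and product to the attention net
    z = np.concatenate([q, keys, q - keys, q * keys], axis=-1)   # (batch, T, 4E)
    hidden = 1.0 / (1.0 + np.exp(-(z @ W + b)))                  # sigmoid layer
    return (hidden @ w_out)[..., None]                           # (batch, T, 1)
```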

call
(inputs, training=None, **kwargs)[source]¶ This is where the layer’s logic lives.
 Arguments:
 inputs: Input tensor, or list/tuple of input tensors. **kwargs: Additional keyword arguments.
 Returns:
 A tensor or list/tuple of tensors.

get_config
()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Container (one layer of abstraction above).
 Returns:
 Python dictionary.

class
deepctr.layers.
MLP
(hidden_size, activation='relu', l2_reg=0, keep_prob=1, use_bn=False, seed=1024, **kwargs)[source]¶ The Multi-Layer Perceptron
 Input shape
 nD tensor with shape: (batch_size, ..., input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).
 Output shape
 nD tensor with shape: (batch_size, ..., hidden_size[-1]). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, hidden_size[-1]).
 Arguments
 hidden_size: list of positive integers, the layer number and units in each layer.
 activation: Activation function to use.
 l2_reg: float between 0 and 1. L2 regularizer strength applied to the kernel weights matrix.
 keep_prob: float between 0 and 1. Fraction of the units to keep.
 use_bn: bool. Whether to use BatchNormalization before activation.
 seed: A Python integer to use as random seed.
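The forward pass is a plain stack of dense layers with an activation after each. A minimal NumPy sketch (dropout, BatchNormalization, and regularization omitted; not the library code):

```python
import numpy as np

def mlp_forward(x, weights, biases):
    """weights[i] : (fan_in, hidden_size[i]); a Dense + ReLU stack.

    Dropout (keep_prob) and BatchNormalization (use_bn) are omitted here.
    """
    for W, b in zip(weights, biases):
        x = np.maximum(x @ W + b, 0.0)    # Dense layer followed by ReLU
    return x
```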

call
(inputs, training=None, **kwargs)[source]¶ This is where the layer’s logic lives.
 Arguments:
 inputs: Input tensor, or list/tuple of input tensors. **kwargs: Additional keyword arguments.
 Returns:
 A tensor or list/tuple of tensors.

get_config
()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Container (one layer of abstraction above).
 Returns:
 Python dictionary.

class
deepctr.layers.
OutterProductLayer
(kernel_type='mat', seed=1024, **kwargs)[source]¶ OutterProduct Layer used in PNN. This implementation is adapted from code that the author of the paper published at https://github.com/Atomu2014/product-nets.
 Input shape
 A list of N 3D tensors with shape: (batch_size, 1, embedding_size).
 Output shape
 2D tensor with shape: (batch_size, N*(N-1)/2).
 Arguments
 kernel_type: str. The kernel weight matrix type to use, can be mat, vec or num.
 seed: A Python integer to use as random seed.
 References
 [Qu Y, Cai H, Ren K, et al. Product-based neural networks for user response prediction[C]//Data Mining (ICDM), 2016 IEEE 16th International Conference on. IEEE, 2016: 1149-1154.](https://arxiv.org/pdf/1611.00144.pdf)

call
(inputs, **kwargs)[source]¶ This is where the layer’s logic lives.
 Arguments:
 inputs: Input tensor, or list/tuple of input tensors. **kwargs: Additional keyword arguments.
 Returns:
 A tensor or list/tuple of tensors.

get_config
()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Container (one layer of abstraction above).
 Returns:
 Python dictionary.

class
deepctr.layers.
PredictionLayer
(activation='sigmoid', use_bias=True, **kwargs)[source]¶  Arguments
 activation: Activation function to use.
 use_bias: bool. Whether to add a bias term or not.
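The layer adds an optional learned bias to the raw score and applies the output activation. A trivial sketch (the bias is a trainable scalar in the actual layer; this is only an illustration):

```python
import numpy as np

def prediction_layer(logits, bias=0.0, activation='sigmoid'):
    """logits : (batch_size, 1) raw scores; bias stands in for the layer's
    trainable bias term (used only when use_bias=True)."""
    z = logits + bias
    return 1.0 / (1.0 + np.exp(-z)) if activation == 'sigmoid' else z
```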

call
(inputs, **kwargs)[source]¶ This is where the layer’s logic lives.
 Arguments:
 inputs: Input tensor, or list/tuple of input tensors. **kwargs: Additional keyword arguments.
 Returns:
 A tensor or list/tuple of tensors.

get_config
()[source]¶ Returns the config of the layer.
A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.
The config of a layer does not include connectivity information, nor the layer class name. These are handled by Container (one layer of abstraction above).
 Returns:
 Python dictionary.