deepctr.layers.core module

Author:
Weichen Shen, weichenswc@163.com
class deepctr.layers.core.DNN(hidden_units, activation='relu', l2_reg=0, dropout_rate=0, use_bn=False, output_activation=None, seed=1024, **kwargs)[source]

The Multi Layer Perceptron.

Input shape
  • nD tensor with shape: (batch_size, ..., input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).
Output shape
  • nD tensor with shape: (batch_size, ..., hidden_units[-1]). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, hidden_units[-1]).
Arguments
  • hidden_units: list of positive integers, the number of layers and the units in each layer.
  • activation: Activation function to use.
  • l2_reg: float between 0 and 1. L2 regularizer strength applied to the kernel weights matrix.
  • dropout_rate: float in [0,1). Fraction of the units to drop out.
  • use_bn: bool. Whether to use BatchNormalization before activation.
  • output_activation: Activation function to use in the last layer. If None, it will be the same as activation.
  • seed: A Python integer to use as random seed.
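
A minimal usage sketch (the 2D input and its input_dim=16 are illustrative assumptions):

    import tensorflow as tf
    from deepctr.layers.core import DNN

    # 2D input with shape (batch_size, input_dim)
    inputs = tf.keras.Input(shape=(16,))
    # two hidden layers of 64 and 32 units -> output shape (batch_size, 32)
    outputs = DNN(hidden_units=(64, 32), activation='relu', dropout_rate=0.5)(inputs)
    model = tf.keras.Model(inputs, outputs)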
build(input_shape)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Arguments:
input_shape: Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
call(inputs, training=None, **kwargs)[source]

This is where the layer’s logic lives.

Arguments:
inputs: Input tensor, or list/tuple of input tensors.
**kwargs: Additional keyword arguments.
Returns:
A tensor or list/tuple of tensors.
compute_output_shape(input_shape)[source]

Computes the output shape of the layer.

If the layer has not been built, this method will call build on the layer. This assumes that the layer will later be used with inputs that match the input shape provided here.

Arguments:
input_shape: Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.
Returns:
An output shape tuple.
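
For instance, a sketch of how the output shape follows from hidden_units (the concrete shape values are illustrative):

    from deepctr.layers.core import DNN

    dnn = DNN(hidden_units=(64, 32))
    dnn.compute_output_shape((None, 16))   # (None, 32): last dim is hidden_units[-1]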
get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns:
Python dictionary.
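
A minimal sketch of the config round-trip described above; the restored layer has the same configuration but fresh, untrained weights:

    from deepctr.layers.core import DNN

    dnn = DNN(hidden_units=(64, 32), dropout_rate=0.5)
    config = dnn.get_config()            # serializable Python dict
    restored = DNN.from_config(config)   # reinstantiated without trained weights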
class deepctr.layers.core.LocalActivationUnit(hidden_units=(64, 32), activation='sigmoid', l2_reg=0, dropout_rate=0, use_bn=False, seed=1024, **kwargs)[source]

The LocalActivationUnit used in DIN, with which the representation of user interests varies adaptively given different candidate items.

Input shape
  • A list of two 3D tensors with shapes: (batch_size, 1, embedding_size) and (batch_size, T, embedding_size)
Output shape
  • 3D tensor with shape: (batch_size, T, 1).
Arguments
  • hidden_units: list of positive integers, the number of layers and the units in each layer of the attention net.
  • activation: Activation function to use in the attention net.
  • l2_reg: float between 0 and 1. L2 regularizer strength applied to the kernel weights matrix of the attention net.
  • dropout_rate: float in [0,1). Fraction of the units to drop out in the attention net.
  • use_bn: bool. Whether to use BatchNormalization before activation in the attention net.
  • seed: A Python integer to use as random seed.
References
  • [Zhou G, Zhu X, Song C, et al. Deep interest network for click-through rate prediction[C]//Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 2018: 1059-1068.](https://arxiv.org/pdf/1706.06978.pdf)
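
A minimal sketch of the two-input call (embedding_size=8 and a behavior length T=5 are illustrative assumptions):

    import tensorflow as tf
    from deepctr.layers.core import LocalActivationUnit

    # candidate item embedding: (batch_size, 1, embedding_size)
    query = tf.keras.Input(shape=(1, 8))
    # user behavior sequence: (batch_size, T, embedding_size)
    keys = tf.keras.Input(shape=(5, 8))
    # one attention score per behavior: (batch_size, T, 1)
    scores = LocalActivationUnit(hidden_units=(64, 32))([query, keys])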
build(input_shape)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Arguments:
input_shape: Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
call(inputs, training=None, **kwargs)[source]

This is where the layer’s logic lives.

Arguments:
inputs: Input tensor, or list/tuple of input tensors.
**kwargs: Additional keyword arguments.
Returns:
A tensor or list/tuple of tensors.
compute_mask(inputs, mask)[source]

Computes an output mask tensor.

Arguments:
inputs: Tensor or list of tensors.
mask: Tensor or list of tensors.
Returns:
None or a tensor (or list of tensors, one per output tensor of the layer).
compute_output_shape(input_shape)[source]

Computes the output shape of the layer.

If the layer has not been built, this method will call build on the layer. This assumes that the layer will later be used with inputs that match the input shape provided here.

Arguments:
input_shape: Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.
Returns:
An output shape tuple.
get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns:
Python dictionary.
class deepctr.layers.core.PredictionLayer(task='binary', use_bias=True, **kwargs)[source]

The final prediction layer. For the "binary" task it applies a sigmoid to produce a probability; for "regression" it passes the value through. An optional global bias term can be added in either case.
Arguments
  • task: str, "binary" for binary logloss or "regression" for regression loss.
  • use_bias: bool. Whether to add a bias term.
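
A minimal sketch mapping a raw logit to the final prediction (the 1-unit logit input is an illustrative assumption):

    import tensorflow as tf
    from deepctr.layers.core import PredictionLayer

    # raw logit with shape (batch_size, 1)
    logit = tf.keras.Input(shape=(1,))
    # probability in (0, 1) for the binary task; use task='regression' for raw values
    prob = PredictionLayer(task='binary')(logit)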
build(input_shape)[source]

Creates the variables of the layer (optional, for subclass implementers).

This is a method that implementers of subclasses of Layer or Model can override if they need a state-creation step in-between layer instantiation and layer call.

This is typically used to create the weights of Layer subclasses.

Arguments:
input_shape: Instance of TensorShape, or list of instances of TensorShape if the layer expects a list of inputs (one instance per input).
call(inputs, **kwargs)[source]

This is where the layer’s logic lives.

Arguments:
inputs: Input tensor, or list/tuple of input tensors.
**kwargs: Additional keyword arguments.
Returns:
A tensor or list/tuple of tensors.
compute_output_shape(input_shape)[source]

Computes the output shape of the layer.

If the layer has not been built, this method will call build on the layer. This assumes that the layer will later be used with inputs that match the input shape provided here.

Arguments:
input_shape: Shape tuple (tuple of integers) or list of shape tuples (one per output tensor of the layer). Shape tuples can include None for free dimensions, instead of an integer.
Returns:
An output shape tuple.
get_config()[source]

Returns the config of the layer.

A layer config is a Python dictionary (serializable) containing the configuration of a layer. The same layer can be reinstantiated later (without its trained weights) from this configuration.

The config of a layer does not include connectivity information, nor the layer class name. These are handled by Network (one layer of abstraction above).

Returns:
Python dictionary.