NN

This module provides a set of classes and functions for building neural networks.

Classes:

  • Module

    Base class for all neural network modules.

  • Parameter

    A special kind of tensor that represents parameters. It acts as a marker so modules can identify learnable parameters. All Parameter tensors have require_grad set to True.

  • BatchNorm1d

    Batch normalization module.

  • LayerNorm1d

    Layer normalization module.

  • Dropout

    Dropout module.

  • Linear

    Linear transformation module.

  • Sequential

    Sequential container module.

  • Residual

    Residual connection module.

  • ReLU

    ReLU activation module.

  • SoftmaxLoss

    Softmax loss module.

  • Flatten

    Flatten module.

BatchNorm1d

Bases: Module

Applies batch normalization to the input tensor.

Parameters:

  • dim (int) –

    Number of input features (the size of the dimension being normalized).

  • eps (float, default: 1e-05 ) –

    Value added to the denominator for numerical stability. Default is 1e-5.

  • momentum (float, default: 0.1 ) –

    Momentum for the moving average. Default is 0.1.

  • device (Device, default: None ) –

    Device on which to place the tensor. Default is CPU.

  • dtype (str, default: 'float32' ) –

    Data type of the tensor. Default is "float32".

Attributes:

  • dim (int) –

    Number of input features (the size of the dimension being normalized).

  • eps (float) –

    Value added to the denominator for numerical stability.

  • momentum (float) –

    Momentum for the moving average.

  • weight (Parameter) –

    Learnable weight parameter.

  • bias (Parameter) –

    Learnable bias parameter.

  • running_mean (Tensor) –

    Running mean of the input tensor.

  • running_var (Tensor) –

    Running variance of the input tensor.

Methods:

  • forward

    Applies batch normalization to the input tensor x.
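The forward computation can be illustrated with a NumPy sketch (this is the standard batch-norm math, not this library's actual implementation; the function name and signature are for illustration only):

```python
import numpy as np

def batchnorm1d_forward(x, weight, bias, running_mean, running_var,
                        eps=1e-5, momentum=0.1, training=True):
    """Batch normalization over the batch axis of a (N, dim) array."""
    if training:
        mean = x.mean(axis=0)  # per-feature batch mean
        var = x.var(axis=0)    # per-feature batch variance
        # Update running statistics with momentum.
        running_mean = (1 - momentum) * running_mean + momentum * mean
        running_var = (1 - momentum) * running_var + momentum * var
    else:
        mean, var = running_mean, running_var  # use stored statistics
    x_hat = (x - mean) / np.sqrt(var + eps)
    return weight * x_hat + bias, running_mean, running_var

x = np.array([[1.0, 2.0], [3.0, 4.0]])
out, rm, rv = batchnorm1d_forward(x, np.ones(2), np.zeros(2),
                                  np.zeros(2), np.ones(2))
```

In training mode each feature of the output has zero mean over the batch; in evaluation mode the stored running statistics are used instead.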

Dropout

Bases: Module

Applies dropout to the input tensor.

Parameters:

  • p (float, default: 0.5 ) –

    Probability of an element to be dropped. Default is 0.5.

Attributes:

  • p (float) –

    Probability of an element to be dropped.

Methods:

  • forward

    Applies dropout to the input tensor x.
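A NumPy sketch of inverted dropout (illustrative only; the function name and rng handling are assumptions, not this library's code):

```python
import numpy as np

def dropout_forward(x, p=0.5, training=True, rng=None):
    """Zero elements with probability p and rescale survivors by 1/(1-p)."""
    if not training or p == 0.0:
        return x  # identity at evaluation time
    rng = rng or np.random.default_rng(0)
    mask = rng.random(x.shape) >= p  # keep each element with probability 1 - p
    return x * mask / (1.0 - p)      # inverted-dropout scaling

x = np.arange(6.0).reshape(2, 3)
out = dropout_forward(x, p=0.5, training=True)
```

The 1/(1-p) rescaling keeps the expected activation unchanged, so no extra scaling is needed at evaluation time.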

Flatten

Bases: Module

Flattens the input tensor into a 2D tensor.

Parameters:

  • X (Tensor) –

    Input tensor to be flattened.

Returns:

  • Tensor

    Flattened tensor.
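The operation amounts to a single reshape that preserves the batch dimension (a sketch, not this library's implementation):

```python
import numpy as np

def flatten_forward(X):
    """Collapse all trailing dimensions, keeping the batch dimension."""
    return X.reshape(X.shape[0], -1)
```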

LayerNorm1d

Bases: Module

Applies layer normalization to the input tensor.

Parameters:

  • x (Tensor) –

    Input tensor to apply layer normalization.

  • dim (int) –

    Dimension to normalize.

  • eps (float, default: 1e-05 ) –

    Epsilon for numerical stability. Default is 1e-5.

  • device (Device, default: None ) –

    Device on which to place the tensor. Default is CPU.

  • dtype (str, default: 'float32' ) –

    Data type of the tensor. Default is "float32".

Returns:

  • Tensor

    Normalized tensor.
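Unlike batch normalization, the statistics here are computed per sample rather than per feature. A NumPy sketch of the math (illustrative; not this library's code):

```python
import numpy as np

def layernorm1d_forward(x, weight, bias, eps=1e-5):
    """Normalize each row (sample) of a (N, dim) array to zero mean, unit variance."""
    mean = x.mean(axis=1, keepdims=True)  # per-sample mean
    var = x.var(axis=1, keepdims=True)    # per-sample variance
    return weight * (x - mean) / np.sqrt(var + eps) + bias

x = np.array([[1.0, 2.0, 3.0]])
out = layernorm1d_forward(x, np.ones(3), np.zeros(3))
```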

Linear

Bases: Module

Applies a linear transformation to the input data.

Attributes:

  • weight (Tensor) –

    The learnable weights of the module of shape (in_features, out_features).

  • bias (Tensor, optional) –

    The learnable bias of the module of shape (1, out_features).

__init__(in_features, out_features, bias=True, device=None, dtype='float32')

Parameters:

  • in_features (int) –

    Size of each input sample.

  • out_features (int) –

    Size of each output sample.

  • bias (bool, default: True ) –

    If set to False, the layer will not learn an additive bias. Default is True.

  • device (Device, default: None ) –

    Device on which to place the tensor. Default is CPU.

  • dtype (str, default: 'float32' ) –

    Data type of the tensor. Default is "float32".
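Given the documented weight shape (in_features, out_features), the forward pass is a matrix product plus a broadcast bias. A NumPy sketch (the function name is illustrative, not the library's API):

```python
import numpy as np

def linear_forward(X, weight, bias=None):
    """Affine map X @ weight (+ bias); weight has shape (in_features, out_features)."""
    out = X @ weight
    if bias is not None:
        out = out + bias  # (1, out_features) bias broadcasts over the batch
    return out

X = np.ones((2, 3))
out = linear_forward(X, np.ones((3, 4)), np.ones((1, 4)))
```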

Module

Base class for all neural network modules. Your module should also subclass this.

Attributes:

  • training (bool) –

    Whether the module is in training mode or not.

__call__(*args, **kwargs)

Forward pass of the module.

Returns:

  • Tensor

    The output tensor of the forward pass.

children()

Returns the list of child modules.

Returns:

  • list[Module]

    List of child modules.

eval()

Sets the module in evaluation mode.

This method sets the training attribute to False, which affects the behavior of certain modules like dropout and batch normalization. It also recursively sets the training attribute of all child modules.

Notes

This method is a no-op if the module is already in evaluation mode.

parameters()

Returns:

  • list[Tensor]

    A list of tensors representing the parameters of the module.

train()

Sets the module in training mode.

This method sets the training attribute to True, which affects the behavior of certain modules like dropout and batch normalization. It also recursively sets the training attribute of all child modules.

Notes

This method is a no-op if the module is already in training mode.
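The recursive train/eval contract can be illustrated with a minimal stand-in class (a sketch of the pattern only; not this library's Module implementation):

```python
class TinyModule:
    """Minimal illustration of the Module train/eval contract."""
    def __init__(self):
        self.training = True
        self._children = []

    def children(self):
        return list(self._children)

    def train(self):
        self.training = True
        for child in self._children:
            child.train()  # recursively flip child modules

    def eval(self):
        self.training = False
        for child in self._children:
            child.eval()

parent = TinyModule()
child = TinyModule()
parent._children.append(child)
parent.eval()  # flips both parent and child to evaluation mode
```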

Parameter

Bases: Tensor

A special kind of tensor that represents parameters. It acts as a marker so modules can identify learnable parameters. All Parameter tensors have require_grad set to True.

ReLU

Bases: Module

Applies the rectified linear unit (ReLU) activation function element-wise.

Parameters:

  • x (Tensor) –

    Input tensor.

Returns:

  • Tensor

    Output tensor with ReLU activation applied element-wise.
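The activation is simply an element-wise max with zero (a NumPy sketch, not this library's implementation):

```python
import numpy as np

def relu_forward(x):
    """Element-wise max(x, 0): negative values become 0, others pass through."""
    return np.maximum(x, 0)
```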

Residual

Bases: Module

Applies a residual connection to the input tensor.

Parameters:

  • fn (Module) –

    The module to apply before adding the residual connection.

Attributes:

  • fn (Module) –

    The module to apply before adding the residual connection.

Methods:

  • forward

    Applies the residual connection to the input tensor x.
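The connection computes fn(x) + x, i.e. the wrapped module's output plus the unmodified input. A sketch (illustrative only):

```python
import numpy as np

def residual_forward(fn, x):
    """Apply fn, then add the original input back: fn(x) + x."""
    return fn(x) + x
```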

Sequential

Bases: Module

Applies a sequence of modules to the input.

Parameters:

  • *modules (Module, default: () ) –

    A sequence of modules to apply to the input.

Returns:

  • Tensor

    The output tensor after applying all modules in sequence.
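The container threads the input through each module in turn, which reduces to a fold over the module list (a sketch of the pattern, not this library's code):

```python
from functools import reduce

def sequential_forward(modules, x):
    """Apply each module's forward in order, feeding each output to the next."""
    return reduce(lambda out, fn: fn(out), modules, x)
```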

SoftmaxLoss

Bases: Module

Computes the softmax loss between logits and labels.

Parameters:

  • logits (Tensor) –

    Input logits tensor.

  • y (Tensor) –

    Ground truth labels tensor.

Returns:

  • Tensor

    The softmax loss between logits and labels.
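The softmax (cross-entropy) loss for a batch is the mean of log-sum-exp(logits) minus the logit of the true class. A numerically stabilized NumPy sketch (illustrative, not this library's implementation):

```python
import numpy as np

def softmax_loss(logits, y):
    """Mean cross-entropy between raw logits (N, C) and integer labels y (N,)."""
    m = logits.max(axis=1, keepdims=True)               # stabilize the exp
    log_sum_exp = np.log(np.exp(logits - m).sum(axis=1)) + m.squeeze(1)
    true_logit = logits[np.arange(len(y)), y]           # z_{i, y_i}
    return (log_sum_exp - true_logit).mean()

# Uniform logits over 2 classes give a loss of log(2).
loss = softmax_loss(np.zeros((4, 2)), np.array([0, 1, 0, 1]))
```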