TDNN
TDNN, short for time-delay neural network, is a feed-forward neural network designed for sequence data. Instead of processing one time step at a time with recurrence, a TDNN explicitly models temporal context by feeding each neuron inputs from multiple time-delayed copies of the input vector. The connections are organized in taps across time, with shared weights across time steps, enabling the network to learn temporal patterns within a fixed window.
Typically, a TDNN consists of several layers, each applying a shared affine transformation, followed by a nonlinearity, to the concatenation of input frames from a fixed context window around the current time step; stacking layers widens the effective temporal receptive field.
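The per-layer computation can be sketched in NumPy as follows. This is a minimal illustration, not a reference implementation: the context offsets, the tanh nonlinearity, and all shapes are illustrative assumptions, and no padding is applied (only "valid" positions are produced).

```python
import numpy as np

def tdnn_layer(x, W, b, context=(-2, -1, 0, 1, 2)):
    """One TDNN layer: at each time step t, concatenate the input frames
    at the offsets in `context` and apply a shared affine map + tanh.
    x: (T, d_in) sequence; W: (len(context)*d_in, d_out); b: (d_out,).
    Returns (T - span + 1, d_out), keeping only fully covered positions.
    """
    T, d_in = x.shape
    lo, hi = min(context), max(context)
    outs = []
    for t in range(-lo, T - hi):
        # Concatenate the time-delayed copies of the input for this step.
        window = np.concatenate([x[t + c] for c in context])
        # The same W and b are reused at every t (weight sharing across time).
        outs.append(np.tanh(window @ W + b))
    return np.stack(outs)

# Toy usage: 10 frames of 3 features, 5 taps, 4 output units.
rng = np.random.default_rng(0)
x = rng.standard_normal((10, 3))
W = rng.standard_normal((5 * 3, 4))
b = np.zeros(4)
y = tdnn_layer(x, W, b)
print(y.shape)  # (6, 4): 10 - 5 + 1 valid positions
```

Because the same weights slide over every window, this computation is equivalent to a 1D convolution over time with kernel size equal to the number of taps, which is why TDNNs are often implemented with standard convolution primitives.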
TDNNs were introduced in the 1980s by Waibel and colleagues for phoneme recognition and speech processing.
Applications include speech recognition, speaker verification, phoneme classification, and other sequence-modeling tasks where a bounded window of temporal context carries the relevant information.
Related concepts: temporal convolution, 1D convolution, dilated convolution, recurrent neural networks, convolutional neural networks.