0-9
A
- Attributes (Synonym for Features)
- Attribute is a common synonym for feature in the context of deep learning. It refers to a specific property of an object or data sample that is used as input to a deep learning model.
- Autoencoder
- An autoencoder is a neural network used for unsupervised learning and dimensionality reduction. It pairs an encoder, which compresses the input into a low-dimensional representation, with a decoder, which reconstructs the input from it. Applications include anomaly detection and image denoising.
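A minimal sketch in PyTorch, assuming flattened 28×28 inputs; the `Autoencoder` class and layer sizes are illustrative:

```python
import torch
import torch.nn as nn

# Minimal autoencoder: the encoder compresses the input to a small
# latent vector, and the decoder reconstructs the input from it.
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
x = torch.rand(16, 784)                     # a batch of flattened 28x28 images
loss = nn.functional.mse_loss(model(x), x)  # reconstruction error
```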
B
- Backward
- The backward step, or backpropagation, is the process of calculating gradients that enable updating the model’s weights based on prediction error. This helps improve the model’s performance during training.
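A minimal sketch of the backward step on a hand-rolled linear model; the data and learning rate are illustrative:

```python
import torch

# One gradient step: the backward pass computes d(loss)/d(w) and
# d(loss)/d(b), which the weight update then uses.
w = torch.randn(3, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

x = torch.randn(8, 3)
y = torch.randn(8)

pred = x @ w + b                 # forward pass
loss = ((pred - y) ** 2).mean()  # prediction error
loss.backward()                  # backward pass: fills w.grad and b.grad

with torch.no_grad():            # gradient descent update
    w -= 0.01 * w.grad
    b -= 0.01 * b.grad
```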
C
- CNN
- Convolutional Neural Networks (CNNs) are a type of neural network designed for image, video, and time-series processing. Key concepts include convolutional layers, pooling layers, and object detection.
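A minimal sketch of a small CNN in PyTorch, assuming 32×32 RGB inputs and 10 classes; the layer sizes are illustrative:

```python
import torch
import torch.nn as nn

# Convolution + pooling layers extract spatial features,
# then a linear layer classifies them.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # 3x32x32 -> 16x32x32
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> 16x16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # -> 32x16x16
    nn.ReLU(),
    nn.MaxPool2d(2),                              # -> 32x8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),                    # scores for 10 classes
)

logits = model(torch.randn(4, 3, 32, 32))         # batch of 4 images
```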
- Classification Model
- A model whose prediction is a class. In contrast, regression models predict numbers rather than classes. Two common types of classification models are binary and multi-class classification.
- Computer Vision
- Computer vision covers techniques for understanding and generating images, including image classification, object detection, semantic segmentation, and image generation. Popular architectures include VGG, ResNet, and U-Net.
D
- Data Prediction
- Data prediction refers to the estimated or predicted value of a data sample by a deep learning model. It is based on the input features and the model’s parameters.
- Data True
- Data true refers to the actual values or labels associated with a data sample, i.e., the ground truth. It is used to compare a deep learning model's predictions against reality and to evaluate its performance.
- Data Truth
- Data truth, also known as label or target value, represents the expected output or actual value associated with a data sample. It is used to evaluate the accuracy of predictions made by a deep learning model.
E
- Epoch
- An epoch is a complete iteration over the entire training dataset when training a deep learning model. During each epoch, the model adjusts its parameters to reduce prediction error.
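A minimal training-loop sketch; the toy data, model, and optimizer settings are illustrative:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy setup so the loop below runs end to end.
X, y = torch.randn(100, 4), torch.randint(0, 2, (100,))
loader = DataLoader(TensorDataset(X, y), batch_size=10)
model = nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):             # 5 epochs
    for inputs, labels in loader:  # one epoch = one full pass over the data
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), labels)
        loss.backward()
        optimizer.step()
```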
F
- Features
- Features, also known as variables or attributes, are the different dimensions or measurements of data used as input in a deep learning model.
- Forward
- The forward step, or forward propagation, is the process of calculating the output of a deep learning model using the input features and the weights of the connections between neurons.
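A hand-written forward pass through a two-layer network; the shapes are illustrative:

```python
import torch

# Each layer multiplies its input by a weight matrix, adds a bias,
# and applies a nonlinearity.
x = torch.randn(1, 4)        # input features
W1, b1 = torch.randn(4, 8), torch.zeros(8)
W2, b2 = torch.randn(8, 2), torch.zeros(2)

h = torch.relu(x @ W1 + b1)  # hidden layer activations
out = h @ W2 + b2            # model output (e.g. class scores)
```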
G
- GAN
- Generative Adversarial Networks (GANs) consist of two competing neural networks: a generator and a discriminator. They are used for image generation, data augmentation, and unsupervised learning.
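A minimal sketch of the two networks, assuming flattened 28×28 images and a 16-dimensional noise vector (both illustrative):

```python
import torch
import torch.nn as nn

# The generator maps random noise to fake samples; the discriminator
# scores samples as real (1) or fake (0).
generator = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 784), nn.Tanh(),       # fake flattened 28x28 image
)
discriminator = nn.Sequential(
    nn.Linear(784, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),      # probability the input is real
)

noise = torch.randn(8, 16)
fake = generator(noise)
score = discriminator(fake)              # generator tries to push this to 1
```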
- Graph
- A graph represents the structure of a deep learning model, showing the connections between neurons and how they are organized into layers. It also illustrates the flow of data during forward propagation.
H
I
J
K
L
- Leaky ReLU
- Leaky ReLU is a variant of ReLU that gives negative inputs a small non-zero slope instead of mapping them to zero. It addresses a limitation of ReLU known as the “dying ReLU” problem: when a neuron's input is always negative, plain ReLU produces zero gradients, the weights stop updating, and the neuron becomes “dead” and stops learning. By introducing small non-zero gradients for negative inputs, Leaky ReLU and Parametric ReLU (PReLU) keep such neurons trainable.
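A quick comparison of the two activations in PyTorch (the slope 0.01 is the common default):

```python
import torch

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])

relu = torch.relu(x)                             # negatives become exactly 0
leaky = torch.nn.functional.leaky_relu(x, 0.01)  # negatives keep a small slope
# relu  -> [ 0.0000,  0.0000, 0.0, 1.5]
# leaky -> [-0.0200, -0.0050, 0.0, 1.5]
```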
- Loss
- Loss, also known as loss function or error function, measures the discrepancy between the predictions of a deep learning model and the actual values associated with data samples. It is used to adjust the model’s parameters.
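A sketch of two common loss functions in PyTorch; the values are illustrative:

```python
import torch
import torch.nn.functional as F

# MSE for regression targets:
pred = torch.tensor([2.5, 0.0, 1.0])
target = torch.tensor([3.0, -0.5, 1.0])
mse = F.mse_loss(pred, target)

# Cross-entropy for classification targets:
logits = torch.tensor([[2.0, 0.5, 0.1]])  # unnormalized class scores
label = torch.tensor([0])                 # true class index
ce = F.cross_entropy(logits, label)
```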
M
- Max Pooling
- Max pooling is a pooling operation commonly used in convolutional neural networks (CNNs) as a downsampling technique. It reduces the spatial dimensions of the input feature maps while retaining the most salient features, and it remains a widely used and effective technique for downsampling and feature extraction in CNNs.
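A minimal NumPy sketch; the `max_pool_2x2` helper and the feature map are illustrative:

```python
import numpy as np

def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a (H, W) feature map."""
    h, w = x.shape
    # Split rows and columns into 2x2 blocks, then take the max of each block.
    return x[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

fmap = np.array([[1, 3, 2, 0],
                 [4, 2, 1, 5],
                 [0, 1, 3, 2],
                 [2, 2, 0, 1]])
print(max_pool_2x2(fmap))
# [[4 5]
#  [2 3]]
```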
- Model
- In deep learning, a model is a mathematical representation of an artificial neural network. It is used to make predictions or classifications on input data.
N
- NLP
- Natural Language Processing (NLP) covers techniques including text classification, sentiment analysis, named entity recognition, and machine translation. Recurrent and transformer models are widely used for NLP tasks.
O
- Optimization
- Optimization algorithms adjust a model's parameters to minimize the loss; common examples in deep learning are stochastic gradient descent (SGD), Adam, and RMSprop. Related concepts include the learning rate, batch normalization, and regularization techniques.
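A sketch of constructing and stepping these optimizers in PyTorch; the model and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)

# Two of the optimizers mentioned above; the learning rate (lr)
# controls the step size of each parameter update.
sgd = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
adam = torch.optim.Adam(model.parameters(), lr=1e-3)

# One optimization step with Adam:
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = nn.functional.mse_loss(model(x), y)
adam.zero_grad()
loss.backward()
adam.step()
```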
P
- PReLU
- PReLU (Parametric Rectified Linear Unit) is a variant of the ReLU activation function that introduces learnable parameters to address the dying ReLU problem. Unlike traditional ReLU, which sets negative values to zero, PReLU allows negative values to take on small, non-zero values determined by the learnable parameter ‘a’. By preventing neurons from becoming completely inactive, PReLU enables the network to learn more robust representations and avoids the issue of dead neurons during training. It offers improved performance and captures a wider range of features in the data. However, it increases model complexity and memory requirements due to the additional parameters. Other variants like Leaky ReLU and ELU also address the limitations of ReLU with different trade-offs.
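A minimal sketch using PyTorch's `nn.PReLU`, whose learnable slope is initialized to 0.25 by default:

```python
import torch
import torch.nn as nn

# nn.PReLU holds the learnable negative-slope parameter 'a'; unlike
# the fixed slope of Leaky ReLU, 'a' is updated by backpropagation.
prelu = nn.PReLU(init=0.25)
x = torch.tensor([-2.0, 0.0, 1.5])
print(prelu(x))                  # [-0.5, 0.0, 1.5] with the initial a = 0.25
print(list(prelu.parameters()))  # the single learnable slope
```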
- Pooling
- Pooling is another important concept in CNNs: a form of non-linear downsampling. Several non-linear functions (e.g., max, mean) can implement pooling, of which max pooling is the most common.
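A one-line comparison of max and mean pooling in PyTorch:

```python
import torch
import torch.nn.functional as F

x = torch.tensor([[[[1., 3.], [4., 2.]]]])  # one 2x2 feature map

F.max_pool2d(x, 2)  # tensor([[[[4.0]]]])  keeps the strongest activation
F.avg_pool2d(x, 2)  # tensor([[[[2.5]]]])  averages the window instead
```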
Q
R
- RNN
- Recurrent Neural Networks (RNNs) are neural networks designed for sequence data processing. Key concepts include recurrent connections, long short-term memory (LSTM), and natural language processing (NLP) applications.
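A minimal LSTM sketch in PyTorch; the sizes are illustrative:

```python
import torch
import torch.nn as nn

# An LSTM processing a batch of sequences: the recurrent connection
# carries a hidden state from one time step to the next.
lstm = nn.LSTM(input_size=10, hidden_size=20, batch_first=True)
x = torch.randn(4, 7, 10)     # 4 sequences, 7 time steps, 10 features
output, (h_n, c_n) = lstm(x)  # output: (4, 7, 20); h_n: final hidden state
```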
- ReLU
- ReLU is the abbreviation of Rectified Linear Unit. It is a layer of neurons that applies the non-saturating activation function f(x) = max(0, x). It increases the nonlinear properties of the decision function and of the overall network without affecting the receptive fields of the convolution layer.
- Recall
- In deep learning, the term “recall” refers to one of the evaluation metrics used to assess the performance of a model, particularly in classification tasks. It measures the ability of a model to correctly identify positive instances out of all actual positive instances in the dataset.
More specifically, recall is calculated as the ratio of true positive (TP) predictions to the sum of true positive and false negative (FN) predictions:
Recall = TP / (TP + FN)
A low recall implies that the model may be missing a significant number of positive instances, which can be problematic in applications where false negatives have severe consequences.
Overall, recall is a valuable metric for evaluating the performance of a model in tasks such as disease diagnosis, fraud detection, or any scenario where the emphasis is on minimizing false negatives and maximizing the detection of positive instances.
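A minimal sketch of the formula above; the labels are illustrative:

```python
import numpy as np

y_true = np.array([1, 1, 1, 0, 0, 1])  # actual labels
y_pred = np.array([1, 0, 1, 0, 1, 1])  # model predictions

tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives  = 3
fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives = 1
recall = tp / (tp + fn)                     # 3 / 4 = 0.75
```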
- Regression Model
- Informally, a model that generates a numerical prediction. (In contrast, a classification model generates a class prediction.)
- Reinforcement Learning
- Reinforcement learning is a paradigm in which agents learn to make decisions by interacting with an environment. Key concepts include Markov Decision Processes (MDPs), policies, rewards, and value functions.
- ResNet
- Residual Network (ResNet) is a groundbreaking convolutional neural network architecture that addresses the challenge of training very deep networks. It was introduced by Kaiming He et al. from Microsoft Research. ResNet utilizes residual blocks, which contain skip connections that allow the gradient to flow directly through the network, overcoming the problem of vanishing gradients in deep networks. This architecture enables the construction of extremely deep networks with hundreds of layers.
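A sketch of a basic residual block; the `ResidualBlock` class and channel count are illustrative:

```python
import torch
import torch.nn as nn

# The skip connection adds the block's input to its output,
# so gradients can flow straight through the addition.
class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)  # skip connection

block = ResidualBlock(64)
y = block(torch.randn(1, 64, 32, 32))  # same shape in and out
```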
S
- Samples
- Samples are the individual data points used to train or test a deep learning model. Each sample consists of features and a corresponding label.
T
- Transfer Learning
- Transfer learning is the technique of utilizing pre-trained models for new tasks. Common approaches include fine-tuning models, extracting features, and leveraging pre-trained weights.
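A fine-tuning sketch with torchvision (the `weights` API assumes torchvision >= 0.13); the 5-class head is illustrative:

```python
import torch.nn as nn
from torchvision import models

# Load ImageNet weights, freeze the backbone, and replace
# the final layer for a new 5-class task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

for param in model.parameters():
    param.requires_grad = False  # freeze pre-trained weights

model.fc = nn.Linear(model.fc.in_features, 5)  # new trainable head
```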
U
- U-Net
- U-Net is a convolutional neural network architecture designed for semantic segmentation tasks, where the goal is to classify each pixel of an image into different classes. U-Net was proposed by Olaf Ronneberger, Philipp Fischer, and Thomas Brox. It consists of an encoder and a decoder pathway. The encoder pathway gradually reduces the spatial dimensions while capturing contextual information through convolutional and pooling layers. The decoder pathway then upsamples the feature maps to the original image size using transposed convolutions. Skip connections between the encoder and decoder allow the network to retain fine-grained information for precise segmentation. U-Net has been widely adopted for biomedical image segmentation, such as cell and organ segmentation, as well as other computer vision tasks that require pixel-level predictions.
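A one-level sketch of the encoder/decoder-with-skip idea; the `TinyUNet` class and sizes are illustrative, not the original architecture:

```python
import torch
import torch.nn as nn

# Downsample, upsample, and concatenate the encoder features
# with the decoder features (the skip connection).
class TinyUNet(nn.Module):
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = nn.Conv2d(in_ch, 16, 3, padding=1)
        self.down = nn.MaxPool2d(2)
        self.mid = nn.Conv2d(16, 32, 3, padding=1)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # transposed convolution
        self.dec = nn.Conv2d(32, n_classes, 3, padding=1)  # 32 = 16 skip + 16 up

    def forward(self, x):
        e = torch.relu(self.enc(x))                # encoder features
        m = torch.relu(self.mid(self.down(e)))     # bottleneck
        u = self.up(m)                             # back to input resolution
        return self.dec(torch.cat([u, e], dim=1))  # skip connection

net = TinyUNet()
out = net(torch.randn(1, 1, 64, 64))  # per-pixel class scores: (1, 2, 64, 64)
```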
V
- VGG16
- VGG16 is a 16-layer convolutional neural network architecture widely used for transfer learning. It is quite similar to earlier architectures, as its foundation is a plain CNN, but the arrangement of layers differs. The standard input image size taken by the researchers for this architecture was 224×224×3, where 3 represents the RGB channels.
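A sketch of loading VGG16 from torchvision (assumes torchvision >= 0.13) and running the standard input size through it:

```python
import torch
from torchvision import models

vgg16 = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg16.eval()

x = torch.randn(1, 3, 224, 224)  # batch, RGB channels, height, width
logits = vgg16(x)                # (1, 1000) ImageNet class scores
```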
- VGG19
- VGG19 (Visual Geometry Group, 19 layers) is an extension of VGG16. It has a deeper architecture with 19 layers, including 16 convolutional layers and 3 fully connected layers. Like VGG16, it uses small filters and max-pooling layers. VGG19 provides increased representational capacity due to its additional layers, but it also comes with a higher computational cost compared to VGG16.
W
X
Y
Z