New - Neural Network & Deep Learning
UNIT - 01
---
UNIT - 02
---
UNIT - 03
TENSORFLOW
## TensorFlow Overview
TensorFlow is a widely used open-source platform developed by Google for machine learning and deep
learning. It is especially well suited to building and training large-scale neural networks for
applications such as image recognition and natural language processing.
### 7. **Names**
- **Naming Tensors and Operations**: Each tensor and operation can have a name, which is useful for
tracking and debugging. Assign names using the `name` parameter.
- **Example**: `a = tf.constant(3, name='constant_a')` (a runnable sketch follows below).
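As a minimal runnable sketch (TensorFlow 2.x; `constant_b` and `sum_ab` are illustrative names, not from the notes):

```python
import tensorflow as tf

# Named tensors, following the pattern of the example above.
a = tf.constant(3, name='constant_a')
b = tf.constant(4, name='constant_b')

# Most operations also accept a `name` parameter. These names appear in
# graph visualizations (e.g., TensorBoard) when the code is traced with
# @tf.function; in eager execution the name is accepted but effectively
# ignored.
total = tf.add(a, b, name='sum_ab')
print(total.numpy())  # 7
```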
---
UNIT - 04
Keras is a user-friendly, high-level API in Python used to build and train deep learning models. It is built
on top of TensorFlow and simplifies many processes in neural network development.
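As a hedged sketch of the workflow Keras streamlines (define, compile, fit). The layer sizes, the 784-feature input, and the random toy data are illustrative assumptions, not values from the notes:

```python
import numpy as np
from tensorflow import keras

# Toy data standing in for a real dataset.
x_train = np.random.rand(100, 784).astype('float32')
y_train = np.random.randint(0, 10, size=(100,))

# 1. Define the model.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])

# 2. Compile with an optimizer, loss, and metrics.
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# 3. Train.
model.fit(x_train, y_train, epochs=2, validation_split=0.1)
```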
---
UNIT - 05
## Deep Learning
Deep learning is a branch of machine learning that uses neural networks with multiple layers to model
complex patterns in data. Here, we’ll cover key concepts related to deep learning.
### 3. **Overfitting**
- **Definition**: Overfitting occurs when a model learns the training data too well, including noise and
irrelevant patterns, which reduces performance on new data.
- **Causes**: Typically happens when the model is too complex (e.g., too many layers or parameters)
and has learned specific details of the training data.
- **Signs**: High accuracy on training data but low accuracy on test data.
- **Prevention**: Use techniques like regularization, dropout, or simpler models; a short sketch follows this list.
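As one illustration of the prevention bullet, in Keras (the L2 factor and dropout rate are arbitrary example values):

```python
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# Two common overfitting defenses applied together: L2 weight
# regularization and dropout. In practice, the "signs" bullet above
# shows up as a growing gap between training and validation accuracy
# during model.fit(..., validation_split=...).
model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(64, activation='relu',
                 kernel_regularizer=regularizers.l2(1e-4)),
    layers.Dropout(0.3),
    layers.Dense(10, activation='softmax'),
])
```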
### 4. **Underfitting**
- **Definition**: Underfitting happens when a model is too simple to capture the underlying patterns in
the data, resulting in low performance on both training and test sets.
- **Causes**: Can occur when the model lacks enough complexity or when insufficient features are
used.
- **Solution**: Use a more complex model, add relevant features, or train for more epochs (see the sketch below).
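As a sketch of the "more complex model" remedy (layer sizes are illustrative):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Underfitting-prone: a single linear layer has too little capacity
# for a 10-class problem that is not linearly separable.
too_simple = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(10, activation='softmax'),
])

# Remedy: add hidden layers for more capacity; training for more
# epochs is the other lever mentioned above.
higher_capacity = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(128, activation='relu'),
    layers.Dense(64, activation='relu'),
    layers.Dense(10, activation='softmax'),
])
```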
### 6. **Dropout**
- **Definition**: Dropout is a technique that randomly ignores (or “drops out”) a fraction of neurons
during training.
- **How It Works**: At each training step, dropout “turns off” a subset of neurons in the layer, forcing the
model to learn multiple independent representations.
- **Benefits**:
- Prevents overfitting by ensuring the model doesn’t rely too heavily on any single neuron.
- Increases model robustness, as it learns diverse patterns.
- **Typical Dropout Rates**: Common rates are between 0.2 and 0.5 (20%-50% of neurons dropped per
layer); a Keras sketch follows below.
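A minimal Keras sketch using a rate at the top of the typical range (the surrounding layer sizes are illustrative):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Dropout(0.5): during training, each unit's output is zeroed with
# probability 0.5 and the surviving activations are scaled up by
# 1 / (1 - 0.5); at inference, dropout is disabled automatically,
# so no rescaling is needed.
model = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(128, activation='relu'),
    layers.Dropout(0.5),
    layers.Dense(10, activation='softmax'),
])
```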
---