Posts

6. Object detection using Transfer Learning of CNN architectures

a. Load a pre-trained CNN model trained on a large dataset
b. Freeze the parameters (weights) in the model's lower convolutional layers
c. Add a custom classifier with several layers of trainable parameters to the model
d. Train the classifier layers on the training data available for the task
e. Fine-tune hyperparameters and unfreeze more layers as needed

Download Writeup Here

import tensorflow as tf
from tensorflow.keras.datasets import cifar10
from tensorflow.keras.applications import VGG16
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Flatten, Dense
from tensorflow.keras.utils import to_categorical

# Load CIFAR-10 dataset
(itrain, ltrain), (itest, ltest) = cifar10.load_data()

# Preprocess the data: scale pixels to [0, 1] and one-hot encode the labels
itrain = itrain / 255.0
itest = itest / 255.0
ltrain = to_categorical(ltrain)
ltest = to_categorical(ltest)

# Load pre-trained VGG16 model (excluding the top fully-connected layers)
basem = VGG16(weights='imagenet', include_top=False, input_shape=(32, 32, 3))

# The original listing is cut off here; the rest follows the outline above,
# with illustrative layer sizes. Freeze the convolutional base first.
basem.trainable = False

# Add a custom classifier with trainable layers on top of the frozen base
model = Sequential([
    basem,
    Flatten(),
    Dense(256, activation='relu'),
    Dense(10, activation='softmax'),
])

# Train the classifier layers on the CIFAR-10 training data
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(itrain, ltrain, epochs=5, batch_size=64,
          validation_data=(itest, ltest))
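For step (e), a minimal fine-tuning sketch is shown below. It assumes the basem and model objects defined above; unfreezing only VGG16's block5 layers and using a learning rate of 1e-5 are illustrative choices, not prescribed by the assignment.

# Fine-tuning sketch (assumes `basem` and `model` from the code above).
# Unfreeze only the last convolutional block; block choice and learning
# rate are illustrative.
basem.trainable = True
for layer in basem.layers:
    if not layer.name.startswith('block5'):
        layer.trainable = False

# Recompile with a smaller learning rate so the unfrozen weights move slowly
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-5),
              loss='categorical_crossentropy',
              metrics=['accuracy'])
model.fit(itrain, ltrain, epochs=5, batch_size=64,
          validation_data=(itest, ltest))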

5. Implement the Continuous Bag of Words (CBOW) Model.

Stages can be:
a. Data preparation
b. Generate training data
c. Train model
d. Output

Download Writeup Here

Implementation of the CBOW Model

The Continuous Bag-of-Words (CBOW) model is frequently used in NLP deep learning. It tries to predict a target word given the context of a few words before and a few words after it. This is distinct from language modeling, since CBOW is not sequential and does not have to be probabilistic. Typically, CBOW is used to quickly train word embeddings, and these embeddings are then used to initialize the embeddings of some more complicated model; this is usually referred to as pretraining embeddings, and it almost always helps performance by a couple of percent. This is the solution to the final exercise of this great tutorial on NLP in PyTorch.

Example corpus:

"We are about to study the idea of a computational process. Computational processes are abstract beings that inhabit computers."
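A minimal sketch of the CBOW model in PyTorch over the example corpus follows. The context size, embedding dimension, learning rate, and epoch count are illustrative assumptions; the model sums the context embeddings and scores every vocabulary word with a linear layer.

import torch
import torch.nn as nn

CONTEXT_SIZE = 2      # two words to the left and right of the target (assumed)
EMBEDDING_DIM = 100   # illustrative embedding size

raw_text = ("We are about to study the idea of a computational process. "
            "Computational processes are abstract beings that inhabit computers.").split()

# Stage a/b: build the vocabulary and (context, target) training pairs
vocab = set(raw_text)
word_to_ix = {word: i for i, word in enumerate(vocab)}
data = []
for i in range(CONTEXT_SIZE, len(raw_text) - CONTEXT_SIZE):
    context = ([raw_text[i - j] for j in range(CONTEXT_SIZE, 0, -1)]
               + [raw_text[i + j] for j in range(1, CONTEXT_SIZE + 1)])
    data.append((context, raw_text[i]))

class CBOW(nn.Module):
    def __init__(self, vocab_size, embedding_dim):
        super().__init__()
        self.embeddings = nn.Embedding(vocab_size, embedding_dim)
        self.linear = nn.Linear(embedding_dim, vocab_size)

    def forward(self, context_idxs):
        # Sum the context embeddings, then score every word in the vocabulary
        embeds = self.embeddings(context_idxs).sum(dim=0)
        return self.linear(embeds)

# Stage c: train the model
model = CBOW(len(vocab), EMBEDDING_DIM)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(50):
    total_loss = 0.0
    for context, target in data:
        ctx = torch.tensor([word_to_ix[w] for w in context], dtype=torch.long)
        tgt = torch.tensor([word_to_ix[target]], dtype=torch.long)
        optimizer.zero_grad()
        logits = model(ctx).unsqueeze(0)   # shape (1, vocab_size)
        loss = loss_fn(logits, tgt)
        loss.backward()
        optimizer.step()
        total_loss += loss.item()
    # Stage d: total_loss should fall over the epochs as the embeddings train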

4. Use Autoencoder to implement anomaly detection.

Build the model by using:
a. Import required libraries
b. Upload / access the dataset
c. Encoder converts it into a latent representation
d. Decoder network converts it back to the original input
e. Compile the model with an optimizer, loss, and evaluation metrics

Download Writeup here

Let's build the simplest possible autoencoder. We'll start simple, with a single fully-connected neural layer as encoder and as decoder:

import keras
from keras import layers

# This is the size of our encoded representations
encoding_dim = 32  # 32 floats -> compression factor of 24.5, assuming the input is 784 floats

# This is our input image
input_img = keras.Input(shape=(784,))
# "encoded" is the encoded representation of the input
encoded = layers.Dense(encoding_dim, activation='relu')(input_img)
# "decoded" is the lossy reconstruction of the input
decoded = layers.Dense(784, activation='sigmoid')(encoded)

# This model maps an input to its reconstruction
autoencoder = keras.Model(input_img, decoded)

# Compile with an optimizer and a per-pixel loss (step e above)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
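For the anomaly-detection part of the assignment, the usual approach is to train the autoencoder on normal data and flag inputs whose reconstruction error is high. A minimal sketch follows, assuming the autoencoder model above, MNIST as a stand-in dataset, and a 95th-percentile threshold; all three are illustrative choices.

import numpy as np
from keras.datasets import mnist

# Load and flatten MNIST as stand-in "normal" data (assumption for illustration)
(x_train, _), (x_test, _) = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0

# Train the autoencoder to reconstruct normal data
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256, shuffle=True,
                validation_data=(x_test, x_test))

# Per-sample reconstruction error on the training set
recon = autoencoder.predict(x_train)
train_err = np.mean(np.square(x_train - recon), axis=1)

# Flag test samples whose error exceeds an illustrative threshold
threshold = np.percentile(train_err, 95)
test_err = np.mean(np.square(x_test - autoencoder.predict(x_test)), axis=1)
anomalies = test_err > threshold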

3. Build the Image classification model by dividing the model into following 4 stages:

a. Loading and preprocessing the image data
b. Defining the model's architecture
c. Training the model
d. Estimating the model's performance

Click here for program details. Click here to download the Writeup.
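A minimal Keras sketch covering the four stages follows. MNIST is an assumed example dataset (the assignment does not name one), and the layer sizes and epoch count are illustrative.

import tensorflow as tf
from tensorflow import keras

# Stage a: load and preprocess the image data (MNIST assumed for illustration)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0

# Stage b: define the model's architecture (a small CNN, illustrative sizes)
model = keras.Sequential([
    keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    keras.layers.MaxPooling2D((2, 2)),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Stage c: train the model
model.fit(x_train, y_train, epochs=5, validation_split=0.1)

# Stage d: estimate the model's performance on held-out data
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test accuracy:', test_acc)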

1. Study of Deep learning Packages: Tensorflow, Keras, Theano and PyTorch. Document the distinct features and functionality of the packages.

Download The Write-up Here

2. Implementing Feedforward neural networks with Keras and TensorFlow

Download The Write-Up Here.

a. Import the necessary packages
b. Load the training and testing data (MNIST)
c. Define the network architecture using Keras
d. Train the model using SGD
e. Evaluate the network
f. Plot the training loss and accuracy

Importing libraries

# Import the necessary libraries
import tensorflow as tf
from tensorflow import keras
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import random
get_ipython().run_line_magic('matplotlib', 'inline')

Loading and preparing the data

MNIST stands for "Modified National Institute of Standards and Technology". It is a dataset of 70,000 handwritten-digit images. Each image is 28x28 pixels, i.e. about 784 features, and each feature represents one pixel's intensity, from 0 (white) to 255 (black).
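The exported notebook is cut off above. A minimal sketch of the remaining steps (b through f) follows, reusing the imports above; the layer sizes, learning rate, and epoch count are illustrative assumptions, not necessarily the writeup's exact values.

# Step b (continued): load and normalize MNIST
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x_train / 255.0
x_test = x_test / 255.0

# Step c: define the network architecture using Keras (illustrative sizes)
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation='relu'),
    keras.layers.Dense(10, activation='softmax'),
])

# Step d: train the model using SGD
model.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=10,
                    validation_data=(x_test, y_test))

# Step e: evaluate the network
test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test loss:', test_loss, 'Test accuracy:', test_acc)

# Step f: plot the training loss and accuracy
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['accuracy'], label='accuracy')
plt.xlabel('epoch')
plt.legend()
plt.show()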