- What is Python?
- How to Install Python?
- Python Variables and Operators
- Python Loops
- Python Functions
- Python Files
- Python Errors and Exceptions
- Python Packages
- Python Classes and Objects
- Python Strings
- Python Generators and Decorators
- Python Dictionary
- Python Date and Time
- Python List and Tuples
- Python Multithreading and Synchronization
- Python Modules
- What is Python bytecode?
- Python Regular Expressions
Python Pandas Tutorial
- Selenium Basics
- Selenium with Python Introduction and Installation
- Navigating links using get method Selenium Python
- Locating Single Elements in Selenium Python
- Locating Multiple elements in Selenium Python
Python Flask Tutorial
- How to Install Django and Set Up a Virtual Environment in 6 Steps
- Django MTV Architecture
- Django Models
- Django Views
- Django Templates
- Django Template Language
- Django Project Layout
- Django Admin Interface
- Django Database
- Django URLs and URLConf
- Django Redirects
- Django Cookies and Cookies Handling
- Django Caching
- Types of Caching in Django
- Django Sessions
- Django Forms Handling & Validation
- Django Exceptions & Error-handling
- Django Bootstrap
- Ajax in Django
- Django Migrations and Database Connectivity
- Django Web Hosting and IDE
- Django Admin Customization
- What is CRUD?
- Django ORM
- Django Request-Response Cycle
- Making a basic_api with DRF
- Django Logging
- Django Applications
- Difference between Flask and Django
- Difference between Django and PHP
- NumPy Introduction
- NumPy - Environment Setup
- NumPy - Data Types
- NumPy Histogram
- Matrix in NumPy
- NumPy Arrays
- NumPy Array Functions
- Matrix Multiplication in NumPy
- NumPy Matrix Transpose
- NumPy Array Append
- NumPy empty array
- NumPy Linear Algebra
- NumPy sum
- NumPy Normal Distribution
- NumPy correlation
- Why Learn and Use NumPy?
- Introduction to TensorFlow
- Introduction to Deep Learning
- What is a Neural Network?
- Convolutional and Recurrent Neural Networks
- Installation of TensorFlow
- TensorBoard Visualization
- Linear Regression in TensorFlow
- Word Embedding
- Difference between CNN and RNN
- What is Keras?
- Program Elements in TensorFlow
- Recurrent Neural Network
- TensorFlow Object Detection
- What is a Multilayer Perceptron?
- Gradient Descent Optimization
Interview Questions & Answers
Convolutional and Recurrent Neural Networks
A CNN is a kind of deep neural network designed from biologically driven models: researchers found that the human brain perceives an image in layers, and the convolutional neural network was built the same way, which is why it is so effective for image processing and pattern recognition applications. It works in layers: suppose an input image goes through a stack of convolutional layers-
- The first layer takes a patch from the input image and passes that patch to the next convolutional layer, where a set of filters is applied.
- The convolutional layer applies its set of filters to the input image; the activated data produced by those filters then flows into another layer, called the pooling layer.
- The pooling layer is a kind of non-linear down-sampling layer: the image is shrunk, for example to half its size, as the same patch goes through it. Different pooling layers use different functions, i.e. different non-linearities.
- This convolution-pooling pattern keeps repeating: another convolutional layer runs the data through a further bunch of activation filters to extract more information about the image, and another pooling layer down-samples it again. At the end of the network sit layers called fully connected layers.
- In the fully connected layers, every node is connected to every node in the next layer, so they are heavily data-driven: a lot of coefficients are loaded to support each node fed by the pooling output. Taking the multiple outputs of the final pooling layer together, the network collectively draws up the top candidate classes for the object under consideration. At the very end, a layer such as a softmax produces a probability distribution, for example 0.97 for leaf, 0.02 for grass and 0.01 for bush. So the output gives three probabilities.
The network basically picks the top probability, since it is clearly in the higher range.
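The pipeline described above - convolution, pooling, then a softmax over class scores - can be sketched in plain NumPy. Everything here is illustrative: the 6x6 "image", the edge filter and the class scores are made-up values standing in for a trained network, not real weights.

```python
# Toy forward pass: one convolution layer, one 2x2 max-pooling layer,
# and a softmax output, in plain NumPy. All values are illustrative.
import numpy as np

def conv2d(image, kernel):
    """Slide the filter over the image (no padding, stride 1)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool2x2(x):
    """Non-linear down-sampling: keep the max of each 2x2 patch."""
    h, w = x.shape[0] // 2 * 2, x.shape[1] // 2 * 2
    x = x[:h, :w]
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def softmax(scores):
    """Turn raw class scores into a probability distribution."""
    e = np.exp(scores - scores.max())
    return e / e.sum()

image = np.arange(36, dtype=float).reshape(6, 6)   # fake input image
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])                 # vertical-edge filter

features = conv2d(image, kernel)      # 4x4 feature map
pooled = max_pool2x2(features)        # down-sampled to 2x2

scores = np.array([3.5, -0.4, -1.1])  # pretend fully-connected output
probs = softmax(scores)               # ~[0.97, 0.02, 0.01]
best = int(np.argmax(probs))          # pick the top probability
```

Real networks stack many such layers with learned filters; the sketch only shows the shape of the data flow.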
So that is the basic purpose of a CNN: identifying what the image is and, in some cases, the value corresponding to it. As for applications, they are everywhere that image processing and pattern recognition are done: wherever there is a camera, you can find an application of a CNN. Everything you carry these days, from mobile phones to wearable devices, has CNNs in it.
- Consider mobile phones: a very common application is gesture control, where a CNN recognizes each gesture using a set of filter coefficients trained for it. For example, sliding a window or a tab, or deleting it, can all be controlled through the phone this way.
- Next is surveillance, where object recognition and classification are done: face detection, people detection in camera pictures and so on. By recognizing people, the system keeps that data in the network and compares it against a data set it holds in a database in the cloud. That is what is used in surveillance.
- Another area is automotive, for example autonomous driving, where the entire image the vehicle sees is segmented: the car works out where the road, the sidewalk, a pedestrian and so on are.
- AR/VR is another. Devices like Google Glass use CNN-based applications to measure the dimensions of a room, and to create and move objects in it. A bunch of CNN applications are combined here, such as object recognition and depth estimation, the latter done through stereo using the dual cameras found in mobile phones.
While CNNs are mainly used for image processing, RNNs are used mainly for natural language processing tasks: in deep learning, CNNs are mainly for images and RNNs mainly for NLP. Take Gmail: when you type a sentence, it auto-completes it. Google has an RNN (Recurrent Neural Network) embedded there, which saves you time.
- Another use case is translation: you have probably used Google Translate, which easily translates a sentence from one language to another.
- NER (Named Entity Recognition): you give the neural network a statement as 'x', and as 'y' it tells you the person's name, the company, the time and so on.
- These are various use cases where a sequence model, or RNN, helps.
- Another use case is sentiment analysis: given a paragraph, it tells you the sentiment, e.g. whether a product review is one star, two stars and so on.
You might wonder why a simple neural network can't solve these problems. They are all called sequence modelling problems because the sequence matters; in human language, sequence is very important. For example, "how are you?" makes sense, while "you are how" doesn't. So sequence is important.
If you try a simple neural network for language translation, several issues come up.

Issues using an ANN for a sequence problem:

- The first issue is fixed input size. Say the input is in English and the output in Hindi: what if the sentence size changes? Different inputs will have different sentence sizes, and a fixed neural network architecture can't handle that, because you have to decide in advance how many neurons are in the input and output layers. So with language translation, the number of neurons becomes a problem: what do you pick as the size?
- The second issue is too much computation. Neural networks work on numbers, not strings, so you have to convert each word into a vector. Say there are 25,000 words in your vocabulary and you one-hot encode against it: in "how are you", if "how" is at position 46 and "are" at position 17,000, you put a 1 at that position and 0 everywhere else. Every input vector is then huge, which means far too much computation.
- The third issue is that translation is not one-to-one: two different English statements can map to a single Hindi statement.

Contrast this with structured data. Say you are trying to figure out whether a transaction is fraudulent, and your features are the transaction amount, whether the transaction was made out of the country, and whether the SSN the customer provided is correct. Here, if you change the order of the features - say you move 'SSN correct?' first - nothing is affected: the sequence in which you supply the input doesn't matter. Whereas in English-to-Hindi translation, saying "I ate golgappa on Sunday" versus "I ate Sunday on golgappa" changes the meaning entirely. Sequence is very important, and that is why a plain ANN doesn't work in this case.
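The one-hot encoding mentioned above can be sketched as follows; the five-word vocabulary is a toy stand-in for a real one of 25,000+ words, which is exactly what makes the vectors so large.

```python
# One-hot encoding sketch: a neural network works on numbers, not strings,
# so each word becomes a vector that is 1 at the word's vocabulary index
# and 0 everywhere else. The tiny vocabulary here is illustrative only.
import numpy as np

vocab = ["are", "how", "you", "i", "ate"]          # toy vocabulary
index = {word: i for i, word in enumerate(vocab)}  # word -> position

def one_hot(word):
    vec = np.zeros(len(vocab))
    vec[index[word]] = 1.0     # 1 at the word's position, 0 elsewhere
    return vec

sentence = "how are you"
encoded = np.array([one_hot(w) for w in sentence.split()])
# "how" -> [0,1,0,0,0], "are" -> [1,0,0,0,0], "you" -> [0,0,1,0,0]
```

With a 25,000-word vocabulary each of these vectors would have 25,000 entries, which is where the computation cost comes from.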
An RNN uses the same preparation as above - one-hot encoding, or some other way of vectorizing a word - and then a layer of individual neurons.

Suppose there is a hidden layer of neurons: you supply an input and get one output, with each neuron applying a weighted sum and an activation function. The sentence is processed word by word. Take the sentence "shiv loves baby yokul":

- First you give 'shiv' as input; it is vectorized (converted into 0s and 1s) and produces an output. When you feed in the next word, you also feed in the previous word's output as an input. So the input at each step is not just the next word but also the previous output, because language needs to carry context for it to make sense.
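One recurrent step can be sketched in NumPy as below. The sizes and the random weights are illustrative; the point is only that the hidden layer receives the previous step's output along with the current word's vector.

```python
# One recurrent step: the hidden layer sees the current word's vector
# AND the previous step's output, which is how context is carried.
# Sizes and random weights are illustrative, not trained values.
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 4, 3
Wx = rng.normal(size=(hidden_size, input_size))   # input -> hidden weights
Wh = rng.normal(size=(hidden_size, hidden_size))  # previous output -> hidden
b = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """h_t depends on the current word x_t and the previous output h_prev."""
    return np.tanh(Wx @ x_t + Wh @ h_prev + b)

x1 = np.array([1., 0., 0., 0.])   # vector for the first word
x2 = np.array([0., 1., 0., 0.])   # vector for the next word
h0 = np.zeros(hidden_size)        # no previous output at the start
h1 = rnn_step(x1, h0)
h2 = rnn_step(x2, h1)             # previous output h1 fed back in
```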
The same network can be drawn a different way, unrolled over the words of the sentence. But note these are not four different hidden layers: there is actually only one hidden layer, travelled through time. When the first word is supplied, you get an output from the activation function; once the network is trained, for each word it outputs a 1, a 0, or the appropriate label, and that gives you the NER output.
- The generic representation of an RNN is therefore a loop: you supply the output for the previous word as an input alongside the next word.
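That loop can be sketched end-to-end in NumPy: one shared hidden layer is applied at every time step, feeding each step's output back in with the next word. The vocabulary, sizes and random weights below are made up for illustration.

```python
# Unrolled RNN as a loop: the SAME weights (one hidden layer) are reused
# at every time step, and each step feeds the previous output back in.
# Vocabulary, sizes and random weights are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
vocab = {"shiv": 0, "loves": 1, "baby": 2, "yokul": 3}
V, H = len(vocab), 5
Wx = rng.normal(size=(H, V))   # word vector -> hidden, shared across steps
Wh = rng.normal(size=(H, H))   # previous output -> hidden, shared too

def one_hot(word):
    v = np.zeros(V)
    v[vocab[word]] = 1.0
    return v

h = np.zeros(H)                # initial "previous output"
outputs = []
for word in "shiv loves baby yokul".split():
    h = np.tanh(Wx @ one_hot(word) + Wh @ h)  # previous h fed back in
    outputs.append(h)          # one output per word (e.g. for NER tags)
```

In a trained NER model, each per-word output would be passed through a final classification layer to label the word as a person, company, time and so on.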