Author: Himanshu Bobade
Automatically generating a natural-language description of an image is a difficult task. This blog discusses the image captioning task: how it works and the models used for extracting features and generating captions. Model training, as well as testing and generating captions with the trained model, is also demonstrated.
Humans can describe almost everything around them. Machines can also describe images using NLP, deep learning, and computer vision techniques. This process is called image captioning: with a trained model, we can generate captions for a given image or set of images. The algorithm we will execute uses InceptionV3 and GloVe, both through transfer learning. InceptionV3 is used for identifying objects, i.e. extracting features from the image. GloVe stands for Global Vectors for Word Representation. It is an unsupervised learning algorithm that produces vector representations for words. Training is performed on aggregated global word co-occurrence statistics from a corpus, and the resulting representations exhibit interesting linear substructures of the word vector space.
A simple illustration of the execution is shown in fig. 1: we extract features/objects from the image and combine them to form a sentence. The model detects that it is a woman walking through a green field.
Caption generated: a woman walking on a lush green field. The necessary modules to import are as follows:
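A typical import block for this pipeline might look like the following (assuming TensorFlow 2.x with its bundled Keras; exact imports depend on your setup):

```python
import os
import string
import pickle
import numpy as np
from tensorflow.keras.applications.inception_v3 import InceptionV3, preprocess_input
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, LSTM, Embedding, Dropout, add
```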
Now we will use a variable called root_captioning to store the folder location of our dataset files. We need to download the GloVe embeddings and the Flickr8k dataset.
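A sketch of the path setup, assuming a hypothetical folder layout (the file and folder names below follow the standard Flickr8k/GloVe distributions, but adjust them to wherever you unpacked the downloads):

```python
import os

# Hypothetical root folder; change this to your own download location.
root_captioning = os.path.join(".", "data", "captions")

# GloVe 200-dimensional embeddings file.
glove_path = os.path.join(root_captioning, "glove.6B.200d.txt")
# Flickr8k images (the distribution folder is spelled "Flicker8k_Dataset").
images_path = os.path.join(root_captioning, "Flicker8k_Dataset")
# Caption file: one "<image>#<n>\t<caption>" entry per line.
token_path = os.path.join(root_captioning, "Flickr8k_text", "Flickr8k.token.txt")
```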
We will load the dataset and clean the caption text: convert it to lowercase, remove punctuation, and so on. Each image has five captions. We will build a dictionary whose keys are the image names and whose values are the captions associated with each image.
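A minimal sketch of the cleaning and lookup-building step (the helper names are my own; the line format matches Flickr8k.token.txt):

```python
import string

def clean_caption(text):
    """Lowercase, strip punctuation, and drop non-alphabetic tokens."""
    table = str.maketrans("", "", string.punctuation)
    words = text.lower().translate(table).split()
    return " ".join(w for w in words if w.isalpha())

def build_lookup(token_lines):
    """Map image name -> list of cleaned captions.

    Each line looks like: '1000268201_693b08cb0e.jpg#0\tA child in a pink dress ...'
    """
    lookup = {}
    for line in token_lines:
        image_part, caption = line.split("\t")
        image_name = image_part.split("#")[0]   # strip the '#0'..'#4' suffix
        lookup.setdefault(image_name, []).append(clean_caption(caption))
    return lookup

lines = ["img1.jpg#0\tA dog runs!", "img1.jpg#1\tThe dog is running."]
lookup = build_lookup(lines)   # {'img1.jpg': ['a dog runs', 'the dog is running']}
```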
Now we load the image dataset and split it into training and test sets.
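The split can be sketched as below. This is a simple fraction-based split for illustration; the Flickr8k distribution actually ships predefined train/test image lists you can load instead.

```python
def split_images(image_names, train_fraction=0.8):
    """Deterministic split: first 80% of names for training, rest for testing."""
    n_train = int(len(image_names) * train_fraction)
    return image_names[:n_train], image_names[n_train:]

names = [f"img{i}.jpg" for i in range(10)]
train_images, test_images = split_images(names)
```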
We will later use the start token to begin the process of generating a caption. Encountering the stop token in the generated text will let us know the process is complete.
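Wrapping every caption with the two tokens can be done like this (the token spellings `startseq`/`endseq` are the common convention for this tutorial-style pipeline):

```python
START, STOP = "startseq", "endseq"

def wrap_captions(lookup):
    """Surround every caption with the start and stop tokens."""
    return {img: [f"{START} {c} {STOP}" for c in caps]
            for img, caps in lookup.items()}

wrapped = wrap_captions({"img1.jpg": ["a dog runs"]})
```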
Now we are going to load the Inception model. Its penultimate layer outputs a 2048-dimensional feature vector (output_dim = 2048), which is relatively compact. You can substitute other backbones such as MobileNet, trading feature quality against processing and training time.
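Loading the model and chopping off its classification head might look like this (weights=None here so the sketch builds without downloading anything; use weights="imagenet" in practice):

```python
from tensorflow.keras.applications.inception_v3 import InceptionV3
from tensorflow.keras.models import Model

# Build InceptionV3; in practice pass weights="imagenet".
base = InceptionV3(weights=None)

# Drop the final softmax layer: the penultimate (pooling) layer outputs
# the 2048-dimensional feature vector we use to describe each image.
encoder = Model(inputs=base.input, outputs=base.layers[-2].output)
```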
After that, we need to encode the images to create training sets:
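The encoding step can be sketched as follows. To keep the example self-contained, `feature_extractor` is a parameter: any callable mapping a (1, 299, 299, 3) batch to a (1, 2048) array, e.g. InceptionV3 with its top layer removed; a stub stands in for it here.

```python
import numpy as np

def encode_image(img_array, feature_extractor):
    """Scale a 299x299x3 image to Inception's expected range and
    return a flat 2048-dimensional feature vector."""
    img = img_array.astype("float32")
    # InceptionV3's preprocess_input maps pixels from [0, 255] to [-1, 1].
    img = img / 127.5 - 1.0
    batch = np.expand_dims(img, axis=0)     # -> (1, 299, 299, 3)
    features = feature_extractor(batch)     # -> (1, 2048)
    return np.reshape(features, (2048,))

# Stub extractor so the sketch runs without TensorFlow:
stub = lambda batch: np.zeros((1, 2048))
vec = encode_image(np.zeros((299, 299, 3)), stub)
```

The resulting vectors are typically pickled to disk so the expensive extraction only ever runs once per image.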
Each loaded image file is now encoded into a 2048-dimensional vector and the results are pickled; we do the same with the test set. Next, we need to create a word vocabulary for our captions:
The table idxtoword converts index numbers to actual words, and wordtoidx converts words to index values.
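Building the vocabulary and both lookup tables might look like this (a sketch; `min_count` is an assumed threshold you can raise to drop rare words):

```python
def build_vocab(captions, min_count=1):
    """Count words across all captions, keep those appearing at least
    min_count times, and return the wordtoidx / idxtoword tables."""
    counts = {}
    for caption in captions:
        for w in caption.split():
            counts[w] = counts.get(w, 0) + 1
    vocab = [w for w, c in sorted(counts.items()) if c >= min_count]
    # Index 0 is reserved for sequence padding.
    wordtoidx = {w: i + 1 for i, w in enumerate(vocab)}
    idxtoword = {i + 1: w for i, w in enumerate(vocab)}
    return wordtoidx, idxtoword

wordtoidx, idxtoword = build_vocab(["startseq a dog runs endseq"])
```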
The way our model works is that it generates the caption one word at a time. For example,
A woman walking
A woman walking in lush green fields.
The words are added one at a time. This can be achieved as follows:
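Concretely, a caption of n words expands into n-1 training pairs: (image features, words so far) → next word. A sketch of that expansion (helper name is my own):

```python
import numpy as np

def make_training_pairs(photo_vec, caption, wordtoidx, max_length):
    """Expand one caption into (image, partial sequence) -> next-word pairs.

    'startseq a dog endseq' yields:
      [startseq]          -> a
      [startseq, a]       -> dog
      [startseq, a, dog]  -> endseq
    """
    seq = [wordtoidx[w] for w in caption.split() if w in wordtoidx]
    X_img, X_seq, y = [], [], []
    for i in range(1, len(seq)):
        partial = seq[:i]
        # Left-pad the partial sequence to a fixed length (0 = padding).
        padded = [0] * (max_length - len(partial)) + partial
        X_img.append(photo_vec)
        X_seq.append(padded)
        y.append(seq[i])
    return np.array(X_img), np.array(X_seq), np.array(y)

w2i = {"startseq": 1, "a": 2, "dog": 3, "endseq": 4}
Xi, Xs, y = make_training_pairs(np.zeros(2048), "startseq a dog endseq", w2i, 5)
```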
Neural Network building:
Defining layers and functions:
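A common merge architecture for this kind of captioner is sketched below: a dense layer compresses the image features, an LSTM encodes the partial caption, and the two are added before predicting the next word. The layer sizes, vocab_size, and max_length are example figures; in practice the Embedding layer is initialized with the GloVe matrix and frozen.

```python
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, Dense, LSTM, Embedding, Dropout, add

vocab_size = 1800     # example figure; use len(wordtoidx) + 1 in practice
max_length = 34       # longest caption in the training set
embedding_dim = 200   # matches the GloVe 200-d vectors

# Image branch: 2048-d Inception features -> 256-d representation.
inputs1 = Input(shape=(2048,))
fe1 = Dropout(0.5)(inputs1)
fe2 = Dense(256, activation="relu")(fe1)

# Text branch: word indices -> embeddings -> LSTM state.
inputs2 = Input(shape=(max_length,))
se1 = Embedding(vocab_size, embedding_dim, mask_zero=True)(inputs2)
se2 = Dropout(0.5)(se1)
se3 = LSTM(256)(se2)

# Merge both branches and predict the next word.
decoder1 = add([fe2, se3])
decoder2 = Dense(256, activation="relu")(decoder1)
outputs = Dense(vocab_size, activation="softmax")(decoder2)

model = Model(inputs=[inputs1, inputs2], outputs=outputs)
model.compile(loss="categorical_crossentropy", optimizer="adam")
```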
The optimizer used here is Adam. Training will take a couple of hours.
Testing and evaluating on the dataset images:
Our model is ready; now we need to test it. We will define a generateCaption function for this.
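The function can be sketched as a greedy decoding loop: feed the words generated so far back into the model and append its most probable next word until the stop token appears. To keep the sketch runnable, `predict_fn` is a parameter (in the real pipeline it wraps model.predict on the trained network) and a stub stands in for it here.

```python
import numpy as np

def generate_caption(photo_vec, predict_fn, wordtoidx, idxtoword, max_length):
    """Greedy decoding: repeatedly predict the most probable next word
    until 'endseq' is produced or max_length is reached."""
    words = ["startseq"]
    for _ in range(max_length):
        seq = [wordtoidx[w] for w in words if w in wordtoidx]
        seq = [0] * (max_length - len(seq)) + seq   # left-pad with zeros
        probs = predict_fn(np.array([photo_vec]), np.array([seq]))
        next_word = idxtoword[int(np.argmax(probs))]
        if next_word == "endseq":
            break
        words.append(next_word)
    return " ".join(words[1:])   # drop the start token

# Stub model that predicts "dog" first, then "endseq":
w2i = {"startseq": 1, "dog": 2, "endseq": 3}
i2w = {v: k for k, v in w2i.items()}
def stub_predict(photos, seqs):
    p = np.zeros(4)
    p[3 if seqs[0][-1] == 2 else 2] = 1.0   # after "dog", say "endseq"
    return p

caption = generate_caption(np.zeros(2048), stub_predict, w2i, i2w, 5)  # -> "dog"
```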
Input Image from test dataset shown in fig. 2:
Output caption: A dog is chasing balls.
Thus, we learnt how to effectively detect objects in an image and generate a suitable caption to describe it.