CLIP is OpenAI's recent multimodal neural network, which computes the relevance between (image, text) pairs. This repository uses the pre-trained, publicly released version of CLIP to solve the task of Visual Question Answering (VQA).
Create a virtual environment and run `pip install -r requirements.txt`, which installs all the dependencies for Python 3.7.3.
Here's a short demo:
```python
from LanguageModels.appendQAModel import AppendQAModel
from CLIPInterface.clipInterface import CLIPInterface
from VQAInterface.vqaInterface import VQAInterface
from CLIPVQA.clipvqa import CLIPVQA

# Naive language model which appends the answer to the question to generate a sentence
appendModel = AppendQAModel(separator = " ", candidateAnswerGenerator = 'most_common')
clipInterface = CLIPInterface(device = "cpu")
vqaInterface = VQAInterface(dataDir = './data', versionType = "v2", taskType = "OpenEnded", dataType = "mscoco")

# Wrapper model which wraps the pre-trained language model and CLIP to generate VQA results
clipVqaModel = CLIPVQA(clipInterface, appendModel, vqaInterface)
results = clipVqaModel.generateResults(evalDataSubType = "val2014", answersDataSubType = "train2014", numCandidates = 1000, outFile = "./Results/resultTest.json")
```
The expected layout of the `data` and `Results` directories:

```
├── data
│   ├── Annotations
│   │   ├── v2_mscoco_train2014_annotations.json
│   │   └── v2_mscoco_val2014_annotations.json
│   ├── Images
│   │   └── mscoco
│   │       └── val2014
│   └── Questions
│       ├── v2_OpenEnded_mscoco_train2014_questions.json
│       └── v2_OpenEnded_mscoco_val2014_questions.json
├── Results
```
All language models inherit from the `LanguageModeBase` class, which provides the following functionality:

- `getText(self, question, answer)`: takes a `question: str` and an `answer: str` and generates a text prompt according to the logic of the corresponding language model class.
- `getCandidateAnswers(self, question, allAnswers, k)`: takes a question and all possible answers and generates `k` (i.e. `numCandidates`) candidate answers for the question, based on the corresponding language model class's logic.
- `getTextFromAllPossibleAnswers`: wrapper function which takes questions and all possible answers and generates the candidate texts to be fed to CLIP.
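For illustration, a custom language model could look roughly like the sketch below. This is a minimal, unverified sketch: the import path, the base-class contract, and the type of `allAnswers` (assumed here to be an answer-to-frequency mapping) are assumptions rather than the repository's actual API.

```python
# Hypothetical custom language model (illustrative only).
# The import path and base-class contract are assumptions.
from LanguageModels.languageModelBase import LanguageModeBase


class PrefixedQAModel(LanguageModeBase):
    """Phrases each (question, answer) pair as 'Q: <question> A: <answer>'."""

    def getText(self, question, answer):
        # Build the sentence that will be scored by CLIP.
        return f"Q: {question} A: {answer}"

    def getCandidateAnswers(self, question, allAnswers, k):
        # Prior-only strategy: keep the k globally most frequent answers,
        # assuming allAnswers maps answer string -> frequency.
        ranked = sorted(allAnswers, key=allAnswers.get, reverse=True)
        return ranked[:k]
```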
`AppendQAModel` is a naive language model which generates candidate answers based on co-occurrences only (prior probabilities) and appends each answer to the question to generate the text.
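For intuition, with the `separator = " "` configuration from the demo, the appended text should look roughly like this (the exact output string is an assumption based on the description above, not verified against the code):

```python
from LanguageModels.appendQAModel import AppendQAModel

appendModel = AppendQAModel(separator=" ", candidateAnswerGenerator='most_common')

# Question and answer are simply joined with the separator (assumed behaviour).
text = appendModel.getText("What color is the banana?", "yellow")
# expected: "What color is the banana? yellow"
```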
`CLIPInterface` is a simple interface class for the pretrained CLIP model.

- `getProbs`: takes `imageFilePath` (a single image file path or a list of file paths) and `texts`, and outputs the probability of each text being paired with each of the images. Return shape: `#imageFilePaths x #texts`.
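A usage sketch of `getProbs`, assuming the return value is indexable as `#imageFilePaths x #texts`; the image path below is a placeholder, not a file shipped with the repository:

```python
from CLIPInterface.clipInterface import CLIPInterface

clipInterface = CLIPInterface(device="cpu")

imagePaths = ["./data/Images/mscoco/val2014/example.jpg"]  # placeholder path
texts = [
    "What color is the banana? yellow",
    "What color is the banana? green",
]

# probs is expected to have shape (#imageFilePaths, #texts), i.e. (1, 2) here.
probs = clipInterface.getProbs(imagePaths, texts)
print(probs)
```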
`VQAInterface` is an interface class for reading and interpreting the VQA data.

- `getAllAnswers(self, dataSubType)`: gets the frequency of every answer present in the corresponding `dataSubType`. The answer chosen per annotation is the `'multiple_choice_answer'` field of the VQA annotations file.
- `getQIPairs(self, dataSubType)`: generates a dictionary which maps `question_id` to (1) the absolute `'image_path'` and (2) the `'question'` string.
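A sketch of how the two helpers might be combined, assuming `getAllAnswers` returns a dict-like answer-to-frequency mapping and `getQIPairs` returns the dictionary described above (both return types are assumptions):

```python
from VQAInterface.vqaInterface import VQAInterface

vqaInterface = VQAInterface(dataDir='./data', versionType="v2",
                            taskType="OpenEnded", dataType="mscoco")

# Answer frequencies from the split used to build candidate answers.
answerFreqs = vqaInterface.getAllAnswers("train2014")

# question_id -> {'image_path': ..., 'question': ...} for the evaluation split.
qiPairs = vqaInterface.getQIPairs("val2014")

someQuestionId = next(iter(qiPairs))
print(someQuestionId, qiPairs[someQuestionId])
```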
`CLIPVQA` is a wrapper class which takes the above three classes as inputs and uses them to generate the final results.

- `generateImageTextPairs(self, evalDataSubType, answersDataSubType, numCandidates)`: generates `(question_id, image_path, texts, answers)` tuples, which are used by `CLIPInterface` to compute the probability of each answer. `evalDataSubType` is the dataSubType used to get images and questions; `answersDataSubType` is the dataSubType used to get possible answers.
- `generateResults(self, evalDataSubType, answersDataSubType, numCandidates, outFile = None)`: generates the final results and saves them to `outFile` (if passed as an argument). `evalDataSubType` is the dataSubType used to get images and questions; `answersDataSubType` is the dataSubType used to get possible answers; `numCandidates` is the number of candidate answers used for every question.
- `generateResultsDataLoader(self, evalDataSubType, answersDataSubType, numCandidates, outFile = None)`: the same as above but uses a data loader instead of loading all (image, text) pairs into memory.
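A sketch of the memory-friendly path via `generateResultsDataLoader`, mirroring the demo arguments above; the output file name is hypothetical:

```python
from CLIPVQA.clipvqa import CLIPVQA

clipVqaModel = CLIPVQA(clipInterface, appendModel, vqaInterface)

# Same arguments as generateResults, but (image, text) pairs are streamed
# through a data loader instead of being held in memory all at once.
results = clipVqaModel.generateResultsDataLoader(
    evalDataSubType="val2014",
    answersDataSubType="train2014",
    numCandidates=1000,
    outFile="./Results/resultDataLoaderTest.json",  # hypothetical output path
)
```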