OpenVQA


OpenVQA is a general platform for visual question answering (VQA) research, implementing state-of-the-art approaches (e.g., BUTD, MFH, BAN, MCAN, and MMNasNet) on different benchmark datasets such as VQA-v2, GQA, and CLEVR. Support for more methods and datasets will be added continuously.

Documentation

To get started and learn more about OpenVQA, see the documentation here.

Benchmark and Model Zoo

Supported methods and benchmark datasets are listed below. Results and pre-trained models, along with the full method-by-dataset coverage, are available in the MODEL ZOO.

Supported methods: BUTD, MFB, MFH, BAN, MCAN, MMNasNet
Supported datasets: VQA-v2, GQA, CLEVR
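As a quick illustration of how a supported method and dataset are combined, the sketch below launches training through the run.py entry point; the flag names and values (mcan_small, vqa) are assumptions and should be checked against the getting-started guide in the documentation:

    # Train an MCAN model on VQA-v2 (model/dataset names are assumed; see the docs)
    python3 run.py --RUN='train' --MODEL='mcan_small' --DATASET='vqa'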

News & Updates

v0.7.5 (30/12/2019)

  • Add support and pre-trained models for the approaches on CLEVR.

v0.7 (29/11/2019)

  • Add support and pre-trained models for the approaches on GQA.
  • Add a document describing how to add a new model to OpenVQA.

v0.6 (18/09/2019)

  • Refactor the documentation and use Sphinx to build it.

v0.5 (31/07/2019)

  • Implement the basic framework for OpenVQA.
  • Add support and pre-trained models for BUTD, MFB, MFH, BAN, and MCAN on VQA-v2.

License

This project is released under the Apache 2.0 license.

Contact

This repo is currently maintained by Zhou Yu (@yuzcccc) and Yuhao Cui (@cuiyuhao1996).
