IMAGINE

IMAGINE: Improving Multi-modal lAnguage Generation wIth world kNowledgE is a research project whose main goal is to investigate how to incorporate world knowledge into vision & language natural language generation tasks. I am a Marie Skłodowska-Curie Global Fellow.

I am spending ~2 years at New York University’s Courant Institute of Mathematical Sciences, where I will work with Kyunghyun Cho, followed by 3 months in Paris visiting Antoine Bordes at Facebook Artificial Intelligence Research (FAIR). Finally, I will return to the Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam, where I will continue collaborating with Raquel Fernández.

Concretely, I investigate how to:

  • Gather world knowledge (semi-)automatically from publicly available multi-modal knowledge bases.
  • Learn representations for a knowledge base that encompasses both text and images (see the sketch after this list).
  • Integrate this knowledge into multi-modal language generation tasks, such as multi-modal machine translation, visual question answering and image description generation.
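
As a rough illustration of the second point, below is a minimal, hypothetical sketch of one way to learn joint embeddings for a multi-modal knowledge base: each entity is represented by fusing a learned textual embedding with pre-extracted image features, and relations act as TransE-style translations trained with a margin ranking loss. All class names, dimensions, and the fusion scheme are illustrative assumptions, not the project's actual model.

```python
# Hypothetical sketch: multi-modal knowledge-base embeddings (not the project's actual model).
import torch
import torch.nn as nn


class MultiModalKBEmbedder(nn.Module):
    def __init__(self, n_entities, n_relations, img_feat_dim=2048, dim=256):
        super().__init__()
        self.txt_emb = nn.Embedding(n_entities, dim)   # textual entity embedding
        self.rel_emb = nn.Embedding(n_relations, dim)  # relation (translation) vector
        self.img_proj = nn.Linear(img_feat_dim, dim)   # projection of pre-extracted image features

    def entity(self, ent_ids, img_feats):
        # Fuse the textual and visual views of an entity by simple addition.
        return self.txt_emb(ent_ids) + self.img_proj(img_feats)

    def score(self, head_ids, head_imgs, rel_ids, tail_ids, tail_imgs):
        # TransE-style score: smaller distance means a more plausible triple (h, r, t).
        h = self.entity(head_ids, head_imgs)
        r = self.rel_emb(rel_ids)
        t = self.entity(tail_ids, tail_imgs)
        return torch.norm(h + r - t, p=2, dim=-1)


# Toy usage: score a correct triple against a corrupted one with a margin loss.
model = MultiModalKBEmbedder(n_entities=1000, n_relations=50)
h, r, t, t_neg = torch.tensor([3]), torch.tensor([7]), torch.tensor([42]), torch.tensor([99])
img_h, img_t, img_t_neg = (torch.randn(1, 2048) for _ in range(3))
pos = model.score(h, img_h, r, t, img_t)
neg = model.score(h, img_h, r, t_neg, img_t_neg)
loss = torch.clamp(1.0 + pos - neg, min=0.0).mean()  # margin ranking loss
```

Additive fusion is only one of several possible choices here; a gated or attention-based combination of the textual and visual views would fit the same training setup.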

Please get in touch if you would like to collaborate on any of these research topics!

