I have been granted a Marie Skłodowska-Curie Global Fellowship, which will fund my research for about 3 years. My project is named IMAGINE: Improving Multi-modal lAnguage Generation wIth world kNowledgE, and its main goal is to investigate how to incorporate world knowledge into vision & language tasks within natural language generation.

I will spend ~2 years at New York University’s Courant Institute of Mathematical Sciences, where I will work with Kyunghyun Cho, followed by 3 months in Paris visiting Antoine Bordes at Facebook Artificial Intelligence Research (FAIR). Finally, I will return to the Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam, where I will continue collaborating with Raquel Fernández.

Concretely, I will investigate how to:

  • Gather world knowledge automatically from publicly available multi-modal knowledge bases.
  • Learn representations for a knowledge base that encompasses both text and images.
  • Integrate this knowledge into multi-modal language generation tasks, such as multi-modal machine translation, visual question answering, and image description generation.

Please get in touch if you would like to collaborate on any of these research topics!

Iacer Calixto © 2019. All rights reserved.
