
Recurrent Attention LSTM Model for Image Chinese Caption Generation


Abstract:

A Recurrent Attention LSTM (RAL) model is proposed for image Chinese caption generation. The model uses Google's Inception-v4 as the CNN to extract image features, while a recurrent attention LSTM mechanism determines the feature weights. Because the weights of image regions are incorporated, the model generates words more accurately, and the proposed model is therefore able to produce more relevant descriptions and improve the efficiency of the system. Compared with the Neural Image Caption (NIC) model, experimental results on the AI Challenger Image Chinese Captioning dataset show that the proposed model improves BLEU-4 by 1.8% and CIDEr by 6.2%.
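The attention step described in the abstract — weighting CNN region features by relevance to the LSTM's current state before predicting the next word — can be sketched roughly as follows. This is a generic additive (Bahdanau-style) soft-attention illustration, not the paper's exact formulation; all shapes, parameter names (`W_r`, `W_h`, `v`), and dimensions are hypothetical.

```python
import numpy as np

def soft_attention(region_feats, hidden, W_r, W_h, v):
    """Weight image-region features by relevance to the decoder state.

    region_feats: (num_regions, feat_dim) CNN features, one row per image region
    hidden:       (hid_dim,) previous LSTM hidden state
    W_r, W_h, v:  learned projection parameters (hypothetical shapes)
    Returns the attention distribution over regions and the weighted context vector.
    """
    # Additive score per region: v . tanh(W_r-projected features + W_h-projected state)
    scores = np.tanh(region_feats @ W_r + hidden @ W_h) @ v   # (num_regions,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                                  # softmax over regions
    context = weights @ region_feats                          # (feat_dim,) context vector
    return weights, context

# Toy dimensions for illustration only.
rng = np.random.default_rng(0)
R, F, H, A = 4, 6, 5, 3   # regions, feature dim, hidden dim, attention dim
feats = rng.standard_normal((R, F))
h = rng.standard_normal(H)
W_r = rng.standard_normal((F, A))
W_h = rng.standard_normal((H, A))
v = rng.standard_normal(A)

w, ctx = soft_attention(feats, h, W_r, W_h, v)
```

At each decoding step the context vector would be fed, together with the previous word embedding, into the LSTM to predict the next caption token; recomputing the weights from the updated hidden state at every step is what makes the attention recurrent.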
Date of Conference: 05-08 December 2018
Date Added to IEEE Xplore: 16 May 2019
Conference Location: Toyama, Japan

