When we talk about Convolutional Neural Networks (CNNs), which take a fixed-size input and produce a fixed-size output, we usually think of image processing. When it comes to Natural Language Processing, Recurrent Neural Networks (RNNs) come into play, since they allow us to operate over sequences of vectors.
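To make the contrast concrete, here is a minimal NumPy sketch (all dimensions and weights are illustrative assumptions, not from any particular paper) of a vanilla RNN cell applied step by step, so that sequences of any length map to one fixed-size hidden state:

```python
import numpy as np

# Illustrative sketch: a vanilla RNN cell folds a variable-length
# sequence of vectors into a single fixed-size hidden state.
rng = np.random.default_rng(0)
input_dim, hidden_dim = 4, 8                 # assumed sizes for the demo
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

def rnn_encode(sequence):
    """Fold a (seq_len, input_dim) sequence into a (hidden_dim,) state."""
    h = np.zeros(hidden_dim)
    for x in sequence:                       # one recurrence step per token vector
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
    return h

short = rng.normal(size=(3, input_dim))      # a 3-token "sentence"
long = rng.normal(size=(11, input_dim))      # an 11-token "sentence"
print(rnn_encode(short).shape, rnn_encode(long).shape)  # both (8,)
```

Note how both sequences, despite their different lengths, end up as the same fixed-size representation; a plain feed-forward CNN layer has no such built-in mechanism.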
Now, however, a new trend in research is emerging: experimenting with Convolutional Neural Networks for Sequence to Sequence Learning. Another study published this year gives basic guidance for DNN selection by comparing CNN and RNN (GRU and LSTM) architectures on a wide range of representative NLP tasks.
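The core idea behind CNNs for text can be sketched in a few lines: a 1D convolution slides a window of width k over the token embeddings, and max-over-time pooling collapses the variable-length feature maps into a fixed-size vector. All sizes below are illustrative assumptions, not taken from the studies mentioned above:

```python
import numpy as np

# Hedged sketch of a 1D convolution over a sequence of word embeddings,
# followed by ReLU and max-over-time pooling.
rng = np.random.default_rng(1)
embed_dim, num_filters, k = 4, 6, 3          # assumed sizes for the demo
filters = rng.normal(scale=0.1, size=(num_filters, k, embed_dim))

def conv_encode(embeddings):
    """Map (seq_len, embed_dim) embeddings to a (num_filters,) vector."""
    seq_len = embeddings.shape[0]
    # Every window of k consecutive token embeddings
    windows = np.stack([embeddings[i:i + k] for i in range(seq_len - k + 1)])
    # (num_windows, num_filters): each filter dotted with each window
    feature_maps = np.tensordot(windows, filters, axes=([1, 2], [1, 2]))
    return np.maximum(feature_maps, 0).max(axis=0)   # ReLU + max-over-time

sentence = rng.normal(size=(9, embed_dim))   # a 9-token "sentence"
print(conv_encode(sentence).shape)           # (6,)
```

The pooling step is what lets a convolutional encoder handle variable-length input after all, which is one reason this line of research is viable for NLP.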
RNNs perform well and are robust across a broad range of tasks, except for keyphrase recognition tasks such as some sentiment detection and question-answer matching settings. However, after speaking to people in academia as well as in industry, both sides made clear that their first approach to an NLP problem would always be a Recurrent Neural Network. By the way, I just came across Andrej Karpathy’s post
“The Unreasonable Effectiveness of Recurrent Neural Networks”
which is admittedly a bit older, but a classic and definitely worth reading again. If you have anything to add, please share your opinion and experience with CNNs and RNNs for NLP tasks on Quora.