Since everyone is creating their own chatbot nowadays, a common point of confusion is choosing the core model for your bot, and many people start working with the wrong model, which often leads to the failure of their beloved chatbot. Want to be able to differentiate between the two models? Read on!
Retrieval-based models are easy to understand and work with. They draw on a source of predefined, readily available responses and can develop a good command of context and content. Retrieval-based models may contain a solid Machine Learning configuration, but they are not built to generate any new text: they simply pick the appropriate text from a predefined set of responses and present the one that matches the intent of a question or statement. Retrieval-based models are a perfect fit for chatbots that just perform a task and nothing else. You will have a set of answers, and your chatbot will answer the user according to those answers and the keywords it detects.
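To make this concrete, here is a minimal sketch of the retrieval idea, assuming a hypothetical keyword table (the intents and responses below are made up for illustration; a real retrieval-based bot would score intents with a trained classifier rather than raw keyword counts):

```python
# Predefined responses: the bot can only ever say one of these.
RESPONSES = {
    "hours": "We are open 9am-5pm, Monday to Friday.",
    "refund": "Refunds are processed within 5 business days.",
    "greeting": "Hello! How can I help you today?",
}

# Keywords that signal each intent.
KEYWORDS = {
    "hours": ["open", "hours", "time"],
    "refund": ["refund", "money", "return"],
    "greeting": ["hello", "hi", "hey"],
}

def reply(message: str) -> str:
    """Pick the predefined response whose keywords best match the message."""
    words = message.lower().split()
    best_intent, best_score = None, 0
    for intent, keywords in KEYWORDS.items():
        score = sum(1 for w in words if w in keywords)
        if score > best_score:
            best_intent, best_score = intent, score
    if best_intent is None:
        return "Sorry, I didn't understand that."
    return RESPONSES[best_intent]
```

Note that the bot never composes a sentence: every reply already exists in `RESPONSES`, which is exactly why such bots stay grammatical but can never say anything new.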
Generative models are harder to understand and set up. They do not rely on a predefined set of responses; instead, they generate text of their own according to their understanding of a given question or statement. Generative models grew out of Machine Translation techniques: rather than translating between two languages, they "translate" a question into an answer. They generate brand-new responses from scratch and put them in front of the user. If you want to go big, this model is for you. It is a direct jump into the AI playground, but it will make mistakes, so you should prepare yourself for them. It will feel like you are talking to a human.
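As a toy illustration of "generating from scratch", here is a tiny bigram Markov chain built on a made-up corpus. Real generative chatbots use neural sequence models, not Markov chains, but the key property is the same: the output is assembled word by word rather than looked up from a canned list.

```python
import random

# A tiny made-up corpus; a real model would train on millions of dialogues.
corpus = (
    "the bot reads the question and the bot writes a new answer "
    "the model learns the patterns and the model writes new text"
).split()

# Build a table mapping each word to the words that followed it.
chain = {}
for prev, nxt in zip(corpus, corpus[1:]):
    chain.setdefault(prev, []).append(nxt)

def generate(start: str, length: int = 8, seed: int = 0) -> str:
    """Walk the chain, picking a random successor at each step."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        successors = chain.get(words[-1])
        if not successors:  # dead end: no word ever followed this one
            break
        words.append(random.choice(successors))
    return " ".join(words)
```

Because each next word is sampled, the model can produce sentences that never appeared in the corpus, which is both its power and the reason generative bots make mistakes.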
Both models have their pros and cons. Retrieval-based models don't usually make grammatical errors, because every response comes from a predefined, proofread set.
On the other hand, generative models respond from scratch, so there is a chance they will make such mistakes. Retrieval-based models are a good choice if your bot just performs a small task. Generative models are smarter, and their responses are more human-like. They can even keep track of the conversation and will know what you are referring to. You can apply Deep Learning techniques to both, but architectures like Sequence-to-Sequence are especially well suited to generative models, and most researchers are currently working on this type of model as well.
Do you want to understand the whole blog in an infographic? Here you go!