I’m training a chatbot on student/teacher dialogue (think of it like an IT helpdesk chatbot), but the conversation often depends on contextual information that the messages themselves don’t contain.
Example:
Johnny is trying to calculate the area of a rectangle. His problem is:
"You have a rectangle with height of 5 ft, x, and length of x^2. What is the area?"
Johnny (to teacher): "Is the length 25 ft? I still don’t get how to find the area"
Teacher (to Johnny): "Yes, the length is 25 ft because x = 5 and 5 * 5 = 25. Remember the formula for area?"
Johnny (to teacher): “Is it length times width?”
. . .
When training a straight NMT model, we would map Johnny’s message to the Teacher’s message. But without also including the text of the problem Johnny was given, the same student message could just as easily map to a response from a different area problem (one where a student made an incorrect guess), and the model could produce something like:
Teacher (to Johnny): “No, the length is 49 inches because x = 7 and 7 * 7 = 49. Remember the formula for area?”
That would obviously confuse a student pretty profoundly.
I suppose I could just append the problem text to the end of the student’s message, separated by a delimiter, like this:
Johnny (to teacher): "Is the length 25 ft? I still don’t get how to find the area | You have a rectangle with height of 5 ft, x, and length of x^2. What is the area?"
Teacher (to Johnny): "Yes, the length is 25 ft because x = 5 and 5 * 5 = 25. Remember the formula for area?"
Johnny (to teacher): "Is it length times width? | You have a rectangle with a height of x = 5 ft and a length of x^2. What is the area?"
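For concreteness, here’s a minimal sketch of how I’d build those concatenated training pairs. The `SEP` token and the `make_pair` helper are hypothetical names for illustration, not part of any particular NMT toolkit:

```python
# A minimal sketch of the context-appending idea: build (source, target)
# training pairs where the problem text rides along on every student turn.
# SEP and make_pair are illustrative assumptions, not a specific framework.

SEP = " | "  # delimiter between the student's message and the problem text

def make_pair(student_msg, teacher_msg, problem_text):
    """Append the problem statement to the student's message so the model
    sees the relevant context on every turn."""
    source = student_msg + SEP + problem_text
    return source, teacher_msg

problem = ("You have a rectangle with a height of x = 5 ft "
           "and a length of x^2. What is the area?")

src, tgt = make_pair(
    "Is the length 25 ft? I still don't get how to find the area",
    "Yes, the length is 25 ft because x = 5 and 5 * 5 = 25. "
    "Remember the formula for area?",
    problem,
)
# src now ends with the problem text, so identical student messages from
# different problems become distinct training inputs.
```

One practical note: whatever delimiter you pick should survive your tokenizer as a distinct token, so the model can learn where the message ends and the context begins.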