I tried using a bidirectional LSTM with merge_mode='sum' for the encoder, but when I try to predict headlines, the model barely generates anything. However, the training loss is lower than with the plain unidirectional LSTM. This is the only change I made. Do you know why this happens?
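For context, the change looks roughly like this (a minimal sketch; the vocabulary size, embedding size, hidden size, and the rest of my seq2seq model are placeholders, not the exact values I use):

```python
from tensorflow.keras.layers import Input, Embedding, LSTM, Bidirectional

inputs = Input(shape=(None,))             # article token ids (hypothetical)
embedded = Embedding(40000, 100)(inputs)  # hypothetical vocab/embedding dims

# Before: plain unidirectional encoder
# encoded = LSTM(512)(embedded)

# After: bidirectional encoder, forward and backward outputs summed
encoded = Bidirectional(LSTM(512), merge_mode='sum')(embedded)
```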