Abstract:
Addressing the natural language understanding (NLU) problem essentially means enabling computers to replace humans in tasks that require semantic understanding. Natural language inference (NLI) can be applied to paired-document applications by reducing the problem to a textual entailment task, treating the source document as the premise (a coherent text) and the resulting new document as the hypothesis (a language expression hypothesized to be drawn from the premise). NLU is challenging due to the variability and ambiguity of language expression. Most DNN-based models achieve state-of-the-art results on these tasks when trained on large datasets. We propose a deep-neural-network-based natural language inference model that uses a combined sentence encoder in the sentence embedding layer. RNNs are widely recommended for NLP tasks in which the current component depends on previous history, while the Transformer is the current state-of-the-art language modeling approach. We combine these two approaches at the sentence embedding layer: a bidirectional LSTM with average pooling, and a Transformer sub-model consisting of blocked multi-head attention with positional embedding. We also propose and implement dynamic dropout throughout the model layers and two training controllers, namely early stopping patience and learning rate patience. The model is trained for 10 epochs. We also implement each encoder separately and compare its performance against the proposed model. We prepared an Amharic natural language inference dataset and additionally used the SNLI dataset. The proposed model achieves 88.91% accuracy on the Amharic test set, outperforming the Bi-LSTM-only, Transformer-only, and averaged models by 5.95%, 6.5%, and 7.13%, respectively. It also achieves 87.06% accuracy on the SNLI test set, outperforming the LSTM-based model by 2.01%, the Transformer sub-model by 3.31%, and the averaged model by 3.16%.
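Since the abstract describes the combined encoder only at a high level, the following is a minimal sketch of how a Bi-LSTM-plus-Transformer sentence encoder might look; the framework (PyTorch), layer sizes, depth, and all names (e.g. CombinedSentenceEncoder) are our assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class CombinedSentenceEncoder(nn.Module):
    """Hypothetical sketch: a Bi-LSTM branch (average-pooled) combined with a
    Transformer-encoder branch that uses learned positional embeddings."""

    def __init__(self, vocab_size, emb_dim=300, hidden=256,
                 heads=4, max_len=64, dropout=0.1):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.pos = nn.Embedding(max_len, emb_dim)        # positional embedding
        self.bilstm = nn.LSTM(emb_dim, hidden,
                              batch_first=True, bidirectional=True)
        layer = nn.TransformerEncoderLayer(d_model=emb_dim, nhead=heads,
                                           dropout=dropout, batch_first=True)
        # Stacked ("blocked") multi-head attention layers
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.proj = nn.Linear(emb_dim, 2 * hidden)       # align branch widths
        self.dropout = nn.Dropout(dropout)               # fixed stand-in for the
                                                         # paper's dynamic dropout

    def forward(self, tokens):                           # tokens: (batch, seq)
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.embed(tokens) + self.pos(positions)
        lstm_out, _ = self.bilstm(x)                     # (batch, seq, 2*hidden)
        lstm_vec = lstm_out.mean(dim=1)                  # average pooling over time
        trans_vec = self.proj(self.transformer(x).mean(dim=1))
        return self.dropout(lstm_vec + trans_vec)        # combined sentence embedding
```

Under these assumptions, the two training controllers could correspond to standard patience-based early stopping and a learning-rate scheduler such as torch.optim.lr_scheduler.ReduceLROnPlateau, both monitoring validation loss.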