LTAT.01.001 Natural language processing
This course aims to provide an overview of the main tasks in the field of natural language processing and to introduce contemporary methods for addressing them. The course will cover tasks such as language modeling and word/sentence representations, text classification, sequence tagging for finding parts of speech or morphological features, information extraction such as named entity recognition, and finding the important structural parts of a sentence, as well as some higher-level tasks such as machine translation. In recent years, the NLP field has increasingly adopted deep neural models. Thus, in this course we will look at various deep neural models that are now commonly used for NLP: recurrent networks for modeling sequential data, convolutional networks for text classification, static and contextual word embeddings, and attention mechanisms for finding alignments between different inputs, or between inputs and outputs.
Instructors: Kairit Sirts, Kirill Milintsevich
Term: Spring
Location: Delta building (Narva mnt 18)
Time: Tuesdays at 12:15 and 16:15