RESEARCH AREAS

Large Language Models

OptimalAI Large Language Model (LLM) research focuses on algorithms that work at scale, across languages, and across domains. The research covers a broad range of traditional NLP tasks, with a strong foundation in general-purpose syntactic and semantic algorithms that form the basis of specialized systems. A key focus is developing algorithms that scale seamlessly and run efficiently in highly distributed environments.

On the semantics side, we identify entities in free text, label them with types, and cluster mentions of these entities within and across documents (coreference resolution). We also explore integrating multiple sources of knowledge and information to enhance text analysis capabilities; a simplified sketch of this entity pipeline follows below.
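The sketch below is illustrative only: the text names no specific toolkit, so spaCy is assumed as the entity recognizer, and coreference resolution is reduced to a naive string-overlap clustering of entity mentions within a single document.

```python
# Minimal sketch of the entity pipeline described above.
# Assumptions: spaCy with the en_core_web_sm model installed
# (pip install spacy && python -m spacy download en_core_web_sm).
from collections import defaultdict

import spacy

nlp = spacy.load("en_core_web_sm")

text = (
    "Ada Lovelace worked with Charles Babbage on the Analytical Engine. "
    "Lovelace published the first algorithm intended for the machine."
)
doc = nlp(text)

# Step 1: identify entities in free text and label them with types.
mentions = [(ent.text, ent.label_) for ent in doc.ents]

# Step 2: cluster mentions of the same entity. This crude stand-in for
# coreference resolution groups two mentions when one is a substring
# of the other (e.g. "Lovelace" joins the "Ada Lovelace" cluster).
clusters = defaultdict(list)
for surface, label in mentions:
    key = next((k for k in clusters if surface in k or k in surface), surface)
    clusters[key].append((surface, label))

for canonical, grouped in clusters.items():
    print(canonical, "->", grouped)
```

A production system would replace the substring heuristic with a learned coreference model and extend clustering across documents, but the two steps, mention detection with typing followed by mention clustering, are the same.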

We apply frame semantics at different levels, including noun phrases, sentences, and entire documents, to further advance our understanding of text content.

We value user diversity and have made it a priority to deliver the best possible performance in as many languages as we can. We aim to break new ground by deploying speech technologies that help people communicate, access information online, and share their knowledge, all in their own language.