Poster #1: Deep Bidirectional Transformers for Relation Extraction without Supervision
NLP Scientist Yannis Papanikolaou will be presenting his work on relation extraction using a fine-tuned version of Google’s BERT language model. Extracting relations successfully despite scarce data is vital to discovering treatments for rare diseases, for which funds, and therefore research, tend to be limited.
BERT (Bidirectional Encoder Representations from Transformers) is one of Google’s most advanced language models for NLP. Its key innovation is the bidirectional training of the Transformer architecture, which gives the model a stronger sense of language context and flow.
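The poster’s exact setup is not described here, but a common way to adapt a BERT-style model for relation extraction is to mark the two entity mentions with special tokens before fine-tuning, so the model can attend to them directly. A minimal sketch of that preprocessing step (the marker tokens, function name, and example sentence are illustrative, not taken from the poster):

```python
# Sketch of entity-marker preprocessing often used when fine-tuning
# BERT-style models for relation extraction. Markers and names here
# are illustrative assumptions, not the poster's actual method.

def mark_entities(tokens, head_span, tail_span):
    """Insert [E1]/[/E1] and [E2]/[/E2] markers around the two entity
    mentions. head_span / tail_span are (start, end) token indices,
    end exclusive."""
    inserts = [
        (head_span[0], "[E1]"), (head_span[1], "[/E1]"),
        (tail_span[0], "[E2]"), (tail_span[1], "[/E2]"),
    ]
    out = list(tokens)
    # Insert from the rightmost position first so earlier indices stay valid.
    for pos, marker in sorted(inserts, reverse=True):
        out.insert(pos, marker)
    return out

tokens = ["Aspirin", "inhibits", "COX-2", "activity", "."]
marked = mark_entities(tokens, head_span=(0, 1), tail_span=(2, 3))
print(" ".join(marked))
# [E1] Aspirin [/E1] inhibits [E2] COX-2 [/E2] activity .
```

The marked sequence would then be tokenized and fed to the model, with a classification head predicting the relation type between the two marked entities.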
Poster #2: Reasoning Over Paths via Knowledge Base Completion
Machine Learning Scientist Saatviga Sudhahar will discuss her work on a knowledge-base completion model that explains novel connections between biomedical entities. Given the thousands of connections available, her method ranks the most relevant paths.
Saatviga uses a simple compositional method and frames the problem in a novel way, inferring explanations from the knowledge graph where direct information is not available. Once again, this work is important for discovering new treatments, and we use it internally to help define our drug discovery hypotheses.
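The poster’s exact scoring function is not given here, but one simple compositional approach (in the TransE tradition) represents each relation as a vector, sums the vectors along a path, and ranks paths by how closely their composition matches the target relation’s vector. A toy sketch with invented relation names and embeddings:

```python
# Toy sketch of compositional path scoring for knowledge-base completion.
# The relations and 2-d embeddings below are invented for illustration;
# a real model learns these vectors from the knowledge graph.

def compose(path, rel_emb):
    """TransE-style composition: sum the relation vectors along a path."""
    dim = len(next(iter(rel_emb.values())))
    total = [0.0] * dim
    for rel in path:
        total = [t + v for t, v in zip(total, rel_emb[rel])]
    return total

def score(path, target, rel_emb):
    """Higher score means the path composition is closer to the target
    relation's embedding (negative squared Euclidean distance)."""
    comp = compose(path, rel_emb)
    return -sum((c - t) ** 2 for c, t in zip(comp, rel_emb[target]))

rel_emb = {
    "binds":        [1.0, 0.0],
    "regulates":    [0.0, 1.0],
    "treats":       [1.0, 1.0],   # roughly binds + regulates
    "expressed_in": [3.0, -2.0],
}

paths = [["binds", "regulates"], ["expressed_in"]]
ranked = sorted(paths, key=lambda p: score(p, "treats", rel_emb), reverse=True)
print(ranked[0])  # ['binds', 'regulates'] best explains "treats"
```

Ranking paths this way yields a human-readable explanation: the chain of intermediate relations that best accounts for a predicted link between two entities.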