NGN Research

‘Using Sequences of Life-Events to Predict Human Lives’

‘Exploring the Role of Audio in Video Captioning’

‘Speak Much, Remember Little: Cryptography in the Bounded Storage Model, Revisited’

‘SemEval-2023 Task 8: Causal Medical Claim Identification and Related PIO Frame Extraction From Social Media Posts’

‘RedHOT: A Corpus of Annotated Medical Questions, Experiences and Claims on Social Media’

‘Revisiting Relation Extraction in the Era of Large Language Models’

‘Jointly Extracting Interventions, Outcomes and Findings From RCT Reports With LLMs’

Research on de-centering damage and trauma in human-computer interactions wins Best Paper Award

‘Statistical Detection of Differentially Abundant Proteins in Experiments with Repeated Measures Designs and Isobaric Labeling’

‘Ecosystem Graphs: The Social Footprint of Foundation Models’

‘Mass-Editing Memory in a Transformer’

‘Emergent World Representations: Exploring a Sequence Model Trained on a Synthetic Task’
