Afra Feyza Akyürek

PhD Student in Computer Science at Boston University

akyurek [at] bu [dot] edu

665 Comm Ave, Boston, MA


I am a fifth-year Computer Science PhD student at Boston University focusing on natural language processing.

My research focuses on making interactions with language models more akin to human communication. Inspired by how humans revise their knowledge and beliefs through feedback received in natural language, I am particularly interested in how such feedback can guide a language model's outputs to align with facts, requirements, natural phenomena, or preferences. My aim is to develop methods that enable language models to incorporate this feedback consistently, thereby enhancing their alignment, reliability, and safety.

I am fortunate to be advised by Derry Wijaya. I have recently had the pleasure of collaborating with the talented teams at Allen AI and Apple, and I often work with Jacob Andreas at MIT.

I am currently on the industry job market!

Improving Language Models with Feedback

How can we get language models to adhere to natural language feedback?

  • We devised RL4F, an automatic critique generator trained with reinforcement learning: the critique model is rewarded when its generated critiques improve a second model's predictions (a minimal sketch of this reward follows the list).

  • I led the curation of DUnE, a model editing benchmark in which edits are natural language sentences. We also showed that retrieval-augmented language modeling outperforms specialized editing techniques when edits are expressed in natural language.

  • Moreover, I developed a scheme that grows the number of classes an object classifier can recognize using linguistic information about the objects, such as labels and descriptions.
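
To make the RL4F reward concrete, here is a minimal sketch in Python. Everything in it is illustrative: the generate_critique and generate_revision callables are hypothetical stand-ins for the critique and task models, and ROUGE-L gain is one plausible choice of reward signal; the paper's exact setup may differ.

from rouge_score import rouge_scorer  # pip install rouge-score

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def critique_reward(generate_critique, generate_revision, x, y_init, y_gold):
    # Reward for the critique policy: how much the second model's revision
    # improves after reading the critique, measured as ROUGE-L gain.
    critique = generate_critique(x, y_init)             # policy being trained
    y_revised = generate_revision(x, y_init, critique)  # frozen second model
    before = scorer.score(y_gold, y_init)["rougeL"].fmeasure
    after = scorer.score(y_gold, y_revised)["rougeL"].fmeasure
    return after - before  # positive only when the critique helped

# Toy stand-ins so the sketch runs end to end (both lambdas are made up):
r = critique_reward(
    lambda x, y: "name the correct capital city",
    lambda x, y, c: "Ankara is the capital of Turkey",
    x="What is the capital of Turkey?",
    y_init="Istanbul is the capital of Turkey",
    y_gold="Ankara is the capital of Turkey",
)
print(round(r, 3))  # > 0, so this critique would be reinforced

A policy-gradient algorithm would then use this scalar to update the critique generator.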

Safety in Language Models

How can we make language models safer with language feedback?

  • I have studied bias measurement in instruction-tuned language models and conducted sensitivity analyses of bias measurement methods.

  • A significant portion of our model editing benchmark DUnE consists of edits that solicit debiased model outputs. We find that language models struggle to follow instructions that call for avoiding harmful biases and stereotypes.

biography

I was born and raised in the beautiful coastal city of Izmir in Turkey. I studied at Izmir Fen Lisesi, where I met my favorite person on Earth. I took the nationwide college admission test and ranked 31st among two million exam takers. I moved to Istanbul to study at Koc University and spent five amazing years double-majoring in Computer Engineering and Industrial Engineering. Ekin and I married in 2018 and moved to the US, where I started a PhD in Statistics at Carnegie Mellon. I later moved to Boston to do my PhD in Computer Science, working on natural language processing. I did an internship at Apple, working on gender bias in language models, and recently spent some time at Allen AI with the Aristo team, where we developed an automatic feedback generator.

I come from a family of avid travelers and try to keep that spirit alive in our own little family 😍🌍🗺️🏝️

selected publications

news

Oct, 2023 I was invited to give a talk on my work on safety and editing in language models at MusIML at NeurIPS 2023.
Apr, 2023 I gave a presentation on our work RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs at the New England NLP Meetup at UMass Lowell.
Feb, 2023 I gave a presentation on our work RL4F at BU AIR seminars.
Jul, 2022 I was invited to give a talk on my work Challenges in Measuring Bias in Text Generation at the Gender Bias in NLP workshop at NAACL 2022.
Feb, 2021 I was selected as a Rafik Hariri Institute Graduate Fellow.
Feb, 2020 I gave a talk on our paper Multi-Label and Multilingual News Framing Analysis at Boston University’s AIR seminars.