Ayoung Lee (이아영)
Hi, I am a second-year Ph.D. student in Computer Science and Engineering at the University of Michigan, where I am fortunate to be advised by Prof. Lu Wang. My research focuses on reasoning and alignment in natural language processing. Feel free to contact me with questions or for collaboration!
(Last updated on Dec 8th)
Education
University of Michigan (08/2024 ~ Present)
Ph.D. in Computer Science and Engineering (Advisor: Lu Wang)
Seoul National University (03/2020 ~ 02/2024)
B.S. in Computer Science and Engineering
GPA: 4.18/4.3 (cumulative), 4.17/4.3 (major)
Summa Cum Laude; ranked 1st in the department
Seoul Science High School (03/2017 ~ 02/2020)
A school for students gifted in mathematics and science, with a college-level math and science curriculum
GPA: 4.18/4.3 (cumulative)
Publications
Enhancing Multilingual Reasoning in Language Models
- Developing a new dataset that focuses on cultural adaptation, addressing the limitations of direct translation.
- Exploring different rollout methods and loss functions in RL to improve multilingual reasoning.
- Generating synthetic training datasets to improve multilingual reasoning.
Ongoing project
LiveOIBench: Can Large Language Models Outperform Human Contestants in Informatics Olympiads?
- Implemented self-debugging baselines.
Kaijian Zou, Aaron Xiong, Yunxiang Zhang, Xinliang Frederick Zhang, Yueqi Ren, Jirong Yang, Ayoung Lee, Shitanshu Bhushan, Lu Wang
Submitted to ICLR 2026; initial average score in the top ≈25%
Logit Arithmetic Elicits Long Reasoning Capabilities Without Training
- Ran supervised fine-tuning experiments, with and without LoRA, for distillation baselines.
Yunxiang Zhang, Muhammad Khalifa, Lechen Zhang, Xin Liu, Ayoung Lee, Xinliang Frederick Zhang, Farima Fatahi Bayat, Lu Wang
The 1st Workshop on Test-time Scaling and Reasoning Models @ COLM 2025; submitted to ARR October 2025 [paper]
CLASH: Evaluating Language Models on Judging High-Stakes Dilemmas from Multiple Perspectives
- Assessed language models' ability to judge dilemmas from the perspectives given by character descriptions.
- Analyzed reasoning chains and identified new failure modes in the value-understanding domain.
- Converted a non-verifiable dilemma task into a verifiable format.
Ayoung Lee, Ryan Sungmo Kwon, Peter Railton, Lu Wang
Preprint; submitted to ICLR 2026, initial average score in the top ≈20% [paper]
On Consistency Training for Language-Based Image Editing Interface
- Generated a training dataset using Stable Diffusion and YOLOv7 to enforce greater consistency in object-level image edits.
Youngwon Lee*, Ayoung Lee*, Yeonjoon Jung, Seung-won Hwang (* denotes equal contribution)
IJCNLP-AACL 2023, Second Workshop on Natural Language Interfaces (Oral) [paper] [code]
Honors and Awards
Best Presentation Award, NLP @ Michigan Day (03/2025)
Kwanjeong Educational Foundation Scholarship (08/2024 ~ Present)
Outstanding Student Commendation from the Alumni Association (02/2024)
Excellent Bachelor's Thesis Presentation Award (02/2024)
SNU Tomorrow's Engineers Membership (03/2022 ~ 08/2022)
Outstanding Tutor Award (08/2021)
Presidential Science Scholarship (03/2020 ~ 02/2024)
Work Experience
Naver Cloud HealthCare AI (03/2024 ~ 06/2024)
Research Intern
Language and Data Intelligence Lab (03/2023 ~ 02/2024)
Undergraduate Research Intern (Advisor: Prof. Seung-won Hwang)
DeepMetrics (06/2022 ~ 01/2023)
Machine Learning Engineering Intern (Advisor: Prof. Hyun Oh Song)
Samsung Device Solutions Memory (07/2021 ~ 08/2021)
Software Intern
Talks
Injecting Math Reasoning Abilities in Language Models @ Deepest, Seoul National University (06/2024)
Study Abroad Presentation for Students @ Seoul Science High School (05/2024)
Teaching
(M1522.000700) Logic Design @ Seoul National University (09/2021 ~ 12/2021)
Teaching Assistant (Instructor: Prof. Jihong Kim)
(034.005) Foundation of Physics 1 @ Seoul National University (03/2021 ~ 08/2021)
Peer Tutor
(M1522.000700) Logic Design @ Seoul National University (03/2021 ~ 08/2021)
Tutor