Seungwon Lim

Hi, I'm Seungwon Lim. I'm a researcher at MIRLAB (Multimodal Intelligence Research Lab) at Yonsei University, advised by Youngjae Yu. I received my bachelor's degree in Computer Science and am currently pursuing an integrated MS/PhD program in Computer Science.

The research question I want to pursue is: "What lies behind AI systems' actions and utterances?" I believe their actions and utterances cannot be attributed solely to computation, but can also be analyzed through the humanities, sociology, and linguistics. I'm particularly interested in language as the connection between AI systems and the fields that study human experience.

Publications

VisEscape: A Benchmark for Evaluating Exploration-driven Decision-making in Virtual Escape Rooms

arXiv

Seungwon Lim, Sungwoong Kim, Jihwan Yu, Sungjae Lee, Jiwan Chung, Youngjae Yu

Under Review

TL;DR: We introduce VisEscape, a benchmark inspired by escape room games, and evaluate the reasoning and decision-making of diverse MLLMs in exploration-driven, dynamic environments.

Persona Dynamics: Unveiling the Impact of Persona Traits on Agents in Text-Based Games

arXiv

Seungwon Lim, Seungbeen Lee, Dongjun Min, Youngjae Yu

ACL 2025 Main

TL;DR: We introduce PANDA, which incorporates human personality traits into AI agents for text-based games and examines how these traits affect their behavior and performance.

Do LLMs Have Distinct and Consistent Personality? TRAIT: Personality Testset designed for LLMs with Psychometrics

arXiv

Seungwon Lim*, Seungbeen Lee*, Seungju Han, Giyeong Oh, Hyungjoo Chae, Jiwan Chung, Minju Kim, Beong-woo Kwak, Yeonsoo Lee, Dongha Lee, Jinyoung Yeo, Youngjae Yu

NAACL 2025 Findings

TL;DR: We introduce TRAIT, a psychometrics-based benchmark that measures the personality revealed in the behavior patterns of LLMs, along with verification of its reliability and validity.

MASS: Overcoming Language Bias in Image-Text Matching

arXiv

Jiwan Chung, Seungwon Lim, Sangkyu Lee and Youngjae Yu

AAAI 2025 Main

TL;DR: We introduce MASS, a training-free framework that improves visual accuracy and reduces language bias in image-text matching for pretrained vision-language models.

Can visual language models resolve textual ambiguity with visual cues? Let visual puns tell you!

arXiv

Jiwan Chung, Seungwon Lim, Jaehyun Jeon, Seungbeen Lee and Youngjae Yu

EMNLP 2024 Main

TL;DR: We introduce UNPIE, a new benchmark crafted to evaluate how multimodal inputs influence the resolution of lexical ambiguities.

CLARA: Classifying and Disambiguating User Commands for Reliable Interactive Robotic Agents

arXiv

Jeongeun Park, Seungwon Lim, Joonhyung Lee, Sangbeom Park, Minsuk Chang, Youngjae Yu and Sungjoon Choi

ICRA 2024

TL;DR: We introduce CLARA, an LLM-empowered method for robots to estimate the uncertainty of user commands and to disambiguate them by generating clarification questions.