Akari Haga, Akiyo Fukatsu, Miyu Oba, Arianna Bisazza, Yohei Oseki. 2024. BabyLM Challenge: Exploring the effect of variation sets on language model training efficiency, Proceedings of the BabyLM Challenge at the 28th Conference on Computational Natural Language Learning (CoNLL), Long Paper, xxx-xxx.
Kohei Kajikawa, Yusuke Kubota, Yohei Oseki. 2024. Is Structure Dependence Shaped for Efficient Communication?: A Case Study on Coordination, Proceedings of the 28th Conference on Computational Natural Language Learning (CoNLL), Long Paper, 291-302.
Miyu Oba, Yohei Oseki, Akiyo Fukatsu, Akari Haga, Hiroki Ouchi, Taro Watanabe, Saku Sugawara. 2024. Can Language Models Induce Grammatical Knowledge from Indirect Evidence?, Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing (EMNLP), Long Paper, 20591-20603.
Tatsuki Kuribayashi, Ryo Ueda, Ryo Yoshida, Yohei Oseki, Ted Briscoe, Timothy Baldwin. 2024. Emergent Word Order Universals from Cognitively-Motivated Language Models, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL), Long Paper, 14522-14543.
Ryo Yoshida, Taiga Someya, Yohei Oseki. 2024. Tree-Planted Transformers: Unidirectional Transformer Language Models with Implicit Syntactic Supervision, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL), Findings, 5120-5134.
Akari Haga, Saku Sugawara, Akiyo Fukatsu, Miyu Oba, Hiroki Ouchi, Taro Watanabe, Yohei Oseki. 2024. Modeling Overregularization in Children with Small Language Models, Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL), Findings, 14532-14550.
Kohei Kajikawa, Ryo Yoshida, Yohei Oseki. 2024. Dissociating Syntactic Operations via Composition Count, Proceedings of the Meeting of the Cognitive Science Society (CogSci), Long Paper, 297-305.
Tatsuki Kuribayashi, Yohei Oseki, Timothy Baldwin. 2024. Psychometric Predictive Power of Large Language Models, Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Findings, 1983-2005.
Yuto Harada, Yohei Oseki. 2024. Cognitive Information Bottleneck: Extracting Minimal Sufficient Cognitive Language Processing Signals, Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), Long Paper, 3480-3489.
Taiga Someya, Yushi Sugimoto, Yohei Oseki. 2024. JCoLA: Japanese Corpus of Linguistic Acceptability, Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), Long Paper, 9477-9488.
Akiyo Fukatsu, Yuto Harada, Yohei Oseki. 2024. Learning Bidirectional Morphological Inflection Like Humans, Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), Long Paper, 10249-10262.
Taiga Someya, Ryo Yoshida, Yohei Oseki. 2024. Targeted Syntactic Evaluation on the Chomsky Hierarchy, Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), Long Paper, 15595-15605.
Miyu Oba, Akari Haga, Akiyo Fukatsu, Yohei Oseki. 2023. BabyLM Challenge: Curriculum learning based on sentence complexity approximating language acquisition, Proceedings of the BabyLM Challenge at the 27th Conference on Computational Natural Language Learning (CoNLL), Extended Abstract, 290-297.
Satoru Ozaki, Yohei Oseki. 2023. CANDS: A Computational Implementation of Collins and Stabler (2016), Proceedings of the Society for Computation in Linguistics (SCiL), 47-68.
Hiroshi Noji*, Yohei Oseki*. 2023. How Much Syntactic Supervision is "Good Enough"?, Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Findings, 2300-2305. (* denotes equal contribution)
Taiga Someya, Yohei Oseki. 2023. JBLiMP: Japanese Benchmark of Linguistic Minimal Pairs, Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics (EACL), Findings, 1581-1594.
Tatsuki Kuribayashi, Yohei Oseki, Ana Brassard, Kentaro Inui. 2022. Context Limitations Make Neural Language Models More Human-Like, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), Long Paper, 10421-10436.
Ryo Yoshida, Yohei Oseki. 2022. Composition, Attention, or Both?, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP), Findings, 5822-5834.
Ryo Yoshida, Yohei Oseki. 2022. Learning Argument Structures with Recurrent Neural Network Grammars, Proceedings of the Society for Computation in Linguistics (SCiL), 101-111.
Ryo Yoshida, Hiroshi Noji, Yohei Oseki. 2021. Modeling Human Sentence Processing with Left-Corner Recurrent Neural Network Grammars, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP), Short Paper, 2964-2973.
Hiroshi Noji, Yohei Oseki. 2021. Effective Batching for Recurrent Neural Network Grammars, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), Findings, 4340-4352.
Tatsuki Kuribayashi, Yohei Oseki, Takumi Ito, Ryo Yoshida, Masayuki Asahara, Kentaro Inui. 2021. Lower Perplexity is Not Always Human-Like, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics (ACL), Long Paper, 5203-5217.
Yohei Oseki, Masayuki Asahara. 2020. Design of BCCWJ-EEG: Balanced Corpus with Human Electroencephalography, Proceedings of the International Conference on Language Resources and Evaluation (LREC), 189-194.
Yohei Oseki, Alec Marantz. 2020. Modeling Morphological Processing in Human Magnetoencephalography, Proceedings of the Society for Computation in Linguistics (SCiL), 209-219.
Carmen Saldana, Yohei Oseki, Jennifer Culbertson. 2019. Do cross-linguistic patterns of morpheme order reflect a cognitive bias?, Proceedings of the Meeting of the Cognitive Science Society (CogSci), Long Paper, 994-1000.
Yohei Oseki, Charles Yang, Alec Marantz. 2019. Modeling Hierarchical Syntactic Structures in Morphological Processing, Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics (CMCL), 43-52.
Yohei Oseki, Yasutada Sudo, Hiromu Sakai, Alec Marantz. 2019. Inverting and Modeling Morphological Inflection, Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology (SIGMORPHON), 170-177.
International Journals
LLM-jp. 2024. LLM-jp: A Cross-organizational Project for the Research and Development of Fully Open Japanese LLMs, arXiv, https://arxiv.org/abs/2407.03963.
Tatsuya Haga, Yohei Oseki, Tomoki Fukai. 2023. Unified neural representation model for physical space and linguistic concepts, bioRxiv, https://doi.org/10.1101/2023.05.11.540307.
Yushi Sugimoto, Ryo Yoshida, Hyeonjeong Jeong, Masatoshi Koizumi, Jonathan Brennan, Yohei Oseki. 2024. Localizing Syntactic Composition with Left-Corner Recurrent Neural Network Grammars, Neurobiology of Language 5, 201-224.
Shingo Shimoda, Lorenzo Jamone, Dimitri Ognibene, Takayuki Nagai, Alessandra Sciutti, Alvaro Costa-Garcia, Yohei Oseki, Tadahiro Taniguchi. 2022. What is the role of the next generation of cognitive robotics?, Advanced Robotics 36, 3-16.
Carmen Saldana, Yohei Oseki, Jennifer Culbertson. 2021. Cross-linguistic patterns of morpheme order reflect cognitive biases: An experimental study of case and number morphology, Journal of Memory and Language 118, 104204.
Yohei Oseki, Alec Marantz. 2020. Modeling Human Morphological Competence, Frontiers in Psychology 11, 513740.
Graham Flick*, Yohei Oseki*, Amanda Kaczmarek, Meera Al Kaabi, Alec Marantz, Liina Pylkkänen. 2018. Building words and phrases in the left temporal lobe, Cortex 106, 213-236. (* denotes equal contribution)
Tal Linzen, Yohei Oseki. 2018. The reliability of acceptability judgments across languages, Glossa 3, 100.
Book Chapters
Yohei Oseki. 2023. Human language processing in comparative computational psycholinguistics, Issues in Japanese Psycholinguistics from Comparative Perspectives, Volume 1: Cross-Linguistic Studies, Masatoshi Koizumi (ed.), 269-288. De Gruyter Mouton.
Talks
Yushi Sugimoto, Ryo Yoshida, Hyeonjeong Jeong, Akitake Kanno, Masatoshi Koizumi, Yohei Oseki. 2024. Investigating syntactic attention in the brain, Society for the Neurobiology of Language (SNL).
Yohei Oseki. 2021. Human language processing in comparative computational psycholinguistics, Issues in Japanese Psycholinguistics from Comparative Perspectives (IJPCP).
Yohei Oseki. 2021. Reverse-engineering human language processing, Joint Workshop on Linguistics & Language Processing (JWLLP).
Yohei Oseki. 2021. Building machines that process natural language like humans, Colloquium, Nara Institute of Science and Technology (NAIST).
Yohei Oseki. 2020. Building machines that process natural language like humans, Logic and Engineering of Natural Language Semantics (LENLS).
Yohei Oseki. 2020. What is the role of language in cognitive robotics?, Workshop "What is the role of the next generation of cognitive robotics?".
Yohei Oseki. 2020. The computational turn in psycholinguistics, Computational Psychiatry Colloquium, Keio University.
Awards
Best Paper Award, CoNLL, 2024.
Outstanding Paper Award, BabyLM Challenge, 2024.
Best Paper Award, LREC-COLING, 2024.
Conference Presentation Award, the 168th Meeting of the Linguistic Society of Japan, 2024.
Outstanding Paper Award, the 30th Annual Meeting of the Association for Natural Language Processing, 2024.
Committee Special Award (×3), the 30th Annual Meeting of the Association for Natural Language Processing, 2024.
Sponsor Award (Fujitsu Award), the 30th Annual Meeting of the Association for Natural Language Processing, 2024.
Student Incentive Award (Taiga Someya), the 2023 Annual Conference of the Japanese Society for Artificial Intelligence (JSAI), 2023.
Outstanding Presentation Award (Kai Nakaishi), the Physical Society of Japan 2023 Spring Meeting, 2023.
Ichiko Memorial Award (Ryo Yoshida), Graduate School of Arts and Sciences, the University of Tokyo, 2023.
Encouragement Award (Yoichiro Yamashita), the 17th Symposium of the NLP Young Researchers' Association (YANS), 2022.
Committee Special Award, the 28th Annual Meeting of the Association for Natural Language Processing, 2022.
Best Paper Award, the 27th Annual Meeting of the Association for Natural Language Processing, 2021.
Committee Special Award, the 27th Annual Meeting of the Association for Natural Language Processing, 2021.
Young Researcher Award (Ryo Yoshida), the 27th Annual Meeting of the Association for Natural Language Processing, 2021.
President's Commendation for Good Practice, the University of Tokyo, 2021.
Teaching Award, Waseda University, 2020.
Best Paper Award, Cognitive Modeling and Computational Linguistics (CMCL), 2019.
Best Paper Award, Mental Architecture for Processing and Learning of Language (MAPLL), 2016.