LINGUIST List 35.1283

Tue Apr 23 2024

Calls: Computational Linguistics / Linguistics Vanguard (Jrnl)

Editor for this issue: Zackary Leech <zleech@linguistlist.org>

LINGUIST List is hosted by Indiana University College of Arts and Sciences.



Date: 22-Apr-2024
From: Vsevolod Kapatsinski <vkapatsi@uoregon.edu>
Subject: Computational Linguistics / Linguistics Vanguard (Jrnl)

Call for Papers:

Special collection: Implications of Neural Networks and other Learning Models for Linguistic Theory

Managing Editor: Vsevolod Kapatsinski (University of Oregon)
Co-editor: Gašper Beguš (University of California, Berkeley)

This Linguistics Vanguard special collection is motivated by recent breakthroughs in the application of neural networks to language data. Linguistics Vanguard publishes short (3000-4000 word) articles on cutting-edge topics in linguistics and neighboring areas. The inclusion of multimodal content (including, but not limited to, audio and video, images, maps, software code, raw data, hyperlinks to external databases, and any other media enhancing the traditional written word) is particularly encouraged. Special collection contributors should follow the general submission guidelines for the journal (https://www.degruyter.com/journal/key/lingvan/html#overview).

Overview of the special issue topic:

Neural network models of language have been around for several decades and became the de facto standard in psycholinguistics by the 1990s. There have also been several important attempts to incorporate neural network insights into linguistic theory (e.g., Bates & MacWhinney, 1989; Bybee, 1985; Bybee & McClelland, 2005; Heitmeier et al., 2021; Smolensky & Legendre, 2006). However, until recently, neural network models did not approximate the generative capacity of a human speaker or writer. This changed in the last few years, when large language models (e.g., the GPT family), embodying largely the same principles but trained on vastly larger amounts of data, made a breakthrough: the language they generate is now usually indistinguishable from that generated by a human. The accomplishments of these models have led both to calls for further integration between linguistic theory and neural networks (Beguš, 2020; Kapatsinski, 2023; Kirov & Cotterell, 2018; Pater, 2019; Piantadosi, 2023) and to criticism suggesting that the way they work is fundamentally unlike human language learning and processing (e.g., Bender et al., 2021; Chomsky et al., 2023).

The present special collection for Linguistics Vanguard aims to foster a productive discussion between linguists, cognitive scientists, neural network modelers, neuroscientists, and proponents of other approaches to learning theory (e.g., Bayesian probabilistic inference, instance-based lazy learning, reinforcement learning, active inference; Jamieson et al., 2022; Sajid et al., 2021; Tenenbaum et al., 2011). We call for contributions that address the central question of linguistic theory (why are languages the way they are?) by means of a computational modeling approach. Reflections and position papers motivating the best ways to approach this question computationally are also welcome.

Contributors are encouraged to compare different models trained on the same data approximating human experience. Contributions should explicitly address the ways in which the training data of the model(s) they discuss resembles and differs from human experience. Contributions can involve either hypothesis testing via minimally different versions of the same well-motivated model (e.g., Kapatsinski, 2023) or comparisons of state-of-the-art models from different intellectual traditions (e.g., Albright & Hayes, 2003; Sajid et al., 2021) on how well they answer the question above.

Timeline:

Abstracts due by July 1, 2024
Notification of authors (full paper invitations) by August 1, 2024
Full papers due by November 1, 2024
Reviews to be completed by January 31, 2025
Publication by March 2025

For more information and to submit an abstract, please visit https://blogs.uoregon.edu/ublab/lmlt/




Page Updated: 23-Apr-2024

