RecNet
Five levels of self-awareness as they unfold early in life
Philippe Rochat
Feb, 2003
Zizhao Chen recommended on 4/2/2024
Alarm: anthropomorphizing LLMs ahead. Why does every system prompt start with "You are ...."? Does pre-training/RLHF teach LLMs self-awareness? It's fun to see how we measure infants' sense of self. (Will a latent self variable help transfer comprehension to generation?)
50 Years of Computational Complexity: Hao Wang and the Theory of Computation
Nick Zhang
Jun, 2022
Zizhao Chen recommended on 3/26/2024
This paper traces how the theory of computation (ToC) wandered and matured in the then-young field of computer science. Would recommend it as a bedtime story.
Metamemory: A Theoretical Framework and New Findings
Thomas Nelson, Louis Narens
1990
Zizhao Chen recommended on 3/19/2024
Metamemory refers to individuals' knowledge and awareness of their own memory processes, e.g. judging their learning stage, the feeling of knowing, and active learning. Roots of modern mindfulness/self-awareness? It would be fun to run these psych experiments on LLMs.
Symbols and grounding in large language models
Ellie Pavlick
Jun, 2023
Zizhao Chen recommended on 3/12/2024
Hopefully I won't forget to update this, but just in case: this op-ed is a continuation of what I recommended last week, in more of an NLP style than a philosophical one.
Do Language Models' Words Refer?
Matthew Mandelkern, Tal Linzen
Aug, 2023
Zizhao Chen recommended on 3/5/2024
Food for thought on referentiality and grounding. Rough idea: sensory inputs are not necessary for grounding, but a linguistic community providing causal-historical context is. It's a philosophical detour, but if you are into this kind of thing, it will be fun.
Mission: Impossible Language Models
Julie Kallini, Isabel Papadimitriou, Richard Futrell, Kyle Mahowald, Christopher Potts
Jan, 2024
Zizhao Chen recommended on 1/30/2024
An empirical rebuttal of Chomsky's claim that LLMs are equally capable of learning languages that are possible and impossible for humans to learn.
Can AI Be as Creative as Humans?
Haonan Wang, James Zou, Michael Mozer, Anirudh Goyal, Alex Lamb, Linjun Zhang, Weijie J Su, Zhun Deng, Michael Qizhe Xie, Hannah Brown, Kenji Kawaguchi
Jan, 2024
Zizhao Chen recommended on 1/23/2024
This paper suggests we study and measure AI creativity relative to human creativity. That indeed makes the subject more tangible, though secretly I am also interested in AI creativity for its own sake (as long as it is interpretable to a human, e.g. AlphaGo's unexpected move 37).
Self-play Fine-tuning Converts Weak Language Models to Strong Language Models
Zixiang Chen, Yihe Deng, Huizhuo Yuan, Kaixuan Ji, Quanquan Gu
Jan, 2024
Zizhao Chen recommended on 1/9/2024
By bootstrapping and self-play, this work removes the need for additional annotation when performing alignment. Performance is still capped by the quality of the initial human-annotated data.
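The self-play idea can be sketched as a DPO-style logistic loss in which the model's own previous generations stand in for rejected responses, while human-annotated responses are preferred. A minimal sketch, assuming per-response log-probabilities are already computed; the function name, argument names, and the `lam` scaling parameter are illustrative, not the paper's exact notation:

```python
import math

def spin_loss(logp_human_new, logp_human_old,
              logp_self_new, logp_self_old, lam=0.1):
    """One term of a SPIN-style self-play objective.

    The updated model should raise the likelihood of the human
    response relative to the previous model, and lower it for the
    response the previous model generated itself.
    """
    margin = lam * ((logp_human_new - logp_human_old)
                    - (logp_self_new - logp_self_old))
    # Logistic loss on the margin: small when the margin is large.
    return math.log(1.0 + math.exp(-margin))
```

When the updated model cannot yet distinguish human data from its own generations (margin 0), the loss sits at log 2; it decays toward 0 as the preference margin grows, which is why performance stays capped by the quality of the human data the model is being pulled toward.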
On the Creativity of Large Language Models
Giorgio Franceschelli, Mirco Musolesi
Mar, 2023
Zizhao Chen recommended on 12/5/2023
The latest installment from the machine-creativity community assesses LLMs through the lens of Boden's three criteria for creativity: novelty, surprise, and value. The authors argue that motivation (stemming from self-awareness) and constant adaptation are the missing pieces in today's LLMs.
AI for Mathematics: A Cognitive Science Perspective
Cedegao E. Zhang, Katherine M. Collins, Adrian Weller, Joshua B. Tenenbaum
Oct, 2023
Zizhao Chen recommended on 11/28/2023
I always wonder if/when an LLM can identify and name (invent) its own useful mathematical constructs, say a "set" or a "category", along with their axioms. This position paper hints at a few cognitive capabilities that current LLMs would need to do that. Ref: https://aimoprize.com
The Language of Programming: A Cognitive Perspective
Evelina Fedorenko, Anna Ivanova, Riva Dhamala, Marina Umaschi Bers
Jul, 2019
Zizhao Chen recommended on 11/21/2023
A framework for comparing the nature and extent of potential overlap in the cognitive and neural mechanisms that support programming-language vs. natural-language processing. It helped me learn the vocabulary shared by both research communities.
The Reference Class Problem Is Your Problem Too
Alan Hájek
Jun, 2007
Zizhao Chen recommended on 11/14/2023
I misinterpreted the title, but it turned out to be an interesting philosophical read. It inspired me to revisit my understanding of the scope of grounding in NLP.