202409231736
Status: #idea
Tags: #ai #philosophy_of_science #compression
# Why deep learning models don't implicitly learn physical laws
[[Science is compression]], and LLMs are indeed encouraged to compress their training data into their weights. However, there is a major limitation:
LLMs have hundreds of billions of parameters. If all (or a large subset) of these parameters are active in predicting any given datapoint in the training set, then the model's description of the data is hardly shorter than the data itself, so we are barely compressing the training set at all.
Thus, the LLM is not encouraged to pull out extremely compact representations, like Newton's laws of motion, in order to explain the data it is seeing.
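A rough back-of-envelope sketch makes this concrete. The numbers below are assumed and purely illustrative (a 200B-parameter model stored at 16 bits per parameter, roughly 10T training tokens carrying a few bits of information each): even under these generous assumptions, the weights come out within an order of magnitude of the corpus itself, nowhere near the compactness of a few equations of motion.

```python
# Back-of-envelope comparison of model description length vs. training-data
# description length. All numbers are assumed, illustrative values only.

def compression_ratio(n_params: float, bits_per_param: float,
                      n_tokens: float, bits_per_token: float) -> float:
    """Ratio of bits needed to store the weights to bits of training data."""
    model_bits = n_params * bits_per_param
    data_bits = n_tokens * bits_per_token
    return model_bits / data_bits

# Hypothetical figures: 200B parameters at 16 bits each, ~10T tokens at ~4 bits each.
ratio = compression_ratio(n_params=200e9, bits_per_param=16,
                          n_tokens=10e12, bits_per_token=4)
print(f"model bits / data bits ~ {ratio:.2f}")  # ~0.08, i.e. only ~10x smaller
```

By contrast, Newton's laws can be written down in a few hundred bits yet explain a vast range of observations, which is the kind of compression the note argues LLMs are not pushed toward.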
---
# References