Two research papers, from different perspectives, point to the same question—what is a concept?
Imagine language exists in a two-dimensional coordinate system. The X-axis is the time dimension, with vocabulary organized into sentences as time flows. The Y-axis is the meaning dimension; our choice of one word over another is driven by meaning.
Recent results from the SAE (sparse autoencoder) line of work are very interesting; they reveal how neural network models operate along the Y-axis: models learn to extract and express concept features with clear semantics. In other words, certain "nodes" in the model's computation correspond not to arbitrary neural activations but to specific, meaningful concept expressions. This means that meaning inside deep learning models can be decomposed and observed.
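To make the idea concrete, here is a minimal sketch of a sparse autoencoder over model activations in PyTorch. The dimensions, the `l1_coeff` value, and the random `acts` tensor are illustrative assumptions, not details from the papers; the point is only the structure: an overcomplete dictionary of features trained with a reconstruction loss plus a sparsity penalty, so that individual features tend to fire for specific, human-interpretable patterns.

```python
# Minimal sparse autoencoder (SAE) sketch over model activations.
# Sizes and hyperparameters are illustrative assumptions, not values from the papers.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_features: int):
        super().__init__()
        # Encoder maps an activation vector into a larger, sparse feature space.
        self.encoder = nn.Linear(d_model, d_features)
        # Decoder reconstructs the original activation from the sparse code.
        self.decoder = nn.Linear(d_features, d_model)

    def forward(self, x: torch.Tensor):
        # ReLU keeps only non-negative feature activations, so each feature
        # either "fires" on an input or stays silent.
        features = F.relu(self.encoder(x))
        reconstruction = self.decoder(features)
        return features, reconstruction

def sae_loss(x, reconstruction, features, l1_coeff=1e-3):
    # Reconstruction term: the sparse code must stay faithful to the activation.
    mse = F.mse_loss(reconstruction, x)
    # Sparsity term: push most features to zero so the surviving ones
    # can be read as distinct concept-like directions.
    sparsity = features.abs().mean()
    return mse + l1_coeff * sparsity

# Toy usage: 4096-dim activations, an overcomplete 16384-feature dictionary.
sae = SparseAutoencoder(d_model=4096, d_features=16384)
acts = torch.randn(64, 4096)          # stand-in for real model activations
features, recon = sae(acts)
loss = sae_loss(acts, recon, features)
loss.backward()
```

After training such a model on real activations, the usual practice in this line of work is to inspect which inputs most strongly activate each feature; features that fire consistently on one recognizable theme are the "meaningful concept expressions" described above.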