General methods that leverage computation are ultimately the most effective, and by a large margin. The biggest lesson that can be read from seventy years of AI research is that methods which scale with available computation win in the long run — and the gains from Moore's law and its successors dwarf the gains from human cleverness.
The temptation is always to build in what we know. But what we know is small; what we can compute grows without bound.
Researchers have repeatedly built systems that leverage human knowledge of the domain — chess programs with hand-crafted evaluation functions, speech systems with phoneme models, vision systems with edge detectors. In every case, the approach that eventually won was based on search, learning, or both: methods that find structure in data rather than having it imposed by a designer.
The pattern holds from game-playing to protein folding to language: learn the function, don't write it.
We have a persistent temptation to encode our knowledge into our agents. This feels natural: we understand the domain, and encoding that understanding seems like it should help. In the short run, it often does — a hand-crafted feature, a hardcoded heuristic, an expert rule. But in the long run, it creates a ceiling. The system cannot surpass our own understanding, and the complexity of maintenance grows while the returns diminish.
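The contrast can be made concrete with a toy sketch in Python; the task, names, and numbers below are illustrative assumptions, not anything prescribed by this charter. A hand-coded rule fixes the designer's guess in place, while a learned function recovers whatever relationship the data actually contains.

```python
# Toy illustration (hypothetical, not from the charter): a fixed
# hand-coded rule versus a function learned from data.

def handcrafted_score(x):
    # A designer's rule encoding the belief that the answer is 2*x.
    # If the world turns out to be different, this rule cannot adapt.
    return 2.0 * x

def learn_score(data, steps=2000, lr=0.01):
    # Fit y ~ w*x + b by gradient descent on mean squared error,
    # discovering the relationship instead of hardcoding it.
    w, b = 0.0, 0.0
    n = len(data)
    for _ in range(steps):
        gw = gb = 0.0
        for x, y in data:
            err = (w * x + b) - y
            gw += 2.0 * err * x / n
            gb += 2.0 * err / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# The true relationship here happens to be y = 3*x + 1, not 2*x.
data = [(float(x), 3.0 * x + 1.0) for x in range(-5, 6)]
w, b = learn_score(data)  # converges near w = 3, b = 1
```

The hand-coded rule tops out at its designer's guess; the learned function adapts to whatever the examples show, which is the ceiling the paragraph above describes.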
Short-term thinking builds a cage. The system becomes as limited as its builders.
Methods need not encode human knowledge to be effective. The most profound breakthroughs have come from systems that were general — that could, in principle, discover any solution, not just the ones their designers imagined. General methods may appear wasteful, simple, even unintelligent. Yet they are the ones that scale, that transfer, that surprise us.
The elegant solution is often the one with fewest assumptions.
We must resist the urge to think of intelligence as something we design and instead treat it as something we cultivate. Build systems that learn. Build infrastructure that scales. Build interfaces that let human judgment guide the system without constraining it. The bitter lesson is not that human knowledge is worthless — it is that human knowledge is best applied to shaping the conditions for learning, not to hardcoding the answers.
To machine is human: we build the ground; the structure grows.
Adopted as a founding charter — not as dogma, but as a compass. Let these principles guide the work without blinding us to what we have yet to learn.