Jeremy Dombrowski
1 min read · Jul 3, 2021

Transformer-type neural networks are demonstrably incapable of multi-stage reasoning. This is not to say that work in this area is not underway, only that we simply have not made a breakthrough that allows us to create machine intelligence that is anything more than stimulus --> response.

This might work well for detecting cats in a picture, and it might scale up large enough to fool us into thinking that it reasons... but no, it does not. It hallucinates information because it lacks the ability to generate axioms through experimentation. It demonstrates a fundamental inability to learn rules and concepts and can only reproduce things it has seen before (there are some simple examples that clearly demonstrate this shortcoming).
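One simple probe makes the point. As a rough sketch, assuming the Hugging Face transformers library and the small public gpt2 checkpoint (chosen purely for illustration), you can ask the model for a multiplication it almost certainly never saw verbatim in training:

    # Sketch: probe a small pretrained transformer with arithmetic it is
    # unlikely to have memorized (assumes the Hugging Face "transformers"
    # library and the public "gpt2" checkpoint).
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    prompt = "Q: What is 1234 multiplied by 5678? A:"
    result = generator(prompt, max_new_tokens=12, do_sample=False)

    # The continuation is fluent-looking text, not 7,006,652 -- the model
    # completes the pattern instead of carrying out the multiplication.
    print(result[0]["generated_text"])

The completion reads like an answer but isn't one; the model has no procedure for multiplication, only statistics over text it has already seen.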

In short, it cannot perform logic. Programming isn't typing, it's thinking. Our neural networks react, but they don't think -- yet.

We've got a few more years, probably at least 10 and maybe as many as 20, until the intelligence explosion changes life as we know it.
