April 12, 2022
AI Hot Take
Some recent news on AI advances had me thinking about the implications and reminiscing about my own work in that area. Google’s new large language model, the Pathways Language Model (PaLM), is the current trigger. Its results are impressive. I find its ability to reason about language almost incredible.
My views are definitely hot takes, as I am no longer an AI professional. Even when I was, I worked a long way from the pointy end that gets written up in research papers. Reader beware: the following could be bollocks. Still, I’m not quite as shrill as this hot take, but probably not as thoughtful as these tweets either.
For reference, I did my undergrad thesis on neural nets in the mid-90s (just as they were going out of fashion again) and then worked on logic programming for my postgrad research (symbolic techniques were popular at the time). Most of my professional career has not been AI related - until 2018, when I joined a company using older machine-learning techniques to analyse financial market research (I left in 2020). However, I have kept an interested eye on the field the whole time.
So my thoughts:
- Improvement in AI models scales much worse than linearly. It is a particular bugbear of mine when people suggest that making an AI twice as “smart” would be only twice as hard (a common argument among non-technical singularity advocates). Nope: twice as good is more like 1000x as hard, or worse. When doing logic research I counted myself lucky if the algorithms were merely of exponential complexity. Current researchers are throwing exponentially (at least) more hardware and resources at these problems to get incremental gains. Only the largest and most lavishly financed organisations can hope to compete.
- Connectionist systems FTW. The history of AI has seen many swings between connectionist and symbolic techniques. However, with the improvement in neural networks since the advent of deep learning in the 2000s, Team Connectionist has been ascendant for a long stretch, with no change in sight. The computational logic corners of CS departments must be in heavy decline.
- No whiff of consciousness. When I swapped from neural nets to logic systems, it was because I was unimpressed with them as just complex statistical inference. I still hold that opinion, but the results can’t be ignored. Now Pathways can do basic logical inference out of its connections. The conflation of intelligence and consciousness is under attack. It will be interesting to see where the final line is drawn as the importance of consciousness decreases.
- Language as root of functional intelligence? All these great results are coming out of language models. Is that because there is a fundamental link between communication and intelligence? Or is it simply because that is where researchers focus their attention? I don’t know.
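The exponential blow-up mentioned in the first bullet is easy to see in a toy example: brute-force satisfiability checking, a staple of the symbolic-logic world I worked in, enumerates all 2^n truth assignments, so every extra variable doubles the worst-case cost. This is just an illustrative sketch of why "twice as capable" is nothing like "twice as expensive", not a description of any particular system:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Decide satisfiability by enumerating all 2**n_vars assignments.

    clauses: list of clauses in CNF; each clause is a list of ints,
    where +i means variable i must be True and -i means False.
    Worst-case work doubles with every additional variable.
    """
    for assignment in product([False, True], repeat=n_vars):
        # A clause is satisfied if any of its literals matches.
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) -> satisfiable (take x2 = True)
print(brute_force_sat([[1, 2], [-1, 2]], 2))  # True
# x1 and (not x1) -> unsatisfiable
print(brute_force_sat([[1], [-1]], 1))        # False
```

Going from 20 to 40 variables here multiplies the search space by about a million; modern solvers prune far better than this, but the underlying worst case is why I counted myself lucky when an algorithm was *merely* exponential.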