Exponential progress

In 1997, IBM's Deep Blue computer defeated the world chess champion Garry Kasparov. Deep Blue was a highly sophisticated chess-playing machine: it relied on a massive database of historical chess positions and moves, together with advanced search algorithms, to evaluate positions and make strategic decisions during games. But it was not self-learning in the way we understand it today. AlphaZero, developed later by DeepMind, was a self-learning system that learned by playing against itself and refining its strategies over time, without being pre-programmed with an extensive database of historical games. This method allowed AlphaZero to develop strategies that were not based on human-established patterns, marking a significant advancement in AI.


This technology marked the start of a new era of artificial intelligence: algorithms that could not only recognize objects and understand text, but also learn and improve autonomously, without human intervention. That shift has transformed the field.


The pace of progress in computing, described by Moore's Law, has been exponential: computing power initially doubled every two years. The development of AI has accelerated this rate, doubling capability at least twice as fast. In 2018, OpenAI introduced its first language model with 117 million parameters, a measure of its scale and complexity. Fast forward five years, and the latest model, GPT-4, is estimated to have over a trillion parameters.
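
To make that jump concrete, here is a rough back-of-the-envelope sketch of the growth rate implied by the figures above. It simply takes the quoted numbers at face value (117 million parameters in 2018, roughly a trillion five years later); the GPT-4 figure is an outside estimate, so the result is indicative rather than exact.

```python
# Illustrative calculation only: the endpoint (GPT-4's size) is an estimate,
# and parameter count is a crude proxy for capability.
import math

params_2018 = 117e6   # GPT-1, 2018
params_2023 = 1e12    # GPT-4, 2023 (estimated)
years = 5

total_growth = params_2023 / params_2018             # ~8,500x overall
annual_factor = total_growth ** (1 / years)           # ~6x per year
doubling_time_months = 12 * math.log(2) / math.log(annual_factor)

print(f"Total growth over {years} years: {total_growth:,.0f}x")
print(f"Implied annual growth factor:   {annual_factor:.1f}x")
print(f"Implied doubling time:          {doubling_time_months:.1f} months")
# For comparison, Moore's Law corresponds to a doubling roughly every 24 months.
```

On these (admittedly rough) numbers, model size has been doubling every few months rather than every couple of years, which is what makes the curve feel so different from ordinary hardware progress.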


Over the past decade, the computational power used to train the most advanced AI models has increased fourfold each year. Today's cutting-edge AI models, known as "frontier" models, wield five billion times the computing power of their counterparts from just a decade ago. Processes that once took weeks now occur in seconds, with upcoming models expected to handle tens of trillions of parameters. "Brain scale" models, surpassing 100 trillion parameters (roughly the human brain's synapse count), are anticipated within five years.


Not only are humans now unable to beat computers at chess or at other intellectual games such as Go; AI is expected to exceed human performance at any given task, and to become self-directed, self-replicating, and self-improving beyond human control.


Looking ahead to 2035, a speculative projection based on the observed growth rate suggests that AI models could have around … parameters. This rapid advancement brings unexpected capabilities: few foresaw that training on raw text would empower language models to craft coherent, novel sentences, compose music, or solve scientific problems. The trajectory is pushing towards systems with self-improving capabilities, a critical development in AI technology.


The challenge here is different in kind from earlier dangerous technologies: the cost of downloading or stealing an AI model is nothing like the cost of stealing a nuclear weapon.


In the 1960s, an IBM mainframe with only 360 KB of memory cost around $250,000. In 2000, a high-performance Dell PC, featuring a 17-inch screen, an Intel Pentium III processor, and 256 MB of RAM, was priced at $2,500. Those machines filled rooms and desks, yet far more capable systems now run on smaller and cheaper devices. While the most powerful models still demand sophisticated hardware, midrange versions can run on affordable rented computers and, soon, on smartphones. The rapid accessibility of such powerful technology is unprecedented.


Unlike nuclear assets, which are governed by non-proliferation treaties and international regulations, AI algorithms are easily copied and shared. Meta's powerful Llama-1 language model, for instance, leaked online in March 2023, shortly after its debut. The proliferation risks of AI are evident, presenting challenges for security and regulation in this rapidly evolving technological landscape.