Stephen Wolfram, founder of Wolfram Research, reflects on his decades-long journey to understand machine learning and neural networks. In the 1980s, he explored symbolic computation systems and wondered whether they could be generalized to “fuzzy pattern matching” akin to human thinking. He later studied cellular automata, simple rule-based systems that exhibit complex behavior. His curiosity about neural nets led him to simulate idealized models using methods from statistical mechanics, but these early attempts were unsuccessful.
Years later, after the breakthroughs in deep learning, Wolfram revisited his ideas and began to think of neural nets as sampling programs from across the computational universe. He also wondered whether simple systems could learn if one “bashed” them hard enough. Recently, a “throwaway experiment” of his surprisingly captured the essence of biological evolution. Building on that result, and leveraging his Wolfram Language tools, Wolfram has made progress toward understanding machine learning; his work may finally reveal why and when machine learning works its “magic.”
The author’s curiosity about machine learning and neural networks dates back to 1980, when he was developing a symbolic computation system. He wondered whether its symbolic pattern matching could be generalized to “fuzzy pattern matching” closer to human thinking, but he did not consider neural nets as a potential solution at the time.
Fast-forward to 1981, when the author began studying cellular automata, which led to a deeper understanding of how complex behavior can emerge from simple rules. His curiosity about neural nets and their relationship to cellular automata persisted over the years, through experiments and simulations that never quite yielded the desired insights.
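To make “complex behavior from simple rules” concrete, here is a minimal sketch of an elementary cellular automaton of the kind Wolfram studied. Rule 30 is one well-known example he analyzed; the function names and parameters below are illustrative, not taken from his work.

```python
def step(cells, rule=30):
    """Apply one update of an elementary cellular automaton.

    Each cell's next state is looked up in the 8-bit rule number,
    indexed by the (left, center, right) neighborhood. Edges wrap around.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def run(width=31, steps=15, rule=30):
    """Evolve the automaton from a single seed cell and return all rows."""
    cells = [0] * width
    cells[width // 2] = 1  # single "on" cell in the middle
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        history.append(cells)
    return history

# Print the evolution: despite the trivial update rule, Rule 30
# quickly produces an intricate, seemingly random pattern.
for row in run():
    print("".join("#" if c else "." for c in row))
```

Running this shows the characteristic triangular, pseudo-random texture of Rule 30 growing from a single cell, a simple illustration of the complexity-from-simple-rules theme in the text.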
The author’s 2002 book “A New Kind of Science” touched on human thinking and included a discrete analog of a neural net, but it wasn’t until 2017 that he began to consider neural nets as sampling programs from across the computational universe. This idea was still gestating when he wrote about ChatGPT and LLMs in 2023.
The breakthrough came with a “throwaway experiment” that unexpectedly captured the essence of biological evolution. This yielded a new intuition about machine learning, one surprisingly close to his earlier work on vertically layered rule arrays. With the power of Wolfram Language, the author was able to revisit and build on these ideas.
The result is a promising start on a foundational science of machine learning. The author hints at practical applications and a deeper understanding of when and why machine learning “works its magic.”
