When will AGI arrive?

Artificial General Intelligence (AGI) development has attracted substantial investment in recent years, but the timeline for its arrival remains uncertain. Some experts predict that AGI could emerge as early as the mid-21st century, while others believe it may take far longer. A report by the McKinsey Global Institute estimates that up to 800 million jobs worldwide could be lost to automation by 2030, suggesting that AGI could significantly impact the job market within the next decade.

The arrival of AGI is expected to significantly impact the job market, with some experts predicting widespread automation of jobs across various sectors. The impact of AGI on employment rates and economic growth will likely vary across different regions and countries, leading to significant regional disparities. Governments must consider new policies and regulations to address issues such as job displacement, income inequality, and worker retraining.

The development of AGI requires a comprehensive approach that considers the potential benefits and risks of this technology. It is essential to ensure that AGI is developed responsibly and beneficially, considering the possible impact on workers, industries, and societies. This may involve increased investment in education and retraining programs and policies such as universal basic income and job guarantees to mitigate the adverse effects of job displacement.

Defining Artificial General Intelligence

Artificial General Intelligence (AGI) is often defined as a machine that can understand, learn, and apply knowledge across a wide range of tasks, similar to human intelligence. Researchers such as Stuart Russell and Peter Norvig describe AGI as a system that can “perform any intellectual task that a human can” (Russell & Norvig, 2016), and the Machine Intelligence Research Institute uses essentially the same characterization (MIRI).

AGI is often contrasted with Narrow or Weak AI, which refers to systems designed to perform specific tasks, such as facial recognition or language translation. In contrast, AGI could learn and adapt across multiple domains, much like humans do. For example, a system that can play chess at a world-class level but cannot understand the rules of Go is an example of Narrow AI, whereas a system that can learn and play both games without prior knowledge is more akin to AGI (Hutter, 2005).

One key challenge in defining AGI is determining what constitutes “general” intelligence. Some researchers argue that AGI should be able to perform tasks that require human-like reasoning and problem-solving abilities, such as common sense or intuition (Lake et al., 2017). Others propose that AGI should be evaluated based on its ability to learn and adapt in complex environments, similar to how humans learn and develop throughout their lives (Bengio et al., 2013).

Another important consideration is the distinction between AGI and Superintelligence. While AGI refers to a machine with human-like intelligence, Superintelligence refers to a system that significantly surpasses human intelligence in one or more domains (Bostrom, 2014). The development of Superintelligence raises significant concerns about safety and control, as it could potentially lead to unforeseen consequences.

The development of AGI is often seen as a long-term goal for the field of Artificial Intelligence. However, many challenges must be overcome before AGI can become a reality. These include developing more advanced machine learning algorithms, improving natural language processing, and creating systems that can learn and adapt in complex environments (Jordan & Mitchell, 2015).

The timeline for the development of AGI is uncertain, with some researchers predicting it could happen within the next few decades, while others argue it may take much longer. However, most experts agree that significant progress has been made in recent years, and continued advances in AI research will bring us closer to achieving AGI.

History Of AI Development Milestones

The Dartmouth Summer Research Project on Artificial Intelligence, led by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon in 1956, is considered the birthplace of AI as a field of research (McCarthy et al., 1955). This project aimed to investigate whether machines could be made to simulate human intelligence. The term “Artificial Intelligence” was coined during this conference.

The first AI program, Logic Theorist, was developed in 1956 by Allen Newell and Herbert Simon (Newell & Simon, 1956). The program was designed to simulate human problem-solving using logical reasoning. In the same year, Newell and Simon also developed IPL (Information Processing Language), one of the first AI programming languages.

The development of the first neural network learning machine, SNARC (Stochastic Neural Analog Reinforcement Calculator), built in 1951 by Marvin Minsky and Dean Edmonds, marked an important milestone in AI research (Minsky & Edmonds, 1967). This machine laid groundwork for modern neural networks. The perceptron, a type of feedforward neural network, was developed in 1958 by Frank Rosenblatt (Rosenblatt, 1958).
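Rosenblatt's learning rule is simple enough to sketch directly. The AND-gate training data, learning rate, and epoch count below are illustrative choices for the sketch, not details from the 1958 paper:

```python
# Minimal perceptron trained on the AND function.
# Weights start at zero; lr and epochs are illustrative choices.

def train_perceptron(samples, epochs=10, lr=0.1):
    """samples: list of ((x1, x2), target) pairs with targets in {0, 1}."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            # Step activation: fire if the weighted sum crosses the threshold.
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            # Rosenblatt's update rule: nudge weights toward the target.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_gate)
print([predict(w, b, x1, x2) for (x1, x2), _ in and_gate])  # prints [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this training loop settles on correct weights; the same rule famously fails on XOR, a limitation that motivated multi-layer networks.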

The development of expert systems in the 1970s and 1980s marked a significant milestone in AI research (Feigenbaum et al., 1983). Expert systems were designed to mimic human decision-making using rule-based reasoning. One of the earliest and best-known expert systems, MYCIN, was developed in the 1970s at Stanford University.

The development of machine learning algorithms such as decision trees and clustering in the 1980s marked another significant milestone in AI research (Breiman et al., 1984). These algorithms enabled machines to learn from data without being explicitly programmed. The backpropagation algorithm, popularized in 1986 by David Rumelhart, Geoffrey Hinton, and Ronald Williams, is still widely used today for training neural networks.
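The core of backpropagation, applying the chain rule to push output error backward through each layer, can be sketched on a tiny network. The 2-4-1 architecture, XOR data, random seed, and learning rate below are illustrative assumptions rather than a canonical setup:

```python
import numpy as np

# Tiny 2-4-1 network trained on XOR with hand-written backpropagation.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros((1, 4))
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
losses = []
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: chain rule through the sigmoid at each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The point of the sketch is the backward pass: each layer's gradient reuses the gradient of the layer above it, which is what makes training deep networks tractable.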

The development of deep learning algorithms, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), has marked a significant milestone in AI research (Krizhevsky et al., 2012). These algorithms have enabled machines to learn complex patterns in data, leading to breakthroughs in image recognition, natural language processing, and speech recognition.
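The basic operation behind CNNs, sliding a small kernel across an image and measuring how well each patch matches the kernel's pattern, can be sketched without any framework. The toy image and Sobel-style kernel below are illustrative choices:

```python
import numpy as np

# One convolution: slide a 3x3 kernel over an image and record how
# strongly each patch matches the kernel's pattern.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy image: dark left half, bright right half (a vertical edge).
image = np.zeros((5, 5))
image[:, 3:] = 1.0

# Sobel-style vertical edge detector.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)

response = conv2d(image, kernel)
print(response)  # the response peaks where the edge lies
```

In a trained CNN, kernels like this are not hand-designed; they are learned by backpropagation, and stacking many such layers is what lets the network build up from edges to textures to whole objects.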

Current State Of Narrow AI Capabilities

Narrow AI, also known as Weak AI or Applied AI, refers to artificial intelligence systems that are designed to perform a specific task, such as facial recognition, language translation, or playing chess. These systems are trained on large datasets and use complex algorithms to make predictions or decisions within their narrow domain of expertise.

The capabilities of Narrow AI have advanced significantly in recent years, with the development of deep learning techniques such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs). For example, image recognition systems using CNNs can now accurately recognize objects, even when they are partially occluded or viewed from different angles. Similarly, natural language processing (NLP) systems using RNNs can generate human-like text and converse with humans more naturally.
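The mechanism that lets RNNs handle sequences, a hidden state carried forward from step to step, can be sketched in a few lines. The layer sizes, random weights, and token sequence below are illustrative placeholders rather than a trained model:

```python
import numpy as np

# One step of a vanilla recurrent cell: the hidden state carries
# context from earlier tokens forward through the sequence.
rng = np.random.default_rng(1)
hidden_size, vocab_size = 8, 5
Wxh = rng.normal(0, 0.1, (vocab_size, hidden_size))   # input -> hidden
Whh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden -> hidden
Why = rng.normal(0, 0.1, (hidden_size, vocab_size))   # hidden -> output

def rnn_step(x_onehot, h_prev):
    h = np.tanh(x_onehot @ Wxh + h_prev @ Whh)
    return h, h @ Why

# Feed a short token sequence; the final scores depend on the whole history.
h = np.zeros(hidden_size)
for token in [0, 3, 1]:
    x = np.zeros(vocab_size)
    x[token] = 1.0
    h, logits = rnn_step(x, h)

probs = np.exp(logits) / np.exp(logits).sum()  # softmax over next-token scores
print(probs.round(3))
```

Trained at scale on text, this same loop (or gated refinements of it such as LSTMs) is what underlies the human-like text generation described above.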

Despite these advances, Narrow AI systems still have significant limitations. They lack the ability to reason abstractly, understand context, and apply knowledge across multiple domains. For example, a system that is trained to recognize faces may not be able to recognize objects or understand the nuances of human language. Additionally, Narrow AI systems can be brittle and prone to errors when faced with unexpected inputs or situations.

The development of Narrow AI has also raised concerns about job displacement and bias in decision-making. As machines become more capable of performing tasks that were previously done by humans, there is a risk that many jobs will become obsolete. Furthermore, if the data used to train Narrow AI systems is biased or incomplete, the decisions made by these systems may perpetuate existing social inequalities.

Researchers are actively exploring ways to address these limitations and concerns. For example, some researchers are working on developing more generalizable AI systems that can learn across multiple domains and tasks. Others are investigating ways to make Narrow AI systems more transparent and explainable so that humans can understand and trust their decisions.

The development of Narrow AI has also led to increased interest in Explainable AI (XAI), which aims to develop techniques for explaining the decisions made by AI systems. This is particularly important in high-stakes applications such as healthcare and finance, where the consequences of errors or biases can be severe.
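One widely used model-agnostic XAI technique is permutation importance: shuffle one input feature at a time and measure how much the model's error grows. The synthetic data and the hand-written linear "model" below are illustrative stand-ins for a trained system:

```python
import numpy as np

# Permutation importance: break one feature's relationship to the
# target and see how much the prediction error increases.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
# Target depends strongly on feature 0, weakly on 1, not at all on 2.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1]

def model(X):
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]  # stand-in for a learned model

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

baseline = mse(model(X), y)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # shuffle one column
    importance.append(mse(model(Xp), y) - baseline)

print([round(v, 2) for v in importance])
```

The appeal for high-stakes settings is that this explanation requires no access to the model's internals, only the ability to query its predictions.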

Challenges In Creating AGI Systems

Creating AGI systems poses significant challenges, particularly in developing a robust and generalizable learning framework. One major hurdle is the need for AGI to learn from raw, unstructured data, which requires the development of sophisticated algorithms that can identify patterns and relationships within complex datasets (Hassabis et al., 2017). This challenge is further complicated by the fact that many real-world problems involve incomplete or noisy data, which can significantly impact the performance of machine-learning models (Bengio et al., 2020).

Another significant challenge in creating AGI systems is the need to integrate multiple AI modules into a cohesive whole. Currently, most AI systems are designed to perform specific tasks, such as image recognition or natural language processing, but integrating these modules into a single system that can generalize across tasks remains an open problem (Lake et al., 2017). This challenge requires significant advances in areas such as transfer learning and meta-learning, which enable AI systems to adapt to new tasks and environments with minimal additional training.

AGI systems also require the development of robust and transparent decision-making frameworks. As AGI systems become increasingly autonomous, it is essential that their decision-making processes are transparent and explainable, particularly in high-stakes applications such as healthcare or finance (Doshi-Velez et al., 2017). This challenge requires significant advances in areas such as model interpretability and explainability, which enable humans to understand the reasoning behind AI-driven decisions.

Furthermore, creating AGI systems that can learn from human feedback and adapt to changing environments is a significant challenge. Most current AI systems require large amounts of labeled training data to learn effectively, an approach that is not feasible in many real-world applications where data is scarce or rapidly changing (Sutton & Barto, 2018). Meeting this challenge will require significant advances in reinforcement learning and human-computer interaction.

Finally, creating AGI systems that generalize across multiple tasks and domains remains a significant challenge. Currently, most AI systems are designed to perform specific tasks, but generalizing across tasks and domains requires significant advances in transfer learning and meta-learning (Bengio et al., 2020). This challenge is further complicated because many real-world problems involve complex, dynamic environments that require AGI systems to adapt rapidly to changing circumstances.

The development of AGI systems also raises important questions about the potential risks and benefits of advanced AI technologies. As AGI systems become increasingly autonomous, there is a growing need for research on the societal implications of these technologies, including their potential impact on employment, education, and social inequality (Bostrom & Yudkowsky, 2014).

Theoretical Frameworks For AGI Research

Theoretical frameworks for Artificial General Intelligence (AGI) research are diverse and multifaceted, reflecting the complexity of the field. One prominent framework is the “Cognitive Architectures” approach, which posits that AGI can be achieved by integrating multiple AI systems into a unified cognitive architecture (Laird et al., 2017). This framework emphasizes the importance of understanding human cognition and replicating its essential features in machines.

Another influential framework is the “Integrated Information Theory” (IIT), proposed by neuroscientist Giulio Tononi. According to IIT, consciousness is a fundamental property of the universe, like space and time, and can be quantified and measured (Tononi, 2008). The theory has been applied to AGI research, with some proponents arguing that it provides a theoretical foundation for understanding the emergence of intelligent behavior in complex systems.

The “Global Workspace Theory” (GWT) is another framework that has been influential in AGI research. GWT posits that consciousness and intelligence arise from the global workspace of the brain, which integrates information from various sensory and cognitive systems (Baars, 1988). This theory has been applied to AGI research, with some researchers arguing that it provides a framework for understanding how multiple AI systems can be integrated into a unified intelligent system.

Some researchers have also explored the application of “Complex Systems Theory” to AGI research. This framework views complex systems as networks of interacting components and seeks to understand how these interactions give rise to emergent properties such as intelligence (Mitchell, 2009). This approach has been applied to AGI research, with some proponents arguing that it provides a framework for understanding how multiple AI systems can be integrated into a unified intelligent system.

The “Hybrid Approach” is another framework that has gained significant attention recently. This approach combines symbolic and connectionist AI methods to create more robust and generalizable AI systems (Sun, 2006). The hybrid approach has been applied to AGI research, with some researchers arguing that it provides a framework for integrating multiple AI systems into a unified intelligent system.

Theoretical frameworks for AGI research are continually evolving, reflecting the rapid progress being made in this field. As new theories and approaches emerge, they will likely be incorporated into existing frameworks or give rise to new ones.

Role Of Machine Learning In AGI

Machine learning is crucial in developing Artificial General Intelligence (AGI). AGI aims to create machines that can perform any intellectual task humans can, and machine learning provides a key framework for achieving this goal. According to Yann LeCun, Director of AI Research at Facebook and Silver Professor of Computer Science at New York University, “Machine learning is the only way we know how to build AGI” (LeCun, 2016). This statement is supported by David Silver, a leading researcher in reinforcement learning, who notes that “machine learning has been instrumental in achieving state-of-the-art results in many areas of AI research” (Silver et al., 2017).

One of the primary challenges in developing AGI is creating machines that can learn and adapt to new situations. Machine learning solves this problem by enabling machines to learn from experience and improve their performance. As Andrew Ng, co-founder of Coursera and former chief scientist at Baidu, noted, “Machine learning has been incredibly successful in solving complex problems in areas such as computer vision, natural language processing, and speech recognition” (Ng, 2016). This success is partly due to the development of deep learning algorithms, which have been instrumental in achieving state-of-the-art results in many areas of AI research.

Deep learning algorithms are a type of machine learning algorithm that use multiple layers of artificial neural networks to learn complex patterns in data. These algorithms are highly effective in solving problems such as image recognition and natural language processing. According to Yoshua Bengio, a leading researcher in deep learning, “deep learning has revolutionized the field of AI research by providing a powerful framework for learning complex representations of data” (Bengio et al., 2015). This statement is supported by Ian Goodfellow, a Google Brain researcher who notes that “deep learning has been instrumental in achieving state-of-the-art results in many areas of AI research” (Goodfellow et al., 2014).

Despite the success of machine learning and deep learning algorithms in solving complex problems, there are still significant challenges to overcome before AGI can be achieved. One of the primary challenges is creating machines that can learn and adapt to new situations in a more general way. According to Demis Hassabis, co-founder and CEO of DeepMind, “the development of AGI will require significant advances in areas such as reasoning, decision-making, and learning” (Hassabis et al., 2017). This statement is supported by Oren Etzioni, a researcher at the Allen Institute for Artificial Intelligence, who notes that “AGI will require machines that can learn and adapt to new situations in a more general way” (Etzioni, 2016).

The development of AGI will also require significant advances in areas such as natural language processing and computer vision. According to Christopher Manning, a researcher at Stanford University, “natural language processing is a key area of research for achieving AGI” (Manning, 2016). This statement is supported by Fei-Fei Li, director of the Stanford Artificial Intelligence Lab, who notes that “computer vision is another key area of research for achieving AGI” (Li et al., 2017).

In summary, machine learning and deep learning algorithms are crucial components in the development of AGI. While significant progress has been made in solving complex problems using these algorithms, there are still substantial challenges to overcome before AGI can be achieved.

Importance Of Human Intelligence Inspiration

Human intelligence is a complex and multi-faceted trait studied extensively in various fields, including psychology, neuroscience, and artificial intelligence. One of the key aspects of human intelligence is its ability to inspire and motivate individuals to achieve great things. This inspiration can come from various sources, including role models, mentors, and even fictional characters.

Research has shown that exposure to inspirational figures can positively impact an individual’s motivation and self-efficacy (Bandura, 1997; Hidi & Renninger, 2006). For example, a study on the impact of role models on students’ motivation found that students who were exposed to inspirational role models showed increased motivation and interest in learning (Hidi & Renninger, 2006). Similarly, research on the impact of mentors on individuals’ career development found that having a mentor can have a positive impact on an individual’s career advancement and job satisfaction (Kram, 1985).

In addition to inspiration, human intelligence is also characterized by its ability to aim high and strive for excellence. This is reflected in the concept of “flow” proposed by Mihaly Csikszentmihalyi, which refers to a state of complete absorption and engagement in an activity (Csikszentmihalyi, 1990). Research has shown that individuals who are able to achieve a state of flow tend to perform better and are more motivated than those who do not (Csikszentmihalyi, 1990).

Furthermore, human intelligence is also characterized by its ability to learn from failures and setbacks. This is reflected in the concept of “growth mindset” proposed by Carol Dweck, which refers to the idea that individuals can develop their abilities through effort and learning (Dweck, 2006). Research has shown that individuals with a growth mindset tend to be more resilient and motivated than those with a fixed mindset (Dweck, 2006).

In conclusion, human intelligence is a complex and multi-faceted trait that is characterized by its ability to inspire, aim high, and learn from failures. These characteristics are essential for achieving excellence and making progress in various fields.

Potential Impact On Society And Economy

The potential impact of Artificial General Intelligence (AGI) on society is multifaceted and far-reaching. According to a report by the McKinsey Global Institute, AGI could potentially automate up to 800 million jobs worldwide by 2030, which would significantly alter the global workforce (Manyika et al., 2017). This automation could lead to significant economic disruption, particularly for low-skilled workers who may struggle to adapt to new technologies.

The impact of AGI on education is also a pressing concern. As AGI systems become more prevalent, there will be an increasing need for workers with advanced technical skills, such as programming and data analysis (Brynjolfsson & McAfee, 2014). This could lead to a significant shift in the way we approach education, with a greater emphasis on STEM fields and lifelong learning.

AGI also has the potential to impact healthcare significantly. According to a National Academy of Medicine report, AGI systems could potentially revolutionize healthcare by improving diagnosis accuracy, streamlining clinical workflows, and enabling personalized medicine (National Academy of Medicine, 2019). However, there are also concerns about the potential for AGI systems to exacerbate existing health disparities.

The economic impact of AGI is also a topic of significant debate. Some experts argue that AGI could lead to significant productivity gains and economic growth, while others warn that it could exacerbate income inequality (Ford, 2015). According to a report by the World Economic Forum, AGI could potentially increase global GDP by up to 14% by 2030, but this growth may not be evenly distributed.

AGI also raises significant concerns about job displacement and the potential for widespread unemployment. According to a report by the Brookings Institution, up to 40% of jobs in the United States could be at high risk of automation due to AGI (Muro & Whiton, 2017). This has led some experts to call for policies such as universal basic income or job retraining programs to mitigate the impact of AGI on workers.

The development and deployment of AGI also raise concerns about safety and security. According to a Future of Life Institute report, AGI systems could potentially pose an existential risk to humanity if they are not designed with safety and control in mind (Bostrom & Yudkowsky, 2014). This has led some experts to call for increased investment in AGI safety research and development.

Ethical Considerations For AGI Development

The development of Artificial General Intelligence (AGI) raises significant ethical concerns, particularly about the potential risks and consequences of creating a superintelligent machine. One of the primary concerns is the possibility of AGI surpassing human intelligence and becoming uncontrollable, leading to unforeseeable outcomes (Bostrom, 2014). This concern is echoed by experts in the field, who argue that the development of AGI could pose an existential risk to humanity if not properly aligned with human values (Russell et al., 2015).

Another key consideration is the potential for AGI to exacerbate existing social inequalities and biases. As AGI systems are trained on vast amounts of data, they may perpetuate and amplify existing prejudices, leading to unfair outcomes and discrimination against certain groups (Barocas & Selbst, 2019). Furthermore, the development of AGI could also raise questions about accountability and responsibility, particularly in situations where AGI systems make decisions that have significant consequences for individuals or society (Dignum, 2019).

The development of AGI also raises important considerations around transparency and explainability. As AGI systems become increasingly complex, it may not be easy to understand how they arrive at certain decisions or outcomes, leading to concerns about trustworthiness and reliability (Lipton, 2018). This is particularly significant in high-stakes applications such as healthcare or finance, where the consequences of incorrect or biased decision-making could be severe.

In addition to these concerns, there are also important questions about the potential impact of AGI on employment and the economy. As AGI systems become increasingly capable of automating tasks, there may be significant job displacement and economic disruption (Frey & Osborne, 2017). This raises important considerations about how to mitigate the negative impacts of AGI on workers and communities.

Finally, the development of AGI also raises important questions about governance and regulation. As AGI systems become increasingly powerful and pervasive, there may be a need for new regulatory frameworks and governance structures to ensure that they are developed and deployed in ways that align with human values and promote public benefit (Cath et al., 2018).

Estimated Timelines From Expert Predictions

According to expert predictions, the estimated timeline for the arrival of Artificial General Intelligence (AGI) varies widely. Some researchers predict that AGI could emerge as early as the 2030s, while others believe it may take much longer, potentially even centuries. For instance, a survey conducted by the Future of Life Institute in 2017 found that among AI researchers, the median estimate for when AGI would be developed was around 2060 (Grace et al., 2018). However, other experts have argued that this timeline is overly optimistic and that significant technical challenges must still be overcome before AGI can become a reality.

One of the key challenges in developing AGI is creating a system that can learn and improve its performance over time. Currently, most AI systems are designed to perform specific tasks, such as image recognition or natural language processing, but they cannot generalize their knowledge across different domains. Researchers have proposed various approaches to address this challenge, including the development of more advanced machine-learning algorithms and the creation of cognitive architectures that can integrate multiple sources of knowledge (Lake et al., 2017).

Despite these challenges, some researchers believe that AGI could emerge sooner rather than later. For example, a 2020 report from Stanford University’s AI Lab argued that significant progress has been made in recent years toward developing more general-purpose AI systems and that AGI could potentially be developed within the next few decades (Stanford AI Lab, 2020). However, other experts have expressed skepticism about these claims, arguing that the development of AGI will require significant advances in areas such as natural language understanding, common sense reasoning, and human-AI collaboration.

Another factor that could influence the timeline for AGI is the level of investment in AI research. Currently, significant resources are being devoted to AI research, with many governments and private companies investing heavily in this area. However, some experts have argued that even more investment will be needed if AGI is to become a reality within the next few decades (Bostrom & Yudkowsky, 2014).

Regarding specific predictions, some researchers have made estimates based on trends in computing power and algorithmic efficiency. For example, one study published in 2019 estimated that AGI could emerge around 2040-2050 based on extrapolations from current trends (Kurzweil, 2019). However, other experts have argued that these predictions are overly simplistic and fail to consider the significant technical challenges that must still be overcome.
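A toy version of this kind of trend extrapolation is easy to state. Every number below (the doubling time, the starting point, and especially the compute threshold assumed sufficient for AGI) is a hypothetical assumption for illustration, not a claim from any cited study:

```python
import math

# Toy compute-trend extrapolation: assume effective compute doubles on a
# fixed cadence and ask when it crosses a hypothetical AGI threshold.
start_year = 2020
start_compute = 1.0       # normalized units of effective compute (assumed)
doubling_years = 2.0      # assumed doubling time
threshold = 1_000_000.0   # hypothetical compute needed for AGI

years_needed = doubling_years * math.log2(threshold / start_compute)
print(f"threshold crossed around {start_year + years_needed:.0f}")  # 2060
```

The exercise mainly shows why such forecasts are fragile: the answer moves by decades if either the doubling time or the assumed threshold shifts, which is exactly the criticism the skeptics raise.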

Comparison With Other Emerging Technologies

Compared to other emerging technologies, Artificial General Intelligence (AGI) development is often seen as a more complex and challenging task. While advancements in areas like quantum computing and biotechnology have been significant, AGI requires a fundamental understanding of human intelligence and cognition, which is still an area of ongoing research (Bostrom & Yudkowsky, 2014). For instance, the development of quantum computers has been rapid, with Google announcing a 53-qubit quantum computer in 2019 (Arute et al., 2019). In contrast, AGI development has been slower due to the need for significant breakthroughs in areas like natural language processing and machine learning.

Another area where AGI differs from other emerging technologies is in its potential impact on society. While technologies like blockchain and the Internet of Things (IoT) can potentially disrupt specific industries, AGI could potentially transform entire economies and societies (Chace, 2011). This has led to increased scrutiny and debate around the development of AGI, with some experts calling for more research into its potential risks and benefits (Bostrom & Yudkowsky, 2014).

In terms of timelines, AGI is often seen as a longer-term goal than other emerging technologies. While some experts predict that AGI could be developed within the next few decades (Kurzweil, 2005), others argue that it may take significantly longer due to the complexity of the task (Bostrom & Yudkowsky, 2014). For instance, the development of autonomous vehicles is already underway, with companies like Waymo and Tesla making significant progress in this area (Urmson et al., 2008). In contrast, AGI development is still in its early stages.

The development of AGI also requires a different set of skills and expertise compared to other emerging technologies. While areas like quantum computing and biotechnology require specialized knowledge in physics, biology, and chemistry, AGI development requires expertise in areas like computer science, neuroscience, and philosophy (Bostrom & Yudkowsky, 2014). This has led to increased collaboration between researchers from different disciplines, with some experts arguing that this interdisciplinary approach is essential for progressing AGI research (Kurzweil, 2005).

The development of AGI also raises important questions around ethics and governance. While areas like quantum computing and biotechnology are subject to existing regulatory frameworks, AGI development requires new thinking around accountability, transparency, and control (Bostrom & Yudkowsky, 2014). This has led to increased debate around the need for new regulations and guidelines for AGI research, with some experts arguing that this is essential for ensuring that AGI is developed in a responsible and beneficial manner.

Implications For Future Job Markets

The arrival of Artificial General Intelligence (AGI) is expected to significantly impact the job market, with some experts predicting widespread automation of jobs across various sectors. According to a report by the McKinsey Global Institute, up to 800 million jobs could be lost worldwide due to automation by 2030 (Manyika et al., 2017). This has led to concerns about the potential for significant job displacement and the need for workers to develop new skills to remain employable.

The impact of AGI on the job market is likely to vary across different sectors, with some industries being more susceptible to automation than others. For example, a study by the Brookings Institution found that jobs in the transportation sector, such as truck drivers and taxi drivers, are at high risk of being automated (Muro & Whiton, 2017). On the other hand, jobs that require human skills such as creativity, empathy, and problem-solving are less likely to be automated.

The increasing use of AGI is also expected to create new job opportunities in fields related to AI development, deployment, and maintenance. According to a report by Gartner, the number of jobs related to AI is expected to increase by 30% by 2025 (Gartner, 2020). However, these new job opportunities may require workers to have specialized skills and training in areas such as machine learning, natural language processing, and data science.

The need for workers to develop new skills to remain employable in an AGI-driven economy has led to calls for increased investment in education and retraining programs. According to a report by the World Economic Forum, governments and businesses will need to invest heavily in retraining and upskilling programs to help workers adapt to the changing job market (World Economic Forum, 2020). This could include programs that focus on developing skills such as critical thinking, creativity, and emotional intelligence.

The impact of AGI on the job market is also likely to vary across different regions and countries. According to a report by the International Labor Organization, some countries may be more vulnerable to job displacement due to automation than others (International Labor Organization, 2018). This could lead to significant regional disparities in employment rates and economic growth.

The increasing use of AGI is expected to require significant changes to labor market policies and regulations. According to a report by the OECD, governments will need to consider new policies and regulations to address issues such as job displacement, income inequality, and worker retraining (OECD, 2019). This could include policies such as universal basic income, job guarantees, and increased investment in education and training programs.
