Google's new quantum chip, “Willow”, performed a calculation that would take today's fastest supercomputers billions of years. [1] [7] The development of quantum computers means a lot of change, but by itself, quantum computing is probably not a catastrophe. It will, however, require plenty of work from anyone trying to secure servers, networks, or online services. You may see a lot more ‘2 factor’ requirements, like facial recognition or verification texts to your cell phone. A few adjustments will be needed:
-Encryption can be broken faster. That much is true: Grover's algorithm weakens today's symmetric encryption, and Shor's algorithm, given enough qubits, breaks the public-key encryption that protects most Internet traffic.
-Breaking blockchain encryption would likely need around a million qubits (not available in the near future). Google's chip is not even close to challenging blockchain, [1] so don't sell your Bitcoin based on this just yet.
-Passwords will be less useful. Passwords used in online banking, at ATMs, and in your browser could be compromised, but don’t worry: you just need to use “2 factor security” everywhere, and we already know how to do that. Basically, you can make things pretty secure just by requiring a confirmation on your cell phone every time you make an ATM withdrawal or log into a service on the Internet.
-Unknown issues.
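To put some numbers on the encryption point above: Grover's algorithm searches 2^n possibilities in roughly 2^(n/2) steps, which effectively halves the bit-strength of a symmetric key against a quantum attacker. A quick back-of-envelope sketch (the function name is mine, just for illustration):

```python
def grover_effective_bits(key_bits: int) -> float:
    """Grover's algorithm searches 2**n keys in about 2**(n/2) steps,
    so a quantum attacker sees only half the effective bit-strength."""
    return key_bits / 2

for key in (128, 256):
    print(f"AES-{key}: ~{grover_effective_bits(key):.0f} bits of quantum security")
```

This is why post-quantum guidance favors 256-bit symmetric keys: even halved, 128 bits of effective security remains out of reach.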
The time frame for having real quantum computers running real workloads seems to be shrinking. Forbes magazine says that you can purchase quantum computing from the big players now. [3] We will probably see them more widely available in a few years, but Google and others that make quantum chips will be able to get them in quantity much sooner. Currently, quantum computers do not have many ‘qubits’, but that is the number to watch: the qubit count is one measure of such a computer's power.
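The reason qubit count is the number to watch: each added qubit doubles the size of the state space the machine can hold in superposition, so n qubits span 2^n basis states. A small illustration (the 105-qubit figure is Willow's reported count):

```python
# Each added qubit doubles the state space: n qubits span 2**n basis states.
for n in (10, 50, 105):  # 105 is roughly Willow's reported qubit count
    print(f"{n} qubits -> 2**{n} = {2**n:.3e} basis states")
```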
Quantum Computing + AI
However, what if you add AI to a system with quantum chips? Is that an issue?
Well, you are basically talking about training and then running the AI on much faster hardware, so it could be trained much faster and could process data much faster. So it is realistic to say that quantum computers would greatly speed up some workloads in training and running an AI.
But we are already uncertain how safe AIs will be once they get more capable. And we already know they are hungry for processing power, and for the electricity needed to run all of those processors. This demand is growing so fast that Microsoft is paying to reopen a nuclear plant just to supply it. [4] Quantum computers can do a lot more processing with a lot less electrical power, but this does not remove the power constraint, as not all AI tasks can benefit from quantum speedups. I expect that quantum chips will be implemented as coprocessors that offload the tasks they are good at, much like graphics processors today.
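The coprocessor idea above can be sketched as a simple dispatch pattern, analogous to offloading matrix math to a GPU today. Everything here is hypothetical: the task names, the `QPU_FRIENDLY` set, and the backend functions are stand-ins, not a real API.

```python
# Hypothetical sketch of hybrid classical/quantum dispatch.
QPU_FRIENDLY = {"unstructured_search", "factoring", "quantum_simulation"}

def quantum_backend(name, payload):
    return f"QPU handled {name}"

def classical_backend(name, payload):
    return f"CPU handled {name}"

def run_task(name: str, payload):
    if name in QPU_FRIENDLY:
        return quantum_backend(name, payload)   # offload to the quantum coprocessor
    return classical_backend(name, payload)     # everything else stays on CPUs/GPUs

print(run_task("factoring", None))         # -> QPU handled factoring
print(run_task("gradient_descent", None))  # -> CPU handled gradient_descent
```

The design mirrors how GPU offload works in practice: the scheduler routes only the workloads with a known speedup, and everything else runs on conventional hardware.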
However, AI needs data input. Lots and lots of data, for it to be trained. For example, ChatGPT was trained on about 500 gigabytes of text data, or about 300 billion words. We are already running into limits here, and people are making even less data available due to copyright concerns. [2] This led me to ask the following question of my favorite AIs:
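For a feel of the scale involved, here is some back-of-envelope arithmetic comparing the roughly 300 billion training words cited above to a single human reading lifetime. The reading-speed and habit numbers are rough assumptions of mine, not figures from the sources:

```python
# Rough scale comparison: the ~300 billion training words versus what
# one person could read in a lifetime. All inputs are assumptions.
training_words = 300e9
words_per_minute = 200   # typical adult reading speed
minutes_per_day = 120    # two hours of reading daily
years = 60

lifetime_words = words_per_minute * minutes_per_day * 365 * years
print(f"One reading lifetime: ~{lifetime_words / 1e9:.1f} billion words")
print(f"Training corpus: ~{training_words / lifetime_words:.0f} reading lifetimes")
```

Under these assumptions the corpus works out to several hundred human reading lifetimes, which is why "just read more" stops being an option for these systems.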
I am reading that AI requires a lot of data to be trained, and the amount of data being allowed is drying up (https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html ). But even if all data on Earth was made available, is that enough data to train a superintelligent AI? Or to ask this another way, is the data volume an absolute limit on AI intelligence levels?
All of the AIs asked agree on this: “While data is a critical resource for training AI, it is not an absolute limit on intelligence.” [6] In other words, the amount of training data will not limit AI intelligence. So the data constraint doesn’t seem to be real either.
The promise of such a system is clear. We may finally have enough data and processing speed to predict or figure out complex systems like climate, economies, politics, biological systems, or physics!
Should I Worry?
So, should we worry if we build a quantum computer and use that super-fast hardware to train and run an AI? What if we let that computer design its successor?
I really don’t know the answer, but I strongly suspect that this will create a really, really smart AI. And I predict that the results will be unpredictable to small minds like mine.
But I asked some AIs: if we used quantum computers to ease the power constraint, and allowed AIs to design their successors to remove the slow-human-design constraint, what constraints might remain? Here is what one AI said about the constraints under those conditions:
“If quantum computing and AI self-improvement become realities, here's how the constraints landscape might shift:
Data Limitations: Would likely remain a significant constraint, although AI could potentially become more efficient at learning from less data or even generating its own training data through simulations.
Algorithmic Limitations: Could be significantly reduced or transformed. The bottleneck might shift from designing better algorithms to ensuring the safety and alignment of self-improving AI.
Computational Resources: Potentially dramatically reduced for certain types of computations thanks to quantum computing, but new power constraints related to maintaining quantum systems could emerge.
Human Factors: The nature of this constraint would change. The focus would shift from algorithm design to oversight, ethical guidance, and ensuring alignment of self-improving AI.
Ethical Considerations: Would become even more critical and complex. Ensuring the safety and beneficial use of increasingly autonomous and intelligent AI would be a paramount concern.
New Constraints: We might see the emergence of constraints related to the predictability and controllability of self-improving AI systems.” [5]
To me this simply means that the constraints do not go away. We will still be limited by processing power, electrical power, data, and the development of better AI algorithms. I will also add a constraint that doesn’t get much press - Garbage In, Garbage Out. We are training these things on human-produced data, which is biased and inaccurate. Recall that when Microsoft trained its Tay chatbot on Twitter content, it produced misogynistic and racist responses. So ‘curated’ data seems important if we want a sane AI. Curation takes effort, and since this is presumably human effort (surely we wouldn’t try to automate that), it seems to be a constraint as well.
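Even the crudest curation shows why it takes effort. A toy filter like the sketch below (the blocklist tokens and the length threshold are invented for illustration) can only screen out the most obvious garbage; everything subtler still needs human judgment:

```python
# Toy data-curation pass: Garbage In, Garbage Out in miniature.
# The blocklist and length threshold are invented for illustration.
BLOCKLIST = {"slur1", "slur2"}   # placeholder tokens, not a real list

def keep(line: str) -> bool:
    words = line.lower().split()
    if len(words) < 3:                      # too short to be useful training text
        return False
    if any(w in BLOCKLIST for w in words):  # crude toxicity screen
        return False
    return True

corpus = ["A well formed training sentence.", "ok", "this contains slur1 sadly"]
clean = [line for line in corpus if keep(line)]
print(clean)  # only the first sentence survives
```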
These quantum-based AIs would still be really smart and really fast (and possibly biased). But as one AI cautioned me, “This is a highly speculative discussion. The potential impact of these technologies is still largely unknown and subject to ongoing research and debate.”
References
Images not otherwise credited come from the AI at craiyon.com.
[1] Google's Willow chip
[2] Data shortage for AI
[3] Buy quantum processing now
[4] https://www.cnn.com/2024/09/20/energy/three-mile-island-microsoft-ai/index.html
[5] Credit Model A: Centaur, but others have answered similarly. ChatGPT agreed that the energy-consumption constraint could be significantly alleviated, although challenges related to the development and scalability of quantum computers would remain, such as algorithmic development, and gave me a disclaimer: “This is a highly speculative discussion. The potential impact of these technologies is still largely unknown and subject to ongoing research and debate.”
[6] complete example AI answer
I am reading that AI requires a lot of data to be trained, and the amount of data being allowed is drying up (https://www.nytimes.com/2024/07/19/technology/ai-data-restrictions.html ). But even if all data on Earth was made available, is that enough data to train a superintelligent AI? Or to ask this another way, is the data volume an absolute limit on AI intelligence levels?
That's a really insightful question that gets to the heart of the current challenges and future possibilities of AI. Let's break it down:
Is all the data on Earth enough to train a superintelligent AI?
We don't know for sure, but the prevailing sentiment leans towards probably not, at least with our current understanding and methods. Here's why:
Quality over Quantity: Simply having vast amounts of data doesn't guarantee intelligence. A lot of the data on Earth is redundant, irrelevant, or even misleading. Superintelligence likely requires highly curated, high-quality data that is specifically designed to teach advanced reasoning, problem-solving, and abstract thought. Imagine trying to build a house with a pile of random rocks, compared to using carefully selected and shaped bricks.
The "Dark Data" Problem: A huge amount of potentially valuable data is unstructured, unlabeled, and inaccessible. Think of the raw sensor data from scientific experiments, the tacit knowledge of experts, or the complex interactions within biological systems. Extracting meaningful information from this "dark data" is a massive challenge.
Beyond Human Experience: Superintelligence, by definition, would surpass human intellect. It might need to explore concepts and connections that are beyond our current understanding and therefore not represented in the data we generate. Think of how a human struggles to understand concepts in quantum physics – a superintelligence might encounter similar barriers with domains beyond our comprehension.
The Need for Active Learning and Exploration: Current AI models are largely trained on static datasets. Superintelligence might require a more dynamic learning process, actively exploring the world, conducting experiments, and generating its own data to fill in knowledge gaps. Think of a scientist designing experiments to test a hypothesis, rather than just passively observing.
The "Algorithm" Factor: It's not just about the data; it's about the algorithms that process it. Even with all the data in the world, if our AI architectures and learning methods are fundamentally limited, we won't reach superintelligence. Think of it like having all the ingredients for a gourmet meal, but lacking the recipe or the cooking skills.
Defining Superintelligence: The very definition of superintelligence is debated. If it implies capabilities far beyond human comprehension, then it's hard to imagine that human-generated data alone could be sufficient.
Is data volume an absolute limit on AI intelligence levels?
While data volume is undoubtedly a significant factor and a current bottleneck, it's likely not the absolute limit. Here's why:
Algorithmic Breakthroughs: Future advancements in AI algorithms could significantly reduce the data requirements for training. We might find more efficient ways to learn complex concepts from less data, similar to how humans can learn from a few examples.
Synthetic Data Generation: AI could potentially generate its own training data. Imagine an AI creating simulated environments and scenarios to learn from, overcoming the limitations of real-world data.
Unsupervised and Self-Supervised Learning: These approaches minimize the need for labeled data. As AI gets better at learning patterns and relationships without explicit guidance, the reliance on massive labeled datasets could decrease.
Transfer Learning and Meta-Learning: AI could learn general principles and then efficiently apply them to new tasks with limited data. Meta-learning, learning how to learn, could be a game-changer.
The Nature of Intelligence: Intelligence isn't just about memorizing data. It's about understanding relationships, making inferences, and creatively solving problems. While data is the fuel, the engine (the algorithms) and the driver (the learning process) are equally crucial.
Analogy: Think of a human brain. It doesn't need to experience every single possible scenario to become intelligent. It learns through a combination of experience, reasoning, and innate abilities. Similarly, superintelligent AI might not need every piece of data ever generated.
In conclusion:
While the current availability of data is a practical constraint on AI development, it's unlikely to be the ultimate barrier to achieving superintelligence. Reaching that level of AI will likely require significant breakthroughs in algorithms, learning paradigms, and potentially even a deeper understanding of consciousness itself.
The drying up of easily accessible training data is a real challenge for the current trajectory of AI development. However, it might also spur innovation towards more data-efficient and creative approaches to building intelligent systems. The focus might shift from simply amassing more data to developing smarter ways to learn from the data we have and even generate the data we need.