The human race faces many problems.
The most pressing among them are disease, famine, and a lack of renewable resources.
Our need is grave and urgent: we simultaneously face the threats of extinction and unsustainable population growth.
Meeting these challenges will require a new form of intelligence, an artificial general intelligence that exceeds the capabilities of the human brain.
The idea and implementation of a human-level artificial intelligence raise myriad questions. What is meant by the term human-level artificial intelligence? Is it a machine that can think like a human, or one that feels emotions as humans do? What makes us ‘human’? Through what architecture can artificial consciousness, sapience, and selfhood emerge? How do we ensure a peaceful future for humans and machines alike? If these subjects interest you, you’re in good company with Cognami.
- Artificial Sentience and Consciousness
- Intrinsic Motivation
- AI Safety
WHAT ARE EXPERTS SAYING ABOUT AGI?
The initial condition of an AI will determine its ongoing development… initially.
The problem of moving on flat surfaces is solved quite well by wheels, but generalizing the wheel might not be the best solution to moving around on general surfaces.
When intelligent machines are constructed, we should not be surprised to find them as confused and as stubborn as men in their convictions about mind-matter, consciousness, free will, and the like.
Will robots inherit the earth? Yes, but they will be our children. We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called Evolution. Our job is to see that all this work shall not end up in meaningless waste.
Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve.
The ability to achieve complex goals in complex environments using limited computational resources.
No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either.
No one has tried to make a thinking machine. The bottom line is that we really haven’t progressed too far toward a truly intelligent machine. We have collections of dumb specialists in small domains. The true majesty of general intelligence still awaits our attack. We’ve got to get back to the deepest questions of AI and general intelligence and quit wasting time on little projects that don’t contribute to the main goal.
When we write programs that “learn”, it turns out that we do and they don’t.
There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from “impossible” to “obvious”. Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards…
AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.
As a technologist, I see how AI and the fourth industrial revolution will impact every aspect of people’s lives.
It’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine.
The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.
I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.
The new spring in AI is the most significant development in computing in my lifetime. Every month, there are stunning new applications and transformative new techniques. But such powerful tools also bring with them new questions and responsibilities.
You want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans? Better start by investigating how humans treat their less intelligent animal cousins. It’s not a perfect analogy, of course, but it is the best archetype we can actually observe rather than just imagine.
If we do it right, we might actually be able to evolve a form of work that taps into our uniquely human capabilities and restores our humanity. The ultimate paradox is that this technology may become the powerful catalyst that we need to reclaim our humanity.
What all of us have to do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.
Much has been written about AI’s potential to reflect both the best and the worst of humanity. For example, we have seen AI providing conversation and comfort to the lonely; we have also seen AI engaging in racial discrimination. Yet the biggest harm that AI is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with AI is vastly bigger than before. As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive. Understanding what AI can do and how it fits into your strategy is the beginning, not the end, of that process.
The relationship between human intelligence and artificial intelligence (HI + AI) will necessarily be one of symbiosis. The challenge and potential of exploring this co-evolutionary future is the biggest story of the next century and one in which a closeness in development velocity is a necessity.
AI doesn’t have to be evil to destroy humanity—if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.
I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.
I don’t think that any of the human faculties is something inherently inaccessible to computers. I would say that some aspects of humanity are less accessible, and creativity of the kind that we appreciate is probably one that is going to take more time to reach. But maybe even more difficult for computers, but also quite important, will be to understand not just human emotions, but also something a little bit more abstract, which is our sense of what’s right and what’s wrong.
“Ultimately, AIs will dematerialize, demonetize and democratize all of these services, dramatically improving the quality of life for 8 billion people, pushing us closer towards a world of abundance.”
“Over the past 60 years, as mechanical processes have replicated behaviors and talents we thought were unique to humans, we’ve had to change our minds about what sets us apart. As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. We’ll spend the next decade—indeed, perhaps the next century—in a permanent identity crisis, constantly asking ourselves what humans are for. In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science—although all those will happen. The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.”
“Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.”
“As more and more artificial intelligence is entering into the world, more and more emotional intelligence must enter into leadership.”
“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.”