COGNAMI: Artificial General Intelligence R&D

WHAT WE DO

We are building an Artificial General Intelligence bootstrap.

Strong Artificial Intelligence has always been a dream of humanity.
It is the holy grail: the promise of improving our lives,
improving society, and finally freeing us from the cruelties that plague humankind.

WHY DO IT?

The human race faces many problems.
The most pressing are disease, famine, and a lack of renewable resources.
Our need is grave and urgent: we simultaneously face existential risk and unsustainable population growth.
We require a new form of intelligence, an artificial general intelligence that exceeds the capabilities of the human brain.

AGI will push humanity forward.

The human race requires the next phase of its evolution: the emergence of a new species.

AGI is that next step.

Our Approach

We are using a unified, general-purpose substrate based on Spiking Neural Networks, a Piagetian-inspired cognitive architecture, and the ability to hook into machine-optimized subsystems such as Deep Learning and a Non-Axiomatic Reasoning System.
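To make the substrate concrete: spiking neural networks exchange discrete spikes rather than continuous activations. The sketch below simulates a single leaky integrate-and-fire (LIF) neuron, a standard building block of such networks. It is purely illustrative; the `simulate_lif` function and its parameter values are our assumptions for this example, not Cognami's actual model.

```python
# Illustrative leaky integrate-and-fire (LIF) neuron, a common building
# block of spiking neural networks. Parameters are example values chosen
# for clarity, not values from any production substrate.

def simulate_lif(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the list of time steps at which the neuron spiked.
    """
    v = v_reset                      # membrane potential
    spikes = []
    for t, i_in in enumerate(input_current):
        v = leak * v + i_in          # leaky integration of the input
        if v >= threshold:           # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset              # reset membrane after spiking
    return spikes

# A constant drive of 0.3 charges the membrane until it crosses threshold,
# then the neuron resets and the cycle repeats.
print(simulate_lif([0.3] * 20))      # -> [3, 7, 11, 15, 19]
```

The regular spike train under constant input shows the core idea: information is carried in the timing of discrete events, which is what lets such a substrate interface with event-driven sensory channels.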

What Guides Our Efforts

Our efforts are guided by the principle of circumambulation. We have made AGI development our obsession, and we circle in on the goal using these techniques:

  • Zettelkasten
  • The Feynman Technique
  • Heilmeier Questions
  • First-Principles Thinking
  • Cartesian Doubt

The idea and implementation of a human-level artificial intelligence raise myriad questions. What is meant by the term human-level artificial intelligence? Is it a machine that can think like a human, or one that feels emotions as humans feel emotions? What makes us ‘human’? By what architecture can we allow the emergence of artificial consciousness, sapience, and selfhood? How do we ensure a peaceful future for human and machine beings? If these subjects are of interest to you, you’re in good company at Cognami.

Architectural Considerations:

  • Artificial Sentience and Consciousness
  • Emotions
  • Sapience
  • Intrinsic Motivation
  • AI Safety
1. Construct a bootstrap substrate capable of the desired emergent behaviors, with visual and textual sensory channels and an air-gapped agent environment
2. Child-like learning through sensory input and interaction with the environment
3. Integration with third-party optimized components such as narrow-intelligence models
4. Safety, emotional control, and conscious volition

WHAT ARE EXPERTS SAYING ABOUT AGI?

Hugo de Garis
Quoted by Ben Goertzel — First Conference on Artificial General Intelligence, “AI and AGI: Past, Present and Future”, March 1, 2008

The initial condition of an AI will determine its ongoing development… initially.

Ben Goertzel
First Conference on Artificial General Intelligence, “AI and AGI: Past, Present and Future”, March 1, 2008

The problem of moving on flat surfaces is solved quite well by wheels, but generalizing the wheel might not be the best solution to moving around on general surfaces.

Marvin Minsky
Quoted by Douglas Hofstadter — Gödel, Escher, Bach: an Eternal Golden Braid, p. 722, Basic Books, Inc., 1999 (orig. 1979)

When intelligent machines are constructed, we should not be surprised to find them as confused and as stubborn as men in their convictions about mind-matter, consciousness, free will, and the like.

Marvin Minsky
Will Robots Inherit the Earth?

Will robots inherit the earth? Yes, but they will be our children. We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called Evolution. Our job is to see that all this work shall not end up in meaningless waste.

Eliezer S. Yudkowsky
Staring into the Singularity, 1996

Our sole responsibility is to produce something smarter than we are; any problems beyond that are not ours to solve.

Ben Goertzel
First Conference on Artificial General Intelligence, “AI and AGI: Past, Present and Future”, March 1, 2008

General Intelligence:
The ability to achieve complex goals in complex environments using limited computational resources.

Marvin Minsky

No computer has ever been designed that is ever aware of what it’s doing; but most of the time, we aren’t either.

Marvin Minsky
Hal’s Legacy, 1996

No one has tried to make a thinking machine. The bottom line is that we really haven’t progressed too far toward a truly intelligent machine. We have collections of dumb specialists in small domains. The true majesty of general intelligence still awaits our attack. We’ve got to get back to the deepest questions of AI and general intelligence and quit wasting time on little projects that don’t contribute to the main goal.

Alan J. Perlis
Epigrams on Programming, Sept., 1982

Epigram 63:
When we write programs that “learn”, it turns out that we do and they don’t.

Eliezer S. Yudkowsky
Staring into the Singularity, 1996

There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from “impossible” to “obvious”. Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards…

Sam Altman

AI will probably most likely lead to the end of the world, but in the meantime, there’ll be great companies.

Fei-Fei Li

As a technologist, I see how AI and the fourth industrial revolution will impact every aspect of people’s lives.

Yann LeCun

Our intelligence is what makes us human, and AI is an extension of that quality.

Gary Marcus

It’s likely that machines will be smarter than us before the end of the century—not just at chess or trivia questions but at just about everything, from mathematics and engineering to science and medicine.

Stephen Hawking

The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants.

Geoffrey Hinton

I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.

Sergey Brin

The new spring in AI is the most significant development in computing in my lifetime. Every month, there are stunning new applications and transformative new techniques. But such powerful tools also bring with them new questions and responsibilities.

Yuval Noah Harari

You want to know how super-intelligent cyborgs might treat ordinary flesh-and-blood humans? Better start by investigating how humans treat their less intelligent animal cousins. It’s not a perfect analogy, of course, but it is the best archetype we can actually observe rather than just imagine.

John Hagel

If we do it right, we might actually be able to evolve a form of work that taps into our uniquely human capabilities and restores our humanity. The ultimate paradox is that this technology may become the powerful catalyst that we need to reclaim our humanity.

Tim Cook

What all of us have to do is to make sure we are using AI in a way that is for the benefit of humanity, not to the detriment of humanity.

Andrew Ng

Much has been written about AI’s potential to reflect both the best and the worst of humanity. For example, we have seen AI providing conversation and comfort to the lonely; we have also seen AI engaging in racial discrimination. Yet the biggest harm that AI is likely to do to individuals in the short term is job displacement, as the amount of work we can automate with AI is vastly bigger than before. As leaders, it is incumbent on all of us to make sure we are building a world in which every individual has an opportunity to thrive. Understanding what AI can do and how it fits into your strategy is the beginning, not the end, of that process.

Bryan Johnson

The relationship between human intelligence and artificial intelligence (HI + AI) will necessarily be one of symbiosis. The challenge and potential of exploring this co-evolutionary future is the biggest story of the next century and one in which a closeness in development velocity is a necessity.

Elon Musk

AI doesn’t have to be evil to destroy humanity—if AI has a goal and humanity just happens in the way, it will destroy humanity as a matter of course without even thinking about it, no hard feelings.

Ray Kurzweil

I set the date for the Singularity—representing a profound and disruptive transformation in human capability—as 2045. The nonbiological intelligence created in that year will be one billion times more powerful than all human intelligence today.

Yoshua Bengio

I don’t think that any of the human faculties is something inherently inaccessible to computers. I would say that some aspects of humanity are less accessible and creativity of the kind that we appreciate is probably one that is going to be something that’s going to take more time to reach. But maybe even more difficult for computers, but also quite important, will be to understand not just human emotions, but also something a little bit more abstract, which is our sense of what’s right and what’s wrong.

Peter Diamandis

Ultimately, AIs will dematerialize, demonetize and democratize all of these services, dramatically improving the quality of life for 8 billion people, pushing us closer towards a world of abundance.

Kevin Kelly

Over the past 60 years, as mechanical processes have replicated behaviors and talents we thought were unique to humans, we’ve had to change our minds about what sets us apart. As we invent more species of AI, we will be forced to surrender more of what is supposedly unique about humans. We’ll spend the next decade—indeed, perhaps the next century—in a permanent identity crisis, constantly asking ourselves what humans are for. In the grandest irony of all, the greatest benefit of an everyday, utilitarian AI will not be increased productivity or an economics of abundance or a new way of doing science—although all those will happen. The greatest benefit of the arrival of artificial intelligence is that AIs will help define humanity. We need AIs to tell us who we are.

Arthur C. Clarke

Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.

Amit Ray
Famous AI Scientist, Author of Compassionate Artificial Intelligence

As more and more artificial intelligence is entering into the world, more and more emotional intelligence must enter into leadership.

Stephen Hawking
Famous Theoretical Physicist, Cosmologist, and Author

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.

Bill Gates

I am in the camp that is concerned about super intelligence.

Have questions?
Contact us

Washington, DC 20037