Key Takeaways:
The term "Artificial General Intelligence" was coined by Peter Voss and two colleagues when brainstorming a book title - the "general" echoing both broad applicability and the psychological "g factor" of general intelligence.
In narrow AI systems like Deep Blue or GPT-3, the intelligence is not in the machine but in the programmer or data scientist who designed the approach - the machine itself has no reasoning or learning capability.
The three waves of AI are rule-based symbolic systems, big data statistical systems, and now cognitive architectures - which address what intelligence actually requires: reasoning, learning, contextual interpretation, and concept formation.
A cognitive architecture must be built on a fuzzy, contextual knowledge graph that can hold contradictions and handle uncertainty - not on Boolean logic, which breaks the moment reality gets ambiguous.
Quality of data dramatically outweighs quantity for enterprise AI deployments where responses must be accurate, legally reviewable, and contextually appropriate - statistical guessing is not acceptable in customer-facing or regulated environments.
Every customer is a unique individual with unique history and requirements - genuine personalization means maintaining a part of the knowledge graph specific to each individual, not sorting people into demographic buckets.
Automation should deliver a superior customer experience, not a cheaper second-class one - 24/7 availability, zero wait time, and hyper-personalized conversation memory are things human agents structurally cannot match.
Insightful Quotes:
"Software is inherently quite dumb. If the programmer didn't think of some situation, the program would just crash or do something nonsensical. There's no common sense, no thinking. That started me on the journey of how we can build software that actually has some intelligence." - Peter Voss
"What happened to AI is that the intelligence you see in things like Deep Blue or expert systems is actually not the intelligence in the AI - it's the intelligence of the programmer. Somebody figured out: what algorithms do we need, what tricks can we use? The machine is just executing." - Peter Voss
"You really want to think about automation and conversational AI as offering a much superior experience to your customer. It's not the second class - it's actually going to be the first class experience. Twenty-four seven available, no wait time, and hyper-personalized conversation." - Peter Voss
Tune in to hear Peter Voss explain why the AGI he has spent 25 years pursuing is not science fiction but an engineering challenge - and why the path forward runs through cognitive architecture, quality data, and a unified corporate knowledge graph rather than more parameters and bigger training sets.
Links:
Thanks to our sponsors:
Podcast Transcript: AGI, Cognitive Architecture, and Why the Intelligence in AI Still Belongs to the Programmer
Transcript introduction
This transcript captures a conversation between Seth Earley, Chris Featherstone, and Peter Voss about the decades-long quest to build machines that genuinely think - tracing Peter's path from electronics engineer to ERP software entrepreneur to 25 years of AI research, exploring the three waves of AI and their fundamental limitations, making the case for cognitive architectures grounded in fuzzy knowledge graphs, and connecting those ideas to practical enterprise applications including 1-800-Flowers' unified conversational AI deployment and the case for a single corporate brain over siloed bots.
Transcript
Seth Earley: Good afternoon, good evening, good morning, depending on your time zone. Welcome to our podcast. I'm Seth Earley.
Chris Featherstone: And I'm Chris Featherstone. Good to be with you.
Seth Earley: Our guest today is a serial entrepreneur, engineer, inventor, and a pioneer in artificial intelligence. He's parlayed his passion for studying intelligence - and how it develops in humans - into the creation of a natural language engine and AI applications. Please welcome Founder, CEO, and Chief Scientist at Aigo.ai, Peter Voss.
Peter Voss: Thanks for having me.
Seth Earley: Peter, give us the thumbnail of how you got to where you are, especially your thinking around artificial general intelligence versus narrow intelligence.
Peter Voss: I started as an electronics engineer and started my own company building electronic equipment for industrial applications. Then I fell in love with software. My company changed very quickly into a software company - actually a systems hardware and software company - and I ended up designing an ERP software system for small to medium-sized businesses. That company was quite successful. We went from the garage to four hundred people and an IPO.
When I sold my interest in that company, I had the freedom to think about what big exciting project I wanted to tackle next. What struck me is that software is inherently quite dumb. I'm saying that while being quite proud of the software I wrote and that we built the business around. But still - if the programmer didn't think of some situation, the program would just crash or do something nonsensical. There's really no common sense, no thinking. That started me on the journey of how we can build software that actually has some intelligence - software that can actually think and reason and have common sense and can learn.
That's a journey I've been on for the last twenty-five years or so. I initially took five years just to study intelligence and all things related to it - starting with philosophy, epistemology, theory of knowledge, how do we know anything, what is reality. I studied how children learn and how our intelligence differs from animals, what IQ tests measure or don't measure. And of course I learned about what else had been done in the field of artificial intelligence over the last fifty to sixty years. That culminated in 2001, when I started a company to turn my ideas into code and prototypes.
Around that time I also got together with some other people interested in a similar pursuit of real artificial intelligence, and we ended up writing a book and coining the term Artificial General Intelligence - AGI - which in the last twenty years has been adopted quite widely. I coined that together with two other people. We were brainstorming the title for the book and came up with this term. The "general" has the meaning of being very broad, not narrow - but it also has that little "g" which is used in psychology to denote the general intelligence factor, or IQ. Since then I've basically been developing this through various companies and commercializing the technology, getting closer and closer to human-level intelligence.
Seth Earley: When you did your study of psychology, learning, knowledge, and epistemology, what was the big takeaway? Was there an aha moment?
Peter Voss: I think it's more of an accumulation of things - getting a better and better understanding. Initially it started with: what is consciousness, what is intelligence, what is free will, how do machines think? Over time that became clearer. But if I had to name two of the biggest insights that are not commonly known about AI and AGI:
The first is the importance of concepts and concept formation - really understanding how concepts work. It's high-level abstract concepts that are the key to human intelligence. We can create an effectively infinite number of abstract concepts - concepts of concepts - an entire hierarchy of very abstract ideas like loyalty or government, all the way down to the grounding in our direct perception of reality. Concept formation is fundamental.
The second came out of my work developing a new type of cognitive process profile - not really an IQ test but a measurement of the strengths and weaknesses of a person's cognitive processes. What I learned through that work is that there's one dimension that's particularly important: metacognition. That's your ability to consciously or subconsciously apply the right strategy to problem solving. Some problems require strict logical thinking; others really require a more intuitive approach because there's no strictly logical path. Those are two of the many elements, but those stick out.
Chris Featherstone: What led you down the route of studying animals and children, and what was the outcome?
Peter Voss: It was really important for me to understand cognition in its broadest and deepest terms. One key element of intelligence is learning - the ability to learn. One of the key puzzles when you go into this field is: what makes human intelligence special? Understanding how children learn and how animals learn - and why there's such a large difference - was fascinating. For example, chimpanzees in early development are actually ahead of human children cognitively. But then they top out. Even a chimpanzee raised within a human family simply cannot get past a certain threshold. Abstract thinking and concept formation turned out to be the answer. I've also just had an inherent interest in philosophy, ethics, and psychology - what makes humans tick. It all fitted together nicely with understanding how we can build machines that think and reason and learn the way humans do.
Chris Featherstone: At what point do we get to a machine that actually has consciousness - what is consciousness, as opposed to abstract layers of concepts that can make decisions?
Peter Voss: The quick answer is that consciousness is an essential component of higher-level intelligence. You cannot have human-level intelligence without consciousness. The quickest way I can justify that is to say: the system has to be aware of itself as a thinking, acting entity. It has to have a self-concept - it basically has to say, "I took this action. I can take this other action. By thinking about things, I may come to a different conclusion. My actions have an effect on the world." A high-level intelligence absolutely has to have that kind of self-awareness, and, roughly speaking, that is what consciousness is.
Of course the standard comeback is: how will we know whether machines are conscious? And the standard counter is: how do you know that I'm conscious and not just a machine?
Chris Featherstone: For folks listening, how do you define artificial intelligence, and how do you define artificial general intelligence - and what's the difference?
Peter Voss: When the term "artificial intelligence" was coined some sixty-plus years ago, it was really about building thinking machines that can think, learn, and reason the way humans do. That was the original vision of AI. Originally they thought they could crack this in a few years. It turned out to be much, much harder than that. So what happened over the decades is that the vision completely drifted. If we can't achieve the grand version, let's at least build systems that seem to exhibit some things we consider intelligent.
A perfect example is playing chess. In the nineties, IBM built Deep Blue, which defeated the reigning world chess champion in 1997. People generally thought: if you can play chess, that's something intelligent. But basically the vision of AI got lost over those decades. It turned into narrow AI: solving one specific problem at a time - whether it's chess, container packing optimization, traffic optimization, or expert systems for medical diagnosis.
And here is a very important point that is generally not at all appreciated: the intelligence you see in things like Deep Blue or expert systems is actually not the intelligence in the AI. It's the intelligence of the programmer, or the data scientist, or whoever designed the approach. Somebody figured out: what algorithms do we need, what tricks can we use to play a good game of chess? The machine is just executing what the programmer figured out.
Even today with big data and statistical approaches, the same thing is true. Protein folding - there isn't intelligence in the system that figured out how to do protein folding. People figured out: if we prepare the data in a certain way, and we design the neural network in a certain way, and we experiment with this and that - then we can have a machine that can do protein folding. But it's solving one problem at a time, and the problem is really solved by the programmer or data scientist. The realization of AGI - getting back to the original dream of AI - is to have a machine that can figure these things out itself. That's why learning is so important, and reasoning is so important, along with memory and concept formation.
Today's AI systems don't typically have any of those components that are actually essential for real intelligence. Almost everything done in the field of AI today is narrow AI.
Seth Earley: When you think about the history of AI - symbolic AI and knowledge representation, then statistical approaches, and now the recognition that we need to come back to knowledge representation - how do you look at that evolution?
Peter Voss: DARPA actually came up with a presentation they called the Three Waves of AI, which I find quite a useful model. The first wave refers to logic-based systems - rules-based, expert systems, symbolically driven systems. Deep Blue is a good example. That approach dominated AI for many decades, even though neural networks had also been studied; they never quite made it in comparison to symbolic approaches.
Then the second wave hit like a tsunami about ten years ago, when big data statistical approaches really started to work. Neural networks of a certain type - really, the breakthrough was deep-pocketed companies having a lot of data and a lot of computing power, getting significantly better results with neural networks than with symbolic systems. There were some minor breakthroughs in neural network architecture, but it was more a matter of this stuff suddenly starting to work when you throw a thousand or a million times more data at it than had ever been tried before. Speech recognition and image recognition beat all the benchmarks. And we're still in that era - it's become self-perpetuating, because it's been so impressively successful that virtually unlimited money has flowed into this area. Microsoft wrote a check for a billion dollars to OpenAI.
However - it's still narrow AI. There's no thinking, there's no learning. Once you build the model, the model is essentially read-only. There's optimization during training, but once you deploy it, it's essentially static. No real learning, no reasoning.
The third wave that DARPA talks about is cognitive systems - getting back to answering what intelligence actually requires: reasoning, adaptation through learning, contextual interpretation. The right way to refer to these is cognitive architectures: an architecture that inherently addresses the requirements of intelligence. And I want to be a little careful about the term "hybrid" - people are talking about combining first-wave and second-wave approaches, but just gluing them together isn't going to work. It has to be deeply integrated at every level. But more importantly, the starting point should not be "we need a bit of this and a bit of that." The starting point really needs to be: what does intelligence require? Start from a clean slate - don't start from "we have these super-powerful models from deep learning, how do we tweak them?"
Seth Earley: Talk about what a cognitive architecture actually looks like.
Peter Voss: Cognitive architectures have been around for a while, and the criticism is often "we've tried this for decades and it hasn't worked." I'd like to remind people that the same thing was said about neural networks ten years ago. They didn't work until somebody figured out how to get them to work. I think cognitive architectures are the right approach to the third wave and to getting to human-level intelligence.
Your starting point has to be: what does intelligence require? An intelligent system requires long-term memory. Human memory is quite opaque - we can't pinpoint where and how memories are stored; it's complex and inscrutable. But from an engineering point of view you'd much prefer something scrutable. So to me the starting point is: if we can have a system that's essentially a semantic network - some sort of knowledge graph - that is the foundation. Computers can have access to massive amounts of structured data and databases. Why throw that out?
But that knowledge graph cannot be based on traditional Boolean logic, because the knowledge we have about the world is often fuzzy, contextual, partly statistical. You may have seen conflicting information about the same thing - you need to be able to hold contradictions pending resolution. Maybe this is true in this situation and something else is true in that situation. Traditional logic systems can't do that. So you need a very flexible, fuzzy, contextual knowledge representation as the key component.
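To make the idea concrete, here is a minimal, hypothetical sketch of such a store: it is not Aigo.ai's implementation, just an illustration of how the same subject and predicate can carry different, even contradictory, values in different contexts, each with a fuzzy confidence score rather than a Boolean truth value. All class and field names are invented for this example.

```python
from dataclasses import dataclass

# Hypothetical contextual knowledge store: contradictory assertions coexist,
# tagged by context and confidence, pending resolution at query time.
@dataclass
class Assertion:
    subject: str
    predicate: str
    value: str
    context: str       # the situation in which this assertion applies
    confidence: float  # 0.0-1.0, a fuzzy degree of belief

class ContextualGraph:
    def __init__(self):
        self.assertions = []

    def add(self, subject, predicate, value, context, confidence=1.0):
        self.assertions.append(
            Assertion(subject, predicate, value, context, confidence))

    def query(self, subject, predicate, context=None):
        """Return matching assertions, preferring the requested context."""
        matches = [a for a in self.assertions
                   if a.subject == subject and a.predicate == predicate]
        if context is not None:
            in_ctx = [a for a in matches if a.context == context]
            if in_ctx:
                matches = in_ctx
        # Most-confident assertion first.
        return sorted(matches, key=lambda a: a.confidence, reverse=True)

g = ContextualGraph()
# Two contradictory facts about the same thing, each true in its own context:
g.add("store", "is_open", "yes", context="weekday", confidence=0.95)
g.add("store", "is_open", "no", context="holiday", confidence=0.9)

print(g.query("store", "is_open", context="holiday")[0].value)  # no
```

A Boolean system would have to discard one of the two assertions; here both are retained, and the query context decides which one applies.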
You also need short-term memory - what is the current context? Who are you talking to? What do you already know about this person? What are their goals? What do you expect them to know? And you need a reasoning system that can take into account long-term permanent knowledge and combine that with current context, short-term activation, and goals to reason and respond appropriately.
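The interplay of those two memories can be sketched in a few lines. This is a deliberately trivial illustration, not the real reasoning system: the dictionaries and the rule are invented for the example, standing in for long-term graph knowledge and short-term conversational activation.

```python
# Hypothetical sketch: a reply combines long-term knowledge about the person
# with the short-term context of the current conversation.
long_term = {"name": "Dana", "prefers": "email"}           # learned over prior sessions
short_term = {"topic": "order status", "order_id": "A17"}  # current conversation only

def respond(long_term, short_term):
    # A single hand-written rule stands in for goal-directed inference
    # over a knowledge graph.
    if short_term.get("topic") == "order status":
        return (f"Hi {long_term['name']}, checking order "
                f"{short_term['order_id']} now; I'll follow up by "
                f"{long_term['prefers']} if anything changes.")
    return "How can I help?"

print(respond(long_term, short_term))
```

Neither memory alone produces an appropriate reply: the order ID comes from the current exchange, while the name and contact preference come from accumulated history.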
Chris Featherstone: You've said that the foundations of AGI are centered in quality of data, not quantity. How does that fit into this reasoning system?
Peter Voss: This is also a very practical distinction. You have systems that have billions or trillions of parameters - systems like GPT-3 that have effectively all the knowledge on the internet and in books. But it's statistical: basically, what is the most likely thing the person will say next?
When you try to implement this in a business scenario - we provide conversational AI chat for enterprise customers - you can't have a system that has all the knowledge of the world and might respond in some totally inappropriate way. Not because it's malicious, but because that was the most common statistical response to something similar. For example, one of our clients is 1-800-Flowers. When a customer is having a conversation with our AI, you really need to make sure the information you provide is accurate and reliable - not a random statistical draw.
This is why I say quality of information is much more important for critical applications than quantity. If you want to be entertained, maybe quantity is the right measure. But if you're trying to achieve a particular business objective with precision, quality is much, much more critical - especially when you're talking about things with regulatory or compliance implications, or when you're advising a customer to make a choice or decision. And honestly, even in less regulated environments, any kind of business deployment has to pass legal review, marketing review, business rule review. Statistical guessing is not appropriate for any of that.
Seth Earley: Connect this to how organizations prepare for cognitive AI - when you think about short-term and long-term memory in practical terms, reading digital body language, purchase history, customer context. What does that practical ecosystem look like?
Peter Voss: It really all comes under the heading of context. When you're having a conversation, what do you need to establish the right context? That would include long-term memory of prior interactions - where you may have learned things about the customer that aren't in any corporate database, like their preferences or manner of speaking. Short-term memory is really just what you're learning from the current interaction.
In commercial applications, you inevitably also have back-end systems with information - product availability, customer history, customer preferences. You want to engineer a system that can rapidly and seamlessly integrate that information as required, typically through APIs. During the conversation, if you need additional relevant information, you hit an API, get the information, it becomes part of the knowledge graph either permanently or temporarily. And conclusions and actions will also be executed through APIs back to whatever back-end system you have.
One of the really important distinctions between statistical systems and genuine cognitive systems is how they handle individuality. Statistical systems tend to treat you as a demographic - putting you in a bucket, "you are this kind of customer." That's ultimately not what you want, as a customer or as a company that wants to provide high-quality service. Every individual customer is a unique individual with unique history and unique requirements, even if they overlap largely with other customers. There's a very specific context for this customer at this moment. In our architecture, there is literally a part of the knowledge graph that is unique for every individual customer.
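The two ideas above - a per-customer slice of the knowledge graph, and back-end data merged in via APIs - can be sketched together. This is a hypothetical illustration, not Aigo.ai's architecture; the class, the stub API function, and all field names are invented, with a plain function standing in for a real API call.

```python
# Hypothetical sketch: each customer owns a private slice of the knowledge
# graph, and back-end lookups are merged into it on demand.
class CustomerBrain:
    def __init__(self):
        self.per_customer = {}  # customer_id -> {fact: value}

    def graph_for(self, customer_id):
        # Create the customer's private subgraph on first contact.
        return self.per_customer.setdefault(customer_id, {})

    def learn(self, customer_id, fact, value):
        # Facts learned directly from conversation.
        self.graph_for(customer_id)[fact] = value

    def enrich_from_backend(self, customer_id, fetch):
        # 'fetch' stands in for an API call to an order or CRM system;
        # the result becomes part of this customer's subgraph.
        self.graph_for(customer_id).update(fetch(customer_id))

def fake_order_api(customer_id):
    # Stub for a back-end order system.
    return {"last_order": "tulips", "status": "shipped"}

brain = CustomerBrain()
brain.learn("cust-42", "prefers_name", "Sam")         # learned in conversation
brain.enrich_from_backend("cust-42", fake_order_api)  # pulled via API
print(brain.graph_for("cust-42")["status"])  # shipped
```

Note that conversational knowledge ("prefers_name") lives alongside back-end data in the same per-customer structure, which is what lets the system treat the person as an individual rather than a demographic bucket.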
Chris Featherstone: A lot of customers believe in the notion of one bot to rule them all, or they want multiple bots for different functions. How do you advise around that?
Peter Voss: I actually have the opposite approach to what you're describing. I think within an organization you should have one bot. We call it a chatbot with a brain - you build out this corporate brain that covers different functions: sales, service, and so on. If a customer goes from sales to service or back to sales, you really want to be able to share that information and seamlessly transition between functions. If you're at service and then say "actually, let me place a new order" - you want the same system.
This is one of the advantages of automation over humans, actually. We all know the experience with banks and insurance companies - you get to one department, they can only do one narrow function, and if you want something else it's "let me transfer you," another thirty-minute wait, and you have to tell your whole story from the beginning. With a corporate brain, a customer's context carries across all of those interactions.
With 1-800-Flowers, we started with one company, one channel, one application - "where's my order" on their chat. Once we implemented that, it became much easier to implement other channels. We now have five channels, all twelve companies in the group, and about one hundred different applications - but building on each other, sharing context. You can start a phone conversation and transition to chat seamlessly, carrying context over. An SMS exchange, an email - all handled by the same brain. I talked to a bank once that said "we've got twelve different bots and thirteen problems" - they don't talk to each other, each has a separate development path, the customer experience is very different, you change something on one bot and the others work differently. That's not a good way to go if you can avoid it.
Peter Voss: I also want to mention something that's not super obvious. There's a general belief that chatbots and IVR can help you save money - the term "containment" is really a negative one for me. You really want to think about automation and conversational AI as offering a much superior experience to your customer - not the second class, actually the first class experience. The reason is: twenty-four seven availability, no wait time, and hyper-personalized conversation.
If you're calling about your Wi-Fi not working and the first interaction is "have you tried rebooting your router," and ten minutes later you call back and get a human, they'll say "have you tried rebooting your router." Talking to an intelligent chatbot, it will have that history and say "did that work?" - and if not, suggest the next step. Half an hour later: "did moving it to the kitchen help?" You cannot get that kind of experience with a human, quite apart from training limitations and all the other constraints of live call centers. The automation should be the superior option - not the cheaper second-class option.
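The Wi-Fi example boils down to the bot remembering where each customer left off. A minimal sketch, with an invented class and a hard-coded step list, might look like this:

```python
# Hypothetical sketch: a troubleshooting flow that remembers which step a
# customer already tried, so a follow-up contact resumes instead of restarting.
STEPS = ["reboot the router",
         "move the router closer to your devices",
         "reset the router to factory settings"]

class Troubleshooter:
    def __init__(self):
        self.progress = {}  # customer_id -> index of last suggested step

    def next_step(self, customer_id):
        i = self.progress.get(customer_id, -1) + 1
        if i >= len(STEPS):
            return "escalate to a human technician"
        self.progress[customer_id] = i
        return STEPS[i]

bot = Troubleshooter()
print(bot.next_step("cust-7"))  # reboot the router
# Customer calls back half an hour later; the bot resumes, not restarts:
print(bot.next_step("cust-7"))  # move the router closer to your devices
```

A human agent on a fresh call has no such per-customer progress marker, which is exactly the structural advantage Peter describes.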
Seth Earley: Peter, we've come to the end of our time. This went by so fast. We could spend hours talking about this. Thank you so much - it's been wonderful, and I can't wait to talk to you in six months and see how things have continued to evolve. I know you're making tremendous progress.
Peter Voss: Thank you. It was fun - thank you very much.
Chris Featherstone: Thanks, Peter.