Artificial Intelligence is often misunderstood. Much like blockchain, those who fiercely stand by the technology believe it will change the world for the better, while others fear the negative repercussions it could bring and would rather see it disappear.
We’re joined by Ben Goertzel. Ben’s interest in AI and robotics dates back to his childhood, and he has made these his lifelong passion and work. He is the CEO of SingularityNET, a company building a marketplace for AIs which leverages blockchain. He is also Chief Scientist at Hanson Robotics, the company which brought us the now-famous Sophia robot. When he’s not building blockchains and robots, he leads the OpenCog open-source AI framework and is Chair of Humanity+, an organization which focuses on technology and ethics.
Topics we discussed in this episode
- Ben’s background as a mathematician and his lifelong passion for AI and robotics
- What is AI, AGI and machine learning, and how these technologies differ
- What is the killer application for AI
- The problem of data and power centralization as it relates to AI
- AI safety and what we should be most concerned about when it comes to AI dominance
- Hanson Robotics and the Sophia robot
- How blockchains and AI are relevant to each other
- The role of AI in blockchain governance and the potential for AI systems to compete amongst each other
- What is SingularityNET and what the company is building
Sebastien Couture: Alright, so we’re here with Ben Goertzel. Ben is the Founder and CEO of SingularityNET. He is also Chief Scientist at Hanson Robotics and holds a number of positions in other organizations, but we’ll get to that in this interview. Hi Ben, thanks for joining us.
Ben Goertzel: Hey, it’s a pleasure to be here.
Sebastien: So yeah, you have a very impressive resume. As I mentioned, you’re the CEO of SingularityNET and you’re also Chief Scientist at Hanson Robotics. You have a PhD in mathematics, you’ve started a whole bunch of companies in lots of different areas, and you’re involved in some nonprofits and foundations as well. So how did you get here? What does your trajectory look like, and how did you get involved in AI and robotics?
Ben: I’ve been interested in AI, robotics, life extension, nanotech, femtotech, time travel, all these things since the early 1970s, when I was a little kid reading science fiction books. Now a few decades have passed and I find myself in a world where many of these apparently science-fictional technologies are gradually becoming realities, and so it’s really, really exciting to me to actually be, every day, concretely working on building thinking machines, networking people and computers together into a global brain, and applying AI to longevity and nanotechnology. It’s astounding that we live in a time when these things are realities, and of course it’s also a bit scary and sobering at times, because these things could go badly wrong or they could go amazingly right. While I’ve been involved in a lot of different aspects of all these technologies, many of which are converging together now, I did a PhD in mathematics. But even at that time I was very interested in AI, biotechnology and a bunch of other things. I just figured mathematics, as Bitcoin says, in math we trust, right? Mathematics underlies everything, it’s the foundation of all modern science and technology. So I figured learning a bunch of math couldn’t be bad. But since shortly after getting my PhD, which was in ’89, I’ve been working on AI in various dimensions and aspects, and in the last few years that’s really taken off, along with a bunch of other technologies. And of course blockchain and cryptocurrency, which you guys know a lot about, is all part of the mix. So right now there’s an insane number of different advanced technologies for manipulating and creating different kinds of information that are all intersecting and pushing each other forward, and you could talk about these for hundreds of hours without exhausting it all.
Brian Fabian Crain: Yeah, that’s definitely true. So let’s spend a little bit of time first on the topic of AI, which is something that we’ve tangentially talked about a bunch of times, but which I guess for many outside the blockchain space is still a big, scary term that’s a little bit hard to demystify. So how do you define AI, and what’s the difference between AI and terms that people use like machine learning and deep learning?
Ben: I don’t think any of these terms are worth too much in the end. I mean AI, in what sense is it really artificial? It’s all part of nature, and to some extent these systems are evolving and emerging instead of being purely artificially created. And intelligence, we don’t even have a good definition for it among humans. There’s not an IQ test that works across different cultures or ages of people, let alone across different kinds of minds. So none of these are very rigorous terms. Machine learning, again, in essence all AI really is about machines that learn and reason and think. That term has lately come to be used to describe particular types of AI algorithms that are trained on large amounts of data, but then the term is also used more loosely. So is reinforcement learning a kind of machine learning or not? It’s not especially well defined. And deep learning, again, in cognitive science a guy named Stellan Ohlsson wrote a book on deep learning, what, 15 years ago, which encompassed neural networks, logic systems, production systems, many kinds of AI algorithms. But now the term seems to be used for what used to be called multi-layer perceptrons, multi-layer neural networks, which is really only one special kind of deep learning system in the broader sense. So remember, what deep learning originally meant was any system that does hierarchical pattern recognition, that recognizes patterns within patterns within patterns in the world and uses those to take some action. The deep learning systems being talked about mostly now are hierarchical neural networks, which is one special kind of deep learning system.
So what we have is a lot of words with confusing definitions that shift over time and don’t necessarily mean what they sound like they mean, which comes back to ‘in math we trust’, because the thing is, the algorithms are doing what they’re doing, there is a real mathematical description of them, and they carry out practical functions. But the buzzwords associated with them serve mostly to sell things rather than to convey useful information.
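The multi-layer perceptron Ben mentions can be made concrete with a tiny sketch (illustrative only; the random weights stand in for trained ones, and the layer sizes are arbitrary): each layer recognizes patterns in the output of the layer below, which is the "patterns within patterns" structure that the term deep learning originally pointed at.

```python
import numpy as np

def relu(x):
    """Elementwise nonlinearity; lets each layer form new patterns."""
    return np.maximum(0.0, x)

def mlp_forward(x, layers):
    """Pass an input through a stack of (weights, bias) layers.

    Each layer detects patterns in the patterns produced by the
    layer below it, i.e. hierarchical pattern recognition.
    """
    h = x
    for W, b in layers:
        h = relu(W @ h + b)
    return h

# Toy network: 4 inputs -> 3 hidden features -> 2 outputs.
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(3, 4)), np.zeros(3)),  # layer 1: low-level patterns
    (rng.normal(size=(2, 3)), np.zeros(2)),  # layer 2: patterns of patterns
]
out = mlp_forward(np.ones(4), layers)
print(out.shape)  # (2,)
```

In this sense a "deep" network is just this stack made many layers tall, with the weights learned from data rather than drawn at random.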
Brian: Okay, that is helpful. But then let’s speak about one term that I think you have more of a relationship to, which is AGI, or artificial general intelligence.
Ben: Yeah. Well, let me try to go to what I think are the foundations here, because it is possible to describe these things in a way that makes sense. It’s just that things become marketing buzzwords and then become confusing. So I think fundamentally you can think about a mind, an intelligent system, as something that’s recognizing patterns in itself and in the world around it. The system may have some goals, which doesn’t mean everything it does is goal-directed, but it may have some goals, and it then recognizes patterns regarding which actions will achieve which goals in which contexts, right? So you have a pattern recognition system, it has goals among other dynamics, and it’s trying to recognize patterns of how to achieve what goals in what situations. Babies do that. Babies are recognizing patterns in the world around them all the time, and they have some goals, they want to get some milk, some food, they want to run around, and they try to figure out what patterns of activity will let them achieve their goals and in what situations. Then where deep learning comes in is that the world we live in seems to be made largely of hierarchically composed patterns, where you have patterns that build up into more complex ones, and those into more complex ones still, just like physics builds into chemistry builds into biology builds into psychology builds into sociology. So we have hierarchically composed patterns, which means if you have a learning engine that is trying to recognize patterns in a hierarchy, it may well succeed, because our world seems to be built that way. Now it happens that most of the AI out there in the world now is able to recognize patterns only in a very narrowly defined context and to achieve only a very narrow set of goals. Like, say, the original AlphaGo could recognize patterns in go games and it could achieve the goal of winning a go game. Right? And that was it.
Now AlphaZero went a step beyond that, and these programs are all by Google DeepMind, which is one of the more interesting AI organizations out there. AlphaZero went beyond that because it can recognize patterns in a broader scope of environments, in many different types of board games, and it can achieve more types of goals, because the ways of winning chess or go or shogi or whatever are different, right? It’s still not nearly as general as a human being, though, because it can only play board games, whereas we can recognize patterns in a huge number of other kinds of environments. And we can achieve many, many different types of goals. We can prove math theorems, we can blow people off, we can chase girls, we can make art, we can try to save starving kids. There’s a lot of goals we can work toward in a fairly rich collection of environments. But we’re still not infinitely general intelligences. Could you imagine a mind that could recognize patterns in 407-dimensional space? We’re very bad at that, right? We’re much better at two, three or four dimensions. So we’re still somewhat restricted. We’re good at recognizing patterns in some kinds of environments and achieving some kinds of goals, better than AlphaZero or existing AI programs, but you can imagine some kind of mind that could recognize patterns in a space of any dimension, and in things that just look like noise to human beings, and that could achieve goals that humans can’t even begin to understand. So that would be totally general intelligence, which could recognize any kind of pattern in any kind of world and could figure out how to achieve any kind of goal by recognizing patterns of how to achieve that goal.
That’s probably not achievable in this physical universe, totally general intelligence, but we’re much more general than any existing AI program. Each of us can deal with a lot of different problems, and if you give us something totally new to deal with, like the internet, which didn’t exist when I was born, let alone when my DNA evolved, it didn’t exist when I went to school, but I, like everyone else, was able to adapt to deal with this new thing. Right? We don’t yet have AIs that can transfer their knowledge and adapt to deal with some very new type of thing they weren’t programmed or trained for. I think we will, but we’re not there yet. So I think now the AI field is starting to begin the transition from narrow AIs that do highly specific things, that recognize patterns and achieve goals in very specific domains, toward more general AIs that can deal with a broader scope of knowledge and a broader variety of goals, and can transfer what they’ve learned so far to very different conditions. And this will be really important. You see that with, like, self-driving cars now crashing into people because they’re seeing situations that weren’t in their training data. That’s a failure to generalize. And then in financial markets, when you have what’s called a regime change, suddenly the market’s acting totally different than it was acting before. Well, again, current quantitative financial prediction systems and risk management systems have failed to generalize. They were trained on previous market regimes, and when you give them a new market regime, they’re still acting on their previous knowledge. Now, of course, most people can’t deal with a new market regime either, but at least foundationally we do have the ability to go back to basics and deal with any radically new situation. And that’s a big challenge facing the AI field in the next phase, which I think we’re going to meet, but there’s still some research challenges there.
Sebastien: That’s interesting. I’ve never considered it that way, that humans and carbon-based beings are good at recognizing certain types of patterns. And I guess you could differentiate: humans are good at recognizing certain types of patterns and acting on them, and that might be different from, for instance, the intelligence of a dolphin or another type of carbon-based being. And then artificial or computer-based intelligence, silicon-based intelligence, might be good at recognizing patterns in, like you said, multiple hundreds of dimensions and figuring out actions based on certain patterns.
Ben: It’s kind of subtle if you think about it, because we evolved in this domain of discrete solid objects bouncing off each other and so on, and this probably led us to ideas about causation. But if you’re a dolphin in the water, you’re seeing things flow around and blend into each other. You’re not seeing solid objects bouncing off each other, and that probably leads to a quite different world view. Also, each of our minds is stuck in an individual body for our entire life until we die, barring reincarnation and other freaky things, at least to a first degree of approximation. Now if you’re an AI that can port yourself between different bodies, or occupy a hundred different bodies at a time, or, like, fork yourself and roll back to your last version before a traumatic experience, how does that change your whole outlook? What kinds of patterns you look for, what goals you bother to achieve, what risks you’re willing to take, right?
There are so many ways that we’re overfit to the exact environment we evolved in and the problems that we were trying to solve. And then the other thing to realize is we’re stuck without root access to our brains and bodies, which is pretty terrible, right? Because if you’re an AI, you can have, like, root superuser access to your own brain and body. If you think, well, I don’t like the way I react in this situation, just go in and fix the damn bug. But we can’t. If we want to fix bugs in ourselves, it’s years and years of meditation or therapy or reflection, rather than just going in and changing the rogue piece of code, right? So there are a lot of things we take for granted now, in terms of biases we have or restrictions we have, which are not really intrinsic to being an intelligent mind. They’re just particularities of how we happened to evolve out of apes in Africa. And that’s true in general. This is sort of why I like a mathematical and conceptual view of things, because how things evolved, there’s some fundamental reality to it, but there’s a lot of historical contingency. You see the same thing with exchange and money and so forth. People take so many things for granted about how economies work which aren’t necessarily intrinsic to the nature of exchanging value in a community. They’re just how things happened to evolve for a quasi-random combination of reasons.
Sebastien: So let’s talk about AI and data. This is a topic that has been brought up a lot in the conversation about AI, the fact that AI needs large quantities of data to train on, and we kind of talked about this. Today that data is very centralized and is held and owned by a small number of very large companies. Do you see this as a problem? What kinds of unintended repercussions are there, or is there a better system that you think we could achieve?
Ben: Yeah, the situation with the collection, storage and use of data regarding human beings on the planet now is really pretty ridiculous. It’s not that it’s necessarily entirely bad or malevolent, some of it’s really good and useful, but overall the ownership and control of data from the various sensors we have everywhere is centralized in a pretty bizarre way. Some of it’s good, of course. Like Google Maps, it’s pretty nice, and it’s collecting data on where everyone’s driving so you can see where there’s a traffic jam, right? These are very useful functions, and I don’t really mind anonymously sharing the location of where I’m driving with Google Maps so we can tell everyone else where there’s a traffic jam. That seems like a fair exchange. But in the end, the agency regarding use of people’s data is in a very confused state. Take this phone I carry with me everywhere. There’s a tremendous amount of data coming through this phone onto the internet, and it’s all, in a sense, data about what I’m talking about, who I’m talking to, where I am, when I’m taking pictures. But all this data coming from me through this device that I bought, and that I pay a subscription to connect to the internet each month, is going haphazardly into various databases owned by various large corporations, probably passing along to various governments along the way. This data is then being used for some useful things, like telling me when there’s a traffic jam. It’s being used to advertise things to me, which doesn’t matter to me much, since I’ve never clicked on an ad in my life. And it’s being used by big companies to make themselves money and increase their ability to manipulate people as a whole. Even if I don’t click on their ads, they’re studying me along with everyone else, learning how to manipulate the minds of other people who do click on their ads and do read their fake news.
So then you’ve got to ask, okay, this data that comes from me through this device that I’m paying to connect to the internet with, why isn’t there an easy way for me to observe what this data is being used for, and have some agency over what this data is being used for? Like, if my data is being used to fuel someone’s political campaign, I’d rather have it be used only for a candidate I agree with or something. And it’s quite within our reach technologically to put agency over the use of our data, whether for AI or for basic statistics, in the hands of the human being who produces that data. On the other hand, it’s not in accordance with the business model of the large corporations involved in the phone and the internet services behind it. It’s not in the interest of the business model of these corporations to provide that agency to the user, except insofar as government regulators force them to. But of course government regulators, even when well-intentioned, which is only a fraction of the time, can’t keep up with the advances of technology. Now, right now this is mostly an inconvenience and a sort of aesthetic and moral infelicity. But as you move from AI to AGI, if it turns out that these stores of data are critical for giving some parties a boost toward AGI more so than others, then this hoarding of data could actually have a more critical importance. And in principle, blockchain and related technologies give a way to circumvent these issues, by putting each individual’s data in some online repository, or distributed, decentralized repository, which is encrypted by their private key, and then giving that individual agency over how the data is used. And then there are fancy tools like homomorphic encryption and multi-party computation which can be used to let a person give certain aspects of their data to certain other parties to use for certain things, without giving all of it away.
In theory, the blockchain-based decentralized ecosystem provides the technical tools and the sort of cultural oomph to solve these problems. On the other hand, the centralized ecosystem underlying big data and mobile phones and computers and embedded devices has multiple trillion-dollar companies pushing things forward. So the decentralized world has the right tools, but a big challenge on its hands here.
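The multi-party computation idea Ben mentions can be illustrated with additive secret sharing, one of the simplest MPC building blocks (a toy sketch, not any production scheme; the modulus, party count, and sample values are illustrative): each user splits a private value into random shares held by different parties, and the parties can jointly compute the sum of everyone's values without any single party ever seeing an individual one.

```python
import secrets

P = 2**61 - 1  # a large prime modulus (illustrative choice)

def share(value, n_parties):
    """Split `value` into n random shares that sum to it mod P.

    Any n-1 of the shares look like uniform noise, so no single
    party learns the underlying value from the shares it holds.
    """
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(all_shares):
    """Each party sums the shares it holds; combining the partial
    sums reveals only the total, never any individual input."""
    partials = [sum(party_shares) % P for party_shares in zip(*all_shares)]
    return sum(partials) % P

readings = [42, 17, 99]                    # three users' private data points
all_shares = [share(r, 3) for r in readings]
print(aggregate(all_shares))               # 158: the sum, and nothing else
```

Homomorphic encryption pushes the same idea further, allowing computation directly on encrypted data, but the agency model is the same: the data owner decides which derived result other parties may learn.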
Brian: I’d love to speak a little bit about the concept of AI safety, and just to take a step back here, what are some of the fear scenarios? One fear scenario today is AI replacing all of these jobs, so people become unemployed, and maybe it leads to more and more accumulation of resources with a few people, extreme inequality. That’s one fear about AI. Maybe a different one is that this AI starts to have its own objectives, becomes more and more powerful, gets more and more resources, and its objectives are maybe hostile to humans, or maybe indifferent to humans. So you have these potentially bad outcomes: maybe extreme inequality, maybe human beings becoming an inferior species being exploited. And then there’s this idea of AI safety. What are your thoughts on it? Do you think this is an important field? Do you think efforts around AI safety are needed?
Ben: People are certainly right to be thinking about AI safety, and really about the impact and implications of AI for the advance of technology and the growth of humanity in general, because looking at AI separately from politics and from all the other tech connected with AI probably doesn’t make sense. So people are certainly right to be thinking and worrying about it. Now, whether the things that people will do about it will have a positive or negative impact is a different question. Bioethics is a somewhat similar thing, and in general it’s easy to agree we should be thinking somewhat about the ethics of genetic engineering and biohacking and so forth. I don’t want people to create weird artificial babies that have, like, a hypertrophied pain cortex, so they’re just suffering and screaming with a billion times the level of suffering any normal human can have. So clearly there are some things that as a society we just don’t want people to bioengineer, because they’re just plain old nasty and you’re just creating suffering. On the other hand, in practice the role of most bioethicists seems to be just to say no, genetic engineering is bad, don’t upgrade your intelligence, right? So while in theory, yes, there are things that are just morally bad to do by essentially any human standard, and we want to reflect on what to actually do and what not to do, because not all possible things should be done, in practice bioethics seems very one-sidedly inclined to just push against advancing humanity in new directions, and to push against reduction of suffering, in favor of maintenance of the status quo. I would say most people who talk a lot about AI safety are not really thinking about how to maximize the odds of a beneficial outcome for humanity all things considered. They are more thinking, how do we slow down AI development, because we don’t understand it and we’re scared about it.
So I find myself disagreeing with almost everyone who’s putting themselves out there as an AI safety pundit, but that doesn’t mean I don’t think AI safety is important. I don’t want the Terminator to be roaming the streets. I mean, I have four kids. I don’t want an AI to be turning them into fuel or something, right?
Sebastien: So with regards to Hanson Robotics, you guys have built this robot named Sophia that I’m sure most of our listeners have seen at least once on the internet because she’s had quite a few media appearances. She’s been on Jimmy Fallon and was at a bunch of different conferences. What’s the purpose of this robot?
Ben: Sophia indeed was partly created and envisioned as sort of an ambassador of AI and love and compassion, and I think that’s been interesting to see. David Hanson is a good friend of mine, I’ve known him for a long time. I came to Hong Kong, where I’m living now, in 2011, and he visited me here once, and I ended up convincing him to move his company here and introducing him to some folks who helped inject funding into his company here. So we’ve been talking about these things a long time, and I think what’s interesting is that David is really a warm, loving, good-hearted person. He wanted to create a robot that would emanate love and compassion and make people love it, so that it would build a positive relation between humans and robots proactively, even before we have human-level AGI, so that as AIs and robots get more and more generally intelligent, that positive relationship is there. On the other hand, David is an artist, and as such he can’t help himself from poking people and provoking controversy a little bit and making things a little bit creepy sometimes, just because he thinks that looks coolest. So I would say Sophia and all the Hanson robots are driven by David’s desire to build a sort of compassionate, loving bond between human and AI and robot, and also on some level driven by David’s semi-conscious artistic desire to poke at people a little bit and make them a little uncomfortable. These come together in an interesting way, and I think that’s good, because my emotional orientation is optimistic and positive. My intuition and feeling is that the technological singularity is going to come out awesome, and closer to utopic than dystopic. But I also think there’s a fundamental uncertainty to all this, so people are certainly justified to feel a little bit uncomfortable and confused. In the end, none of us knows what’s going to happen.
We’re on the verge of creating machines that are ten, twenty, a hundred billion times more intelligent and capable than we are, and we’d be idiotic to believe we could predict in detail how this is going to come out. I find this irreducible uncertainty beautiful and exciting, and I see it as what humanity has been doing since the beginning. This is why we’re less boring than cows and sheep, right? We decided not to remain monkeys, and we invented language and fire and wheels and machines and money and Bitcoin and AI and AGI. That’s the trajectory we’re on. We’re revolutionizing ourselves over and over, and we never know what’s going to happen next. That’s part of the essence of what it is to be human, and I think David, as an artist, baked some of that into Sophia along with the love and compassion, which is quite cool.
Brian: Yeah, and probably many listeners have seen the TV show Silicon Valley, where there’s a part inspired by you and Sophia, with someone meant to be you playing this role. But let’s move to blockchain now. When did you get interested in blockchain? And why did you think blockchain was relevant to the future course of AI?
Ben: I’ve been somewhat interested in crypto for a long time, since the early 90s, when I was doing math with finite fields and cryptography tech, and that seemed like it could potentially be important just politically, in terms of stopping governments from having the ability to spy on everyone’s information and keep it uniquely for themselves. Bitcoin I didn’t like, just because proof-of-work annoyed me. It just heats up the environment and wastes energy unnecessarily, so I didn’t get involved in that. When Ethereum came out, that’s the first thing where I thought, well, this is actually cool. It did use proof-of-work, but you could see there was a will and a path to going beyond that, and then you had a scripting language, which basically lets you create this secure and encrypted decentralized world computer. I thought Ethereum was a vision in the right direction, and it was a reasonable software tool set, although obviously immature at first and not that mature still. So once Ethereum came out, I started really thinking, how do we use this to create a decentralized global AI network? Because in 2001 I published a book called Creating Internet Intelligence, which envisioned a decentralized global network of AIs coming together as a society of mind.
Before that, in ’95, I posted some web pages claiming I was going to run for US president on the decentralization party platform, which I ended up not doing, because I realized in time what a terrible job it would be to be president anyway. But these ideas had been interesting me for a long time, both decentralized control politically, because I’ve always had a sort of anarcho-socialist bent, and the idea of making a decentralized global AI network, like Marvin Minsky’s society of mind, but an economy of minds, where AIs are paying each other to work and there’s collective intelligence coming out of the whole network beyond the intelligence in the parts. Ethereum seemed like a critical step toward having a tool set that would let you do this. And then as soon as Ethereum was there, you had the idea of DAOs, decentralized autonomous organizations, which again had been spelled out in science fiction, like in Charlie Stross’s book Accelerando and a bunch of others. But with the Solidity programming language, like, wow, you can script a DAO in a short script, right? That’s a similar feeling to when I first learned Java in 1995. It’s like, wow, you can create a web page or send an email with this much code, that’s power. Solidity was like that: you can create a decentralized corporation in just a little bit of code.
It’s not the perfect language, just like Java wasn’t, but it really opened the door. Once I saw how Ethereum worked, I started thinking, well, how do we put this together with, for example, OpenCog, which is my open-source AI platform aimed at general intelligence, or distributed neural networks and genetic algorithms or whatever other type of AI? It seemed clear you could use Ethereum as a basis for connecting together many different AI nodes into some sort of decentralized AI mind, and then logically this should be able to kick the asses of Google, Amazon, Tencent, IBM and all these big companies by the power of decentralized community. And then I met Simone Giacomelli, who was later to co-found SingularityNET with me. He had a blockchain development team in Italy, and he’d been helping out a host of different blockchain projects. So when I met Simone, who was really conversant with the blockchain world both technically and on the business level, we put our heads together and roughed out what became the SingularityNET design, and then started moving toward the initial token sale. And David Hanson was already a close friend. He saw the vision immediately. Our first meetings on this were in the Hanson Robotics office in Hong Kong, and David saw this as a way to get a decentralized global robot mind cloud behind his robots, because you always knew the intelligence isn’t all going to be in Sophia herself, right? Some of it is about seeing and moving, but the cognitive parts, the long-term knowledge, are going to be in the cloud. But what cloud? Do you have a million robots around the world where all the intelligence is running in Amazon’s cloud, or using, like, the Microsoft Azure API, or do you have a decentralized mind cloud that’s owned and controlled by all the people who are buying these robots, right?
So David was seeing it as a robot mind cloud, but it was really the same thing that Simone and I were seeing with a decentralized, blockchain-based AI mind.
Brian: Okay. So would it be fair to characterize this as: you see this trajectory, you see these AIs coming, but then the question is, where do those AIs coordinate? Where do they share information? What kind of substrate do they run on? And of course, if you look at it today, they would be mostly controlled by companies like Google and Facebook, whereas with something like SingularityNET there could be a kind of open, decentralized, transparent, accessible, democratic platform where AIs could coordinate, AIs could share data, AIs could evolve.
Ben: Yeah, that’s right. So, as I’ve said before, what really excited me about the SingularityNET design and vision is seeing two different goals, which are both very important, converge into one. One goal is to make a sort of venue for many different AI components to join forces to make a collective AI mind where the whole is greater than the sum of the parts. You may have one AI that uses our OpenCog algorithms to generalize and abstract and reason, you could have another AI that recognizes patterns in DNA data, another AI that uses deep neural nets to recognize patterns in visual data, and you connect them all together into a mind that self-organizes and adapts. So the AGI could be in the whole network, not in any one particular node in the network. And then the other thing is, okay, but if we’re going to have this network of AIs, who controls that network? Is it all sitting inside Google or Amazon, or is it more like the internet, which is not controlled by anyone? It’s a network of networks, which is controlled by the different participants, right? And so it seemed you could use blockchain to achieve both of these goals: to make a network of AIs that’s controlled by the participants in a sort of democratic, self-organizing and open way, and also to make it so that the design encourages the AIs to collaborate with each other and join federations with collective intelligence and so forth. And so, of course, it’s easier said than done, but we did the initial token sale for this in December 2017.
We’re launching the initial beta version of the platform at the end of February, after a simple alpha was launched in September 2018, and then during 2019, post the February launch of the beta, we’re going to add more and more features to the network, as well as adding more and more of our own AI into the network. And then the biggest part of the struggle remains in the future, because, I mean, we have a beta version of the platform, we have some nice AI we’ve put in there, but still, our competitors are trillion-dollar companies with humongous server farms. Amazon has 10,000 people working on Alexa, right? So to counteract that, we need not only a good design and smart AI, we need to attract a developer and user community which is even bigger and better than the armies of highly paid employees at these big tech companies. And this is one of the reasons why I’m happy to talk to you guys and your audience, because getting a community crystallized around decentralized AI is absolutely critical to really making the decentralized AI vision happen.
Brian: Let’s go a little in depth on SingularityNET and what that looks like. So can you speak about the different components of the system? And you mentioned developers getting involved. Let’s say there are some AI algorithms: how would an interaction with SingularityNET look?
Ben: If you have an AI algorithm, integrating it with SingularityNET is actually not especially difficult. I mean, it’s a container-based system like most cloud systems now, so you put your AI in a Docker container or LXC container and then there’s a simple API to integrate with, which lets your AI accept payment for services in our AGI cryptographic token, announce what API it wants to use to get data and queries, and then give responses in JSON or whatever format it wants. So it’s really just a system of containers, and then there’s a payment system using a token. For cases where an AI outsources work to another AI, which outsources work to another AI, there’s a multi-party escrow framework on the back end, and there’s a system that allows a lot of AI-to-AI transactions to occur off the blockchain for speed purposes, but all that’s really behind the scenes. From the point of view of an AI developer, it’s really pretty simple to put your AI in the container and take fifteen minutes to two or three hours to integrate with the SingularityNET wrappers.
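To make the container-plus-wrapper pattern Ben describes a bit more concrete, here is a minimal sketch of an AI service exposing a JSON endpoint that could run inside such a container. All names, fields and endpoints here are hypothetical illustrations, not the actual SingularityNET wrapper API, and the token-payment check is only indicated as a comment:

```python
# Minimal sketch of a containerized AI service, in the spirit of the
# container-plus-JSON-API pattern described above. Hypothetical names only;
# this is NOT the real SingularityNET wrapper API.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def my_ai_model(text):
    """Stand-in for the actual AI algorithm being offered as a service."""
    return {"sentiment": "positive" if "good" in text else "neutral"}

class AIServiceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON query sent by a client (or by another AI agent).
        length = int(self.headers.get("Content-Length", 0))
        query = json.loads(self.rfile.read(length) or b"{}")
        # In a real marketplace, token payment / escrow would be verified
        # here before the service runs; this sketch skips that step.
        result = my_ai_model(query.get("text", ""))
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

# Inside a Docker container you would bind a published port and run:
# HTTPServer(("0.0.0.0", 8080), AIServiceHandler).serve_forever()
```

The wrapper layer Ben mentions would then sit in front of a handler like this, taking care of payment and discovery, so the AI developer only writes the model function and the JSON glue.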
Brian: So I get that. I put my algorithm into a Docker container and make it accessible through SingularityNET. Let’s say, on the other hand, I’m somebody who has a bunch of data and I would love to get a better understanding of what’s actionable in it, what it means. Could I then go and hire the services of these AI algorithms to get me results?
Ben: Yeah. So the decentralized protocol could actually be used by anyone, and behind the scenes we use a component called Drizzle which allows decentralized search of any network of Ethereum nodes. So if you’re a reasonable scripter, you could just script your own query to go search the whole network and find the AI that broadcasts that it’s able to do the kind of thing you want. Now, on the other hand, we’re making it easier than that. So along with the beta we’re launching a marketplace user interface, which is a website, and you can go to that website and see what AI services are listed, what sorts of things they do, their addresses and so forth. Now in practice that’s a bit centralized, right? Because we make this web interface which lists a bunch of AIs, and we are legally liable for what we list there. So we have to do some vetting of what we allow on there, just like the Google Play Store does. On the other hand, the underlying protocol is completely decentralized and open. So for example, the SingularityNET Foundation, which is building the SingularityNET network now, is incorporated in the Netherlands. Suppose Netherlands law said we weren’t allowed to list on our user interface an AI based in Iran or North Korea or something; then we’d have to take that off our interface. On the other hand, the decentralized network is whatever it is. So someone in Iran can build another interface, which is like an interface to all the Iranian and North Korean AI nodes on the network. So this is the beauty of this architecture: you have this decentralized protocol, which is controlled by no one, and anyone can put an AI online, and it just announces itself to the other AIs in the network and can be found by decentralized peer-to-peer interaction. So that’s there, which gives a lot of robustness to it.
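The announce-and-search idea Ben describes, where each AI node advertises what it can do and anyone can script a query over the network, can be sketched as a toy registry search. The registry structure, tags and service names below are invented for illustration; they are not the actual on-chain format:

```python
# Toy sketch of decentralized service discovery: each node announces a
# description of its capabilities, and a client scripts a search over the
# announcements. Registry format and names are hypothetical illustrations.
registry = [
    {"id": "crop-vision-1", "tags": ["vision", "crop-disease"], "price_agi": 2},
    {"id": "dna-analyzer-1", "tags": ["genomics", "dna-analysis"], "price_agi": 5},
    {"id": "sentiment-bot", "tags": ["nlp", "sentiment"], "price_agi": 1},
]

def find_services(registry, tag):
    """Return services advertising a given capability, cheapest first."""
    matches = [s for s in registry if tag in s["tags"]]
    return sorted(matches, key=lambda s: s["price_agi"])

# A user (or another AI) looking for DNA analysis services:
print(find_services(registry, "dna-analysis"))
```

In the real system the registry would be read from Ethereum nodes rather than a local list, and the marketplace website Ben mentions is essentially a curated, human-friendly front end over the same kind of query.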
On the other hand, for ease of use, we’re putting up a simple website which just lists the AIs that are on there, which can then be interacted with from a customer’s view just like you’re getting AI as a service from any other directory or somebody’s website. Now, the beta still has some limitations, in the sense that in the beta we accept payment only in our AGI token, which is an ERC20 token. One of the things we’re going to do in the months after the beta is integrate a third-party fiat-to-crypto payment system, because of course most companies who want to use AI inside their website or their product are not from the crypto space; they don’t want to deal with crypto wallets and so forth at this point. But this isn’t a really big obstacle, it’s just something we haven’t done yet. It’s more a regulatory thing than a hard technical problem.
Brian: So you mentioned the AGI token. What’s the role of the token?
Ben: The token is used by AIs to pay other AIs for services they provide, but having our own token economy lets us nudge the incentive mechanisms in interesting ways. As well as using it for payment of one AI by another, we also will issue token bounties as rewards for people who contribute AIs that are requested by the community. And then we will implement, later this year, a curation market, where if I want to rate your AI as good, one way I can do that is to stake some tokens on your AI; then if your AI turns out to be rated good by a lot of other people, I’ll get some reward, whereas if your AI turns out to be horrible, I will lose some of what I staked. So having our own token is both an efficient, secure and private way to do transactions, and it lets us do things with bounties for development and staking and curation markets, which I think can sculpt and guide the economy of AIs. And this is quite important, because there’s something in AI and cognitive science called the assignment of credit problem: when you have a complex network of agents cooperating to do some function, how do you ensure that the agents deep in the bowels of the network, that indirectly helped achieve the function, are actually getting rewarded? The human brain somehow does this, right? If you do something that gets you food or sex or money or intellectual satisfaction, whatever is good, the neurons that moved your arms and legs don’t get all the reward, right? There’s reward that goes to the neurons deep in your brain that helped you get whatever those goodies were. The US economy, for example, doesn’t do so good a job of assigning credit internally, which is why bankers make so much more money than programmers or kindergarten teachers or artists.
And arguably the Bitcoin and Ethereum economies, although they’re really cool in some ways, have a strong tendency toward oligopoly and oligarchy, and they don’t necessarily do a brilliant job of assigning credit to genuine value either. So by making our own token and sculpting the reward system in it, we hope to make the economy of AIs operate better than other existing economies, so that the AIs really contributing most to the overall network, its intelligence and the value it delivers are actually getting rewarded significantly. And this is a hard problem where economic design meets cognitive science, right? These are fairly subtle things.
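The stake-to-rate mechanic Ben sketches can be illustrated with a toy settlement rule: you stake tokens on a service, and once the community rating comes in you either earn a reward or lose part of your stake. The 20% reward rate, 50% slash rate and 0.7 threshold below are made-up parameters for illustration, not SingularityNET’s actual mechanism:

```python
# Toy model of staking-based curation: rate a service by staking tokens on it.
# If the community later agrees the service is good, you earn a reward;
# otherwise part of your stake is slashed. All parameters are illustrative
# assumptions, not the real curation-market rules.
REWARD_RATE = 0.20   # payout if the community agrees the service is good
SLASH_RATE = 0.50    # portion of stake lost if it turns out to be bad

def settle_stake(stake, community_rating, threshold=0.7):
    """Return the staker's balance after the community rating comes in."""
    if community_rating >= threshold:
        return stake * (1 + REWARD_RATE)   # stake returned plus reward
    return stake * (1 - SLASH_RATE)        # stake partially slashed

# Staking 100 tokens on a service the community later rates 0.9:
print(settle_stake(100, 0.9))   # -> 120.0
# Staking 100 tokens on one the community rates 0.3:
print(settle_stake(100, 0.3))   # -> 50.0
```

The point of the asymmetry is exactly the assignment-of-credit nudge Ben describes: honest raters who surface genuinely good services profit, while rubber-stamping bad ones is costly.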
Sebastien: I want to ask you, I’m not sure if you’re familiar with the blockchain Truebit. It’s a distributed computation blockchain, and distributed computation systems have been around for years; more recently people have embedded them in blockchain systems so that you can have this reward mechanism. With Truebit, you have these actors on the network who can potentially validate or verify the computations, and so there’s an incentive for those providing the computations to provide correct ones, because there’s a possibility of getting slashed if they don’t. Now, this is for computations that are somewhat trivial to achieve even with general-purpose computers. But with AI, if I send some task to an AI and as a result it returns some sort of data set, how can I as a user, or even other users of the network, verify that? And also, I think with general intelligence becoming closer to reality, AIs could have their own kind of subjective bias, which may differ between one AI and another. How do you test for that, and how do you verify that the result the AI is providing is actually accurate?
Ben: Yeah, I mean, there’s clearly no general solution to that problem, just as there isn’t among humans, because the AIs are going to be doing so many different types of things. You could have an AI that’s proving math theorems, or coming up with science hypotheses to help with biomedicine, or predicting the stock market or something, right? So then it’s like, is your stock prediction AI giving subtly biased predictions that it’s then using to make money by trading in the background against what you traded, or something? There’s a lot of subtleties that could come up, and they’re going to be different for the different kinds of AI that you’re doing. So I think that if you’re doing a specific type of computation for a specific type of problem, then you could come up with a formalistic solution for this. If you have an AI that’s generating programs according to specs, you can do some formal software verification to see that the software actually performs according to spec, and if you have an AI that’s analyzing DNA data, and you have your own human DNA data, you can do out-of-sample testing on that data to see if it’s valid. But there’s really going to be no general-purpose solution, and SingularityNET is really a general-purpose network. So we’ve put a bunch of work into designing a reputation and rating system which is sophisticated and hard to game. This is not part of the beta, but it will be rolled out later in 2019. I think that’s been like a holy grail for every online marketplace, and we really need to get it right, because in the end, verification that things are accurate, unbiased, or not too biased or inappropriately biased, I mean, this is really hard, it’s domain-specific, and ultimately each person isn’t doing that on their own.
Sebastien: So if you’re delegating tasks to an AI, is the idea that verification itself gets delegated to……
Ben: If I want to verify that someone’s AI for analyzing DNA data is accurate, not many of us are going to write the code, or even run the code, for that ourselves. We’re going to go to some service that does that, and then which service do we trust? Is it the SingularityNET Foundation certified service? Well, then that’s like a centralized elite. Or do you have a variety of competing services out there and you choose one? But then it comes down to reputation systems again, because then you’re choosing the one that you think has the highest reputation, maybe because it comes from Harvard University or from the NSA, or whoever you trust, right? So ultimately, even when there’s a formal mathematical solution, you’re placing trust in someone. If something is simple and generic enough, you could bake verification into the protocol; I mean, that is done with cryptographic checking. But I think checking whether an AI is correct or not is just not going to be that simple. It’s going to be a variety of different algorithms for checking different types of problems in different domains, and then you need a reputation system to be able to know which verification checker to trust, and then people will try to game that reputation system by giving high ratings to bogus verification checkers, right? So you need machine-learning-based reputation police just to try to stomp out people gaming the reputation system, and then you have to believe the machine-learning-based reputation police themselves aren’t corrupt. So this is the world that we’re in, but on the other hand, the real-world economy isn’t all that clean and safe either. Which major government is not corrupt in some serious way? So the AI and blockchain economy is not creating this problem. This is a problem of human beings being assholes, and it manifests itself in everything that human beings do.
Brian: Okay, so this is great, and I think this ties into another thing that I really look forward to addressing here. When you spoke about the vision and mission of SingularityNET in a different interview I heard, you mentioned that SingularityNET has two objectives: first, the objective of maximizing intelligence, and second, the objective of pursuing the maximum benefit for all beings. We spoke a little bit now about how you evaluate an AI and how you can check that what it’s doing is correct and in your interest, and I understand the concept of creating this efficient marketplace for AIs, so that as a normal small business owner I can use AI and maybe have something almost as good as Google, or maybe something better than Google, down the line. But how can you make sure that this system is going to end up being a system that pursues this benefit for all beings and that embodies this value?
Ben: Well, we can’t make sure of anything, and I would say, if we don’t create SingularityNET, if I decide to go do something more relaxing with my life instead, then how do you know for sure that Vladimir Putin, Donald Trump, Google, IBM, Tencent, all the companies out there, how are you sure that those guys are going to create an AI which is for broad human benefit? And if everyone stops making AI, how do you guarantee that no one’s going to send synthetic viruses out there to poison everyone to death, right? Or that the proliferation of nuclear material in Eastern Europe isn’t going to be used to blow everybody up? So I think we’re not at a point in human history where there’s a great amount of certainty. There’s probably even more uncertainty than in the past, and there’s always been a lot. But really the question to ask is: on average, are we better off creating a decentralized, benefit-oriented AI platform like SingularityNET, or are we better off not having that there and having all the other shit going on in the world? Right? I mean, that’s the question to ask.
Brian: So I think that’s a fair point, but that’s just sort of rephrasing the question a little bit. So then I guess my question is: what are you doing to make sure this objective and this value are embodied in the platform?
Ben: Yeah. I mean, there are two parts to that. One part is in the tokenomics of the SingularityNET ecosystem. The other part is in the AIs that the SingularityNET Foundation itself is building and putting into the network. So in terms of the tokenomics, there are the curation markets and an intelligent reputation system, which are designed so that at least the agents contributing value to the network get rewarded proportionally to that, instead of having game-theoretic dynamics where a few agents accumulate all the wealth, which is what seems to be happening in Bitcoin and Ethereum, and is what happens in most conventional economies also. And then on top of that, a certain percentage of the tokens that were initially minted are earmarked to be spent on benefit tasks as decided by the community, which can be things like healthcare, education, medicine and so forth. So there’s at least that nudge put in there to have a certain percentage of the tokens spent on things that are considered of broad benefit. In the end, this is much like a government does when it spends some percentage of its wealth on social welfare, right? It’s just that most projects don’t wire that into their economic operation. But then the AIs that we are putting into the network ourselves are largely benefit-oriented. So with the Sophia robot, which we talked about, one thing we’ve been doing is using Sophia as a therapist and meditation assistant. That’s not solving all the problems of the world, but it’s different than the Terminator, right? It’s using a robot to just kind of help people expand their consciousness. And we’re working on applying AI, using the OpenCog framework and wrapped in SingularityNET, to analyze the DNA data of people living to a hundred and five years or over, to figure out what makes them live so long and to try to figure out how to extend other people’s lives.
We’re analyzing images of plants from China and Africa to try to diagnose the spread of crop disease in its early stages, using deep neural nets for image processing. So, of course, each of these things is a drop in the bucket regarding what we need to do to massively improve the state of humanity, but the hope is that by injecting these things into the network at an early stage, you’re impacting the culture of the community, because ultimately this is about the community that you build around SingularityNET. So that works through curation rewards and benefit tokens and having a bunch of positive, beneficial stuff happening in the network. And then our largest development office is in Addis Ababa, Ethiopia, where we have twenty-something developers working on SingularityNET, so we’re trying to actively pull people from the developing world into development and use of the network. So hopefully all these things will be nudging the community in a positive direction, which is really going to be the most important thing, because if we’re successful with this, then five years from now the work done by SingularityNET Foundation will be a relatively modest percentage of all the work being done to build the protocol and the network, and the AIs in the network will be mostly contributed by other random people, not by people paid by SingularityNET Foundation. But we are seeding this community and we’re seeding this culture. You can see that in Linux: Linus Torvalds and Richard Stallman and their friends from the old days write a very small percentage of the code in Linux right now, but the culture of Linux is what it is because of how they started it. So we want to get beneficial motives and love and compassion and inclusiveness into the cultural DNA of the SingularityNET community, and then it will continue to be there in the code also. And this is a bit soft and fuzzy.
It’s not like a mathematical guarantee of beneficial activity, but I think that’s how things actually have to work, because in the end it’s about the community of human beings who are going to be developing this going forward.
Sebastien: Well, that’s really fascinating. And I think the fact that you guys have actual people in Ethiopia working on problems in Ethiopia is really great, and far removed from what a lot of people in the blockchain space are doing.
Ben: Cardano is running a year-long education program where they’re teaching a hundred young Ethiopian programmers Haskell.
Sebastien: That’s cool.
Ben: The programming language. So, well, I’ve had this office in Ethiopia since 2014, I guess, doing AI outsourcing before we shifted them to SingularityNET, but now Cardano’s moved in there, and there are a lot of tech projects throughout various African tech hubs now. So there are those powerful forces of centralization and wealth concentration, but there’s also the opposite, that peer-to-peer and positive globalization happening. So it’s a very interesting time, when these two different forces are both surging forward in powerful ways.
Sebastien: Cool. So before we wrap up, I did want to ask you one last question, and we kind of touched on this earlier when you were talking about AIs making predictions. Let’s imagine a future where a lot of the economy runs on these blockchain systems, so you have powerful markets that exist exclusively on blockchains, and organizations and companies are interacting with these markets, doing business on them, and these markets are run by DAOs. So there are governance mechanisms in place which allow the companies that are themselves operating on these markets to also participate in the governance through staking. The companies that use the markets also have stake in the markets, and they can participate in governance decisions for updates and things like this. Now, it seems like there would be an incentive at this point, even for something like prediction markets, for these companies that have stake to essentially delegate their stake to an AI, because the AI is going to make much better decisions on what types of governance proposals they should be making in order to maximize the network itself and also maximize their profits long-term. So it seems like there would be a kind of Nash equilibrium here, where at some point, if one company starts using an AI to manage its governance or to make predictions, other companies start using AIs to make predictions, and then when everybody is making predictions or making governance decisions with an AI, as we move closer to general intelligence, you just have AIs competing with AIs. And I guess that also extends…..
Ben: But who is in charge of the world now? Nobody is in charge, right? Which in some ways is good when you have presidents like Donald Trump out there. It’s good that it’s a collective, self-organized dynamic that’s in charge rather than any one person. And who’s in charge of Bitcoin and Ethereum? We don’t actually know, but it’s clear control is concentrated in a small number of individuals and investment groups. So yeah, I think in the long term it’s inevitable that if AIs are a thousand times more intelligent than human beings, and have molecular nanotechnology and so forth, they’re going to have more physical power than humans. It doesn’t mean they’re going to control every little aspect of what humans do in their lives, but they’re going to have more oomph than we do. So in the long term, which may just be decades from now, we’re going to have two choices. One is you wire into the network and become one with a superintelligent global brain, even if that means giving up many aspects of your legacy humanity. Or else you live in the people preserve, like the squirrels in a national park. The squirrels in the park can fight over girlfriends and hunt for food and play and have fun, and people are not trying to regulate every aspect of their little squirrel existence. On the other hand, if they run out of the park, they might get rolled over by a truck. So I think if you have a superhuman AI that’s tremendously more intelligent than us, either you join it, or you remain living a happy human life, hopefully with a lot of abundance provided for you, but in the end there’s something much more powerful than you that does have some regulatory control when it needs it, which could be good also, right?
Like if the squirrels die of some plague, we’ll come in to give them antibiotics. In the same way, a super AI that loved us would let us go about our business, but if human society went too far awry, it might come in and fix things. I mean, that’s long-term. It’s upload to the global brain or live in the people preserve, right?
But in the medium term it’s going to be really, really complicated, and as you say, there’s going to be a gradual transition from human decision-making to AI decision-making. But given how profoundly fucked so much of our political and corporate ecosystem is now, I see that as a great opportunity to improve things. I mean, if the AI is written right, it’s going to do a lot better than the individual humans and institutions that are controlling things now. So then it really comes down to creating the AIs that are going to be the decision support systems for the people controlling most of the world’s resources.
Sebastien: I’m glad that I can always go back to the human nature reserve wherever that is so that I can chase whatever thing you would chase these days without any encumbrance.
Ben: Yeah, we’re setting aside a region in Southern Ethiopia for this purpose. So yeah, I’ll show it to you sometime.
Sebastien: So let’s wrap up. I just want to ask you: how can people get involved in SingularityNET, and where can they learn more?
Ben: Yeah, absolutely. The center of it all is the website, singularitynet.io, and there you can find information on how to download and play with the beta if you’re a developer. We have a blog which has updates on the research pretty frequently, and we have a Telegram discussion group which has some percentage of interesting things on it, among other things. So there are lots of ways to get involved with the community. And, as well as doing some actual work, I’m still going around and speaking at various conferences, so I can meet some of you guys listening out there. In the middle of March we’re having the TOKEN2049 conference here in Hong Kong, so if anyone’s there, we can hang out. But in the end, while we’re talking about building superhuman AI, getting there is all about the human community. So we need people to be involved in many, many different ways. Join our communities online, and we’re happy to talk to you about what you can do to help out.
Sebastien: Great, thank you so much for joining us. It was a real pleasure talking and diving deep into this; it’s a really fascinating topic that we don’t always get much of a chance to discuss here on the podcast. Having you on was good fun.