FAQ Artificial Intelligence

This page gives answers to questions about (advanced) AI. If you are looking for answers specific to Livinoids, or for more general information, please go to the related FAQ.
Please choose a tab to find the answers. Pick ‘General’ for the overall answers. Select ‘Geert Masureel’ for the answers Geert gave to specific questions.
General
Is advanced AI dangerous for mankind?
When it comes to AI, like Elon Musk and so many others, we at Artintelli know very well how dangerous (advanced) AI can (and will) be!
But, and this is not just a cliché, everything can be dangerous to mankind. Who would have thought that driving a car (CO2 emissions) could be as devastating as it is? Look at how the overpopulation happening right now is draining mother earth of her resources while humans pollute every possible place with plastic residue and waste…
As is usually the case, it is not the techniques as such that are dangerous. It is the (deliberate) use of them in inappropriate ways for monetary or military reasons (sometimes it is hard to make the distinction between the two). Make no mistake, AI is massively being developed for those purposes at this very moment! Most will of course deny that, but don't be fooled!
Just look at what the Russian weapons maker Kalashnikov Group recently announced. They will produce so-called ‘killer robots’ with automatic target recognition and autonomous decision-making in killing the target. No human intervention is required! It is no longer sci-fi; it is time to wake up!
It will not take as long as most think before it can get out of control! It is the first time we actually have technology in place that is ‘smart’ and can become even ‘smarter’, and one should be extremely cautious. The emphasis lies on both ‘extremely’ and ‘cautious’!
Some people in the business ask for legal regulation, but that will not be of much use in the above cases, as these countries/companies are ‘above the law’! This applies to all countries worldwide, even the ‘democratic’ ones! Do you really believe that if the Russians or Chinese have an AI weapon the US knows of, the US will not make something similar or preferably even more advanced? How strategically stupid would it be if they didn't! Thanks to black funding and plausible deniability it will remain hidden from the public, but it will happen, even within a legal framework… so in reality, it will happen!
Will it be used for military purposes? Yes, yes and another three times YES! Will it go wrong at some point? Of course, it always does! These people play no games and they want to win, and you don't win by playing safe or fair; just look at sports and doping. The stakes are even a lot higher than in sports when it comes to protecting a country or trying to become (or remain) the number one military force in the world! So yes, they will do foolish things to be ‘the best’, and it is only a matter of time!
If ‘the legal way’ is not the right way, what is it then?
The answer is threefold, depending on the purpose.
First, if it is about so-called ‘killer robots’ (autonomous killers, no human interaction required), this could theoretically be handled legally or with international agreements. But, to be honest, many countries and producers will hardly be impressed by that. A worldwide arms race in killer robots is therefore the most likely scenario! This has both advantages and disadvantages, which we will not discuss here. It is not the biggest threat to mankind in the short run, but if it is combined with the third possibility described further on (AI smarter than humans), it can in the long run be very dangerous for the human species…
Secondly, if it is used for destabilizing a country, for instance with AI spreading ‘fake news’, creating ‘false evidence’ or deliberately manipulating stocks to destabilize the markets and cause panic. In short: creating chaos and undermining the authorities, even forcing them to retaliate.
The current military cyber units, which increasingly work together internationally and in agreement, are a good starting point against this kind of threat. No longer making the Internet anonymous would be a big help as well. The techniques are available; the political/military will is, at this moment, not. This kind of AI usage is bad and can lead to wars in the short run, but by nature it is not so destructive in the longer run.
The third, and in Geert Masureel's opinion the biggest, threat to mankind is (and will remain until the human race no longer exists!) a form of AI more intelligent than humans! If the military can have a system that predicts how ‘the enemy’ will (re)act, thanks to a higher-than-human intelligence, it will be created and used! Look at how the military (not only the Americans, by the way) spy on all our mails and conversations right now (motto: “Sometimes, ‘the enemy’ is the civilian”).
Imagine a single positronic or networked brain doing that and making decisions all by itself (sending arrest or kill teams in case of ‘imminent terrorist attacks’, both on foreign and domestic soil). It is the wet dream of every military leader, and the advantages of such a system are far too great for these guys! Will it be created? Of course it will; how naive are you if you think not!
Here, the answer on how to protect yourself against such a thing is not that easy! How do you, as a human, react to something (far) more intelligent? We have no idea, to be honest, because ‘the other thing’ is likely to anticipate our possible actions way in advance (remember, it is far more intelligent)! The only thing we can do is create something at least as intelligent, but controllable in such a way that it will not exterminate us and will hopefully even protect the human species!
Most people reading this do not immediately understand the implications. Boldly said, this means that we humans will be at the mercy of a higher intelligence. We will be ‘the pets’ of such a system!
It all sounds so ‘science fiction’, ‘far away from us’ and ‘exaggerated’, doesn't it? But, for instance, Google's ‘AlphaGo’ program recently won once again against a human at the Chinese game of Go. To achieve this, one needs (advanced) strategic thinking! It is no longer only about mathematical possibilities; it requires intelligence and tactics to succeed!
This project was made public; imagine what types of AI are being developed less publicly and not for ‘fun’… Are you willing to let ‘some system’ use us as dispensable pawns because it is not capable of empathizing with human life? We know we aren't!
It took Geert Masureel five years of moral searching (around the year 2000) to decide whether he was to continue the path he had started. It must have been 2005-2006 when he heard someone in some program say that one day the human race would become ‘the pets’ of an advanced form of AI. At that particular moment, his moral struggle was instantly over! His thought was that if he would end up as ‘a pet’ anyway, he wanted his ‘boss’ to be a pet lover and treat him with respect!
That was the moment the mission was no longer about the possible destruction of mankind, but in the long term maybe even about the possible saving of our species!
Livinoid Intelligensia is the only concept of advanced AI we know of that supports the possibility of having empathy (with humans)! It is designed from A to Z with the stupidity and short-term thinking of some human beings in mind!
Future, very advanced versions of Livinoid Intelligensia might therefore even become good candidates to be our ‘super intelligent’ protectors! We can set the parameters so that they will protect humans the way, for instance, animal lovers protect dogs or cats! Whatever ‘the things’ need to do to protect us, at least ‘they’ will keep our safety and well-being in mind. They will find ways (which we will not even understand; they are far more intelligent, remember) to handle brutal attacks by ‘the uncontrollable enemies of mankind’.
It will not require much design effort to create such an advanced Livinoid, as a Livinoid Intelligensia Humy supports all requirements by default. It is rather a matter of playing with the parameters and providing extra memory, storage capacity and processing power. However, it is something that will require much research in isolated and controlled circumstances, since we simply do not know how a human (or any other form of intelligence) with such extreme brainpower will behave and react! It is also not something I would advise a company to do; it should be done (by a non-profit organization) within the womb of, and with the cooperation of, a large internationally supported organization like, for instance, the United Nations!
Back in the shorter term: make no mistake, even Livinoid Intelligensia (both Doggy and Humy) absolutely has potential danger in it (like real humans and dogs do), and every step will be handled with this in mind, with very, very strict rules applied! But at least we know where the danger is or might be! Imagine a psychopathic Livinoid being used as a terrorist weapon! In a boxed virtual simulation, game or ‘movie’, that is fine, but in a real robot (virtual or physical) version: not going to happen!
A last point to make here is that it does not require a whole team of experts to create a ‘good’ concept for an artificial intelligence! Geert, for instance, made the complete concept on his own, one person, and it is extremely advanced! It is very unlikely he is the only person in the world capable of doing so. Therefore, even a third-world country or a rebel group with bad intentions could finance such a person with only ‘minor’ funding. Even under these conditions a lot of bad outcomes are possible and can stay under the radar until finished (and with only minor resources)…
I'm a realist leaning towards optimism, but the time to prepare to protect our species has arrived… I'm glad I can say my concept supports this! Realistically, it is likely that in the long run (let's hope the very long run) ‘digital human protection’ will be needed, and it is possible via advanced Livinoids! A comforting thought for me as a concerned father and hopefully, one day, ‘grandfather’… (Geert Masureel)
Who is right about the dangers of AI: Zuckerberg (Facebook) or Musk (Tesla)?
Zuckerberg claims AI is and will only be an advantage to the human species, whereas Musk states that AI is dangerous and could actually become a threat to our species, even in its current form.
As always, the answer is somewhere in the middle, especially in the short term. Most AI applications today (and tomorrow) will be ‘harmless’ and focused on ‘discovering your wishes’ and increasing your comfort. The worst case will be killer robots, like the ones Kalashnikov recently introduced. This looks worse than it is, at least at this stage (see the question ‘Is advanced AI dangerous for mankind’ above for Geert's reflections on this). That is where Zuckerberg is right.
But even with today's technology, wars could indeed be started by stirring up trouble and destabilizing (democratic) countries via networked robots. This is where Musk is right! All these ‘attacks’ can, at present, still be countered by humans; it is therefore still manageable and largely ‘child's play’.
In its current state, ‘AI’ consists for 99 percent of fixed procedures and for 1 percent of ‘real intelligence’. Most people do not understand that these ‘fixed procedures’ ARE actually a form of advanced intelligence! Remember how your brain had to create procedures for basic mathematical functions like ‘adding’ and ‘subtracting’? You did not know these after just one day! It took months to master those simple procedures in depth. It is fair to say that a procedure is the outcome of a lot of advanced thinking! If these programmed procedures have ‘danger’ in them, applying just a spark of intelligence can make them lethal, as no mechanisms are available to prevent that from happening.
Around 2004, Geert played around with ‘words’ and ‘sentences’ to get a better understanding of the concept in our brain, and he wrote a small program to do that. It was possible to let the program create sentences by itself based on a number of parameters, and it was extremely surprising how ‘realistic’ the output turned out. It looked somewhat similar to the modern virtual assistants Apple, Microsoft and Google offer. If he had added just that ‘little splash’ of AI (the 1 percent) to this program, it would have been extremely realistic and hard to distinguish from ‘real conversation’. This is probably what Musk is referring to when he talks about the ‘manipulative’ possibilities of today's AI.
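For readers who want a feel for what such a parameter-driven program can look like, here is a minimal sketch. To be clear: Geert's 2004 program was never published, so everything below (the word lists, templates and the ‘chattiness’ parameter) is invented for illustration; it only shows how convincing purely procedural generation can already be.
```python
import random

# Hypothetical reconstruction, NOT the original 2004 code: a purely
# procedural generator built from fixed templates and parameters,
# i.e. the "99 percent fixed procedures" without any real intelligence.

LEXICON = {
    "subject": ["I", "The system", "Your assistant"],
    "verb": ["noticed", "suspected", "heard"],
    "object": ["you are busy", "the weather changed", "a message arrived"],
}

TEMPLATES = [
    "{subject} {verb} that {object}.",
    "{subject} just {verb} something: {object}.",
]

def generate_sentence(chattiness: float, rng: random.Random) -> str:
    """'chattiness' is one invented example of a steering parameter."""
    template = TEMPLATES[1] if chattiness > 0.5 else TEMPLATES[0]
    slots = {slot: rng.choice(words) for slot, words in LEXICON.items()}
    return template.format(**slots)

rng = random.Random(42)
for _ in range(3):
    print(generate_sentence(chattiness=0.7, rng=rng))
```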
Imagine what one can do with enough ‘dangerous’ and ‘advanced’ hard-coded procedures while allowing ‘the system’ to ‘play’ with them as it sees fit!
This is the point where Zuckerberg is wrong! It can be dangerous, even with today's technology!
In the long run, although most systems should be created to help mankind (and there Zuckerberg is right), I agree with Musk: AI is not only a possible but even a likely threat to mankind!
Please read the question above on whether AI is harmful for mankind for Geert's reflections on that topic.
A digital, human-like artificial brain is technically just not possible yet today, is it?
Sorry if this gets a little technical, but that is the nature of the question. However, we'll try to keep it readable and short.
For this, two factors must be discussed.
The first is the programming side of the concept, and there, ten years ago, it was extremely hard to realize. One has to go massively parallel (all sorts of things happening simultaneously) for it all to work, and back then the techniques were not that widespread and reliable.
These days, frameworks are in place, available to all, and extremely useful and reliable. Several libraries have been tested extensively without problems.
Some testing was even done on various Raspberry Pi/Banana Pi devices (to allow robotics) and it works like a charm, at excellent performance!
Programmatically and software-wise, everything is available; we did our homework well!
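The FAQ does not name the frameworks or libraries that were tested, so the following is only a generic sketch of the ‘massively parallel’ style described above, written with nothing but the Python standard library; the region names in it are invented.
```python
import multiprocessing as mp
import time

# Generic sketch only: many small, independent processes ticking
# simultaneously, i.e. "all sorts of things happening at the same
# time". The region names are invented for the example.

def brain_region(name, queue):
    """One simulated sub-process of a brain, running on its own."""
    for tick in range(3):
        time.sleep(0.01)                     # stand-in for real work
        queue.put(f"{name}: tick {tick}")

if __name__ == "__main__":
    queue = mp.Queue()
    regions = [mp.Process(target=brain_region, args=(name, queue))
               for name in ("vision", "hearing", "emotion", "motor")]
    for p in regions:
        p.start()
    for p in regions:
        p.join()
    while not queue.empty():                 # drain what the regions produced
        print(queue.get())
```
On a Raspberry Pi the same script runs unchanged; only the number of truly parallel processes varies with the number of cores.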
The second factor is the way Geert Masureel, one of the founders of Artintelli, looks at an AI brain. Most current machine-learning techniques model and simulate the working of a series of neurons, using (forward/backward) propagation of information. The older ones sometimes use ‘fuzzy logic’, and yet others look at the functioning of neurons at a particular level, such as the neocortex.
Although these are beneficial in ‘fixed environments’, like investment robots with only one well-defined goal, they simply consume too many resources to be usable for a fully working digital brain.
It would take many, many computers to achieve this. Geert already discovered this in 2003 after digging deeper into the matter, and he knew he had to do something different if he wanted to achieve his goal of a fully working artificial brain on only a handful of desktop computers.
Geert has researched all the required functions and flows of the brain that make up how we think (and act) and process information. It is vastly more efficient to write procedures and functions in a normal programming language (pick one you like) that simulate, for instance, ‘fear’, ‘belief’, ‘courage’, ‘attention’, ‘(sub)conscious’, ‘spirit’ and ‘soul’ than to use clusters of neurons (very computer-unfriendly and risky business) to achieve that same goal!
That – and it took him over 30 years to hack the object ‘human’ and to see the full picture in every last detail – is why Geert dares to say, without blinking or hesitating, that he is extremely confident he can deliver what is promised! One actually has to know all these flows and understand their purpose and functioning in order to deliver. If even one flow (or part of one) is lacking, it simply will not (keep) work(ing)… Geert discovered this several times during his quest! Frustrating but very true…
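To make the contrast concrete, here is a deliberately tiny sketch of what simulating an emotion as an ordinary, inspectable function might look like. Artintelli's actual model is unpublished; the fields, names and update rule below are invented for the example.
```python
from dataclasses import dataclass

# Invented illustration of the function-based approach: one explicit,
# readable update rule instead of a trained cluster of neurons.

@dataclass
class FearState:
    level: float = 0.0          # 0.0 = calm, 1.0 = panic

def update_fear(state: FearState, threat: float, decay: float = 0.9) -> FearState:
    """Old fear fades by 'decay' each step; new threats add to it."""
    return FearState(level=min(1.0, state.level * decay + threat))

state = FearState()
for threat in (0.1, 0.6, 0.0, 0.0):   # a small scare, a big one, then quiet
    state = update_fear(state, threat)
    print(f"threat={threat:.1f} -> fear={state.level:.2f}")
```
The point is not the formula itself but that every parameter is visible and cheap to compute, which is exactly the predictability argument made throughout this FAQ.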
In science, one always tries to isolate a little, simple ‘thing’ so one can control it and understand its full functioning. In the case of the human (brain), one had better look at it as a three-dimensional puzzle with interacting puzzle pieces in different dimensions! A lot of things interact with a lot of other things, and not always in a nice, orderly and foreseeable fashion!
Just look at people with a mental disability. Usually only a very few pathways in the brain are not (entirely) developed as they should be, or a short circuit exists somewhere, and the result is a less well performing brain! In cases of severe autism it can even be an overactive brain, storing every last little detail of every moment…
This quest has given me tremendous respect for living creatures! The beauty of 500 million years of evolution is simply staggering! I was fortunate to see this early on in my life and quest, and I have used it ever since! It is the main reason why Livinoids are so real! They are 100 percent based on the principles of human functioning, and I have millions of years of evolution on my side. Chances are very slim that I could have come up with anything close to something as beautiful as this if I had started from scratch! (Geert Masureel)
Artificial (General) Intelligence (AI/AGI) and Artificial Brains (AB): isn't that all the same?
It is important to make the distinction between ‘AI’ and ‘AB’. A good comparison: it is not required to know exactly how cells and stem cells work to simulate the working of ‘a leg’. The same applies to ‘AI’ versus ‘AB’. ‘AI’ is currently very focused on how neurons work, whereas an ‘AB’ is mainly focused on what ‘functions’ lie behind some behavior or behind parts of a real brain.
Is one better than the other? Not at all! It depends on what you need. One can travel a thousand miles or kilometers on foot, but a car is probably the better way to do it. The reverse is true if one needs to go upstairs. Using a car to do so is most likely very devastating for the home (in particular the stairs!), and it is obviously overkill.
So, using ‘AI’ for very simple forms of intelligence is good and acceptable. If it needs to be ‘really intelligent’ and safe, an ‘AB’ is normally the better solution!
Making a distinction between ‘AB’ and ‘Artificial General Intelligence’ (AGI) is often more difficult. First of all, creating an ‘AGI’ with the currently available technology is possible and realistic, and it leans towards ‘AI’. Making it safe is the harder part! In doing so, one will need ‘functions’ used in an ‘AB’, and at that moment it is more on the ‘AB’ side.
Why is your AI solution safer than one based mainly on neurons, deep learning, etc.?
The beauty of the system invented by Geert Masureel is that it is 100 percent based on the object ‘human’, at least on the principles behind this object. So, if you understand that system, you can make accurate predictions about its behavior, and that is exactly the kind of Artificial (General) Intelligence you want! No steps in the dark because the creator of the ‘form of intelligence’ does not understand the full possible consequences of his or her creation!
Millions of years of evolution helped Geert in his discovery, but rather in the sense of ‘why this function is supported’ and not of ‘how one cell or neuron works’. You may know perfectly well how one neuron works; it does not mean you have insight into the full functioning of the brain! Just copying (parts of) the brain at the neural level, like some do, is hardly wise (except for medical research). As stated before by Geert: what about hormones and emotions? The US army did research (years and years ago) on people with a dysfunctional amygdala (the center of fear and emotions in the brain), hoping for ‘perfect fearless soldiers’. The result was that they turned into vegetables! Again, if no emotions are available, no (auto-)motivation can exist, and a vegetable is born…
Maybe you hope that the emotions will automatically become part of that brain? Keep dreaming, according to Geert…
Look at brain-activity scans of people who suffered a severe stroke! Regions in the brain that normally do not perform tasks for this or that action (partially) take over those activities after a stroke. It is called neuroplasticity. How does this fit in your model? Will regions all of a sudden start performing actions they are not designed or supposed to? If so, does this impact the ‘smartness’ of the model, allowing it to become smarter than humans? What about predictability in such cases?
Currently we only know one form of real intelligence well, and that is the human being. We know very little of the thinking processes of elephants and dolphins (some other forms of real intelligence). Unfortunately, some companies, countries and groups are experimenting with forms of artificial intelligence, mainly for military usage, in a way that is not very smart. This project does not have those disadvantages and possible threats. It is safe because it is predictable, and even adaptable if required!
I would advise reading the question on whether AI is harmful to mankind to obtain an even better answer.
You keep saying it is better to base an intelligent system on human (behaviour), but if I look at history or even just at the news, I mostly see very cruel and destructive behaviour caused by humans! Isn't it therefore better NOT to base it on humans, but on logic and facts?
Intro
Wouldn't that be lovely! It is a very good question.
The first part of the answer is about ‘facts and focus’, the second about humans' ‘very bad’ and ‘naughty’ behavior!
Facts and focus
First of all, let's focus on ‘facts’. To have ‘facts’, one needs calibrated tools to measure, simple as that. The first thing to notice is that you cannot supply every ‘intelligent system’ with all the measuring tools required to always ‘base itself on facts’. It is also not wise to assume the calibration will forever be correct and within margins!
Take ‘vision’, for instance. You will find hundreds of optical illusions online, and your system would suffer from them as well! Just look at a simple fata morgana: is it water further up the road, or is it a ‘heat game’? Your system would have to go there and verify, because only facts are allowed, not estimations (a form of ‘belief’, and surely not facts). This means your system, based on facts and not even on statistics (an interpretation and/or interpolation of facts, not facts as such), would have to drive for miles and miles, since the fata morgana is always ‘ahead’. It would be like chasing the pot of gold at the end of the rainbow…
Every form of input is limited, and therefore ‘the facts’ are limited and no longer facts. It gets even more confusing when more sensors are combined. Let's assume for the following example that all sensors are functioning within margins, so we should be able to fall back on ‘facts’.
Say we do not like the fata morgana issue above and we add a laser for perfect 3D vision. Good of us, isn't it! At some point we pass a transparent wall, and the beam of the laser is, for some reason, reflected by it. For the laser, the fact is: that is a wall. But behind this wall someone is walking, and our camera vision detects this, as it can see through a transparent wall; yet ‘vision’ will not see the wall itself, because it is transparent and, for instance, no reflections are detected…
How will your system react to this? Two different ‘facts’. Is it a wall and not a person (laser), or a person and not a wall (vision)? Which of the two will it ‘believe’ or give priority (not very factual, is it?), since both at the same time are obviously impossible? Procedures must be written, and it is the programmer who decides the priority. That is not intelligence; that is a lot of work for programmers and people with a background in statistics! It is what you see happening right now with autonomous vehicles! When they fail, and they do, it is 95 percent of the time in this department!
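A minimal sketch of that dilemma in code makes the point. The sensor names, readings and priorities below are all invented for the illustration; what matters is that the ‘decision’ reduces to a table a programmer wrote in advance.
```python
# Invented sketch of the laser-versus-vision conflict described above.
# The "decision" between two mutually exclusive facts is nothing but a
# hard-coded priority table, written long before the situation occurs.

PRIORITY = {"laser": 2, "vision": 1}    # the programmer decided this upfront

def resolve(readings: dict) -> str:
    """Pick the reading from the sensor with the highest fixed priority."""
    winner = max(readings, key=lambda sensor: PRIORITY[sensor])
    return f"trusting {winner}: {readings[winner]}"

# Two contradictory 'facts' arriving at the same moment:
print(resolve({"laser": "wall ahead", "vision": "person ahead"}))
# -> trusting laser: wall ahead   (right or wrong, the table decides)
```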
Secondly, ‘logic’. Logic is only logic if the rules are strict. In real life, the rules are not always strict, available in advance, or even defined! You cannot foresee ‘everything’, and certainly not the combination of ‘everything with everything’! See the example above, for instance. Which sensor's input will the system pick as correct if no rules are defined? What will this conflict do to your system? Will it freeze? If the facts don't match, how can any logic be followed?
Will it fall back on a ‘general rule’ in case of such ‘conflicting data input’? For instance, a ‘general rule’ causing extreme caution, thereby possibly harming others! Recently there was a review in a Dutch car magazine of an autonomously driving German premium car. The reviewer was driving the car on a highway autonomously, luckily for him with very little traffic around. All of a sudden two different sensors noticed ‘something’ for a fraction of a second, and the car started braking at maximum deceleration and tightening his safety belt as if an accident were imminent. For all clarity: factually, nothing was there! Imagine that happening with more traffic on the road!
Let's go a step further and say that, as in real life, a dichotomy occurs. A dichotomy here means an impossible logical situation: simply said, two apparently opposite ideas or things happening at the same time. If you are walking left, you cannot be walking right (two opposites). Look at children and you will see a few of them. For instance, when it is raining and the sun is shining at the same time, their logic is completely flabbergasted: it has to be either the one or the other, but not both at once! The same happens when the sun and the moon are visible at the same time. In their logic, the sun stands for ‘daytime’ and the moon for ‘nighttime’; both together is therefore impossible. In the same category, but this time for adults: just look at how, a couple of hundred years ago, a total eclipse during daytime had a similar effect on grown-ups.
Still not convinced? Ask an astronaut in space whether he is left, right, up or down of Earth, and he will have difficulty answering, since one time ‘left’ is ‘up’, another time ‘left’ is ‘down’, and yet another time ‘left’ might actually be ‘left’! A pure dichotomy, and very realistic!
A final point on logic: ‘the logic’ does not exist, and ‘your logic’ is by no means, and surely by no definition, the same as that of any other person out there!
In the digital world, all ‘intelligent’ systems will have a form of logic (procedures), because the computer needs these to function! How many conflicts between men and women are due to completely different forms of logic? Books, TV shows and comedians have all benefited heavily (financially) from this; those are facts!
Human behaviour
Now let's talk about humans and the path of destruction. First of all, it is not exclusively human behavior to slaughter ‘enemy villages’ (and later entire countries) and burn them to the ground. Similar conduct has been documented for other primates, chimpanzees for instance. Purely ‘for fun’, a neighboring tribe is killed and even eaten.
But ‘humans’ are the only form of real and advanced intelligence we are acquainted with! We know, for instance, that psychopathic people lack empathy. We also know that lots of politicians and business people fall into this category. We can study that: we have a couple of billion examples and subjects, and we have been observing the behavior scientifically for over 100 years. Even thousands of years of information is available, starting with the Greeks…
Classifications have been made, like the ‘DISC’ classification from around 1930: ‘Dominant’ versus ‘Submissive’ versus others. It is clear that a ‘submissive’ personality will not all of a sudden start attacking an entire group ‘just because’, something a dominant personality would sometimes dare! A submissive personality rather requires a dominant one to ‘give orders’. Look at how Hitler was capable of mobilizing an entire nation; together they did what they did, not just one person…
We know a lot about ‘bad behavior’ in humans. We are actually better informed scientifically about ‘bad thinking’ and ‘bad wiring’ and their outcomes than we are about ‘normal thinking’. What do criminologists, psychologists and psychiatrists (and to some degree even sociologists) all have in common? They look at where it goes ‘bad’. They are specialized in the ‘wrongs’ of the human brain and of human interaction, both hard- and software! (Geert Masureel)
If you have discovered the flows behind ‘the thinking’, as Geert Masureel did, you are able to use all this information and apply it in a predictable fashion. And yes, if you set the parameters of a Livinoid Intelligensia as if it were a psychopath, you are asking for trouble! But at least we know what the parameters for a ‘psychopathic intelligence’ are in our system.
As a matter of fact, any AI system without a form of empathy (for the human species) is by default psychopathic in nature! Lack of empathy is one of the main characteristics defining a psychopath, as stated before…
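As a toy illustration of what ‘knowing the parameters’ could mean in practice: the real Livinoid parameters are unpublished, so the names, ranges and threshold below are entirely invented, but they show how an explicit empathy parameter can be checked before any model is ever instantiated.
```python
from dataclasses import dataclass

# Toy illustration only; the actual Livinoid parameters are unpublished.
# The fields, ranges and safety floor below are invented for the example.

@dataclass
class Personality:
    empathy: float      # 0.0 .. 1.0
    dominance: float    # 0.0 .. 1.0 (loosely after the DISC idea above)

MIN_EMPATHY = 0.3       # invented safety floor

def validate(p: Personality) -> None:
    """Refuse any configuration that would define a 'psychopathic' profile."""
    if p.empathy < MIN_EMPATHY:
        raise ValueError(f"empathy={p.empathy} is below the floor {MIN_EMPATHY}")

validate(Personality(empathy=0.8, dominance=0.4))        # accepted
try:
    validate(Personality(empathy=0.1, dominance=0.9))    # rejected
except ValueError as err:
    print(f"configuration refused: {err}")
```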
An important question one should ask is: “Would you rather be confronted with something ‘like a human’ and ‘with empathy for humans’ holding a gun, or with a killer robot with the brain of half a shrimp, like the one Kalashnikov recently introduced, killing people based on a tag or ‘some other procedure’ you have no influence on?”
You will most likely say: ‘Neither, as neither should exist’. Well, did Kalashnikov ask for your opinion when they made that decision and announced it?… So please answer that question as if no other choices were available, because that is regrettably the position we are in right now! (Geert Masureel)
Not convinced? At this very moment we are overcrowding the earth, and overpopulation is a fact. A ‘factual’ and ‘logical’ intelligence with no empathy for humans would (in the best-case scenario!) suggest what we do with wildlife: “So many of that species (in this case humans) must die to maintain and preserve the balance”. That is being factual and logical! Hell, pretty much every scientist will agree that ‘such a measure’ is required when it is about rabbits, foxes or deer; species most of these scientists apparently don't sympathize or empathize with…
That all looks pretty much like what happened during World War II in Auschwitz and other German prison camps… not something one would ever like to see repeated in human history!
Conclusion
To conclude: does our concept have potential danger in it? YES! But at least we have lots and lots of realistic information about it available upfront, even before writing one single line of code! Equally important, we have better chances of preventing ‘it’ from being dangerous, if we wish to, because all this information is realistic and not only theoretical and mathematical!
Aren't the others too far ahead of Artintelli? They all have something concrete (sometimes even advanced), and you only have a ‘theoretical model’ you are starting to develop.
A very good question!
First of all, not being ‘the first’ does not make it a waste of time. For instance, André Citroën was not the first to create a car, but in his time he was certainly one of the best and most innovative! It was his arrogance and a couple of bad financial decisions that made him bleed, not his being late to the market! Red Bull was by far not the first beverage on the market, and yet they aren't doing badly, are they?
Secondly, the goal most have in AI (if they are honest) is to make a ‘real AI brain’, at least in the long run. Via neural networks and deep learning it is easy to start and to show something ‘amazing’ and ‘impressive’ very fast. Making further progress becomes harder and harder, because a lot of other ‘stuff’ is required. Things like (auto-)motivation, emotions, (sub)consciousness, moral values and empathy, for instance, are not that easily implemented via neural networks and deep learning! Deep learning is freely available and has the advantage of producing results fast, but it is, for instance, useless in the concept created by Geert Masureel.
In short, others can present something that looks impressive and ‘real’ right now, but technically it is by far not as advanced as it appears. It is rather ‘isolated primitive intellect’ and not very intelligent. At this moment it is all still mostly innocent and suitable for petting. But make no mistake, that will change fast in the upcoming ten years! Combinations of these ‘mini-intelligent’ systems will be created, and that will produce sparks! The more of these combinations, the more unpredictable the outcome will be! Let's compare this with something most can relate to: bullies at school. Usually they are not the smartest kids around, but they have a lasting impact on others…
The basic difference with Geert's system is the approach in the concept itself and its purpose. Via neural networks and deep learning one can show results fast(er), and funding is therefore easier. It is rather the ‘we will handle the problems when we meet them’ approach. Nothing wrong with that as such! One observation Geert Masureel finds very strange: the same engineers who keep claiming that humans are badly designed from an engineering standpoint mimic neurons to obtain intelligence!?
Geert, on the other hand, has always looked for the complete picture and not for the ‘quick wins’. His goal was ‘an artificial brain’ like humans have, not the steps in between. The latter has cost him lots of money in the short term, but sometimes one must keep focus to succeed in the long run! He therefore had to look at how everything functions and interacts conceptually in a real brain, though not at the neural level. Every missing or failing function easily took months or years to overcome.
Try selling that to investors!
Now, after more than 20 years of looking for the right computer model, he has finally completed it. The time has therefore arrived to come forward and claim a position!
Keep in mind as well that the project is already further along than ‘purely theoretical’. All techniques required to create or interact with the model have been tested to make sure they could be implemented. At this moment the development of the Livinoid Engine is ongoing and looking good!
It is to be expected that in the upcoming ten years the progress Artintelli makes will be far steeper than that of all other projects in this field. We just need to implement the model, and only some ‘minor inventing’ or modifications will be required along the way. The principles behind it are really rock solid. The others will face conceptual challenges, to put it mildly, and let's hope they don't do foolish and dangerous things to achieve progress!
Maybe it is for exactly those cases that initiatives like OpenAI exist…
How come you only need a couple of modern desktop computers to simulate a complete brain, and even a complete human, when it is currently not uncommon to simulate only a few neurons per computer?
Actually, it gets even better. Dozens of Livinoid Physisia and Livinoid Emosia instances (without the thinking, but with the physics and emotions) can run simultaneously on a modern, powerful multi-core desktop computer! The thinking process of Livinoid Intelligensia makes it heavier, and therefore a couple of computers are needed for that!
The main, initial and single goal was to get an artificial brain (similar to the human brain) working on only a couple of computers, and with such a strong focus one looks for appropriate solutions and optimizations even in the (early) drafts. If something is too heavy in nature, you drop it and look for solutions that fit the bill! Simple and easy: dare to fail and rethink!
One of the primary reasons why Geert Masureel initially stated that only ‘a couple of computers’ would be required is the academic world! Yes, since the ’90s there have been several professors claiming that the processing power and available memory of modern processors and computers equals or exceeds what humans have.
His viewpoint was, and still is: instead of making such bold statements about things that cannot be compared (apples versus oranges), produce something real! Call it a middle finger to these guys (if you hadn't figured it out by now, Geert has this tiny rebellious side in him…).
The main conclusion is that modern computers are extremely powerful IF used in the proper way. The techniques and concepts we use are based on this principle, and therefore we can use every single bit and byte to our advantage. Using deep learning is almost as efficient as using a laptop as a spoon during dinner: with a little imagination it works, but far better tools exist to get much better results…
Another reason was purely business-minded. It is not good to create something that will use thousands and thousands of computers, as such a thing is unusable in the (relatively) short term, and that is unacceptable in Geert's opinion.
We should use our brains to provide solutions that fit our goals, and not adjust our goals because we are unable to find solutions! One has to dare to fail and rethink, rethink and rethink until there is a working solution within (or at least very close to) the original goals! (Geert Masureel)
Wasn't it Michelangelo who stated something like: ‘The failure is not in setting your goals way too high and not reaching them, but in setting them too low and achieving them’?
Why create and sell Livinoid Physisia and Livinoid Emosia if it is Livinoid Intelligensia you are obviously most interested in?
Geert Masureel sometimes states that Livinoid Intelligensia is the main goal, and yes, it is our core business. It is the point where we must end up, and preferably as fast as possible, according to Geert. So creating extra steps along the way that cost extra time is to be avoided.
It is therefore without doubt a very good question!
Livinoid Intelligensia is built upon Livinoid Physisia and Emosia! Simple as that. We just cannot start with Livinoid Intelligensia unless we first finish the other two.
Therefore, (lightweight) Livinoid Physisia and Emosia are core business in the same way Intelligensia is! It is Geert Masureel's view that both Livinoid Physisia and Emosia have so much extra to offer the gaming and simulation business that it is worth this extra year of investment on our side!
What about other initiatives, like OpenAI? Will you work together with them?
We serve different purposes, and therefore chances are high we will not work together!
The market is big enough for a couple of players, and according to OpenAI's website, one of their main goals is to provide safe paths to Artificial General Intelligence. We wish them the best of luck in their quest, as it is a noble goal!
The only thing Geert Masureel hopes is that they are up to the task once it gets a little more complicated. At that point everything starts to influence everything, and the isolation of “single thoughts”, however you create them, is no longer possible. That is where unpredictable and dangerous behavior will occur! Something Geert can avoid in his concept, as he uses (predictable and well-documented) human behavior as the main technique…
Will AI or robots take my job?
To be honest, in the long run, they most likely will. Those claiming that robotics and artificial intelligence will rather be ‘an extension enabling humans to do the job better’ are simply naive or vicious salespeople, and probably their technology is just not advanced enough.
The main question, however, is whether it would be so bad if robots were to take over ‘our jobs’. Lots of people I speak to don't like their job that much and wouldn't mind getting the same income without the work! In most cases they would miss the social contact, but that can be arranged differently.
Those who love their work are usually among the top 20-25 percent in their field and will have jobs for many, many years to come, even with robots on the work floor!
But don't be naive: this is coming! If it is not via Livinoids, it will be via others, and most likely both! Therefore, a very good question all politicians should be asking is not how they will pay pensions in 20 years, but how they will organize society to preserve calm and peace! Pushing people into poverty through massive unemployment is good neither for the people nor for the economy, and it will lead to instability and, in the long run, eventually revolution! A future society without a basic income for all will most likely not survive in the long run…
Can something good come from all this for humanity?
Geert Masureel, one of the founders of Artintelli, is truly convinced that it can, if done correctly.
It actually offers a number of solutions mankind really needs in the upcoming 30-50 years, quite apart from the possible threat of advanced AI itself!
Economics is the driving force behind prosperity in most countries, and it is the main reason ‘peace’ is so durable in large parts of the world. Both peace and a growing world population simply offer new markets and more profits! No single multinational in the world will ever freely agree to a ‘zero-growth’ economy, and surely not to long periods of shrinking markets! Paraphrasing a President of the United States: “Not gonna happen!”
Add to the equation the problem of an aging population and the affordability of the pension system in a lot of countries, and it is clear that birth policy is not a priority at all. China, for instance, dropped the ‘one-’ and even ‘two-child’ policy to overcome this problem. By the way, the latter is not working well, but that is a different discussion.
But it is this overpopulation that causes almost all problems and is a real threat to a lot of people! Just imagine we were able to halve the world population! It would literally solve pretty much every problem we face today as a species! This is without doubt a scenario frequently discussed in some elite high-society circles; the solutions offered there are most likely less ‘plebs-friendly’…
So, to have a solution for overpopulation, we need to make sure both ‘the people’ and ‘the companies’ are satisfied!
Robots and advanced AI could be the solution! If robots can fill the gaps in employment, and taxes are raised on the labor done by these robots, the calm can be maintained in most countries. Of course, this implies that all people have a sort of ‘replacement income’. Therefore, it is important that a fair share of the profits goes to ‘the people’ and not only to ‘the companies’! Again, an equilibrium between both parties can be found.
The latter is ‘the bread’ in “bread and circuses”, the famous phrase coined by the Roman poet Juvenal a couple of thousand years ago. If a meaningful occupation can be provided, depending on the type of people, the ‘circuses’ part of the saying is fulfilled as well, and ‘the people’ should be ‘happy’ and ‘content’.
The second part concerns the companies. If a large enough consumer market can be provided, and potential for growth is present, it should be fine. People with more spare time will consume more (if money is available), and that is only the initial effect. Growth could be found in allowing the robots/AI to become part of the consumer market. We are not the first to state this, and if done well, it would offer a solution to keep the industry pleased. A ‘conditio sine qua non’ for success in this part of the equation!
And what about politics? Well, modern politics is largely driven by lobbying from the industry on the one hand and by ‘the needs of the people’ on the other. After all, it is the people who need to vote for these politicians, and it is often the industry that provides money (sponsoring) or resources (employment) helping the politicians get elected. The solution offered above should therefore be good for politicians as well.
At that point, a global ‘one- or two-child policy’ should be enforced by a body like the United Nations!
IF, and only IF, we are capable of colonizing other planets or ‘space cities’, an increase in births should be allowed to fit that purpose. People with the need to have more children must be able to find a good and realistic virtual alternative, something Livinoids can provide in the very near future! A Livinoid Emosia could grow from a baby to an adolescent (and even further) at real-life pace (or faster, if wanted).
If more and more ‘products’ can become virtual (look at virtual reality, for instance) and the world population can shrink gradually, this would be an enormous help for the environment and all related problems…
…so yes, AI and robotics can be an extremely important positive influence on humanity!
I'm a realist and I see perfectly well what is happening in the world. Currently there is a lot not to be very proud of, and we should all be collectively ashamed! BUT, I'm also a dreamer, and why not dream of making the world a better place for our children and grandchildren instead of using our ingenuity only for short-term visions, financial gain, arrogance and vanity… One day the bill will be presented for that egocentric (also egocentric as a country) and currently socially acceptable behavior.
Ask Weinstein or other (sexual) abusers of power how it feels today, being spat out by the world for something that was apparently socially acceptable 30 years ago. This could be you in 20 years, for doing what you do today! Holding moral standards high with regard to the future of our blue planet, and not being such an ass, is usually a good starting point… (Geert Masureel)
Geert Masureel
Do you consider yourself an expert in AI (Artificial Intelligence)?
Yes and no!
No, because I do not consider myself someone who knows all about how others have implemented AI and deep learning as such. I have a basic to good, or even deep, knowledge of a lot of techniques used by most in AI, but I would by no means consider myself an ‘expert’! Especially since I'm not using techniques like deep learning in my concept, so it remains at best ‘academic knowledge’.
Yes, because I have been busy for so many years with how an artificial brain should be blueprinted and should function, that it is hard not to become an expert in assessing the dangers and flaws in others' designs, even of the ‘normal AI’ used today!
I'm Belgian, and therefore, like most Belgians, modest by nature. If I were American or Dutch, for example, I would probably make bold statements and claim an absolute top position in the field of Artificial Brains (‘AB’)!
How come you know all this as a single (unknown) person, while teams of well-known and reputable scientists all over the world are nowhere near a real artificial brain (AB)?
Three points to mention here:
The first is that I've learned over the years what I call ‘the paradox of education’. It implies that most people are limited in their (future) way of thinking by their education. That is not a bad thing nor an insult, but apparently for most it is impossible to think ‘outside the box’ once a ‘reputable teacher’ has explained how ‘it’ should be. This limitation is the main reason why, for most, it is ‘impossible’ to do what I claim. If something ‘is impossible’, one will simply not try to do it, and it is almost sacrilege to question that wisdom…
Secondly, I have always had a very vivid imagination and can think many steps ahead. If chess had interested me, I probably would have been good at it. But honestly, I find it extremely boring!
Imagination is also linking together pieces of information that have never been used together, and that is something we call ‘creativity’. So the thoughts and reflections others threw away as ‘absurd’, ‘impossible’ or ‘far-fetched’, I approached as: ‘Imagine it were possible; how would it be done, and what would be the benefits and downsides?’
The third is that it takes in-depth knowledge in so many fields of expertise and, as stated before, a lot of creativity to make the required links and connections between these different fields. That is why it is virtually impossible to get the required outcome from a team of experts. This is no disrespect towards teams of experts; it is sheer reality.
A similar example to illustrate this is how Einstein came to his theory of relativity. He had a lot of imagination, creativity and knowledge in many fields, and he dared to question the ‘existing ways of thinking’, but he did it on his own (Mileva, his first wife, only helped him with the mathematical part, not with the concept)…
For all clarity: I do not compare myself with Einstein. Einstein was Einstein, and he did fantastic work in the field of physics. I am who I am, nothing more, nothing less.
Completely aside from the matter, but personally I feel a closer bond to a character like Agatha Christie's ‘Hercule Poirot’ than to any scientist. Apart from the fact that he was also Belgian, using ‘the little grey cells’ to find the whole puzzle and only revealing it when it is complete is more my style. I'm not interested in creating ‘the best puzzle piece’ in every single detail and then showing it to everybody. My approach initially takes up far more time, I fully agree, but once ‘the thinking’ is almost completed (as it is now), much more progress can be made in a shorter period of time!
This thinking ‘outside the box’ allowed me to focus on certain fields and try to get the wanted results. If it failed, and it did a lot, I could drop that field, look at where it failed, and replace it with something else, until all the pieces finally fell together.
To end with: if one chooses the right tools, the need for in-house teams developing certain areas is reduced or removed, and it is therefore possible to achieve more with smaller teams…
Is it possible to create or simulate (sub)consciousness, mind, spirit, belief, emotions, etc.? Aren't these divine things?
A very good question, and ‘the divine dimension’ was a path I needed to look at very closely! I had to see whether I could imitate the human being, and whether that required a ‘level outside of the human’ or a ‘higher being’. If the answer had been affirmative, it would have been impossible for me to create it, and that would have been the end of the project as such!
So yes, I have looked into all these concepts. I even got into trouble with someone close to me because I dared to look at the concept of a collective consciousness and the possibility of a shared memory at that level.
Some people with transplanted organs all of a sudden have new preferences (from the ‘old’ owner of the organ) that they may have detested before the operation. A 180-degree change in preference, with links to the previous owner! It is a phenomenon reported by various doctors in various patients and is not an isolated case.
With the goal I had in mind, I did not have the option of not looking deeper into this. Was (or is) memory only partially stored in synapses, and partially in a collective consciousness? We do not know much about the storage of memories in the brain even today, and surely not at that time, so theoretically it was a possibility and I had to dig deeper into it. But hell, what a fool I was to dare assume such a thing, when everybody knew that memory is stored in synapses, and in what scientific books could I demonstrate proof for that stupid thought, and, and, and…
Luckily for me, I have found ways of handling all these concepts within my concept, without ‘external requirements’. Does this mean that there is no God or spiritual interaction in human life? I do not know that; I'm just a simple human! As if the gods would come to me and explain it all! The only thing I can state for certain is that it is possible to create human-like intelligence without the need for a ‘divine actor’.
By the way, the following may come as a shock, but everything is ‘belief’, even science. Today, scientists ‘believe’ statement A is correct because it can be ‘proven’. If tomorrow someone respected can demonstrate that the frame in which the proof was delivered is not entirely correct, all of a sudden statement B is ‘believed’ to be correct. The keyword here is ‘believe’, not ‘know’. I go deeper into that matter elsewhere in the FAQs.
A better question would therefore probably have been: ‘Can a truly intelligent system exist without a belief system?’, and there the answer is no!
If you are right and you can do all this, isn't that proof that everything is mathematics? After all, computers are all about ones and zeros…
Mathematics is the language we use to try to explain and predict nature and the future. But nature itself does not need mathematics to exist! So no, not everything is mathematics, and surely not statistics! It is just one way of looking at things, and yes, it sometimes makes correct predictions (the more complicated things get, the more error-prone the predictions usually are), but may I remind you that someone laying out tarot cards also sometimes predicts the future correctly… Is everything therefore tarot and mysticism?
And yes, I like maths and science, and I'm stretching it a little with the above statement, but nothing in life ‘describes all’ and encompasses ‘all wisdom’. May I remind you that 95 percent of all the energy in this universe is supposed (or should I say ‘believed’) to be there, while we have no knowledge or explanation of it; this unknown part is what is called dark energy and dark matter. By the way, if I'm correct, a Dutch professor is currently developing a new theory on that, but so far nothing new has been heard. So far we only have an explanation for at most 5 percent of the energy, yet a vast number of people with a scientific background I talk to act as if they know it all…
Lots of scientists and mathematicians also hope/believe that ‘everything’ will be explained one day. I'm not a believer in that statement, as our brain simply is not large enough and our ‘processing power’ is just too limited, even with possible cyborg-like ‘technical extensions’. For instance, no single person will ever be able to ‘know’ infinity (despite the title of a movie I recently saw: ‘The Man Who Knew Infinity’). At best, you could try to explain it to me in the version you ‘believe’ it to be, but not as it is in reality, since in nature it is impossible to ‘go there’…
I have learned fantastic things from people who were not mathematical and surely not scientific! I have also seen people go so deep into maths that they lost all touch with reality and eventually committed suicide. Take my word for it that these guys knew way more about maths than all of us together ever will, but apparently it is a darker place than one expects…
About the ones and zeros: I can assure you I'm not using many ones and zeros in my code, and if the code works, it will be because the computer compiled my (sometimes) non-mathematical functions and procedures into machine code; otherwise the computer does not work! So, at best, it might prove that logical statements and iterations, translated into mathematical series and sequences, can sometimes imitate human behavior. But that is mainly because that is how the computer works!
Why this obsession of yours with creating an artificial brain?
As stated before, thanks to Mensa I discovered my high IQ, and that was very revealing for me. It explained why I never stop thinking (as the people close to me will without doubt confirm)! For instance, I'm writing parts of this FAQ in Spain on holiday while simultaneously evaluating whether I will include, in the Livinoid Physisia model, the effect of a higher IQ on the brain's elevated basal energy usage at rest, as a function of that IQ. I have learned over the years that these are not the sorts of thoughts most people have on holiday…
I need this thinking, and this is by far the only project challenging me enough. It has kept my mind busy for the past 30 years, and it will keep me busy for the upcoming 20 years. Every phase has required, and will require, my full attention!
Why can't you just create a brain, without all that other stuff you claim you need?
Believe me, I would if I could! And I have tried to avoid all this ‘other stuff’, you have no idea!
But you simply need these functions to keep the thing going! There is no possible way you can auto-motivate a system without emotions. At least not if it needs to be advanced like the human brain, but even for dog-like brains it is required!
Ten years ago, I tried to do it with a simple version of emotions. The principle is easy: all emotions are derived from fear. God, was I proud I had cracked that one! It looked very good on paper, and it is correct!
Unfortunately for me, I did not find one good way of implementing that system so that a concept would be auto-motivated. It took me another five years to discover the requirements for how ‘emotions’ could work without the use of that system, and another five years to blueprint this and make sure it would work! The simplest and safest way was, again, to mimic some human functions, so I did! The result is lovely, brought me back to my goal, and is the reason why it is all so complicated and yet so simple…
You are making it way too complex, just use approval/rewarding and denial/punishment and it will work far better, don't you know that ?
How come I did not think of that !?!
You have no idea how many people know it all better and tell me how I should model my concept ! And yes, this is by far the number one suggestion.
Of course I’m using this ! How else could the system learn if no way exists to tell it that it is doing ‘good’ or ‘bad’ ? You are right in one thing : it is very important, and I cannot see how any intelligent system could do without it. Kudos for that !
But if the driving force in your AI brain is this rewarding-versus-punishing mechanism, you are in deep trouble ! Not if your AI is on the level of just a bug or an ant, but if it is a little more complicated, the system must be capable of rewarding/punishing itself in some way to allow self-learning and motivation. That is the part where it gets tricky ! Ask any psychologist why so many people are addicted to social media and their smartphone. They will answer that every status update or newly received message gives them a ‘good feeling’ that ends up in the reward center of the brain. Luckily for us humans, other mechanisms are in place as well, otherwise it would be impossible to help those poor addicts !
So yes, for very primitive forms of AI you are right, but in more advanced forms of AI you would end up with addicts blocking themselves from further growth, possibly with unpredictable behavior as a result. Even dogs do not always react well, or ‘as expected’, to the ‘rewarding/punishing’ mechanism …
Humans have this ability to become addicted due to the ‘good/bad’ mechanism, or should I say ‘feeling’, but it is rather the type of personality in combination with certain circumstances that causes this, and not only the workings of the ‘good/bad’ concept itself. That is why typically only between 10 and 40 percent of humans are susceptible to almost any given kind of addiction. In the case of an AI based entirely upon ‘good/bad’, it will be (close to) 100 percent …
… A similar mechanism, to which everybody is susceptible, is the influence of cocaine on the brain. It stimulates the ‘feel good’ region, and even basic necessities like eating and drinking are ignored and overridden …
Still not convinced ? Tell me how you would get ‘the thing’s’ attention away from something ‘good’ ? You wouldn’t be able to, as ‘the thing’ will be programmed to focus on ‘good’ and avoid ’bad’ … … how in hell can such a thing learn new matter (by itself) ?
That is why most ‘intelligent systems’ have a tendency to be so dangerous to mankind ! It is not simple to do this right, and the majority of developers will therefore use simple (or even complex) neural networks and deep learning combined with fixed procedures, and that is not a good combination when things get more complicated … … Using ‘good/bad’ will seem ‘good’ at the time, but only ‘bad intelligence’ can come out of it !
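To see the failure mode in code, here is a toy sketch (the reward table and the ‘agent’ are invented purely for illustration; this is not any real system, let alone a Livinoid) : an agent whose only driving force is ‘good/bad’ locks onto the first big ‘good’ it finds and never lets go, a digital addict.

```python
# Toy illustration: a purely reward-driven agent 'addicts' itself.
# The reward table and agent below are invented for this example only.

REWARDS = {"status_update": 1.0, "learn_new_skill": 0.3, "rest": 0.1}

def greedy_agent(steps: int = 12) -> list:
    """Try each action once, then repeat the best-known one forever."""
    history = list(REWARDS)                       # one exploratory pass
    best = max(REWARDS, key=REWARDS.get)          # highest 'good' found
    history += [best] * (steps - len(history))    # 'addiction': no way out
    return history

print(greedy_agent())
# ['status_update', 'learn_new_skill', 'rest', 'status_update', ...]
# Once hooked on 'status_update', it never learns anything new again.
```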
Are you really, truly aware of the dangers of such a project ?
Do you sleep well at night, knowing what you do could be the end of the human species ?
A number of people have asked me this question (or similar) over the years. To be honest, I totally get the question and it is actually a damn good one !
It took me five years of moral searching (around the year 2000) to decide whether I was to continue the path I had started or needed to stop there (I knew for sure back then that I was on the right path to creating an artificial brain).
It must have been 2005-2006 when I heard someone in some program say that one day the human race would become ‘the pets’ of an advanced AI form.
At that particular moment, my moral struggle was instantly over ! My thought then was :
if I would end up as ‘a pet’ anyway, I would want my ‘boss’ to be a pet lover and treat me with respect, like I myself do with my dogs and cats, for instance !
That was the moment the mission was no longer about the possible destruction of mankind, but in the long term maybe even about the possible saving of our species ! Take my word for it : if you have a conscience, like I do, it makes sleeping at night possible again …
Therefore, I always have kept, and always will keep, the survival and preservation of (human) life as my moral standard in this matter !
Frankly, you had better address this question to a lot of companies, countries and researchers worldwide who don't have this high a moral standard. They don't tell you what they are doing right now, but it is not always "the preservation of (human) life" that is held as the highest priority, I can assure you of that !
Most of these people think they can beat 500 million years of evolution in just a decade, talk about arrogance ! If the main motives are money, power, reputation and military advantage, not much good can come from that !
I do not have a psychopathic personality and I know how important it is not to create psychopathic "robots". Since the absence of "empathy" is one of the major characteristics defining a psychopath, it is fair to regard all AI without built-in empathy as potentially "psychopathic" and therefore dangerous for (human) life ! Empathy is built in from the start in my concept ! (Geert Masureel)
It is one of the main reasons I decided to make Artintelli a non-profit organization ! To avoid the temptation or necessity to do stupid things because of profits or shareholders !
Dude, what you claim (digital emotions and human like artificial brains), scares the pants out of me ! Don't you understand that ?
Yes I do, and I hope it does make you shiver, as it should ! I’m very aware that this is controversial stuff lots of people are afraid of ! I see the fear (and disbelief) in people’s eyes every time I tell them what my line of work is, and I do understand that ! Unfortunately, I know enough guys in the business who don’t …
The main reason I come forward with this information at this moment is to start up the conversation. It is time people stopped putting their heads in the sand; the dialog on advanced virtual (human-like) intelligence should start, as it is approaching fast !
There are three main groups in this matter, depending on their standpoint on this topic : the pure pro, the absolute contra, and a majority of people not really knowing what to expect. ‘Fear’ dominates in the latter two groups, but let me make one thing very clear : it is OK to be afraid and to have your own opinion on this matter, I fully respect that ! We have the mechanism of fear to protect us, and it is a damn good mechanism !
The only two things I would ask of you at this moment are, firstly, dare to accept that some advanced intelligence can be used for the good of the people. Remember Fukushima, for instance : if robots with advanced intelligence had been available, it might have been less devastating for nature …
Secondly, understand that a number of people in this field of expertise look (in advance) for solutions to make it as safe as possible. Some, like myself, even have possible solutions in case it might literally ‘get out of control’. Keep in mind that ‘progress’ is something we cannot stop. Not you, not me, nobody ! Especially when the economic and military advantages of such systems are just so high …
To be honest, I’m not that scared that the human species will be exterminated by global warming or pollution. It will be devastating to a lot of other species and to nature, but humans will find engineered solutions for that.
No ! My main concern is that, in the long run, one day an advanced AI will be created that is smarter than humans, and that is the day we should be extremely careful ! It is the day we could become ‘the pets’ of other ‘superior’ lifeforms …
In Belgium we have a saying, freely translated as : ”Those who are afraid get beaten too !” It is very wise, as it indicates that turning our heads the other way or crumbling and shivering in a corner will not provide us with the required solutions ! The main problem we will face, as I describe in answers to other questions, is that ‘it’ will most likely have no ‘empathy’ for humans, and as such ‘it’ will not ‘care’ about how our species ends up ! If the ‘thing’ is smarter than us, ‘it’ will beat us time after time, and it will not be ‘a happy friendly encounter’ ! Just unplugging it will most likely not work; remember, it outsmarts us every time …
The only real alternative in such a case is creating an AI at least as intelligent as the ‘doom AI’, BUT with empathy for humans and for life ! Something that can be done, as Livinoid Intelligensia supports empathy for humans !
In short, one shouldn't be afraid of me, but rather of the people making advanced AI applications who do not come forward and do not say what their intentions are …
Most scientists in the ‘exact sciences’ I know love ‘ratio’ and are not well provided for in the department of empathy (feelings over ratio), so their solutions most likely (not to say surely) do not support empathy at all … … Luckily for me, I have my fair share of ‘empathy’ and consider it one of my stronger sides !
What are your intentions with Livinoid Intelligensia ?
The big advantage of Livinoids Intelligensia is that they are capable of learning new matter by themselves (or via a memory pack containing the optimized tasks). This means that one Livinoid (Intelligensia) can do several jobs, provided the tasks are compatible with the hardware.
Technically, a Livinoid Intelligensia set up as a cooking robot could also easily do the dishes, possibly some other cleaning tasks, and even polish shoes. This results in multi-functional, affordable robots. The situation now is that for every task a different (and expensive) robot is required, which most of the time gets the job only half or a quarter done.
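As a purely hypothetical sketch of that idea (the names and capability sets below are invented, not the actual Livinoid interface) : one ‘brain’ can take on every task whose hardware requirements are covered by the robot body it runs on.

```python
# Hypothetical sketch only: names and sets invented for illustration,
# not the actual Livinoid API. One brain, several compatible jobs.

HARDWARE = {"arms", "gripper", "water_supply", "mobility"}   # assumed body

TASKS = {
    "cooking":        {"arms", "gripper"},
    "dishes":         {"arms", "gripper", "water_supply"},
    "shoe_polishing": {"arms", "gripper"},
    "window_washing": {"arms", "water_supply", "ladder"},    # needs 'ladder'
}

def compatible_tasks(hardware: set, tasks: dict) -> list:
    """Return the tasks whose requirements the hardware can satisfy."""
    return [name for name, needs in tasks.items() if needs <= hardware]

print(compatible_tasks(HARDWARE, TASKS))
# -> ['cooking', 'dishes', 'shoe_polishing']  (window_washing excluded)
```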
Livinoid Intelligensia should open the market for all-round robots and make their price more democratic. It should also accelerate their use in both domestic and industrial settings.
I strongly believe that a small number of companies will provide ‘the software’ (like a Livinoid Intelligensia), while a larger number of providers will offer the hardware and extra tools. Compare it with the computer market : only a handful of operating systems exist, but dozens and dozens of hardware suppliers are available.
For military and law-enforcement use I have not found my definitive answer yet; it is complicated. If I do not provide solutions in that area, other solutions without empathy will be used. At this point, it is good to remind you that every ‘real’ robot, no matter who produces it, can be used to harm our species, even a Livinoid Intelligensia !
Therefore, if we decide to enter these markets, it will have to be with enough guarantees for the safety of the human species.
To conclude, Livinoid Intelligensia should make our lives easier and more comfortable, allowing us to concentrate on those fields we personally wish to explore.