
Why A.I. is impossible

116 comments, last by Alexandra Grayson 6 years, 4 months ago
45 minutes ago, grumpyOldDude said:

Why? If humans coded the logic by which the "AGI" thinks or operates and builds upon... AGI would develop (because of its infinite and fast self-programming resources) to be more advanced than us, but why would it be alien to us?

Well, the question is: what do you define as thinking? If thinking is simply defined as small pieces acting in concert to produce some sort of output from an input, then wouldn't a search engine qualify as thinking? It's not human thinking, in the sense that it thinks using some sort of search algorithm. Something similar could be said for a natural language engine. A navigation algorithm also thinks in that sense. Moreover, AGI would emerge from these sorts of things interacting with one another in ways we didn't foresee, kind of like the general goal of machine learning. So if we cannot foresee how these things would interact with one another, how it would utilize algorithms, what it would emphasize, etc., its form of thinking would seem alien. We wouldn't see it as 'thinking' necessarily.
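To make that definition concrete, here's a toy sketch of my own (nothing like a real search engine, just an illustration of small mechanical pieces mapping an input to an output): rank documents by how many query words they contain, and return the best one.

// Toy illustration: a "search engine" as nothing but small mechanical
// pieces mapping an input (query) to an output (best-matching document).
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Count how many words of the query appear in a document.
int overlapScore(const std::string& query, const std::string& doc) {
    int score = 0;
    std::istringstream words(query);
    std::string w;
    while (words >> w)
        if (doc.find(w) != std::string::npos) ++score;
    return score;
}

int main() {
    std::vector<std::string> docs = {
        "how neurons fire in the human brain",
        "transistor logic and base two arithmetic",
        "navigation algorithms for route planning"
    };
    std::string query = "base two transistor";
    size_t best = 0;
    for (size_t i = 1; i < docs.size(); ++i)
        if (overlapScore(query, docs[i]) > overlapScore(query, docs[best]))
            best = i;
    std::cout << "Best match: " << docs[best] << "\n";  // picks the transistor doc
}

Every step is a dumb mechanical rule, yet the whole maps a question to a "best" answer, which is exactly why the definition of thinking matters here.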

Then there's just hardware. Machines are built on transistors, and inherently use base two. We are built on neurons and count on 10 fingers. We perceive through eyes, ears, skin, nose, etc. Machines can perceive differently. Machines use different means to perceive similar things. Moreover, machines can perceive things we simply cannot. These are the reasons I think any machine intelligence would be alien.

No one expects the Spanish Inquisition!

4 hours ago, mikeman said:

Well, as long as we're talking about it...

https://en.wikipedia.org/wiki/Chinese_room

Maybe I'm wrong, but it looks to me like the Chinese room thought experiment has been beaten by deep learning.

Here is why:

Let's assume that language is, in this case, a detection/description tool for the machine. So let's throw into a deep learning system all the symbol combinations, and combinations of combinations (higher levels in the neural network), etc. Now we should have a tool to describe a state.

Now we need to gain knowledge about the states, so we now need to apply deep learning to human society and to the topics attached to the translations. Bam: now the computer knows that granny + grandad equates to 150% more Christmas gifts on average than grandad alone. Now let's talk about Easter when grandad is at the spa; I think that's doable to some certainty.
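To show the flavor of that first step, here's a deliberately tiny sketch of my own (a real deep network stacks many learned layers; this is just the statistical skeleton): count which symbol follows which, then "describe" a state by predicting its most likely continuation.

// Toy stand-in for "learning symbol combinations": count bigram
// frequencies in a stream of symbols, then predict the most likely
// follower of a given symbol. A deep net stacks many such levels.
#include <iostream>
#include <map>
#include <string>

int main() {
    std::string stream = "abcabcabdabc";  // training "corpus" of symbols
    std::map<char, std::map<char, int>> bigram;
    for (size_t i = 0; i + 1 < stream.size(); ++i)
        ++bigram[stream[i]][stream[i + 1]];   // tally each observed pair

    char query = 'b';
    char best = '?';
    int bestCount = 0;
    for (const auto& entry : bigram[query])   // pick the most frequent follower
        if (entry.second > bestCount) { best = entry.first; bestCount = entry.second; }
    std::cout << "After '" << query << "' expect '" << best << "'\n";  // 'c'
}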

Two foundational elements of a "thinking machine" already exist. DeepMind can learn and adapt through trial and error, and there are many different means of creating a "self-programming" computer. I can create a "self-programming simulation" and I'm not even a programmer. If you think in terms of centuries instead of years, and we can already do these two things, it seems nearly a certainty to me that we will have "thinking machines" within a few centuries, and maybe even a lot sooner than that.
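For a feel of what trial-and-error learning plus "self-programming" can mean at the most primitive level, here's a toy sketch of my own (not DeepMind's method, just the bare idea): a program that is a list of operations, randomly mutates itself, and keeps mutations that score better against a target.

// Toy "self-programming by trial and error": a program is a list of
// operations; we randomly mutate the list and keep mutations that get
// closer to a target. Crude hill climbing, not anyone's real system.
#include <cstdlib>
#include <iostream>
#include <vector>

int run(const std::vector<int>& ops) {        // interpret the "program"
    int x = 1;
    for (int op : ops)
        x = (op == 0) ? x + 1 : (op == 1) ? x - 1 : x * 2;
    return x;
}

int main() {
    const int target = 42;
    std::srand(12345);
    std::vector<int> prog(8, 0);              // start: eight "+1" operations
    int bestErr = std::abs(run(prog) - target);
    for (int trial = 0; trial < 10000 && bestErr > 0; ++trial) {
        std::vector<int> mutant = prog;
        mutant[std::rand() % mutant.size()] = std::rand() % 3;  // mutate one op
        int err = std::abs(run(mutant) - target);
        if (err <= bestErr) { prog = mutant; bestErr = err; }   // keep if no worse
    }
    std::cout << "Evolved program computes " << run(prog) << "\n";
}

Nobody hand-writes the final program; the machine stumbles into it, which is the whole point.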

The issue then becomes how you define "intelligence". Even if we reach the point that machines can actually "think", is thinking alone intelligence? Is something like a "soul", if such a thing even exists, necessary for it to truly be considered a "thinking machine"? Does it need to be sentient to have truly achieved the goal?

I think a "thinking machine" is a near certainty, considering that we already have some of the basic building blocks for it, and 300 years (for example) is a very long time to work the rest out. So if you are just talking about a "thinking machine", I think that is an eventual certainty. Commander Data, on the other hand, is a lot more than just a "thinking machine". So the definition of "intelligence" is the key to this discussion; otherwise everyone is likely to be talking about different things.

"I wish that I could live it all again."

 

21 hours ago, deltaKshatriya said:

If thinking is simply defined as small pieces acting in concert to produce some sort of output from an input, then wouldn't a search engine qualify as thinking? It's not human thinking, in the sense that it thinks using some sort of search algorithm. Something similar could be said for a natural language engine. A navigation algorithm also thinks in that sense.

             ...

So if we cannot foresee how these things would interact with one another, how it would utilize algorithms, what it would emphasize, etc., its form of thinking would seem alien. We wouldn't see it as 'thinking' necessarily.

A search engine or navigation algorithm doesn't come near qualifying as a thinking machine. Someone presses a button and out comes the output. They don't think independently, don't make decisions independently, and are not creative. They don't make choices. A chess program doesn't have a mind, and as such its decisions are not really independent; they are programmed decisions. They only obey your commands.

21 hours ago, Kavik Kang said:

I think a "thinking machine" is a near certainty, considering that we already have some of the basic building blocks for it, and 300 years (for example) is a very long time to work the rest out. So if you are just talking about a "thinking machine", I think that is an eventual certainty. Commander Data, on the other hand, is a lot more than just a "thinking machine". So the definition of "intelligence" is the key to this discussion; otherwise everyone is likely to be talking about different things.

In the future, at best I can see machines having only a pseudo-human mind.

A machine can simulate a human mind or intelligence, but it would be missing self-awareness + independent creativity (for instance, independently designing and constructing another machine based on its own intuition) + social endeavours.

You might say that termites do not have self-awareness, i.e. they cannot recognise themselves in a mirror, but they meet the other two requirements.

 

can't help being grumpy...

Just need to let some steam out, so my head doesn't explode...

4 hours ago, grumpyOldDude said:

Why? If humans coded the logic by which the "AGI" thinks or operates and builds upon... AGI would develop (because of its infinite and fast self-programming resources) to be more advanced than us, but why would it be alien to us?

Because we don't code the logic. 

No-one is going to write an AGI the way we write "normal" computer programs. You can't write 


if (isHappy()) smile();
else if (isAngry()) frown();

AGIs are simply way too complex for this. We don't even know how our existing machine learning algorithms work, and in many ways we can't know; the datasets are simply too complex for us. This might also be the reason we don't understand consciousness. It could be that "consciousness" is simply an emergent property of extremely complex data processing (possibly an abstraction?).

I would recommend this as a simple primer to machine learning

 

if you think programming is like sex, you probably haven't done much of either. -- capn_midnight

I think the greatest limitation to our development of a sentient artificial intelligence is our model for the human mind and how intelligence works. The model is extremely limited and poorly understood, even by neuroscientists and brain surgeons. However, given enough research and time, that particular scientific model will progress towards higher levels of correctness (which is interesting in a different way, because it would be the first model which is self-aware).

As far as intelligence goes, I think it's more of an emergent property of our neural topology. There's nothing magical or fancy about it, and to some people who wish to see magic where it doesn't exist, this may be disturbing on an existential-crisis type of level. A narcissistic part of our identity wants to believe we're unique and special, but the reality is that we're really not, and that may be hard to deal with.
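A toy illustration of that emergence (hand-wired by me, not a claim about real brains): no single threshold "neuron" below can compute XOR on its own, but three of them wired into a two-layer topology can. The capability lives in the wiring, not in any one unit.

// Toy emergence: a single threshold "neuron" cannot compute XOR, but
// three of them wired into a two-layer topology can. The capability
// comes from the topology, not from any individual unit.
#include <iostream>

// One McCulloch-Pitts style unit: fires if the weighted sum crosses a threshold.
int neuron(int a, int b, int wa, int wb, int threshold) {
    return (wa * a + wb * b) >= threshold ? 1 : 0;
}

int xorNet(int a, int b) {
    int h1 = neuron(a, b, 1, 1, 1);   // OR-like unit
    int h2 = neuron(a, b, 1, 1, 2);   // AND-like unit
    return neuron(h1, h2, 1, -1, 1);  // fires on "OR but not AND" = XOR
}

int main() {
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            std::cout << a << " XOR " << b << " = " << xorNet(a, b) << "\n";
}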

To say that creating sentient artificial intelligence is "impossible" is a completely foolish and absurd claim which hints at a level of unawareness/ignorance on your part. Just because you don't know how to do it doesn't mean it isn't possible. Although our current scientific models for general intelligence have big gaps, there is no guarantee that those gaps will continue to exist far into the future. You just can't say with reasonable certainty what type of technological achievements will never be possible, because you'd just be applying modern ignorance to the future.

Super Intelligent Humanoid Sophia looks very creepy indeed.

World's first robot citizen 'Sophia' gets her own legs 

can't help being grumpy...

Just need to let some steam out, so my head doesn't explode...

Damn robots!!!

 

"I wish that I could live it all again."

The dude in the first post said AI is impossible because it isn't the brain that controls us but our 'soul' or 'spirit' or whatever, and therefore intelligence would be impossible to replicate.

If you ask anyone who is into religion and the Bible, they will tell you that animals have no soul.

https://heritagebbc.com/bible-question-and-answer-archive-1/iii-1-do-animals-have-a-spirit/

So where does their intelligence and capacity for learning come from then?

 

11 hours ago, slayemin said:

To say that creating sentient artificial intelligence is "impossible" is a completely foolish and absurd claim which hints at a level of unawareness/ignorance on your part.

Huh, that's interesting, because I've always stated the exact opposite.

 

11 hours ago, slayemin said:

I think the greatest limitation to our development of a sentient artificial intelligence is our model for the human mind and how intelligence works. The model is extremely limited and poorly understood, even by neuroscientists and brain surgeons. However, given enough research and time, that particular scientific model will progress towards higher levels of correctness (which is interesting in a different way, because it would be the first model which is self-aware).

For me, this is a huuuge stretch. Though I won't say it's impossible, I think that when people talk about creating sentient or self-aware machines, the way they currently frame the discussion is way, way off. Especially if someone thinks it's gonna come out of a computer science laboratory.

All this being said, though, I think this brings up something important about the nature of our subjective experience, or the nature of our self-awareness. The two philosophical camps are these: that our everyday conscious experience is an illusion, a by-product of neural activity, and we are just passive observers hopelessly clinging to the idea that free choice is a thing and that we exert some influence on our lives; or, alternatively, that we do have free choice and that it's because of our conscious decision-making that we conduct ourselves the way we do. The former position is held by most of the skeptical community, Daniel Dennett, Sam Harris and others. The latter is held by the majority of people, including myself. However, if free choice is actually an influencing agent in the universe, that must mean it abides by some sort of rules, measurable rules that perhaps science could peer into.

Now that being said, though, slayemin, scientific inquiry is something which is abstracted outside of our conscious experience. It is a tool we use for understanding those things which are reducible and which can be repeatedly studied and analysed by others. Trying to use science to understand how self-aware systems work is next to impossible, because, as someone else pointed out earlier in this thread, how do you really know anyone else has a subjective conscious experience? For this reason I don't think our current model for analysing the world is going to provide significant insights. And I'm 99.999999% confident no computer science lab is gonna create a self-aware machine.

