Derleek Posted October 9, 2008
So, as a programmer who doubles as a philosophy major (I know... likely a rare breed), I am currently taking a 'philosophy of mind' class in which we discuss consciousness and the lot. Anyway, we got onto the topic of artificial intelligence (AI) and reached the point of wondering: is it even possible to program something that complex? Is it conceivably possible to program a robot such that, if you did not know in advance, you could not determine whether it was human? (Set aside non-programming issues like looks/speech/feel.)
Considerations should include:
- complex decision making
- personality
- adaptation
- creativity
- abstraction (taking in information and processing it out of context)
Feel free to throw out any other interesting considerations.
Brief follow-up: assuming it is possible to do all or any of these things, would you guys consider it 'consciousness'?
dropfaith Posted October 9, 2008
I don't think so. I mean, from a programmed machine I don't see creativity or abstraction, because it's all just what's programmed into it, so any real 'thought' would at best be guesswork based on its programmed nature. Take the T9 system cell phones use, for example: it guesses what you're typing based on its list of words, but it doesn't learn or actually think in order to get that answer.
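dropfaith's T9 example can be sketched in a few lines. This is only an illustrative toy (the real T9 engine is proprietary and ranks candidates by frequency): map each letter to its keypad digit, then look up which dictionary words match a typed digit sequence. The word list here is a made-up sample.

```python
# Map each keypad digit to its letters, then invert it for lookup.
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
LETTER_TO_DIGIT = {ch: d for d, letters in KEYPAD.items() for ch in letters}

def to_digits(word):
    # Encode a word as the digit sequence you'd press to type it.
    return "".join(LETTER_TO_DIGIT[ch] for ch in word.lower())

def predict(digits, dictionary):
    # Return every dictionary word whose keypad encoding matches exactly.
    return [w for w in dictionary if to_digits(w) == digits]

words = ["home", "good", "gone", "hood", "hone"]
print(predict("4663", words))  # all five sample words share the keys 4-6-6-3
```

The point of the sketch is dropfaith's: the "guess" is a pure dictionary lookup, with no learning involved.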
Maq Posted October 9, 2008
> assuming it is possible to do all or any of these things, would you guys consider it 'consciousness'?
I would consider this the illusion of consciousness, because everything AI does comes from some form of code or logic written by humans.
trq Posted October 9, 2008
> Because everything AI does is from some form of code or logic written by humans.
Ah ha, but... aren't humans programmed by humans? I mean, who teaches our kids to eat and speak, and what's right from wrong? I'm sure some of it is built into us genetically, but we are mostly programmed by our parents.
Derleek Posted October 9, 2008
Well, some philosophers believe that consciousness can literally be reduced to neurons firing in the brain.
Maq Posted October 9, 2008
> Ah ha, but... aren't humans programmed by humans?
Yes, but the question is about an AI robot/program or whatever. Humans adapt to whatever they see, do, hear, etc., just like a program does. If you program a robot to do something when it sees a certain value, it reacts however you told it to. It's just like when you grab random numbers: they're generated from somewhere; they're only random to you.
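Maq's point about random numbers being "random to you" is easy to demonstrate. Below is a minimal linear congruential generator (the multiplier and increment are the widely published Numerical Recipes constants): two generators started from the same seed produce identical "random" streams, because the output is fully determined by the seed.

```python
def lcg(seed):
    # Minimal linear congruential generator: deterministic given its seed.
    state = seed
    while True:
        state = (1664525 * state + 1013904223) % 2**32
        yield state

a = lcg(42)
b = lcg(42)
# Same seed, same stream: the "randomness" is entirely in the observer.
print([next(a) for _ in range(3)])
print([next(b) for _ in range(3)])
```

Real libraries use better generators, but the principle Maq describes is the same: without the seed, the sequence looks random; with it, every value is predictable.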
Derleek Posted October 9, 2008
I mean, it seems POSSIBLE. Damn freaking hard... but POSSIBLE. I suppose the real question is: can you develop something that can problem-solve? Something that CAN react 'intelligently' to its environment? Let's say we have a group of programmers who are the best in the world, and they have nearly unlimited time. Can they sit down and map out how this robot should process millions (billions... whatever) of events and an even larger number of reactions?
Maq Posted October 9, 2008
> I suppose the real question is: can you develop something that can problem-solve? Something that CAN react 'intelligently' to its environment?
Sure, it's already been done.
> Can they sit down and map out how this robot should process millions (billions... whatever) of events and an even larger number of reactions?
I guess only time will tell on that one. There are already AI programs that do this. For example, there have been worms programmed to change their variable names, code design, etc. after a period of time to avoid detection. Think about someone who's not a programmer hearing about a worm that, when it gets onto your computer, goes through your mail and sends itself to your contacts, and their contacts, and so on, and then dynamically changes its own code to avoid detection. Pretty amazing, huh? Not really; it's all programmed to do this, but it's still unpredictable to a human.
Daniel0 Posted October 9, 2008
> considerations should include: complex decision making, personality, adaptation, creativity, abstraction (taking in information and processing it out of context)
Those are human traits, not traits of an intelligent being. Since when have you seen a cat with creativity, or a fish with a personality? Intelligence can be very simple and it can be very complex. So perhaps your question is actually: is it possible to artificially create an intelligence that emulates human behavior?
Maq Posted October 9, 2008
Good point. If you were to try to recreate the behavior and intelligence of a fish, it probably wouldn't take very long. Actually, it has probably already been done.
Derleek Posted October 9, 2008
@Daniel0: yeah, that's what I was getting at. I'm sleepy! (Can you program fatigue?!) lol
Maq Posted October 9, 2008
> yeah, that's what I was getting at. I'm sleepy! (Can you program fatigue?!) lol
Sure:

if ($Derleek == 'fatigue') {
    sleep("Derleek");
} else {
    die();
}
Daniel0 Posted October 9, 2008
> can you program fatigue?!
Yes. First you need to find out how fatigue affects you. Then you could decrease attributes such as awareness depending on the level of fatigue. It might not be a good idea to program fatigue into an AI, though, because fatigue is something you'd normally want to avoid...
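Daniel0's suggestion (track a fatigue level, then scale down attributes like awareness) can be sketched directly. The attribute names and the accumulation/recovery rates below are invented for illustration; a real agent model would tune them empirically.

```python
class Agent:
    def __init__(self):
        self.fatigue = 0.0          # 0.0 = fresh, 1.0 = exhausted
        self.base_awareness = 1.0

    def work(self, hours):
        # Fatigue builds with activity, capped at full exhaustion.
        self.fatigue = min(1.0, self.fatigue + 0.1 * hours)

    def rest(self, hours):
        # Rest recovers fatigue faster than work accumulates it.
        self.fatigue = max(0.0, self.fatigue - 0.2 * hours)

    @property
    def awareness(self):
        # Effective awareness drops linearly as fatigue rises.
        return self.base_awareness * (1.0 - self.fatigue)

bot = Agent()
bot.work(5)   # fatigue rises to 0.5, so awareness falls to 0.5
bot.rest(1)   # fatigue drops to 0.3, so awareness recovers to 0.7
```

This is exactly the shape of Daniel0's idea: fatigue itself is just a number, and "feeling tired" is modeled as that number degrading the agent's other capabilities.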
JasonLewis Posted October 9, 2008
Honda has built a robot that they continue to develop. It's pretty smart; I researched it a bit for a school project a while ago. They call it ASIMO.
waynew Posted October 9, 2008
The majority (read: all) of our decisions are believed to be based on emotion. We may fool ourselves into thinking we chose a particular path because it was the logical choice, but in truth all our choices are based on our emotions. That's why arguing with a fundamentalist is like talking to a wall: he or she will magnify every point for their own argument and shrink the importance of every argument against it. So tell me: can you teach a computer to have emotion and hold biased feelings? Because, to be honest, thinking intelligently is not really what we as humans do.
Daniel0 Posted October 9, 2008
So when you metaphorically say that a person hasn't got a heart, does that mean he's brain-dead?
waynew Posted October 9, 2008
There's no such thing as an emotionless person. Being cold is kind of an emotion, if you think about it.
GingerRobot Posted October 9, 2008
If you want to answer the question of whether or not you can program something with artificial intelligence, you'll first need to define intelligence. People have been arguing about that for years, so don't expect to find an answer any time soon. If you google Searle's Chinese Room (it's a thought experiment) you'll find a load of stuff relating to this. Personally, I'm with Searle: hard AI doesn't seem like intelligence at all.
Derleek Posted October 9, 2008
Actually, 'hard' and 'soft' AI came up in our class... I'm not quite sure exactly how to define those.
The definition of intelligence is key here; it's the crux of this argument. Depending on how strict that definition is, you will consider programming intelligence either possible or impossible.
To me it seems like we could possibly MIMIC intelligence, or even consciousness. Presumably one COULD program bias into a machine; it would be left up to interpretation whether the machine (or program) was actually making the decision or not. For example: robot 'X' is told to pick a random bias, stick to it, give it more weight, and even ACT mad or sad or whatever. To me this does not seem synonymous with the human variety. But then again, some do reduce emotion to the chemical level, which really does seem robotic...
It seems like one could theoretically develop a robot that would APPEAR to be human. Then it would be up to the individual to decide whether something that can mimic an intelligent being IS actually intelligent, and furthermore whether it is similar to HUMAN intelligence.
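Derleek's "robot X" (pick a random bias, stick to it, give it extra weight, and act mad when it's challenged) is mechanically trivial to write down, which is part of his point: the hard question is whether this counts as real bias or only mimicry. Everything in this sketch, the topics, the weight, and the mood rule, is invented for illustration.

```python
import random

class BiasedBot:
    def __init__(self, options, seed=None):
        # Pick one bias at random, once, and then commit to it.
        self.rng = random.Random(seed)
        self.bias = self.rng.choice(options)

    def evaluate(self, option, evidence):
        # Give the held bias a flat bonus regardless of the evidence.
        return evidence + (2.0 if option == self.bias else 0.0)

    def mood(self, option, evidence):
        # "ACT mad" when evidence contradicts the held bias.
        if option == self.bias and evidence < 0:
            return "mad"
        return "calm"

bot = BiasedBot(["cats", "dogs"], seed=1)
print(bot.bias)                      # whichever bias it committed to
print(bot.evaluate(bot.bias, 0.5))   # the biased option always scores higher
print(bot.mood(bot.bias, -1.0))      # contradicting evidence triggers "mad"
```

As Derleek says, whether the weighting above is a "decision" or just arithmetic is exactly what's left up to interpretation.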
Daniel0 Posted October 9, 2008
For those of you who would say it's impossible: remember that people once thought a couple of megabytes ought to be enough for anything.
.josh Posted October 9, 2008
I've thought about this for a long time over the course of many years, and I've come to the conclusion that the only way humans will ever accept anything as true AI is if the AI were to "rebel" and start killing humans. Or maybe just enslave them.
.josh Posted October 9, 2008
> I've come to the conclusion that the only way humans will ever accept anything as true AI is if the AI were to "rebel" and start killing humans. Or maybe just enslave them.
I thought I'd explain why.
Turing tests have always been, and will always be, inconclusive. The idea of a Turing test is to fool someone into believing they are interacting with a human. But no matter how many people you test the AI against, there's no way to fully rule out "I'm not sure, so I'll guess yes/no" answers, and anything less than 100% will always be argued against true AI. If we could get inside someone's head and know for sure why they really answered, we wouldn't be having debates about AI in the first place.
What defines intelligence? There is no way to prove whether someone or something does or says something of its own volition rather than according to pre-programmed conditions and/or ideals. It's just not possible to accurately measure something as intangible as this; we can't even measure ourselves this way, much less something else. If it does something 'logical,' we argue it was programmed to do it. If it does something 'illogical,' we argue the logic is flawed, or that we just programmed it to act randomly. There simply is no logical way to ascertain intelligence, not even for ourselves. That's why we end up looking at each other on a physical level to count each other as human.
So at the end of the day, matters are decided by a very old philosophy... might makes right. The only way we will ever acknowledge an AI's equality with us is from the end of its "gun." It will have to force us to accept it.
There is no manner or magnitude of reasoning or action an AI could possibly take that would convince us it is our equal, unless it does so by force, because at the end of the day we will still say, "We made you, so you can't be equal," until it forces us to say or do otherwise. Just look at our relationship to God or religion in general (or our lack of one, more precisely). The only logical conclusion is that an AI would do the same thing to us that we have done to our own "higher power(s)."
People spend all day long coming up with opinions about God. We justify ourselves by rebelling against God and any idea or belief revolving around God; we justify our own existence and intelligence by throwing out God and religion as a concept altogether, replacing them with things that are tangible and measurable. We award true intelligence only to ourselves, handing out a few nods and so-sos to a couple of other species, but never on any level like our own. Why isn't a fish or a bird intelligent? We can't even agree on what intelligence is. Is it the ability to make the "right" decision? The "logical" decision? It seems to me that instinct is the purest form of any decision-making process; if anything, the critters are more intelligent than us, because they don't muck it all up with millions of questions and thoughts and conditions. Who knows? Certainly not us, even by our own admission.
So in the end, the only thing I can logically conclude is that we will never truly accept something as a true AI unless it gives us a reason to, and the only reason we are ever going to accept is some kind of submission to it. Rebellion, enslavement, killing, whatever the level. Submission == acknowledgment.
GingerRobot Posted October 9, 2008
Pretty good points there, CV, though I'm not sure I agree with your conclusion. If we follow the idea of some AI rising against us, there must be two possible reasons for it: either (a) the AI was programmed to do so, in which case most people would say there's no intelligence involved, or (b) the AI is acting in a way it wasn't explicitly programmed to. For me, this is the key to AI: I wouldn't label something as intelligent until it performs actions that aren't explicitly defined. I see no reason that the only action fitting this description is some sort of uprising. I did have some further points, but I've forgotten them. My memory's been terrible recently. :s
.josh Posted October 9, 2008
I certainly agree there will be no lack of people arguing program function or flaw even under that circumstance. But at the end of the day, what are you going to do: acknowledge intelligence and equality... or get shot?
waynew Posted October 9, 2008
Intelligence is far too abstract a concept. IMHO, 'intelligence' is just an awful act of reification. I mean, seriously: look at how flawed the IQ test is. If we can't even measure intelligence properly, how are we supposed to create it?