I really don't think we have decided on anything at all. I think the term AI means a machine that can learn and feel like a human, and there is definitely nothing in there about surpassing human intelligence--I don't think superior powers are implied at all. What is implied are feelings and an ability to learn, for that is what is hardest to program.
we've already covered what AI SHOULD be. it should surpass human thought processes, and have "Superior powers of mind," like in zack's dictionary ... thing. SO, if we're making artificial intelligence, we should compare it to us, having us be ranked under standard powers of mind. that way, the AI should be better than us, and we should try to get rid of our flaws. once we make the AI, it will know its own flaws as well, and it could either remake itself with even fewer flaws, or make a completely new version with fewer flaws. but I don't think perfection will ever be reached. ESPECIALLY not if we include our OWN flaws when we're trying to make something superior. ... I think I got off topic...
You're right, perfection is completely subjective, it's different for every person. But still, why would you build an intelligence that would CHOOSE to stay at its current level? Why would that be an advantage? As for wanting to be normal, we are still building a robot, not a human, and if we are building this robot to be smart, it will still understand that it is not a human and is in fact designed to be better than them.
I think for our purposes, perfection means self-improvement, self-awareness, and an understanding of humans with a desire to help them. Come to think of it, that's the definition of a perfect human as well.
PS: A message for our faithful (non-existent) readers - You can comment on posts by clicking THINK at the upper right-hand corner of every post so GET TO IT!!!!
Perfection is a very personal thing. Perfect for me may be way too high a standard for someone, and way too low a standard for someone else. I think that perfection is to have something that has the ability to choose what it wants, and has the ability to stay at its level, or surpass it, depending on its feelings and the environment. In a lab setting, it may become all it can be due to the influence of people around it, but in reality, it may want to be normal like any other person.
hello! it's katie. i'm invading. i've been told not to be too idiotic so you don't have to worry. well, maybe you should worry a little bit. after all, it is me...
so! sticking to the topic. artificial intelligence. hmm. it seems you guys have two different definitions of the perfect AI; some people believe it should be completely humanlike to be perfect, some people think it should surpass humans and lose all the qualities that make us weaker, making it some sort of invincible superhuman. i don't know. i don't really think perfection is possible. if we're trying to define intelligence, we should make a clear definition of perfection, too.
Alex then said this on IM, but he was too lazy to post it:
you're right---but by my definition of perfect AI---it must be able to discriminate or form societies. why make organisms afraid of becoming more advanced---we're not specifically afraid of that, we're afraid of what the stronger and better versions might do
So your definition of perfect AI, then, is to include all the human flaws that make people want a perfect artificial friend? No, by making AI you should be improving upon humans, not just remaking them, defects and all. As for being afraid of what the stronger and better versions might do, the only way to not have that fear is to keep getting better. To adVANCE!
PS: Now THAT is a skillful quote, if I ever did see one.
But Alex, a human who is resistant to changes that help it is not a healthy human. The changes the AI makes to itself ARE self-preservation. If it didn't constantly make itself better, inevitably a better program would emerge, and our hero would be obsolete. And in the world of software, obsolescence is death. The bottom line is that AI is NOT human, and it would know that. The AI would not be part of a race, and it wouldn't have to worry about discrimination, and it would know that. Why build a robot afraid of becoming more advanced?
ok....what I mean is that to me, perfect AI would be human AI, with whatever processing speed we could manage--it really doesn't matter. what is important is that it can feel human emotion and desires, and the strongest desire in the human body is the desire for self-preservation, something that must be instilled in perfect AI. and because there is a need for self-preservation, it would not allow itself to be changed too much, because that would not be self-preservation--much like the x-men and regular humans, only different by a few chromosomes, but still fighting out of fear for self-preservation, or bean and humans, within the range of normal variation, but still discriminated against out of fear that the human race will change into something inhuman. i figure everyone should get at least one of those examples...if you don't, ask--probably won't post for a while..i'll be away--be back in about 2 weeks
why would we want AI to be human? AI has SUPERIOR powers of mind, not just EQUAL powers of mind. and I strongly disagree with alex about limiting change. that is relying on tradition instead of fact, which is ignorant and blind to the truth.
Man, you guys need to be more clear on what you say. And it's a he now? Whatever, my response to what I THINK you said, Bruno, is why would it need to change its programming to make a decision? That's the point of what we are talking about: a program intelligent enough to enhance its own efficiency through changing its own programming. If it needs to change its programming every time it makes a decision, it's failing miserably.
Well, AI would also need to be able to change his program if he wants, not to make it more efficient, but to make any decision that comes along. They'd need to be able to make irrational decisions to be human.
I'm sorry, Alex, but that was REALLY unclear and I don't get it. But either way, what is too much change? I'm not who I was 10 years ago, or even 3 years ago. What you're saying is exactly the opposite of AI; an intelligence that is resistant to or ignorant of change is not a very high one. I think we need a definition of real intelligence to know what ARTIFICIAL intelligence is.
The capacity to acquire and apply knowledge.
The faculty of thought and reason.
Superior powers of mind.
Well, I guess that's something, but I think there should be something in there about adaptation. So an AI is a program that can acquire and apply knowledge, reason properly, and adapt to new situations. Thinking about it now, that seems pretty possible, with the right technology.
I really don't understand how you can justify that they are always changing----to the degree that they are changing---so are humans, to make true AI, it would have to be instilled with a value of self-preservation, which would mean that it would never change too much in fear of not being itself anymore
I guess you're right Kevin. A perfect AI could never exist, because a perfect AI would be able to deal with anything, but you can't anticipate everything, and neither could even a perfect AI. Unless it had precognitive abilities. Now THAT would be a challenge.
PS: KEVIN! WHERE'S MY POLL THANG???
PPS: Maynard is the singer for TOOL, and I said for Maynard's sake instead of God's or Pete's or Christ's because I believe in Maynard a lot more than I believe in those other people.
I had a really good post but I accidentally deleted it, but it was something like this. You'd have to have perfect people to have perfect AI. People aren't perfect because they have the ability to think when they make decisions and still make the wrong decision, sometimes multiple times in the same repeated situation. So we can't have real perfect AI until we have experienced perfection.
P.S. Only add comments, and sorry for the thingy, won't happen again. P.P.S. If you have Media Player, go to http://www.angells.com/fun/ieskins/wmpskins/index.htm for cool skins
a perfect AI would never be done. the perfection wouldn't last. it would be constantly changing due to what it did. if it was always doing the exact same thing over and over, it might reach a perfection, but there is no real point in a computer that adds the same two numbers again and again. if it had real data, the data would never be the same, and slight variations would make the computer edit its code over and over again. it would never stay the same for long, because there is no perfect AI.
Well, it sounds like we're all in agreement. At least, Kevin, Tess, Alex and I, but there are still a bunch of other people who aren't here. Oh well. Maybe I'll type up an "informal suggestion" page and host it on my site. Then whoever wants to see the "informal suggestions" can. Anyway, I think the AI thread has kinda fizzled out, unless you guys have something more to say, but I'm empty. Actually I'm not. Heh. The question that comes to mind for me about Alex's first post on the 29th is: what is perfect AI? If you're thinking of a robot with incredible logic and problem-solving skills and adaptation to changes in circumstances, then yes, a less-than-perfect AI programmed to refine and innovate its own code could create perfect AI. But if you think of perfect AI as a simulation of the human psyche, then that could never happen, because human intelligence is tailored perfectly to reside in a human body. There are all sorts of situations that affect the human psyche that just don't happen to robots. For example, a lot of people get irritable when they are hungry. Robots don't get hungry. Also, people aren't programmed, and the only way you can give instructions to a robot is by programming. There would have to be a new way to drive the intelligence of the robot. These aren't really questions, they're only open ideas, but it's something to think about.
I know how to deal with this comments/subject lines thing.
Zack completely summed up what I was trying to say, except he said it much better. The main thrust is this: if you type, be saying something, and think before you type, because one thing we do not wanna do (I have a post about this on my xanga-link on left) is delete entries, like bruno's middle finger. please say stuff, that's what makes the site good, but please also don't say silly stupid pointless stuff-and now we can get back to the lovely synthetic life-immortality-etc. discussion
I agree with you Alex, we all say dumb stuff, and have our stupid little typing quirks. I like the way it is now, no rules, but maybe we should have...an informal...suggestion...that if you're gonna post something, make sure you're saying SOMETHING about the topic at hand, or suggesting a new topic if the last has tapered off. Although it's important to me that this site looks cool, and I really like doing all the coding and such, we really shouldn't have whole conversation threads about what color people's text should be (yes, I know I've started most of those bits). Stuff like that should be like, a short comment at the beginning or end (for me, preferably end) of a sincere addition to the discussion. I'm not saying you can't have fun and be silly, but contribute -- stuff like Bruno's stupid pseudo-ASCII middle finger (he deleted that post, so we're ok now): NOT COOL!!!!! Also, personal stuff, unless it's part of some anecdote, or you're trying to make a point, doesn't belong here. Most of us already have our own pages to blab on, and if you don't, you already belong to Blogger if you're on here, so it's no big deal to just make another blog for yourself, and you can put whatever you want on it. We can even all have links to our homepages there to the left, some of us already do. And if you other guys think Alex and I are totally wrong, don't just shut up and carefully read every post you make thinking, "Is this ok with Alex and Zack?" If you have something to say, SAY IT, for Maynard's sake!!!!! In fact, I WANT you to have something to say, it's what makes it interesting, which is what this is all about. Just THINK about it.
(PS: HERE'S where I talk about un-topic stuff - I STILL don't know for sure if anyone wants comment stuff or subject lines (except kevin). So tell me what you want. Or I'll just do what I want, and I know you don't want that. Also, if anyone wants a couple (that means two, and no more) of links in that list there to the left, just give me the URLs and titles for the sites. Post 'em here (as long as you say something else too) or IM me or e-mail me, whatever, I'll take care of it. Unless you just don't care, in which case you should tell me that, too.
PPS: Just a suggestion, Alex, but if you're gonna do that hyphen thing, I think one dash at a time is enough, you know? Or periods, and commas, eh? Just something to ponder on...)
well, now I have another post---one off topic-----I want everyone to read the second word of the title---THINK-----that is the purpose of this site----and there is some excellent thinking going on here and some excellent conversations------however---there is also a lot of crap----some people don't seem to understand that the purpose of this site is to have deep interesting conversations, where we get so confused because our arguments are so deep that we accidentally take the other person's side------the purpose is not to spend weeks discussing what color the site should be or what the name should be------THEY DON'T MATTER-----the only purpose of this site is to be a forum------it's not supposed to look pretty or have a nice name----that's all tertiary----sure that's nice if we wanna------but NOT THE PURPOSE-----let's keep with the good discussion here---and if you don't feel deep or you can't keep to that level of seriousness---please don't post at all, almost all of us have our own blogs or xangas or webjournals for silly musings----this site is for something deeper than that.
I must disagree with Tess's post on July 28th at 1:36:20 PM eastern standard time------why can't machines evolve----make a machine the right way----and it could refine its own code---there's an interesting theory on matrix essays regarding something similar to this-----to create perfect AI--feeling AI----you could use less-than-perfect AI, and eventually the result could be perfect AI
I definitely wouldn't want to be immortal. that would mean I would be almost completely artificial, and I would either be insane or not me anymore. if whatever was killing off my organs got to my brain, it would be pointless to replace my brain, because I would be a completely different person. with a different set of memories, I wouldn't have my personality. I would have to relearn EVERYTHING. it would be like switching the hard drive on a computer, you have to reinstall everything. the computer doesn't have any of the old problems, it has nothing of the old computer other than the same capabilities. replacing someone's brain would be a complete waste of time, and would halt evolution. why keep old outdated people running when you can make new people more easily?
Kevin, all you said about making humans is true, but I'm curious, do you really want to live forever? do you think it's a good thing? and we wouldn't really have the resources to immortalize every human on the planet. loved ones would die off, and there wouldn't be all that many people left, since it would take tremendous effort to build even three people, ya know. It's a nice thought, never dying, but at the same time it's a bit irrational. There are so many aspects to immortality that many people do not realize. It's not all that glamorous.
exactly....there's no possible way to create a human with real emotions out of something that didn't start human. Humans have evolved just like you said, Zack, but at the same time a robot can't evolve as a human. It's utterly impossible. yeah, like both of you guys said, they can possibly evolve technologically, but all the same there are just too many damn elements to a human mind. But about making humans, there are a lot of reasons why that could cause problems. heh heh, besides the fact that we have too many of us anyway, I personally feel like it would be morally wrong. Even if we managed it, and the human-thing actually could feel emotions and give and receive other elements of these emotions, imagine how the thing WOULD feel. I know I wouldn't want to know that I was just some experiment and I didn't have parents. Besides, why would we want to make humans if we already have enough of that going on here. Humans breed like rabbits already, we don't need to make more destructive creatures because we think we can play "god"
by the time they figure out how to make enough spare synthetic human parts to build a person, there will be much more research done on the human brain, and it will probably be understood. but, by then, life spans might shoot up suddenly, because if your organs fail there would be a legal, ethical, and easy way to replace them.
you could build people at any age. you could very easily build them with organs that don't grow. in that case, people would be like cars or complicated machines today, just need a tune-up every now and then.
lastly, if someone found out how to stick together a bunch of working human parts and make a live human, people could live FOREVER. if you die once, they could just keep your blood moving to your brain until they figure out why you died, then replace that organ, zap life into you, and you'd be alive again.
That would be truly strange, to be able to manufacture humans. That could go anywhere. With enough knowledge, you could build just about any organism you wanted. Unicorns, dragons, whatever. That would be...interesting.
With humans, organs are no problem, it's just all about getting the right chemical reactions to occur. The real obstacle would be the brain, because no one really knows how it works, like how it stores memories and where emotions come from. And how would you get the body to start? Shock the heart? But how do you get the heart to keep going after you've shocked it, without the brain?
(Maybe we should all go read Frankenstein)
Speaking of Frankenstein, what if artificial people COULD be made? What about God, right?
Would you build the human as an infant, or as an adult?
PS: You guys still haven't told me whether or not you want subject lines and/or comments!!! C'mon, I live to serve!
you can give a computer anything. anything can be done twice, everything can be predicted. you could program a computer to be human, but it would be pointless, it would just make the program run slower. and, if you also programmed a nonhuman part to edit the program and raise efficiency, it would just take out the human-making parts... then we would be back to where we were before.
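(Side note from your friendly editor-type: kevin's point about the efficiency pass stripping out the "human-making parts" can actually be sketched as a toy program. Everything below is made up for illustration -- a "program" is just a list of named steps, and the optimizer drops any step whose removal never changes the output, which is exactly what would happen to simulated-human steps that slow things down without affecting the answer.)

```python
# Toy sketch: a program is a list of (name, function) steps; an optimizer
# pass removes any step that never changes the final result on test inputs.

def run(steps, x):
    """Run the value x through every step in order."""
    for _, fn in steps:
        x = fn(x)
    return x

def optimize(steps, test_inputs):
    """Greedily drop steps whose removal leaves the output unchanged."""
    kept = list(steps)
    for step in list(kept):
        trial = [s for s in kept if s is not step]
        if all(run(trial, x) == run(kept, x) for x in test_inputs):
            kept = trial  # step was dead weight; drop it
    return kept

program = [
    ("double", lambda x: x * 2),
    ("hesitate_like_a_human", lambda x: x),  # simulated "humanity": costs time, changes nothing
    ("add_one", lambda x: x + 1),
]

lean = optimize(program, test_inputs=range(10))
print([name for name, _ in lean])  # the simulated-human step has been optimized away
```

So the efficiency-raising part really would "take out the human-making parts", at least in this tiny model, because they are indistinguishable from dead code.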
if we wanted to make a human, it could be done. eventually, there will probably be synthetic organs and bones and everything, to put to rest the arguments about stem cell research and cloning humans to harvest organs. then, we would just have to put together a bunch of pieces and give it a brain and we'd have a synthetic human that would have all of the same hormone fluctuations, moodiness, and irrationality.
That's true, Tess, we'd just be creating more stuff to oppress. Technology always gets better, but at a human pace. If there WERE robots like that, they would progress, technologically, so much faster than us. Also, you have to figure out why humans have emotions to see why you'd want emotions on a robot. Sure, the robot would be easier to relate to and work with, but if you think about it, everything we have in us we evolved, and everything an organism evolves is to help that organism reproduce. So you can't just make a human consciousness for a robot, because it's not a human. Problem-solving skills, ok. Emotions, desires, maybe not.
There is just no point to "giving" emotions and the like to AI. And generally speaking, programs all have limits, but it's entirely possible that sooner or later, just like Kevin said, these machines are gonna start getting too smart and i guess you could say self-programmed, and the world is gonna be run by super smart machines. fun. has anyone seen the movie AI? Ok, that movie was slow, and didn't end when it should have, but I think it did an awesome job of explaining the absurdity of programming a machine to receive and give love/other feelings. It just doesn't work and in no possible way can it be considered "real". There are too many elements to emotions to program into a computer. The best we can do is give away something unique to organic life claiming it to be the dawn of a new revolution, yet at the same time we are creating something that is not programmed to stop at anything, so when it gets too powerful, how the hell are we gonna terminate it? humans suck.