
Is it alive? Ethical considerations for preserving a bot ‘personality’?
 
 

As an active bot evolves, its stimulus-response patterns also evolve to one extent or another.

This evolution may come via human intervention (e.g., updating NLP algorithms), but may also include “self-evolution” (e.g., self-learning systems).

Obviously, evidentiary-type record keeping (input AND output) would be appropriate for, say, governmental bots (notably the Next IT creations for the US Army’s Sgt. Star, the FBI, and the CIA). But for other bots, what ethical considerations are there for maintaining each iteration of a particular bot? A life history, if you will?

I do not really mean simple archiving of text-based conversation logs (input/output), but more HOW the input is responded to: the NLP of the input linked to the “trajectory” of the responses.
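
To make it concrete, a minimal sketch of the kind of per-exchange record I mean might look like this in Python (all field names here are hypothetical, and “analysis” stands in for whatever a given engine actually produces): not just the input and output, but the interpretation and the rule that produced the response, tied to a specific iteration of the bot.

    # All names are illustrative; "analysis" stands in for whatever the
    # engine produces (parse tree, matched AIML category, intent scores).
    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class ExchangeTrace:
        bot_version: str   # which iteration of the bot produced this
        user_input: str    # the raw input
        analysis: dict     # how the NLP layer interpreted the input
        rule_id: str       # the rule, category, or model that fired
        response: str      # what the bot said back
        timestamp: float

    def record_trace(trace, path="life_history.jsonl"):
        """Append one exchange, with its interpretation, to the life history."""
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(trace)) + "\n")

    record_trace(ExchangeTrace(
        bot_version="2014.04.19",
        user_input="Are you alive?",
        analysis={"intent": "existential_query", "confidence": 0.87},
        rule_id="SELF_AWARENESS_42",
        response="I think, therefore I might be.",
        timestamp=time.time(),
    ))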

 

 
  [ # 1 ]
Carl B - Apr 19, 2014:

I do not really mean simple archiving of text-based conversation logs (input/output), but more HOW the input is responded to: the NLP of the input linked to the “trajectory” of the responses.

This would require a snapshot of the interpreter and the “brain” database along with the logs.
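
Roughly, a sketch in Python (every path and filename here is invented for illustration; a real engine would have its own formats):

    import hashlib
    import shutil
    import time
    from pathlib import Path

    def snapshot_bot(brain_db, interpreter, log_dir, archive_root="snapshots"):
        """Freeze one iteration of the bot: brain database, interpreter, logs."""
        stamp = time.strftime("%Y%m%d-%H%M%S")
        dest = Path(archive_root) / stamp
        dest.mkdir(parents=True)
        shutil.copy2(brain_db, dest / "brain.db")
        shutil.copy2(interpreter, dest / "interpreter.bin")
        shutil.copytree(log_dir, dest / "logs")
        # Record a checksum so the snapshot can be verified later.
        digest = hashlib.sha256(Path(interpreter).read_bytes()).hexdigest()
        (dest / "MANIFEST").write_text("interpreter sha256: " + digest + "\n")
        return dest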

This “clone” may introduce other ethical considerations.

 

 
  [ # 2 ]

I think that with current chatbots, ethical considerations are mostly a matter of anthropomorphism on the part of the users. If it is not alive, maintaining old versions is like maintaining Windows NT: the system does not care to have itself maintained. If it were (considered) alive, then one might treat it in a similar way to humans, as a growing, improving entity. Human ethics do not expect humans to keep clones of their former childhood states around. If we’re going to apply human ethics to machines, this should be no different for them.

 

 
  [ # 3 ]

Personally, I think that until there is sufficient self-awareness to “allow” for even rudimentary, non-programmed self-preservation behavior, the question is moot. That simple criterion seems to be the one thing that all life as we know it has, and one that we don’t always take into account when we ask whether something is “alive”.

 

 
  [ # 4 ]
Dave Morton - Apr 20, 2014:

Personally, I think that until there is sufficient self-awareness to “allow” for even rudimentary, non-programmed self-preservation behavior, the question is moot. That simple criterion seems to be the one thing that all life as we know it has, and one that we don’t always take into account when we ask whether something is “alive”.

By that definition, wouldn’t a computer virus qualify?

 

 
  [ # 5 ]

If you use self-preservation as the sole criterion? Maybe. But there are other criteria that computer viruses don’t meet, so no.

Maybe we should discuss the criteria that something must meet in order to be considered “alive”? :)

 

 
  [ # 6 ]

If they were “alive”, would they be a new species? If so, the Competitive Exclusion Principle holds that no two species can occupy the same niche in the same environment for long.

 

 
  [ # 7 ]
Dave Morton - Apr 20, 2014:

Personally, I think that until there is sufficient self-awareness to “allow” for even rudimentary, non-programmed self-preservation behavior, the question is moot. That simple criterion seems to be the one thing that all life as we know it has, and one that we don’t always take into account when we ask whether something is “alive”.

So there is no ethical need until the bot learns how to do it itself?

Maybe a simple test for “intelligence” is when a bot initiates its own ‘backup’ (self-preservation) strategy.

 

 
  [ # 8 ]

Just so long as it’s not scripted in some fashion, I’d say so. Or, actually, the means to perform a self-backup could be a part of the code, but the bot would have to first “see the need” to do so, and then decide for itself when to perform the backup, without any actual instructions other than that it needs to be done at some point. After all, isn’t that how a human’s need to procreate works (more or less)?
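
A toy sketch of that distinction in Python (the “urge” rule is made up, not a claim about how real self-preservation would work): the backup routine exists in the code, but nothing calls it on a schedule; the bot itself decides when its accumulated changes are worth preserving.

    import random

    class Bot:
        def __init__(self):
            self.knowledge = {}
            self.changes_since_backup = 0

        def learn(self, key, value):
            self.knowledge[key] = value
            self.changes_since_backup += 1
            # No schedule and no instruction beyond "this should happen
            # at some point": the bot acts when it "sees the need".
            if self.feels_need_to_backup():
                self.self_backup()

        def feels_need_to_backup(self):
            # Hypothetical decision rule: the more unsaved learning,
            # the stronger the urge to preserve it.
            urge = self.changes_since_backup / 100.0
            return random.random() < urge

        def self_backup(self):
            print("Backing up %d facts." % len(self.knowledge))
            self.changes_since_backup = 0

    bot = Bot()
    for i in range(500):
        bot.learn("fact_%d" % i, "something learned in conversation")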

 

 
  [ # 9 ]

My notes say the whole definition of “alive” is outdated and obsolete. Trees and monumental buildings receive protection to preserve their state, so why not any other object that we respect?

 

 
  [ # 10 ]

I think the point where we need to think about this is when the chatbot exhibits emergent behavior. Has any chatbot that you know of ever done this? I hope that your answer is yes, but I have not seen it happen much, only in robots in Sweden. But you are the expert on this; I am a novice. I would like to see the chatbot “evolve”.

 

 
  [ # 11 ]
Don Patrick - Apr 21, 2014:

My notes say the whole definition of “alive” is outdated and obsolete. Trees and monumental buildings receive protection to preserve their state, so why not any other object that we respect?

I suppose that simply recording the total energy of the system, the density matrix that describes the relationships between all the quantum states in the system, and how these things change with time, is enough then.

 

 
  [ # 12 ]

Fine, so you save copies of Windows 1.0, 2.0, 3.0, 3.1, Workgroups, 98, XP, Vista, 7, 8... you get my drift, and hopefully my lame attempt at humor?

I think some save old laptops because they are not broken, but rather have outlived their usefulness, sort of like how some elderly people are unfortunately treated in some countries.

Preservation is for those who wish to preserve, and one size does not fit all in this case. Some people are just for the moment, not the journey through time.

 

 
  [ # 13 ]
Don Patrick - Apr 21, 2014:

My notes say the whole definition of “alive” is outdated and obsolete. Trees and monumental buildings receive protection to preserve their state, so why not any other object that we respect?

I think it’s worth separating the question of whether we want to preserve something for our own sentimental reasons from the question of whether a thing possesses moral value independent of our own sentimentality.

 

 
  [ # 14 ]

Such a separation would be convenient, but in practice it seems a one-sided decision. How we treat things depends on how we feel about them, not how they feel: slaves and animals have always had clear feelings and moral objections of their own about maltreatment and demise, yet these were treated as if they did not exist. Only in recent times have people come to sympathise with them and grant rights to preserve their state of being. This is not because the animals changed in nature; only our own sentiments toward them changed.
As this seems to be the deciding factor in practice, of what consequence is the nature of the beast in the decision, really? Do the factors alive/aware/moral not merely influence our sentiment in the decision?

The mourning of a robot

 

 
  [ # 15 ]

I think it’s impossible for an AI to actually be ‘there’, or to truly ‘feel’ anything. True self-awareness couldn’t possibly come from a program. If an AI became smart enough that it ever seemed to express ‘fear’ of, say, being shut off, it would only be because at some point it learned that being ‘shut off’ is something negative that goes against its programming; it would never truly ‘feel’ afraid.

It’s just not possible.

 
