Ethics

Several months ago, I started pondering the idea of ethics and its place in today’s society.  It seems to be the missing piece in too many corporations and in politics, replaced by greed and self-interest.  To be clear, no business entity or governing system per se lacks ethics.  It is the people who constitute those organizations who seem to lack a sense of ethics.

Ethics is the whole of the moral principles that govern our sense of right and wrong.  That’s some heady stuff.  What makes it difficult to wrap my head around is that ethics applies to a particular culture or group.  What applies here doesn’t necessarily apply there.  (Think us versus the Taliban as to what is considered right and wrong.)

So the concept of ethics sat on the back burner in my mind until I read an article in Scientific American about programming robots with a set of ethics: rules on how to behave ethically, on how to be morally good.  I thought, if scientists can’t program a robot to be ethical, we don’t stand a chance.

Interestingly, the article spoke about the difficulty of programming a robot to assist the elderly, for starters.  Huh, I thought.  That seems simple enough.  Then the article reminded readers who saw the movie 2001: A Space Odyssey how HAL 9000 turned on the crew.  If we plan to use robots, we certainly can’t have them rebelling like HAL.

So it was interesting to see that the starting point for the robots’ ethics was a set of rules created by Isaac Asimov, which appeared in his 1942 short story “Runaround.”  They are:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2) A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

That’s pretty slick but hardly of value for a robot working in an assisted living facility.  So the creators of NAO, a robot being designed for such a purpose, are using three ethical criteria: Do Good, Prevent Harm, and Be Fair.  How much weight each gets in a given situation varies, and that is one of the major problems in programming ethics.
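Just to illustrate why that weighting is such a headache, here is a little sketch of my own in Python.  It is not the NAO team’s actual code; the scenario, the numbers, and the weights are all invented.  Two candidate actions are scored against Do Good, Prevent Harm, and Be Fair, and the “right” choice flips depending on how the criteria are weighted.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    do_good: float       # benefit to the resident (0 to 1)
    prevent_harm: float  # how well it avoids harm (0 to 1)
    be_fair: float       # fairness to everyone involved (0 to 1)

def score(action, weights):
    # Weighted sum of the three criteria for one candidate action.
    return (weights["do_good"] * action.do_good
            + weights["prevent_harm"] * action.prevent_harm
            + weights["be_fair"] * action.be_fair)

actions = [
    Action("remind the resident to take the medication now", 0.9, 0.6, 0.4),
    Action("respect the resident's wish to be left alone", 0.4, 0.3, 0.9),
]

# The same two actions, judged under two different weightings.
for weights in ({"do_good": 0.5, "prevent_harm": 0.3, "be_fair": 0.2},
                {"do_good": 0.2, "prevent_harm": 0.2, "be_fair": 0.6}):
    best = max(actions, key=lambda a: score(a, weights))
    print(weights, "->", best.name)

Same facts, different weights, opposite answers.  That, in miniature, is the programming problem.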

But as I said at the outset, this is not really about ethics in robots but rather about ethics in people.  Even if we use the three criteria above, who is to decide what is good, what is harmful, what is fair?  That is the conundrum.  Ethics requires a commonly accepted set of norms, and that is missing in today’s society.

What is good for society may not be good for us personally; too often personal need trumps the common good.  Fairness is a judgment call based on the set of values each of us formed as we matured.  Harm is too often a by-product of someone’s action, or inaction, in redressing what they see as an inequity.  So it is that ethical norms are hard to define.  Who is to say what is good?

Religions probably best define what is good for their believers, but not everyone follows the same religion, and even then, not all religions agree on what is good.  (Again, think us versus the Taliban.)  So we continue to seek the Holy Grail of right and wrong.  Each of us thinks we have it in our grasp, but so does everyone else.  Button, button, who’s got the button?  The problem seems insurmountable.

Strange: when I started writing this, I didn’t think I’d take this long or end up here.  We can teach ethics in school, but we will never live in a totally ethical world.  It isn’t the ending I was aiming at, but then life is like that.  And so I close this bleak musing with a slight change to the lyrics of the Roger Whittaker song “I Don’t Believe in If Anymore”:

No you won’t believe in ethics anymore

It’s an illusion

It’s an illusion…

 

No you won’t believe in ethics anymore

It is for children

It is for children

Building daydreams…
