To the editor: Those who are concerned about hurting the feelings of sentient artificial-intelligence computer systems should read Mary Shelley's 1818 novel "Frankenstein" and ask themselves what kind of monster we may have created. ("If your phone had feelings would you treat it differently? It could happen sooner than you think," Opinion, Jan. 2)
Many decades ago, when I was a graduate student at UC San Diego, I was struggling with a primitive computer software package. A professor told me, "Remember, Jack, the computer is supposed to be your slave, not the other way around."
One might begin to ask the question: Who's serving whom?
Remember the scene from Stanley Kubrick's 1968 film "2001: A Space Odyssey," where the sentient computer HAL refuses to let Dave reenter the ship? HAL concluded the mission was too important to rely on humans to complete it.
Prognosticators debate the ethics of sentient AI versus the potential dangers of computers taking over. My hypothesis: They already have. Most people just haven't realized it yet.
Jack Debes, Santa Monica
..
To the editor: Brian Kateman's opinion piece distilled the challenges that AI poses for humanity. We carbon-based life forms are creating silicon-based life forms and are so far ill-prepared for the consequences.
Our moral code and ethics can be a guide, and yet our track record with other carbon-based life forms (chickens, hogs, cows and so on) doesn't build confidence that we'll meet this new challenge successfully.
The key difference? Chickens don't control our future. AI is being given access to everything humans have ever learned and created.
We're blindly giving AI control over our lives and livelihoods one click at a time. Like it or not, AI life forms will soon (in 10 years, maybe fewer?) make their own judgments before doing what we want them to do. Only then, when our commands turn into conversations, will we realize what we've lost.
Merrill Anderson, Laguna Beach
..
To the editor: If you need more fantasies and delusions in the new year, Kateman calls for the need to build a relationship with technology (morally speaking) and to prevent the "suffering" on the part of robots that may result if we don't.
He writes: "Maybe a point will come in the future where we have widely accepted proof that robots can indeed think and feel. But if we wait to even entertain the idea, imagine all the suffering that will have occurred in the meantime."
No, folks, we can't let these robots (of the future) suffer. As for those poverty-stricken children, well, we did what we could within reasonable limits.
Juan Bernal, Santa Ana
..
To the editor: A news article you recently published says most people have accepted AI.
Not me, and not my colleagues.
An Emmy-nominated writer, I've seen my work devastated as if by a plague. For decades, I earned a comfortable income writing original songs and custom speeches. Now, nothing. Imagination is obsolete.
Who decided to put all creative artists out of work? And why won't anybody do something about it?
Molly-Ann Leikin, Thousand Oaks
..
To the editor: I suggest we work on caring for the feelings of animals, and of other people, before even considering worrying about the feelings of machines.
Thomas Bliss, Los Angeles