In addition to paid joblessness, exponentially increasing technological advances raise several major societal issues that demand consideration. Questions to be answered include:

  • Do we need to protect ourselves from robots deciding, once they are able to reproduce and maintain themselves, that the world has no use for us fallible humans?
  • What moral and ethical considerations need to be addressed – or is it sufficient to require that the software for every robot include Isaac Asimov’s 1942 Three Laws of Robotics:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  • What ethical decisions can a robot or AI generally be allowed to make, especially life-and-death decisions?
  • What regulations – covering design, security, privacy, data protection, and so on – should be implemented to restrict the application of AI? How could they be enforced effectively, and who should do the enforcing?
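Read literally, Asimov’s Three Laws form a strict priority ordering: each law yields to the ones above it. The sketch below illustrates that precedence logic only – the `Action` model, its fields, and the `permitted` function are invented here for illustration, not taken from any real robotics system:

```python
from dataclasses import dataclass

# Hypothetical model of a candidate action, scored on Asimov's three concerns.
@dataclass
class Action:
    name: str
    harms_human: bool     # would injure a human, or allow harm through inaction
    obeys_order: bool     # complies with an order given by a human
    preserves_self: bool  # protects the robot's own existence

def permitted(action: Action, alternatives: list[Action]) -> bool:
    """Apply the Three Laws as a strict precedence: First over Second over Third."""
    # First Law: never choose an action that harms a human.
    if action.harms_human:
        return False
    # Second Law: obey orders, unless every obedient alternative
    # would itself violate the First Law.
    if not action.obeys_order and any(
        a.obeys_order and not a.harms_human for a in alternatives
    ):
        return False
    # Third Law: self-preservation ranks below the other two, so by itself
    # it never vetoes an action here.
    return True
```

The point of the sketch is that the Laws are easy to encode as precedence rules but hard to apply: the real difficulty lies in deciding what counts as "harm" or "inaction" in the first place, which no boolean flag can capture.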

The first and most dramatic issue is whether robots will become so powerful that they will see no value in the existence of human life. This argument has become one-sided – concentrating on robotic development while ignoring how the same developments will affect humans.

Undoubtedly, current robotic research is designed to make robots more capable physically than humans, while continuing to make them more human-like. Bipedal robots will soon be covered by a material that looks and feels like skin. Their facial features will likewise be indistinguishable from ours. A robot connected to the internet is already far more capable in terms of knowledge and reasoning than any person. The most challenging research area involves emotions, and considerable efforts are being made to at least simulate emotions like empathy and even love. (Japan, with its aging population, is developing robotic caregivers for the elderly and leading research into emotional simulation.)

But much of the same research is being used to increase human capability. Exoskeletons are being used in the construction industry and to help people who cannot walk to do so on their own. Prostheses are replacing lost or damaged limbs with more capable artificial versions. Research will result in artificial eyes that see better, ears that hear better, and voices that will speak in multiple languages – and eventually the neocortex will be connected wirelessly to the internet. Even now, thoughts are being captured by sensors and transmitted to machines that understand them – without any physical intervention.

So while robots are being developed to become increasingly human-like, people are being developed to become increasingly robot-like. Is it not likely that the two developments will merge into a single race?

The videos and reference articles below are a brief representative part of the ongoing discussion of possible answers to these questions.



At what point do robots become a threat to society (AI & Society – 2018-05 – Big Think)

Richard Dawkins: Why AI might run the world better than humans (The Robotic Age – 2017-09 – Big Think)


A skeptic's guide to thinking about AI (AI & Society - 2018-10 - FastCompany)

Skeptical insights about AI, including: AI is not neutral; AI usually relies on a lot of low-paid human labor; Don’t just talk about ethics, think about human rights; We need to hold government and corporations accountable; Questions designers using AI should ask.


Establishing an AI code of ethics will be harder than people think (AI & Society - 2018-10 - MIT Technology Review)

Lawyers, activists, and researchers emphasize the need for ethics and accountability in the design and implementation of AI systems. But this often ignores a couple of tricky questions: who gets to define those ethics, and who should enforce them? Technology often highlights people’s differing ethical standards. A crowd-sourced survey on moral decisions for self-driving cars showed huge variation across different cultures. Establishing ethical standards also doesn’t necessarily change behavior.


Can a machine be ethical? Why teaching AI ethics is a minefield (AI & Society - 2018-05 - Big Think)

How should an autonomous artificial intelligence act when life is on the line? Philosopher James Moor categorizes machines into four groups with different ethical abilities: ethical impact agents, implicit ethical agents, explicit ethical agents, and full ethical agents.


Why AI can’t solve everything (AI & Society - 2018-05 - Big Think)

AI has opened up a wealth of promising opportunities, but it has also led to the emergence of a mindset: ‘AI solutionism’, the philosophy that, given enough data, machine learning algorithms can solve all of humanity’s problems. This disregards important AI safety principles and sets unrealistic expectations about what AI can really do for humanity.


AI marks the beginning of the Age of Thinking Machines (AI & Society - 2018-05 - VentureBeat)

A thoughtful review of Henry Kissinger’s article in The Atlantic, in which Kissinger questions whether we understand the consequences of a technological revolution as sweeping as AI. He fails to distinguish between today’s narrow AI and future artificial general intelligence. But he has a valid concern that AI could end critical thought by humans (just as computers have displaced arithmetic skills).


Women are less likely to be replaced by robots and might even benefit from automation (AI & Society - 2018-05 - Big Think)

Research shows women are better positioned than men to resist the automation of work and possibly even benefit from it. Women are overrepresented in industries that require high levels of social skills and empathy (such as nursing, teaching and care work). Women in advanced economies generally have higher levels of education and digital literacy, giving them a comparative advantage in a labour market that is continuously transformed by technological innovation.


Arguments for central planning or for a socialism-capitalism hybrid led by technocrats (AI & Society - 2018-05 - Big Think)

A Chinese professor argues that wealth disparity can be resolved by using AI to back central planning. An American economist argues that technology has destroyed capitalism and that technocrats need to lead a government, not be excluded from it.


Why learning to code won't save you from losing your job to a robot (AI & Society - 2018-05 - TechRepublic)

Learning to code won’t protect your job when computers become smart enough to build code for you – which is already starting to happen. Much code today is not especially creative; it’s more like assembling LEGO bricks. There will still be a need for high-level coders, and for engineers solving difficult problems or performing important research.


Google’s Duplex AI demo just passed the Turing test (AI & Society - 2018-05 - ExtremeTech)

Google gave an amazingly lifelike demo of its Assistant making phone calls to book a haircut and a dinner reservation at a restaurant. British computer scientist Alan Turing devised the Turing test as a means of measuring whether a computer is capable of demonstrating intelligent behavior equivalent to, or indistinguishable from, that of a human. The second call could claim to have passed the test!





©  Ronnick Enterprises 2018