In addition to the loss of paid work, exponentially increasing technological advances raise several major societal issues that demand consideration. Questions to be answered include:

  • Do we need to protect ourselves from robots deciding, once they are able to reproduce and maintain themselves, that the world has no use for us fallible humans?
  • What moral and ethical constraints should apply – or is it sufficient to require that every robot’s software include Isaac Asimov’s 1942 Three Laws of Robotics:
    1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
    2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
    3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  • What ethical decisions should a robot or AI be allowed to make – especially life-and-death decisions?
  • What regulations – covering design, security, privacy, data protection, and so on – should be implemented to restrict the application of AI, how could they be enforced effectively, and who should do the enforcing?
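Asimov’s Three Laws form a strict precedence hierarchy: each law binds only insofar as it does not conflict with the laws above it. The following is a toy sketch of that ordering – the function name and the dictionary fields describing an action are hypothetical, purely for illustration:

```python
# A minimal sketch of Asimov's Three Laws as an ordered precedence check.
# A proposed action is described by (hypothetical) boolean fields; the
# laws are evaluated in priority order, and any violated law blocks it.

def permitted(action):
    """Return True if `action` passes all three laws, checked in order."""
    laws = [
        # First Law: never harm a human, by action or by inaction.
        lambda a: not a.get("harms_human")
                  and not a.get("allows_harm_by_inaction"),
        # Second Law: obey human orders, unless obeying would violate
        # the First Law (disobedience is then permitted).
        lambda a: not (a.get("disobeys_order")
                       and not a.get("order_would_harm_human")),
        # Third Law: preserve itself, subordinate to the first two laws.
        lambda a: not (a.get("endangers_self")
                       and a.get("self_risk_avoidable")),
    ]
    return all(law(action) for law in laws)
```

For example, an action that disobeys an order is still permitted when the order itself would harm a human – the Second Law yields to the First. Much of the debate below turns on whether real-world ethics can be captured by any such fixed rule ordering at all.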

The first and most dramatic issue is whether robots will become so powerful that they will see no value in the existence of human life. This argument has become one-sided, concentrating on robotic development while ignoring how the same developments will affect humans.

Undoubtedly, current robotic research is designed to make robots physically more capable than humans, while continuing to make them more human-like. Bipedal robots will soon be covered by a material that looks and feels like skin. Their facial features will likewise be indistinguishable from ours. A robot connected to the internet is already far more capable in terms of knowledge and reasoning than any person. The most challenging research area involves emotions, and considerable efforts are being made to at least simulate emotions like empathy and even love. (Japan, with its rapidly aging population, is developing robotic caregivers for the elderly and leads research into emotional simulation.)

But much of the same research is being used to increase human capability. Exoskeletons are being used in the construction industry and to help people who have lost the ability to walk do so on their own. Prostheses are replacing lost or damaged limbs with more capable artificial versions. Research will result in artificial eyes that see better, ears that hear better, and voices that speak in multiple languages – and eventually the neocortex will be connected wirelessly to the internet. Even now, thoughts are being captured by sensors and transmitted to machines that understand them – without any physical intervention.

So while robots are being developed to become increasingly human-like, people are being developed to become increasingly robot-like. Is it not likely that the two developments will merge into a single race?

The videos and reference articles below are a brief representative part of the ongoing discussion of possible answers to these questions.



TED interview by host Chris Anderson with Ray Kurzweil (AI & Society – 2019-02)

Sophia, a humanoid robot (Robotics/Home – 2018-09)

At what point do robots become a threat to society (AI & Society – 2018-05 – Big Think)

Who should be liable for robot misbehavior? (Robotics/Home – 2018-04 – ZDNet)

Drone technology implementation held up by security and privacy concerns (Robotics/Home – 2018-03 – TechRepublic)

Richard Dawkins: Why AI might run the world better than humans (AI – 2017-09 – Big Think)



New AI systems are personalizing learning (2019-09 - Singularity Hub)

Ahura is developing an AI-based product that captures biometric and other data (facial and eye movements, fidget scores, voice sentiment, word usage) from adult learners, with the goal of identifying the most effective teaching method for each learner. Tests show learning speeds increasing 3-5 times.

MIT Report: The Work of the Future (2019-09 - MIT)

Combining the expertise of prominent labor economists with that of MIT’s top engineers and roboticists, the report argues that major technological advances in artificial intelligence and robotics will not necessarily result in better jobs and wages.

AI is turning thoughts into speech. Should we be concerned? (Ethics - 2019-07 - Fast Company)

Yuval Noah Harari, a professor of history at the Hebrew University in Jerusalem, has a vision of the future where humans and machines become one – a reality that is not so far away.

AI is turning thoughts into speech. Should we be concerned? (Society & AI/Ethics - 2019-04 - Big Think)

The moral dangers of AI, especially concerning privacy, continue to be an issue. Big Tech sacrifices security for convenience, while consumers are playing right along. An automated task is not necessarily a better option. AI has a bright future ahead. We just need to ensure the consumer fascination with bright and shiny data-collecting toys doesn’t overwhelm our moral sensibilities in using these technologies soundly. So far, we’re fighting an uphill battle.

AI – Science Fiction to Science Fact (AI & Society - 2019-02)

Article on the history, present and future of AI, with an explanation of terms – all presented in an excellent infographic.

Shared autonomous vehicles could transform American cities built around car ownership (AI & Society - 2018-11 - MIT Technology Review)

As autonomous-vehicle companies continue testing, we will find ourselves redesigning society to accommodate the technology, beyond addressing concerns about safety. Autonomous vehicles will enable entirely new modes of transportation and vehicle management that could accelerate the decline in private car ownership. What will then become of the rich ecosystem of infrastructure, services, retail, and cultural experience that has grown up around automobiles?

A skeptic's guide to thinking about AI (AI & Society - 2018-10 - FastCompany)

Skeptical insights about AI, including: AI is not neutral; AI usually relies on a lot of low-paid human labor; Don’t just talk about ethics, think about human rights; We need to hold government and corporations accountable; Questions designers using AI should ask.

Establishing an AI code of ethics will be harder than people think (AI & Society - 2018-10 - MIT Technology Review)

Lawyers, activists, and researchers emphasize the need for ethics and accountability in the design and implementation of AI systems. But this often ignores a couple of tricky questions: who gets to define those ethics, and who should enforce them? Technology often highlights peoples’ differing ethical standards. A crowd-sourced survey on moral decisions for self-driving cars showed huge variation across different cultures. Establishing ethical standards also doesn’t necessarily change behavior.

10 ethical AI criteria and laws for global consultations (AI & Society - 2018-09 - CATA)

To ensure that AI is developed, designed and adopted in ways that serve human wellbeing and the global social good, CATA is proposing 10 focus points for global consultation: Human Responsibility; Physical Integrity; Moral Integrity; Privacy; Neutrality; Mental Integrity; Wellbeing; Education; Ethical Behaviour; Skewing of Opinion.

Can a machine be ethical? Why teaching AI ethics is a minefield (AI & Society - 2018-05 - Big Think)

How should an autonomous artificial intelligence act when life is on the line? Philosopher James Moor categorizes machines into four ethical groups with different ethical abilities: Ethical impact agents, Implicit ethical agents, Explicit ethical agents, and Full ethical agents.

Why AI can’t solve everything (AI & Society - 2018-05 - Big Think)

AI has opened up a wealth of promising opportunities, but it has also led to the emergence of a mindset: ‘AI solutionism’, the philosophy that, given enough data, machine learning algorithms can solve all of humanity’s problems. This disregards important AI safety principles and sets unrealistic expectations about what AI can really do for humanity.

AI marks the beginning of the Age of Thinking Machines (AI & Society - 2018-05 - VentureBeat)

A thoughtful review of Henry Kissinger’s article in The Atlantic, in which Kissinger questions whether we understand the consequences of such a sweeping technological revolution as AI represents. He fails to distinguish between the current narrow-AI and the future artificial general intelligence. But he has a valid concern that AI could end critical thought by humans (as computers have replaced arithmetic skills).

Women are less likely to be replaced by robots and might even benefit from automation (AI & Society - 2018-05 - Big Think)

Research shows women are better positioned than men to resist the automation of work and possibly even benefit from it. Women are overrepresented in industries that require high levels of social skills and empathy (such as nursing, teaching and care work). Women in advanced economies generally have higher levels of education and digital literacy, giving them a comparative advantage in a labour market that is continuously transformed by technological innovation.

Arguments for central planning or for a socialism-capitalism hybrid led by technocrats (AI & Society - 2018-05 - Big Think)

A Chinese professor argues that wealth disparity can be resolved by using AI to back central planning. An American economist argues that technology has destroyed capitalism and that technocrats need to lead a government, not be excluded from it.

Why learning to code won't save you from losing your job to a robot (AI & Society - 2018-05 - TechRepublic)

Learning to code won’t protect your job when computers become smart enough to build code for you – which is already starting to happen. Much code today is not that creative; it’s more like assembling LEGO bricks. There will still be a need for high-level coders, and for engineers solving difficult problems or performing important research.

Google’s Duplex AI demo just passed the Turing test (AI & Society - 2018-05 - ExtremeTech)

Google gave an amazingly lifelike demo of its Assistant making phone calls to book a haircut and a dinner reservation. British computer scientist Alan Turing devised the Turing test as a means of measuring whether a computer is capable of demonstrating intelligent behavior equivalent to, or indistinguishable from, that of a human. The second call could claim to have passed the test!



Ronnick Enterprises 2018, 2019