
Designed to help: AI and the rise of the ‘cobots’

Good to see the insurance market thinking ahead about the use of robotics. The reports are worth downloading and reading whether you’re in the insurance business or have a potential risk that needs covering:

[…] Artificial intelligence (AI) and robotics are changing the world, and the insurance industry had better change with it.

Two new reports from Lloyd’s spell out the risks – and opportunities – as technological change continues to accelerate.

One report focuses on the rapid emergence of collaborative robots, or “cobots” – devices that help humans by extending their physical capabilities.

While cobots account for only 3% of the total robotics market, the figure is expected to reach 34% by 2025.

Their increasing popularity stems from the fact that they are cheaper, smaller and smarter than regular robots. They are also moving beyond factories into sectors such as agriculture, healthcare and retail, where they help people with jobs that are “dirty, dangerous, repetitive and difficult”.

Fear of robots putting humans out of work may be misplaced, the report says.

“Robots, particularly cobots, rarely replace workers; they replace tasks. They often help workers through decision-making, or physical handling, rather than replacing them.”

[…]

Lloyd’s second report highlights the increasing use of AI.

While AI has been around for 60 years, Lloyd’s says its “recent, rapid escalation” has awakened awareness of its complex ethical, legal and societal challenges.

Areas of insurance that could be affected include:

  • Product liability and product recall. Recalls could become larger and more complex. AI machines cannot be held liable for negligence or omissions, so who is?
  • Third party motor. Assignment and coverage of liability will be difficult due to a shift of responsibility from human drivers to automated vehicles.
  • Medical malpractice. AI is being used to help diagnose conditions, and an error could amount to negligence.
  • Cyber. As chatbot technology develops, it is increasingly difficult to tell humans and AI apart, which could make it easier to carry out phishing scams. This raises questions about what types of insurance would be available to cover against such losses.
  • Fidelity. Fraudulent activity by employees could be exacerbated. Fraud may increasingly come from staff with access to IT systems rather than those with financial authority. The emergence of “deep fakes”, AI systems capable of generating realistic audio and video, also raises concerns around identity fraud.
  • Political risks. The weaponisation of AI “could take many forms” and AI might contribute to events such as expropriation, wars, terrorism and civil disturbance.

Again, the rapid development of AI presents business opportunities for insurers.

Original article here

Peter Glock
Over 30 years of designing, building and managing telecoms and IT services. Primarily working with large enterprise and professional services businesses in Asia, North America, continental Europe and the UK. Information security professional, secret physics nerd.
https://brownglock.com
