Ethical and responsible technology is a must in the modern work environment. So much so that only four per cent of respondents to a report by Zelros, the AI B2B software vendor, which surveyed over 1,000 people across the US, Canada, the UK and Europe, believed that ethical, secure and responsible technology should not be a priority for insurance companies.


This reflects the overall consumer trend of people wanting ethical and sustainable choices from companies. So how can insurance companies be responsible and ethical when it comes to implementing technology?

Zelros has three tips on how insurance providers can adopt AI technology ethically, responsibly and securely.

Tip #1: Maintain transparency

While AI technology is great for businesses and people alike, consumers still need to be aware of what technology is being used and how. Making sure consumers are aware of AI is a key component of being an ethical, responsible and secure company. Companies should be able to answer the following questions and make these decisions known to the customers they serve (a brief sketch of how such answers might be recorded follows the list):

  • Who designed this?
  • What is the technology’s main purpose?
  • Why was a particular decision made?
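
As a purely illustrative example, the Python sketch below shows one way a company might record the answers to these three questions alongside each automated recommendation, so they can be shared with a customer on request. The ModelCard and DecisionRecord structures, field names and sample values are hypothetical assumptions made for this article, not Zelros' product or any industry standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelCard:
    """Hypothetical record answering 'who designed this?' and 'what is its purpose?'."""
    name: str
    version: str
    designed_by: str
    intended_purpose: str

@dataclass
class DecisionRecord:
    """Hypothetical per-decision log answering 'why was a particular decision made?'."""
    model: ModelCard
    customer_id: str
    outcome: str
    top_factors: list  # human-readable reasons behind the recommendation
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def explain(self) -> str:
        # Produce a plain-language summary that could be shown to the customer
        return (
            f"{self.model.name} v{self.model.version} ({self.model.intended_purpose}), "
            f"designed by {self.model.designed_by}, recommended '{self.outcome}' "
            f"because: {', '.join(self.top_factors)}."
        )

# Example usage with made-up values
card = ModelCard(
    name="coverage-recommender",
    version="1.2.0",
    designed_by="an in-house data science team",
    intended_purpose="suggest policy coverage levels",
)
record = DecisionRecord(
    model=card,
    customer_id="C-1042",
    outcome="increase home contents cover",
    top_factors=["recent home purchase", "declared high-value items"],
)
print(record.explain())
```
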
Tip #2: Employ a diverse workforce

The best way to ensure that all insurance customers receive the right coverage and attention that they need is to employ a diverse workforce. A diverse staff can help reduce bias in AI algorithms, resulting in more individuals across all races, genders and backgrounds getting the coverage they need.

According to a report by Accenture, only 35 per cent of insurers have inclusive design or human-centric design principles in place to support human-machine collaboration. To get the most out of AI, diverse human involvement needs to exist across many aspects of the company.

Tip #3: Set and follow ethical standards and regulation

Setting clear ethical standards and regulations and following through with them shows accountability and responsibility.

As regulators and compliance teams continue to analyse the use of AI across the insurance industry, there has been an increase in the number of laws that have been put into place. This includes the Insurance Distribution Directive in Europe, the Explainable Artificial Intelligence project in the US, and the California Consumer Privacy Act. There is also a general AI Code of Conduct created by the National Association of Insurance Commissioners.

“Ethical, responsible and sustainable practices need to be top-down approaches that are intentional in protecting consumer data. Public trust is more important than ever for all industries operating in the digital age,” explains Linh Ho, chief growth officer of Zelros.

“With talks of government regulation on the horizon when it comes to the implementation and responsible use of AI, insurance companies need to take a serious look at how they’re using consumer data.

“Maintaining transparency, prioritising diversity and setting clear standards when it comes to AI technology usage are key components in being an ethical and responsible company. It is important to look within your organisation to ensure it is building a culture of transparency and fostering a sense of trust for employees and customers alike,” Ho concluded.

Founded in 2016 by Christophe Bourguignat and Damien Philippon, Zelros is using artificial intelligence and machine learning technology to help insurers provide policyholders with the right coverage as their needs arise, in real time.

  • Francis Bignell

    Francis is a journalist with a BA in Classical Civilization and has a specialist interest in North and South America.



Image and article originally from thefintechtimes.com.