
This article was published on June 17, 2019

AIs should be legally liable for their mistakes soon

Assigning blame can often be a complex task. A person’s actions, intentions, and state of mind all come into consideration when meting out judgement over any wrongdoing. Across various industries, incidents such as medical malpractice and reckless endangerment are everyday eventualities that insurers need to deal with. Risk and responsibility are key factors taken into account when any kind of insurance policy is drafted, and when any insurance investigation is carried out.

Historically, insurers have had to consider only the human parties involved in any insurable matter. Today, things are a lot more complex. As AI development has increased in scope, we have been left with programs sophisticated enough to be integrated into infrastructure where they have direct input into life-or-death situations.

Today we have everything from driverless cars to advanced scanning platforms that can identify a patient’s need for immediate treatment. AI is now involved in making decisions that were previously under human control alone.

Of course, these AI programs have not universally been handed the keys to the metaphorical car and been allowed to operate without any human oversight. In most cases, AI is being used as a decision aid, or a driver aid in the case of autonomous vehicles. That said, Tesla may require drivers to remain in the driver’s seat for now, but the development of fully autonomous cars by developers (including Google) has been rapid, and they will soon be deployed on public roads.

Milestones such as these are ushering in a new set of considerations for insurers and the legal profession as a whole. AI is emerging as enough of an independent entity that its place within the framework of legal liability is being forced to change.

BDJ spoke to a number of leading lawyers and insurers to gain an insight into how these industries have responded to the increasing placement of AI programs within their hierarchies and infrastructures, and how the matter of liability is changing.

AI is already embedded in law and insurance

The reality of AI within the legal and insurance industries is that the technology already has an established presence. That’s not to say it’s experiencing seamless adoption, though.

Joanne Frears is a solicitor at Lionshead Law, specializing in commercial, IP and technology law. She is also an acclaimed expert in technology adoption in the legal field, including AI, and recently spoke at the Legal AI Forum in London. She says AI confidence within legal firms is mixed.

“There’s about 70 percent hesitancy to use it, and that’s a cultural hesitancy, that’s a law firm saying, ‘We don’t need to innovate – we’ve innovated enough – it’s not broken, so we don’t need to fix it,’” Frears says. “Then there are the larger firms that handle such big cases that AI is warranted as a business tool, and they’ve been very early adopters.

“Now, if you look at the normal cycle of hype, some of these programs are falling away already, but, generally speaking, a lot of them have been well received.”

The positive reception would not have occurred if there were substantial doubt over these programs’ abilities. However, introducing AI platforms into questions of legal liability simultaneously raises the question of liability for these platforms’ actions and for the human input into them.

“Where does the liability lie? Should it lie with their programming companies? Are they getting it sufficiently right? I think the view is, yes they are – they’re pretty accurate, these AI,” says Frears.

The use of AI in insurance mirrors the wider use of the technology across other industries, in that AI is largely not given comprehensive control over the processes in which it’s involved. Key checks and balances by humans still remain. Lawyers and insurers are tasked with looking at the findings of the AI, and then bringing their interpretation to these figures.

Insurers largely rely on this two-tier system of risk assessment and subsequent liability judgement. However, trepidation surrounding the fallibility of artificial intelligence still exists. This may be misplaced, though, as the greater risk does not necessarily lie with the AI itself.

“That’s where the risk lies for insurers, because, generally speaking, the human element is not necessarily as accurate as the AI interpretive element,” explains Frears.

So, considering the disparity between human and machine accuracy, does this mean AI should be preferred? Fouad Husseini is the founder of the Open Insurance Initiative, and author of The Insurance Field Book. He believes the progress of AI development is ushering in a new state of autonomous infrastructure that will hugely affect the insurance industry.

“AI technologies are developing at a phenomenal pace, reflected, for instance, in increased autonomous capabilities and the accelerated deployment and integration of such technologies in many everyday gadgets, equipment and vehicles. AI will soon become so ubiquitous that much of the software and equipment we use will be communicating together independently without human intervention,” claims Husseini.

AI’s impact on the insurance industry

Insurers recognized early on, at least in the UK, that the development of AI was going to have a huge effect on their industry. Frears recalls being contacted by insurance companies she had previously worked with, asking what questions they themselves would need to ask in order to decide who should be tasked with AI integration.

There was a time when insurers’ readiness for the onset of AI was a matter of urgency, requiring them to adapt hastily to this new aspect of risk and liability determination.

“I don’t think it’s a scramble or a patch-up any longer. I think there was a time when it was – there are almost certainly going to be some insurers who haven’t quite got it yet, but by and large the bigger ones have got it. They understand it’s a worldwide phenomenon, and the UK insurance market is one of those that is incredibly sophisticated, and will have to remain that way,” Frears says.

Husseini paints a slightly different picture here, though, commenting that insurers are tasked with adapting their industry as AI proliferation increases: “Risk assessment has to continually play catch-up with the uncertainties being introduced, such as the impact of these new technologies and the potential for catastrophic events.”

This reactive nature of the insurance industry, and of the larger legal framework connected to it, has resulted in new legislation being passed to cater for AI’s growing scope of applications.

One example of this is the Automated and Electric Vehicles Act 2018 – a piece of UK legislation that was passed quickly through both the House of Commons and the House of Lords. Legislators have faced criticism in the past for dragging their heels on bringing in new statutes to cater for emerging tech, but in this instance, Frears thinks progress has been made:

“I think they’ve been quite good – not wholly successful yet, but quite good at looking into the future and saying, ‘This is going to come, so let’s prepare for it,’ and had they not been, then I don’t think we would have seen the Automated and Electric Vehicles Act being passed after such a relatively short consultation.”

Despite this relatively speedy example of legislative response to AI development, Husseini believes that legal frameworks are still lacking.

“A robotic or autonomous system is referred to as an artificial computational agent,” Husseini says. “Legally, there is little research into the treatment of artificial agents and this explains the lack of regulatory guidance.”

Man and machine are not equal

The passing of such legislation does not negate the challenges that come with the integration of AI into traditional insurance areas such as automobiles, however. Mixing human actions with a machine’s inputs, as in Tesla’s Autopilot functionality, introduces several new layers of situation-dependent liability.

Frears explains that, in the past, the driver was the only insured party in an automobile scenario, but with AI input into the driving process, it now needs to be established just who or what had the deciding influence over the accident.

“If it’s a product fault, a consumer has the right to expect the product he or she buys from a manufacturer will be roadworthy and safe, so that switches the liability back to the manufacturer. But, otherwise, the insurers will now pay out irrespective of whether or not the driver is insured,” Frears says.

“It’s blurred the line completely, and turned on its head where the insurance actually lies – it’s no longer the driver who’s the insured party in a motor scenario, and I think that’s going to inform how the liability for AI generally will work.”

Having insurance follow the software involved in an accident is a key provision laid out in the Automated and Electric Vehicles Act. The deciding factor is whether the driver had engaged the traditional manual controls before the accident, and establishing this will be a crucial part of an insurer’s investigation.

This blurring of the lines between human and machine liability is somewhat inevitable when control over a vehicle is shared between the two. As Husseini points out: “Causation, intent and responsibility get increasingly difficult to untangle when AI is involved.”

And it’s not just the automotive industry that is having to find an equilibrium between AI integration and traditional human oversight. Husseini points to further instances of AI use in areas known for highly volatile liability cases.

“The medical profession, for example, has introduced new complexities for underwriters of medical malpractice covers,” he says. “Digital health deploying AI in disease recognition, genetic testing, virtual nursing, surgical robots – these all introduce the risks of mismanaged care owing to AI errors and the lack of human oversight.”

Liability of AI developers

One of AI’s most exciting aspects is its potential to grow and develop beyond the parameters of its original intended use. “I’ve got clients who operate AI and they can’t tell me how the program works any longer. They can tell me what it set out to do, but they don’t know how it’s doing it any longer – and that’s the point of machine learning, isn’t it?” says Frears.

Such potential does have implications for insurers, though, especially when the AI program is being used in scenarios such as autonomous driving. Although driving involves a large degree of unpredictability, and AI programs need to be able to adapt to changing road conditions and other unexpected hazards, there is still a need for robust, traceable protocols that do not stray from predetermined parameters.

With AI programs being expected to perform a certain duty, and with a certain level of predictability, there has been speculation that liability in the event of accidents could begin to fall on AI developers. Some have drawn a comparison with consumer electronics manufacturers: if a phone explodes because of a poorly made battery, that opens the door to insurance liability.

AI is a lot more complex, however, as Frears points out: “So, will programmers be dragged into court? It’s possible, but they’re more likely to be dragged into court as experts to explain how the AI had actually worked on a forensic basis, rather than a liability basis, because these products will stand alone.”

The possibility still exists, though, of programmers appearing in court to argue their liability in an accident or insurance claim involving a human participant. “The application of joint and several liability may mean the party with the largest resources footing much of the damages awarded – in this case, the manufacturer,” says Husseini.

There have already been strong indications from officials in the UK that AI developers, particularly in the autonomous vehicle sector, could face prosecution if their products are deemed to be negligent. A recent statement from Department for Work and Pensions spokesperson Baroness Buscombe stated that current UK health and safety law “applies to artificial intelligence and machine-learning software”.

Under the Health and Safety at Work Act 1974, there is scope for company directors to be found guilty of ‘consent or connivance’ or neglect, with a potential sentence of up to two years in prison. This can be a difficult area to prosecute, however, as it needs to be established that directors effectively had a hand on the wheel in the roll-out of their products.

In the case of start-ups, though, due to their smaller workforce, it may be easier to establish a direct connection between directors and software releases. Fines imposed would be relative to the companies’ turnover, although those with a revenue greater than £50 million could face unlimited penalties.

The key distinction that will need to be made is whether or not these AI programs behaved in a way that is deemed to be reasonable. In this respect, they are being brought more and more into line with the standards applied to humans, as a key consideration of many areas of civil and criminal liability is the standard of what a reasonable person would do.

AI complexity is not yet at a stage where actual mens rea needs to be considered, although this may change as the technology matures and deep-learning algorithms develop a greater capacity for independent reasoning.

Indeed, despite this level of AI sentience not yet being a reality, Husseini notes that AI programs could well develop in ways beyond their original design and intention because of the danger of data corruption.

“What’s the level of protection that these systems have against highly concealed adversarial inputs and data-poisoning attacks?” he asks. “Most present-day policies protecting against general commercial liability, auto liability, professional liability and products liability do not address these risks properly.”

The volatility of international politics

Another area that may influence the evolution of AI liability is the current turbulence relating to Brexit. As previously stated, AI programs are already employed by law firms and insurers, scouring documents and case law. Frears points out that English courts have more than a thousand years of case law, but that, in recent decades, this law has been aligned with Europe’s.

As international upheavals such as Brexit take place, questions of applicable case law and all the considerations involved in it become more complex, and AI platforms need to be set up to accommodate these changes. “If it’s not provided for, I think questions would be asked whether or not this program was appropriate,” she says.

Insurers’ AI adoption going forward

Aside from these complexities, insurers’ continued adoption of AI into their business models looks set to proceed without major roadblocks.

“I think it’s going to be really smooth,” Frears says. “The actuaries who still rule the insurance industry know numbers – they love data and they love to have the certainty that big data can give them and that AI can crunch through. The scenarios that it can consider are far greater than most actuaries have the opportunity to do in their entire life, so, for that reason, it gives the insurance companies a massive amount of certainty.”

Both Frears and Husseini point out examples of more general adoption of AI by insurers and lawyers, beyond the technical aspects of liability-risk assessment and document processing. For example, companies are now using AI in the form of chatbots and robo-advisors, and in their marketing departments, which helps to establish it as a more general tool throughout their organizations.

But when it comes to the long-term continuation of AI integration and the resultant changes in legal liability considerations that need to be made, Husseini believes there is more to be done:

“While there are some in the legal profession who are conducting research and providing studied opinions on the treatment of myriad issues relating to liability, independent agencies or initiatives could be set up and financed by stakeholders in the legal, manufacturing and insurance industries to work with policymakers in drafting an improved legal framework,” he concludes.

Because of the complexity and diverse range of AI’s potential applications, adapting liability frameworks is likely to require an equally diverse pool of resources to cater for it in a timely manner.

This post was written by John Murray for Binary District, an international collaborative technology community which creates unique competency-based workshops and events on new technologies. Follow them on Twitter.
