
Jason Reed/The Daily Dot

Who is responsible when the internet of things malfunctions?



Benjamin Powers


Posted on Jul 7, 2018, updated on May 21, 2021, 11:44 am CDT

The world is increasingly populated by objects that are connected to the internet. The term used to describe this population, the Internet of Things (IoT), is largely a marketing term, created and adopted to describe a series of technological changes that are currently underway and will only accelerate in the future.

These changes include building devices with computing capacity, software, internet connectivity, and autonomous capabilities, linked to large-scale data collection and analysis in the cloud. Such devices continue to permeate our lives, from self-driving cars to smart thermostats.

But while people are promised convenience and a number of other benefits from such objects, that isn't always what they get. In some instances, IoT devices can harm or even kill people when they don't function properly. The most striking example of this may be from March of this year, when one of Uber's self-driving cars struck and killed a woman walking her bike across the street in Arizona.

Below is a conversation between Benjamin C. Dean, a Ford/Media Democracy Fund Technology Exchange Fellow at the Center for Democracy and Technology, and journalist Benjamin Powers about Dean's recent report, When IoT Kills: Preparing for Digital Products Liability, which addresses liability law and the Internet of Things.

BP: So why is there a need for a larger conversation around liability when it comes to the IoT?

BD: In the past, the things that had microchips, software, and internet connections tended to be communications devices. They'd usually sit on your desk. Increasingly, we have had them in our pockets. Traditionally these things haven't been able to hurt us or, when they failed, destroy property. What's happening now is we've started to integrate those technologies into other things around us that can hurt us. This is partly because, for a long time, we've done a bad job of developing reliable software.

We also haven’t done a very good job of making sure that when things are connected to the internet they can’t be hijacked by external parties. Computer viruses have been around a very long time, and we have anti-virus programs to combat them. We know that when we connect things to the internet, like your inbox, they become subject to hacking.

Those cybersecurity problems aren’t going away, and what we’re doing now is putting those same insecure technologies into more things around us. So when those same failures occur (when a device is hacked, when it overheats, when the software fails), they are going to start hurting people or destroying property, and that is an entirely different set of damages in legal terms, which results in different kinds of liability being applied.

BP: I’d say the Uber self-driving car killing someone in Arizona is the most prominent example, but could you give some examples where this might occur and some situations where we might be looking at the destruction of property or the death of people?

BD: Well, you’ve got to go back in time to see that applying strict product liability or negligence to the makers of poor software, or of devices that get hacked and hurt people, is not new. It’s been around for a long time; it’s just that the products involved weren’t everyday things. (I’ve got an example in the paper about an X-ray machine that gave people too high a dose of radiation due to a bug in the software and killed a number of people.) Now that we’re putting more and more software into the everyday things around us, some forms of liability are more likely to apply.

We will need tighter security standards to deal with this. Most people don’t realize airplanes with autopilot have been around for a very long time, and they are also subject to software malfunction and failure, which also results in huge damages. As a result, the aviation sector has come up with quite a good system of quality control to find flaws and resolve them before a plane falls out of the sky. We unfortunately haven’t done this in a lot of other areas.

BP: So what’s the situation today in terms of conversation around liability, specifically in the U.S.? What is the state of affairs?

BD: What is occurring worldwide, at a very high level, is that lots of governments are having a look at the IoT and considering what response is appropriate and whether existing laws and regulations are sufficient. In the United States, what typically happens at the federal legislative level is that a bunch of companies lobby to ensure that new laws are written in a way that absolves them of liability in the event their products cause harm to people.

At a federal level right now, though, some of the regulatory authorities are only just starting to look into this and ask about it. Just recently the Consumer Product Safety Commission held a hearing on the safety of IoT products. They want to know about the risks to people from IoT products. The Federal Trade Commission has started to think about it a little bit, but it doesn’t really fall under its authority, as it usually pursues deceptive practices.

So on a federal level, a few people are beginning to ask questions. At a state level we’re starting to see a few states debating whether or not they want to allow testing of some of these products. For instance, Arizona welcomed the self-driving cars a few years ago. Now that they’ve started to have some incidents, they’ve suspended the licenses for those companies to operate.

The most interesting thing I would say in the United States is your court system. While the legislators and regulators are considering doing something or just asking questions, in the court system class actions can be brought against companies whose products have hurt people or damaged their property.

I suspect that if some of these cases actually make it to court without being settled, and we make it through discovery, we will find out which companies knew about the risks their products posed to consumers. We might subsequently start seeing liability allocated in ways that end up influencing the early thinking of regulators and legislators about how they might go about allocating it.

BP: In this instance, are class action suits stepping in for what regulatory bodies could do, or even for what criminal prosecution might accomplish?

BD: Yes, in a way. A downside is that if a court case has been brought, that means it’s after the fact, and in that respect the damage is already done.

At the same time, the fact one can be taken to court after the fact and held accountable does have a deterrent effect. So it’s good that this system is in place. If you go and have a look at the court cases in the 60s and 70s around auto safety, you’ll see there are a number of cases that were important in terms of triggering public awareness and legislative action.

That interest didn’t come on its own; it came out of court cases and the information revealed in them. A lot of this stuff happens in the court system here because a lot of these processes involve third parties that are not subject to a contract. The tort law system basically fills the cracks that contract law does not cover. And that’s where a lot of this stuff happens, because it’s a new technology for which the consequences are difficult to foresee precisely.

You can’t legislate for that ahead of time. You can try to take an approach based on the precautionary principle, but that comes at the expense of forgoing activities that might never have harmed anyone, along with the benefits those activities would have brought. So this just fits into a larger system by which we try to ensure some parties aren’t harmed and others have to take responsibility for their actions, and so on and so forth.

BP: We touched on this a bit, but what are the existing models for determining liability?

BD: Basically, there are a couple of theories of torts that could apply here. One is negligence, which is different from product liability in that negligence requires proving a duty of care and a contractual arrangement between the parties. That’s not the case with the theory of strict product liability.

Strict product liability asserts that there are some products out there that are known to be excessively hazardous and thus should not be put on the market. It doesn’t matter whether you could foresee the harm, as it does with the duty of care. If you released a product that was so dangerous that it hurt all these people, and a defect existed in the product that led to the harm, you should be held liable, whether strictly or jointly, for the damage caused.

So they’re a little bit different. They’re different kinds of torts with different kinds of standards that have to be met to apply them, but that is a key difference between them.

BP: How might our legal standards around liability need to evolve further down the road?

BD: The first thing is that there is a transfer of liability occurring from the end user to the companies that produce the products. That’s simply because the products are now too complicated for the average user to understand and assess.

We’re starting to see it a lot with automobiles. Whereas in the past we might have held the driver accountable for having made changes to the car that led to an accident, today, if you open up the hood of your car and start messing with things, you void the warranty, and that’s assuming you even understand what’s going on under the hood anymore.

In most cases, people don’t. It’s very different from the 60s and 70s, when you could open it up and replace parts or wheel yourself underneath and change things. You have products that are so complicated that the end user is never going to understand them. This will lead to responsibility being transferred up the value chain.

The second thing that’s changing is related to autonomous capabilities. A defect is basically the idea that if something doesn’t function the way it is expected to, then it is defective. What does it mean to have a defective autonomous car, though?

Given that its behavior changes based on the data inputs and statistical learning techniques, we don’t always know how it is going to behave, so how should we expect it to function? One example: what do we do if the car decides it’s going to go in one direction and hit someone rather than going in another direction and hitting someone else?

How is that decision made, and what equities are weighed there? It’s not really defective if it’s turning one way to avoid hitting someone else, but someone might argue that it is; it’s just not clear, as it depends on the circumstances. What we’re not going to do is take the autonomous car to court, sit it in the stand, and ask it why it did what it did, which is what you might do with a person.

We’re going to need something like a black box on a plane in order to go back and understand why decisions were made, and to ensure that people who are harmed, particularly innocent bystanders, are fairly compensated. We’re probably also going to require an insurance system that involves mandatory third-party policies for people who own and drive these cars, because the drivers won’t necessarily be in control of the vehicle when autonomous decisions lead to accidents.

BP: How does the question of understanding autonomous decision making factor into this?

BD: It’s going to become very important. At the moment we aren’t technologically capable of fully understanding how autonomous decisions are being made and their implications. This raises the question as to whether or not we should be developing and publicly releasing a lot of these autonomous technologies just yet. It’s probably just not safe to do so at the moment because they haven’t been properly tested and we’re not able to explain what’s going on and why.

There is a lot to be done here before we can call these things ‘safe’. Some people are working on it but, when it comes to companies like Uber, they may be a little too far ahead of the curve given what is at stake. Hopefully we are able to fill in some of these technological and safety standard gaps in the coming years but I’m not convinced that we are there just yet. Given these present gaps, the tort liability system is one way in which we might be able to hold people to account in the meantime.
