Computers should be given legal personality so that they can be sued as a way to stop fear of liability stifling innovation, a report by the International Bar Association (IBA) has suggested.
It also proposed scrapping income tax as the primary revenue source for national exchequers in a wide-ranging report contributing to the International Labour Organisation’s global debate on the future of work.
The International Federation of Robotics predicts that more than three million industrial robots will be in use in factories around the world by 2020, while at least one in three jobs is thought to be vulnerable to artificial intelligence (AI) and robotics.
One of the 10 legal disciplines the report covered was litigation, saying that as the use of AI became more popular, the likelihood of it causing damage to people or property was going to increase.
“AI systems with harmless goals may behave in harmful ways… Indeed, insofar as AI can accumulate experience and train itself, it can make unpredictable decisions independently of the will of its developer and eventually cause damage in the pursuit of its goals,” the report said.
“Regardless of how well designed and programmed AI products or services are, extrinsic factors beyond the machine’s control may take place and trigger an incident, causing injury or damage to third parties.
“This will certainly be the case with, for example, automated cars, where there is no possibility to control all the surrounding environment of the car.”
The IBA pointed out that machines have been behaving unpredictably and causing damage or injury to humans for some years, with 61 robot-related injuries and deaths reported in the US over the past 25 years.
“The majority of the incidents were caused by industrial robots. This was the case, for instance, with a Japanese worker in a motorcycle factory who was killed by an AI robot.
“The robot identified the employee as a threat to its mission and considered that the most efficient way to eliminate the threat and pursue its programmed goal was by pushing the employee into an adjacent machine.”
There have also been well-publicised deaths involving driverless cars.
The sophistication of new AI-based technologies meant that, under the law as it currently stood, it might be very difficult to establish the liability of someone involved in the creation or use of the machine.
“The factual patterns arising out of, for instance, an accident with a driverless car, when the vehicle has autonomously and independently opted to provoke the damage that actually occurred (which may have helped to avoid more serious damage), may be extremely complex.
“To determine if there was some fault regarding the operation of the AI system, some technological knowledge and expertise is required that will not only make the proof of any wrongdoing extremely difficult but also make litigation much more expensive.
“The application of conventional product liability principles may not be adequate in a fault-based legal system.”
But the authors found “no a priori reason to prevent autonomous AI machines from being granted legal status”, just as there was no reason, in principle, to prevent corporations and other legal fictions from acquiring theirs.
The report added: “In a case in which no wrongdoing can be established and it is proved that the action was carried out autonomously by a machine, it may be unreasonable to blame any other person rather than the machine itself.”
The report said that, in addition to questions about liability, other questions may arise. “For example, could a program that is malfunctioning claim ‘insanity’? Or if the program is affected by a virus, could it claim that it was under ‘coercion’?”
It did not come to a conclusion, but said “legal scholars, legislators and tech experts must endeavour their best efforts to collaborate on determining the solution which, on the one hand, will not obstruct the technological progress and, on the other, will respect the fundamental principles of liability law”.
All the different reports highlighted the positive impact technological advancements have had on the global workforce, but said that “whether it is mass unemployment due to automation, or a reduction in diversity caused by AI bias, the negative effects of digitalisation cannot be ignored”.
Jurisdictions must consider implementing new laws, or updating existing ones, “to ensure the continued prosperity of the global workforce”.
On tax, the IBA said exchequers’ reliance on income tax and value added tax as their primary revenue source needed to change if there was mass unemployment and a rise in alternative income generation, such as gig working, in the future.
“As there is no obvious alternative income source for national exchequers, the report considers that new types of taxation, such as digital or robotics tax, may need to be implemented to protect the global economy and promote growth.”
Els de Wind, incoming co-chair of the IBA’s Global Employment Institute, said: “Advancements in technology have changed our society dramatically, both for the better and the worse.
“This is especially true in the workplace, where developments such as AI and blockchain are simultaneously paving the way for innovation and rendering certain job roles obsolete. Preparation is key to ensuring the global economy not only survives, but thrives in this ever-changing world.”