The United States Department of Justice (DOJ) has asked the United States Sentencing Commission to update its guidelines to provide additional penalties for crimes committed with the aid of artificial intelligence.

According to a legal alert published by the law offices of White & Case, the recommendations seek to expand well beyond established guidelines and would apply not only to crimes committed with AI, but to any crime aided or abetted by even simple algorithms.

Current guidelines, per the legal alert, only cover so-called “sophisticated” systems. Ostensibly, the new guidelines would treat AI involved in criminal activity much like an accessory, with the legal system punishing the person responsible for applying it to criminal activity.

The document didn’t give specific circumstances, but cited concerns that technology currently available to the public could make certain criminal activities easier, amplify their potential scope and scale, and help offenders avoid detection and apprehension.

Enhancement penalties

The sentencing commission isn’t required by law to accept the Justice Department’s recommendations; however, its mandate does require that it consider them.

Were the recommendations to pass, certain crimes — with a likely emphasis on those often considered “white collar” crimes — would be eligible for a sentencing enhancement if alleged perpetrators are convicted.

Sentencing enhancements are essentially extra aggravating factors a judge considers when deciding how stiff a given penalty should be. In the US, judges are often given considerable discretion when it comes to sentencing.

In a case where a judge decides that the use of AI is an enhancing factor, for example, a defendant who might otherwise have received a minimum sentence would instead receive that minimum sentence plus an additional penalty attributed to the enhancement.

AI regulation

While the justice system is beginning to adapt to the modern world of AI at the level of individual criminal conduct, very little regulation or policy aimed at AI developers and publishers exists.

Companies in the US, for example, have faced a number of class action and individual lawsuits over their alleged use of personal data to train AI systems without express consent. But, to the best of our knowledge, the US government doesn’t currently regulate the use of personal data to train AI models.

In the EU, however, numerous US-based big tech companies have faced dozens of lawsuits and massive fines over the same data usage.

Related: EU watchdog sues Elon Musk’s X over alleged AI data violations