AI System 206 Replacing Human Prosecutors in China

  • Source: UncoverDC
  • 09/19/2023

The Chinese government has developed the AI system 206, an artificial intelligence prosecutor designed to alleviate the workload of prosecutors in China. The system is allegedly capable of prosecuting "Shanghai's eight most common crimes." The AI was 'trained' using "17,000 real-life cases from 2015 to 2020." Fears abound that the technology could be weaponized by the state, potentially charging citizens for political dissent.

According to a 2019 Dailyalts.com story, the AI was first officially tested in a Shanghai court in early 2019, after trials in several provinces in 2018. The system reportedly performs the following tasks with 97% accuracy:

      • Transcribe testimony
      • Transfer physical data and documents to electronic databases
      • Display relevant parameters immediately, such as time, place, people, behavior, and consequences
      • Identify defective or contradicting evidence
      • Respond to oral commands to display evidence and information on screens around the courtroom
      • Interconnect with judicial, procuratorial, and public security authorities and courts

The crimes the AI can prosecute include "provoking trouble," a charge commonly used to stifle dissent in China, as well as credit card fraud, gambling crimes, dangerous driving, theft, fraud, intentional injury, and obstructing official duties.

In February 2017, the Political and Judiciary Commission under the Central Committee of the Communist Party of China tasked courts in Shanghai with developing and testing an AI system to assist with prosecutions. Between 2017 and 2019, Shanghai allocated more than "400 people from courts, procuratorates, and public security bureaus, working with more than 300 IT staff from tech giant iFLYTEK" to develop the technology. iFLYTEK, established in 1999, is a well-known speech recognition and artificial intelligence developer. From the iFLYTEK website:

"Since its establishment, the company is devoted to cornerstone technological research in speech and languages, natural language understanding, machine learning, machine reasoning, adaptive learning, and has maintained the world-leading position in those domains. The company actively promotes the development of AI products and their sector-based applications, with visions of enabling machines to listen and speak, understand and think, creating a better world with artificial intelligence."

The system runs on a standard computer and employs algorithms built from evidence-collection models for "102 common cases" programmed into the system. The AI can pull up questioning models for law enforcement and "can help the judge find fact, authenticate evidence[s], protect the right to appeal and judge impartially on the trial, so as to prevent wrongfully convicted cases," according to Guo Weiqing, president of Shanghai No 2 Intermediate People's Court. It translates voice into characters and "can distinguish between questioner and responder." The software captures salient elements of cases from electronic files. According to NotTheBee, some cities in China have used AI "to monitor government employees' social circles and activities to detect corruption."
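System 206's internal design has not been published, but the functions described above, labeling who is speaking in a transcript and pulling out salient parameters such as time, place, and people, can be sketched in a few lines of Python. The heuristics, patterns, and the Turn/CaseRecord structures below are invented for illustration and do not reflect the actual system:

```python
import re
from dataclasses import dataclass, field

# Hypothetical sketch of the kind of processing described above: labeling
# speakers in a transcript and extracting "salient parameters" (time, place,
# people) with simple pattern matching. System 206's real pipeline is not
# public; this is illustration only.

@dataclass
class Turn:
    speaker: str   # "questioner" or "responder"
    text: str

@dataclass
class CaseRecord:
    turns: list = field(default_factory=list)
    parameters: dict = field(default_factory=dict)

QUESTION_HINTS = ("?", "did you", "where were", "when did")

def label_speaker(line: str) -> str:
    """Crude heuristic: lines that look like questions come from the questioner."""
    lowered = line.lower()
    return "questioner" if any(h in lowered for h in QUESTION_HINTS) else "responder"

def extract_parameters(text: str) -> dict:
    """Toy extraction of time/place/people mentions via regular expressions."""
    return {
        "time":   re.findall(r"\b\d{1,2}:\d{2}\s?(?:am|pm)?\b", text, re.I),
        "place":  re.findall(r"\bat the ([A-Za-z ]+?)(?:[,.]|$)", text),
        "people": re.findall(r"\b(?:Mr|Ms|Officer)\.? [A-Z][a-z]+", text),
    }

def process_transcript(lines: list[str]) -> CaseRecord:
    record = CaseRecord()
    for line in lines:
        record.turns.append(Turn(label_speaker(line), line))
    record.parameters = extract_parameters(" ".join(lines))
    return record

if __name__ == "__main__":
    testimony = [
        "Where were you at 9:30 pm on the night in question?",
        "I was at the convenience store, talking to Mr. Chen.",
    ]
    result = process_transcript(testimony)
    for turn in result.turns:
        print(f"[{turn.speaker}] {turn.text}")
    print(result.parameters)
```

A real system would replace these keyword heuristics with trained speech and language models, but the sketch shows the kind of structured record a court AI would need to build from raw testimony.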

The technology is also being used in the U.S., according to InterestingEngineering.com. With case backlogs piling up due to COVID, AI adoption has picked up as a way to "streamline" judicial processes. Forensic algorithms are particularly popular in evidence collection:

"U.S. Fingerprint matching software aims to correctly identify suspects with staggering speed and precision, facial recognition helps law enforcement agencies track people down, and probabilistic genotyping can work wonders to assist investigators in determining if a genetic sample from a crime scene is linked to a person of interest or not."

AI comes with some inherent risks, however. AI can be hacked, and systematic bias and a lack of transparency can be problematic. An October 2020 paper exploring the risks of AI in medicine highlighted several of them, and the questions it raises apply just as easily to the AI being used in courts:

"[T]he use of 'black box' AI in medicine [poses risk]...the physician needs to disclose to patients that even the best AI comes with the risks of cyberattacks, systematic bias, and a particular type of mismatch between AI's implicit assumptions and an individual patient's background situation.

Unfortunately, the best AI also tends to be the least transparent, often resulting in a 'black box' (Carabantes 2019). We can see which data go into the AI system and also which come out. We may even understand how such AI systems work in general terms, i.e., usually through so-called deep neural networks. Yet, we often cannot understand why, on a certain occasion, the AI system made a particular decision, arrived at a particular diagnosis, or performed a particular move in an operation." 
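The "black box" point can be made concrete with a toy model. In the sketch below, whose weights and inputs are invented, every number in the forward pass is visible, yet none of them amounts to a human-readable reason for the final score:

```python
import math

# Minimal sketch of the "black box" problem: a tiny neural network whose
# inputs and output are fully visible, yet whose internal weights give no
# human-readable justification for the decision. The network is invented
# for illustration; no real system is reproduced here.

W_HIDDEN = [[0.8, -1.2, 0.4],   # hidden-layer weights (arbitrary)
            [-0.5, 0.9, 1.1]]
W_OUTPUT = [1.3, -0.7]          # output-layer weights (arbitrary)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def predict(features: list[float]) -> float:
    """Forward pass: every multiplication can be traced, but none of the
    intermediate numbers explains *why* the score is high or low."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features))) for row in W_HIDDEN]
    return sigmoid(sum(w * h for w, h in zip(W_OUTPUT, hidden)))

if __name__ == "__main__":
    # Visible input (three normalized case features) and visible output...
    score = predict([0.2, 0.7, 0.1])
    print(f"Decision score: {score:.3f}")
    # ...but the "reasoning" lives in W_HIDDEN and W_OUTPUT, which carry no
    # human-meaningful semantics.
```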

Humans design AI and, therefore, AI is potentially subject to the "same flaws present in the humans who design it."

A 2017 District of Columbia court case showed the risks of using AI in a prosecution. Faulty evidence was apparently presented against a juvenile defendant who had been offered probation: after all parties had agreed, an AI program was used to dispute the decision, exposing the juvenile to possible jail time. Public defender Rachel Cicurel challenged the decision. The algorithm allegedly used factors, "some beyond his control," to determine that he was at high risk for future criminal activity. Among those factors were that he reportedly held "negative attitudes toward police" and lived in government housing.

"Cicurel and her team challenged this algorithm-driven AI program. Tracing it to its source, she found it was a thesis written by a 20-year-old graduate student. The paper had never been examined, validated, or accepted by any scientific community. In other words, the algorithm-driven AI program was Garbage In and was producing Garbage Out. When informed of this, the trial judge invalidated the test."

Another case, in Wisconsin, highlighted the dangers of algorithmically generated "high-risk scoring." Paul Zilly, who allegedly stole a lawnmower, was given a "high-risk assessment score" in his 2013 case, indicating he would likely commit crimes in the future. A judge looked at the score from a system called COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) and overturned Zilly's plea deal, handing him a two-year state prison sentence and three years of supervision. COMPAS is a case management and decision support tool used by courts to assess the likelihood of recidivism.
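COMPAS's model is proprietary, so the factors and weights below are invented, but the sketch shows the general shape of a weighted-factor risk score of the kind described in both the D.C. and Wisconsin cases, and why it is sensitive to choices the defendant never sees:

```python
# Hypothetical illustration of a weighted-factor risk score. COMPAS's actual
# model is proprietary; the factors, weights, and threshold here are invented
# for demonstration only.

RISK_WEIGHTS = {
    "prior_arrests":            0.30,   # count, capped at 10
    "age_under_25":             0.20,   # 0 or 1
    "unstable_housing":         0.15,   # 0 or 1
    "negative_attitude_police": 0.10,   # 0 or 1 (a factor cited in the D.C. case)
    "unemployed":               0.25,   # 0 or 1
}

HIGH_RISK_THRESHOLD = 0.5

def risk_score(answers: dict) -> float:
    """Weighted sum of normalized factor values, clipped to [0, 1]."""
    score = 0.0
    for factor, weight in RISK_WEIGHTS.items():
        value = answers.get(factor, 0)
        if factor == "prior_arrests":
            value = min(value, 10) / 10   # normalize the count to [0, 1]
        score += weight * value
    return min(score, 1.0)

def classify(answers: dict) -> str:
    return "HIGH RISK" if risk_score(answers) >= HIGH_RISK_THRESHOLD else "low risk"

if __name__ == "__main__":
    defendant = {
        "prior_arrests": 1,
        "age_under_25": 1,
        "unstable_housing": 1,
        "negative_attitude_police": 1,
        "unemployed": 1,
    }
    print(classify(defendant), round(risk_score(defendant), 2))
```

A small change to any weight, or to the 0.5 threshold, can flip a defendant from "low risk" to "HIGH RISK," which is precisely the opacity problem the defense attorneys in these cases raised.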

China has aggressively used AI in almost every sector of society, including during the pandemic, "to improve efficiency, reduce corruption and strengthen control. Most Chinese prisons have also adopted AI technology to track prisoners' physical and mental status, with the goal of reducing violence," according to reporting by the Korea Times.
