
UC adopts recommendations for responsible use of artificial intelligence

Read the report

Responsible Artificial Intelligence: Recommendations to Guide the University of California’s Artificial Intelligence Strategy

University of California President Michael V. Drake, MD, has adopted a set of recommendations to guide the safe and responsible deployment of artificial intelligence in UC operations.

With Drake’s action, UC becomes one of the first universities in the country to establish general principles for the responsible use of artificial intelligence (AI) and a governance process that prioritizes transparency and accountability in decisions about when and how AI is deployed.

Stuart Russell (Courtesy photo)

“Artificial intelligence has great potential, but it must be used with the proper care and caution,” President Drake said. “The Presidential Artificial Intelligence Working Group has given us a roadmap to deploy this promising technology in a way that protects our community and reflects our values, including non-discrimination, security and privacy.”

Artificial intelligence refers to machines, computer programs, and other tools capable of learning and solving problems. “Increasingly, AI is being deployed in higher education institutions to improve efficiency, refine decision-making and improve service delivery. Still, its use can present significant risks,” said Stuart Russell, professor of computer science at UC Berkeley and co-chair of the task force. “Unrepresentative datasets or poor model design, for example, can unintentionally exacerbate problems of discrimination and bias.”

To navigate this thorny terrain, President Drake charged an interdisciplinary group of UC experts in August 2020 with developing recommendations for appropriate oversight of AI in university operations. The group comprised 32 faculty and staff from all 10 campuses and spanned a wide range of disciplines, including computer science and engineering, law and policy, medicine, and the social sciences.

Alexander Bustamante (Courtesy photo)

As part of its efforts, the task force interviewed dozens of experts and stakeholders across UC and surveyed campus information and technology officers to better understand how AI is being implemented and what governance and oversight mechanisms are in place, said Alex Bustamante, UC’s senior vice president of compliance and audit, who co-chaired the working group.

“The overwhelming majority of survey respondents were concerned about the risks of AI-based tools, especially the risk of bias and discrimination,” Bustamante said. “The UC community wanted proper oversight mechanisms. Our recommendations establish a governance process for ethical decision-making in the procurement, development, and monitoring of AI on campuses.”

UC will now take steps to operationalize the working group’s key recommendations:

  1. Institutionalize UC’s responsible AI principles in procurement and oversight practices;
  2. Establish campus-level guidance and system-wide coordination to advance the working group’s principles and recommendations;
  3. Develop a risk and impact assessment strategy to assess AI-based technologies during procurement and throughout the operational lifespan of a tool;
  4. Document AI-based technologies in a public database to promote transparency and accountability.

Working group co-chair Brandie Nonnecke, founding director of the CITRIS Policy Lab at UC Berkeley and an expert in AI governance and ethics, said Drake’s adoption of the recommendations is likely to generate strong interest across higher education.

“We are one of the first universities and the largest public university system to develop a governance process for the use of AI,” said Nonnecke. “I hope these recommendations inspire other universities to establish similar safeguards.”

Brandie Nonnecke (Courtesy photo)

Universities are starting to adopt AI-based tools for tasks ranging from chatbots that answer common admissions questions to automated scanners that review applicants’ resumes. But these tools are often purchased and deployed on an ad hoc basis, Nonnecke said, and lack the kind of systematic process for assessing fairness, accuracy, and reliability that UC is now putting in place.

“It’s good that we are putting these processes in place now. Other entities deployed AI and then realized that it produced discriminatory or less effective results. We are at a critical point where we can establish governance mechanisms that provide the necessary scrutiny and oversight,” she said.

The task force focused its research and findings on four areas where the use of AI in UC operations posed the greatest risk of harm: health; human resources; policing and campus security; and the student experience. In each of these areas, it examined how AI is currently deployed, or is likely to be deployed, at UC and made recommendations.

“We examined functional use cases to identify how our principles can guide effective strategies to mitigate the negative effects of AI,” said Nonnecke. “This work was important in showing UC how to effectively translate principles into sound practices.”

“We are also asking for a public database of all AI-enabled tools that pose more than a moderate risk to individual rights. Transparency in the use of AI-based tools is essential to ensuring that our actions are accountable to the ethics and values of our community.”

