Startec

Startec

Procedural justice can address generative AI’s trust/legitimacy problem

Mai 19, às 19:30

·

6 min de leitura

·

6 leituras

Tracey Meares is the Walton Hale Hamilton Professor and Faculty Director of the Justice Collaboratory at Yale Law School. More posts by this contributor Sudhir Venkatesh is William B. Ransford...
Procedural justice can address generative AI’s trust/legitimacy problem

Tracey Meares is the Walton Hale Hamilton Professor and Faculty Director of the Justice Collaboratory at Yale Law School.

More posts by this contributor

Sudhir Venkatesh is William B. Ransford Professor of Sociology at Columbia University, where he directs the SIGNAL tech lab. He previously directed Integrity Research at Facebook and built out Twitter’s first Social Science Innovation Team.

More posts by this contributor

Matt Katsaros is the Director of the Social Media Governance Initiative at the Justice Collaboratory at Yale Law School and a former researcher with Twitter and Facebook on online governance.

More posts by this contributor

The much-touted arrival of generative AI has reignited a familiar debate about trust and safety: Can tech executives be trusted to keep society’s best interests at heart?

Because its training data is created by humans, AI is inherently prone to bias and therefore subject to our own imperfect, emotionally-driven ways of seeing the world. We know too well the risks, from reinforcing discrimination and racial inequities to promoting polarization.

OpenAI CEO Sam Altman has requested our “patience and good faith” as they work to “get it right.”

For decades, we’ve patiently placed our faith with tech execs at our peril: They created it, so we believed them when they said they could fix it. Trust in tech companies continues to plummet, and according to the 2023 Edelman Trust Barometer, globally 65% worry tech will make it impossible to know if what people are seeing or hearing is real.

It is time for Silicon Valley to embrace a different approach to earning our trust — one that has been proven effective in the nation’s legal system.

A procedural justice approach to trust and legitimacy

Grounded in social psychology, procedural justice is based on research showing that people believe institutions and actors are more trustworthy and legitimate when they are listened to and experience neutral, unbiased and transparent decision-making.

Four key components of procedural justice are:

  • Neutrality: Decisions are unbiased and guided by transparent reasoning.
  • Respect: All are treated with respect and dignity.
  • Voice: Everyone has a chance to tell their side of the story.
  • Trustworthiness: Decision-makers convey trustworthy motives about those impacted by their decisions.

Using this framework, police have improved trust and cooperation in their communities and some social media companies are starting to use these ideas to shape governance and moderation approaches.

Here are a few ideas for how AI companies can adapt this framework to build trust and legitimacy.

Build the right team to address the right questions

As UCLA Professor Safiya Noble argues, the questions surrounding algorithmic bias can’t be solved by engineers alone, because they are systemic social issues that require humanistic perspectives — outside of any one company — to ensure societal conversation, consensus and ultimately regulation—both self and governmental.

In “System Error: Where Big Tech Went Wrong and How We Can Reboot,” three Stanford professors critically discuss the shortcomings of computer science training and engineering culture for its obsession with optimization, often pushing aside values core to a democratic society.

In a blog post, Open AI says it values societal input: “Because the upside of AGI is so great, we do not believe it is possible or desirable for society to stop its development forever; instead, society and the developers of AGI have to figure out how to get it right.”

However, the company’s hiring page and founder Sam Altman’s tweets show the company is hiring droves of machine learning engineers and computer scientists because “ChatGPT has an ambitious roadmap and is bottlenecked by engineering.”

Are these computer scientists and engineers equipped to make decisions that, as OpenAI has said, “will require much more caution than society usually applies to new technologies”?

Tech companies should hire multi-disciplinary teams that include social scientists who understand the human and societal impacts of technology. With a variety of perspectives regarding how to train AI applications and implement safety parameters, companies can articulate transparent reasoning for their decisions. This can, in turn, boost the public’s perception of the technology as neutral and trustworthy.

Include outsider perspectives

Another element of procedural justice is giving people an opportunity to take part in a decision-making process. In a recent blog post about how OpenAI company is addressing bias, the company said it seeks “external input on our technology” pointing to a recent red teaming exercise, a process of assessing risk through an adversarial approach.

While red teaming is an important process to evaluate risk, it must include outside input. In OpenAI’s red teaming exercise, 82 out of 103 participants were employees. Of the remaining 23 participants, the majority were computer science scholars from predominantly Western universities. To get diverse viewpoints, companies need to look beyond their own employees, disciplines, and geography.

They can also enable more direct feedback into AI products by providing users greater controls over how the AI performs. They might also consider providing opportunities for public comment on new policy or product changes.

Ensure transparency

Companies should ensure all rules and related safety processes are transparent and convey trustworthy motives about how decisions were made. For example, it is important to provide the public with information about how the applications are trained, where data is pulled from, what role humans have in the training process, and what safety layers exist to minimize misuse.

Allowing researchers to audit and understand AI models is key to building trust.

Altman got it right in a recent ABC News interview when he said, “Society, I think, has a limited amount of time to figure out how to react to that, how to regulate that, how to handle it.”

Through a procedural justice approach, rather than the opacity and blind-faith of approach of technology predecessors, companies building AI platforms can engage society in the process and earn—not demand—trust and legitimacy.


Continue lendo

Showmetech

Motorola Razr Plus é o novo dobrável rival do Galaxy Z Flip
Após duas tentativas da Motorola em emplacar — novamente — telefones dobráveis, eis que temos aqui a terceira, e aparentemente bem-vinda, tentativa. Estamos falando do Motorola Razr Plus, um smartphone...

Hoje, às 15:20

DEV

Mentoring for the LGBTQ+ Community
Once unpublished, all posts by chetanan will become hidden and only accessible to themselves. If chetanan is not suspended, they can still re-publish their posts from their dashboard. Note: Once...

Hoje, às 15:13

TabNews

IA: mais um arrependido / Déficit de TI / Apple: acusação grave · NewsletterOficial
Mais um pioneiro da IA se arrepende de seu trabalho: Yoshua Bengio teria priorizado segurança em vez de utilidade se soubesse o ritmo em que a tecnologia evoluiria – ele junta-se a Geoffr...

Hoje, às 14:37

Hacker News

The Analog Thing: Analog Computing for the Future
THE ANALOG THING (THAT) THE ANALOG THING (THAT) is a high-quality, low-cost, open-source, and not-for-profit cutting-edge analog computer. THAT allows modeling dynamic systems with great speed,...

Hoje, às 14:25

TabNews

[DISCUSÃO/OPINIÕES] – Outsourcing! O que, para quem, por que sim, por que não! · dougg
Quero tentar trazer nesta minha primeira publicação, uma mistura de um breve esclarecimento sobre o que são empresas de outsourcing, como elas funcionam e ganham dinheiro, mas também, ven...

Hoje, às 13:58

TabNews

Duvida: JavaScript - Desenvolver uma aplicação que vai ler um arquivo *.json · RafaelMesquita
Bom dia a todos Estou estudando javascript e me deparei com uma dificuldade e preciso de ajuda *Objetivo do estudo: *desenvolver uma aplicação que vai ler um arquivo *.json Conteudo do in...

Hoje, às 13:43

Showmetech

Automatize suas negociações com um robô de criptomoedas
Índice Como o robô de criptomoedas Bitsgap funciona?Qual a vantagem de utilizar um robô de criptomoedas?Bitsgap é confiável? O mercado de trading tem se tornado cada vez mais popular e as possibilidades de...

Hoje, às 13:13

Hacker News

Sketch of a Post-ORM
I’ve been writing a lot of database access code as of late. It’s frustrating that in 2023, my choices are still to either write all of the boilerplate by hand, or hand all database access over to some...

Hoje, às 13:11

Showmetech

14 chuveiros elétricos para o banho dos seus sonhos
Índice Chuveiro ou Ducha?Tipos de chuveiro elétrico9 fatores importantes para considerar na hora de comprar chuveiros elétricosMelhores chuveiros elétricosDuo Shower LorenzettiFit HydraAcqua Storm Ultra...

Hoje, às 11:00

DEV

Learn about the difference between var, let, and const keywords in JavaScript and when to use them.
var, let, and const: What's the Difference in JavaScript? JavaScript is a dynamic and flexible language that allows you to declare variables in different ways. You can use var, let, or const keywords to...

Hoje, às 10:21