The Label Blog

The Battle for Transparency: AI Researchers’ Open Letter to Tech Giants

We’re in the midst of an AI revolution, where machines that can learn, think, and even create are no longer just science fiction. It’s an exciting time, but it’s also a moment that calls for responsibility, especially when it comes to transparency. That’s why we at the Training Data Project (TDP) are buzzing with excitement about the recent push for open access to AI systems by over 100 top artificial intelligence researchers.

The letter, signed by more than 100 of the AI and machine learning industry’s top researchers, calls on several tech giants to increase transparency around their innovations. Specifically, the signatories want access to these companies’ systems so they can safety-test the tools now being used by consumers worldwide. They argue that current restrictions, meant to prevent misuse, actually hamper the crucial work of ensuring these tools are safe for millions of users.

The Call for Transparency

AI companies are taking an increasingly aggressive stance against outside auditors seeking to conduct independent testing of their tools and services. Independent research into AI systems is essential to ensuring that machine learning services operate safely, ethically, and legally. Beyond promoting regulatory compliance, researchers and other voices of authority within the industry want to ensure that consumers can enjoy the services AI/ML technologies provide without violating copyright, committing fraud, or contributing to the generation of misleading content.

Conducting this type of research often requires purposefully breaking the use agreements and terms of service introduced by AI companies. A lack of coordination between companies and researchers means that testing takes place under researchers’ own user accounts and logins. Because AI companies are still developing methods to monitor and punish terms-of-service violations, researchers fear a disproportionate crackdown on users who bring negative attention to these businesses.

The Signatories: Voices of Authority

In recognition of the importance of outside audits, OpenAI has offered special programs that give select researchers access to its systems for review. However, the signatories argue that this approach fosters an environment of favoritism, since companies hand-pick their own evaluators. Those voicing concern for the future of AI and transparency include distinguished specialists in AI research, policy, and law.

This call for transparency and safety in AI isn’t just academic—it has real-world implications for government and defense. In government, transparent AI could enhance public services, making them more efficient and fair. In defense, it could mean the development of systems that are not only powerful but also aligned with ethical standards, protecting citizens without overstepping boundaries.

Signatories include Stanford University’s Percy Liang; Pulitzer Prize-winning journalist Julia Angwin; Renée DiResta of the Stanford Internet Observatory; and Mozilla fellow Deb Raji, who has pioneered research into auditing AI models. Former government officials from around the world have also voiced their concern, including Marietje Schaake, a former member of the European Parliament, and Brown University professor Suresh Venkatasubramanian, a former adviser to the White House Office of Science and Technology Policy.

Targets of the Letter: AI Giants Under Fire

The letter is addressed to a list of AI and tech giants including OpenAI, Meta, Anthropic, Google, and Midjourney. It comes in response to vulnerabilities discovered in AI tools such as ChatGPT: researchers have managed to bypass safeguards through actions as simple as translating English prompts into less commonly spoken languages.

The authors of the letter made it clear that their goal is not to undermine the tech companies, but to help them as they grow and as their tools continue to expand their reach across the globe. By imploring the firms to provide a legal and technical safe harbor in which good-faith testing can take place, researchers are seeking a reliable way to inform companies of problems with their tools. In advocating for greater coordination to ensure the safety of these tools, researchers hope for a more fruitful relationship than the one they experienced with social media companies.

Many social media companies did everything they could to prevent independent research from taking place. As a result, researchers found other ways to share evidence of insecurity or malpractice at tech giants, often taking to social media or other platforms to spread warnings in so-called “gotcha” moments once problems were found. This way of communicating problems harmed both the public and the companies. Forced into a defensive, adversarial posture toward consumers and feeling they had been “caught with their pants down,” companies narrowed the range of problems they would publicly address, a direct consequence of lacking channels for constructive feedback.

Conclusion

The open letter represents a powerful rallying cry for an increase in transparency in the development and deployment of artificial intelligence and machine learning technologies. By advocating for access to AI systems for independent safety testing, these researchers are not only seeking to safeguard consumers from potential risks but also to uphold ethical and legal standards in AI innovation. The challenges posed by the aggressive stance of AI companies towards outside auditors highlight the critical need for collaboration and coordination between stakeholders to ensure responsible AI development.

Moving forward, tech giants must heed the call for transparency, offering avenues for researchers to access their systems and address potential harms while fostering a culture of accountability and openness in the AI ecosystem. Only through concerted efforts and cooperation can we navigate the complexities of AI advancement and promote the safe and ethical use of these transformative technologies for the betterment of society.

At TDP, we stand for Transparent, Readily available, Unbiased, Standards-based, and Traceable data practices. Our mission is to ensure AI training data is not only high quality but also reflective of the diverse society it serves. We’re here to champion a future where AI works for the benefit of all, grounded in trust and openness. Let’s take this moment to rally around the calls for greater transparency and accountability in AI. It’s a chance to shape a future where AI not only advances technology but also upholds our values and works for the common good.
