To help ensure fairness in its AI models, Meta has partnered with Oasis Labs on cutting-edge privacy technology. Oasis Labs has built a platform based on Secure Multi-Party Computation (SMPC) that protects users' information when Meta asks people on Instagram to fill out a survey identifying their race and ethnicity.
The project is expected to advance fairness measurement in machine learning models, with benefits for individuals around the world and for society at large.
The platform will play a central role in a first-of-its-kind program that aims to determine whether AI models are fair and to enable appropriate mitigation, a major step toward measuring fairness in AI models.
AI and Privacy Meet
Meta’s Responsible AI, Instagram Equity, and Civil Rights teams are introducing an off-platform survey to people who use Instagram, in which users will be voluntarily asked to share their race and/or ethnicity.
To protect the privacy of the users’ survey responses, the information collected by a third-party survey provider will be secret-shared with third-party facilitators in such a way that neither the facilitators nor Meta can learn what any individual user answered.
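To illustrate the idea of secret sharing, the basic primitive behind SMPC, here is a minimal sketch of additive secret sharing. The category encoding, the number of facilitators, and the modulus are illustrative assumptions, not details of Meta's or Oasis Labs' actual protocol:

```python
import secrets

P = 2**61 - 1  # large prime modulus (illustrative choice)

def share(value: int, n_parties: int) -> list[int]:
    """Split a value into additive shares: each share is uniformly random,
    but all shares together sum to the value modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recover the original value by summing all shares modulo P."""
    return sum(shares) % P

# A hypothetical encoded survey response, split among three facilitators.
response = 3
pieces = share(response, 3)
assert reconstruct(pieces) == response  # only all shares together reveal it
```

Any subset of fewer than all the shares is statistically independent of the underlying response, which is why no single facilitator (or Meta) learns an individual answer.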
The facilitators will then compute a fairness measurement using encrypted predictions from Meta’s AI models, which Meta shares cryptographically. Meta will combine the de-identified results from each facilitator to reconstruct aggregate fairness measurement results.
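For illustration, the aggregation step can be sketched as follows: each facilitator sums its own shares locally, and only the combination of those partial sums reveals an aggregate statistic (here, a simple group count that could feed a fairness metric). The data and protocol details are hypothetical, assuming the additive secret sharing sketched above:

```python
import secrets

P = 2**61 - 1  # same illustrative modulus as in the sharing sketch

def share(value: int, n_parties: int) -> list[int]:
    parts = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    parts.append((value - sum(parts)) % P)
    return parts

# Hypothetical data: 1 if a user reported belonging to a demographic group.
indicators = [1, 0, 1, 1, 0]
n_facilitators = 3

# Each user's indicator is secret-shared across the facilitators.
facilitator_shares = [[] for _ in range(n_facilitators)]
for value in indicators:
    for fac, s in zip(facilitator_shares, share(value, n_facilitators)):
        fac.append(s)

# Each facilitator sums its shares locally, learning nothing about any user...
partial_sums = [sum(fac) % P for fac in facilitator_shares]

# ...and only the combined partial sums reconstruct the aggregate count.
group_count = sum(partial_sums) % P
assert group_count == sum(indicators)
```

Because addition commutes with the sharing scheme, the aggregate can be computed without any party ever holding an individual response in the clear; this is the property that makes aggregate fairness measurement compatible with per-user privacy.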
These cryptographic techniques allow the platform to measure bias and fairness while giving data contributors strong privacy protection as they contribute sensitive demographic information.
Esteban Arcaute, Director of Responsible AI at Meta, said:
“We seek to ensure AI at Meta benefits people and society, which requires deep collaboration, both internally and externally, across a diverse set of teams. The Secure Multi Party Compute methodology is a privacy-focused approach developed in partnership with Oasis Labs that enables crucial measurement work on fairness while keeping people’s privacy at the forefront by adopting well-established privacy-preserving methods.”
Oasis Labs, together with Meta, is expected to explore additional privacy-preserving approaches for more complex bias studies, including novel uses of emerging Web3 technologies backed by blockchain networks to reach billions of people worldwide. The goal is greater global accessibility, auditability, and transparency in how the survey is conducted, how survey data is gathered, and how that data is used for measurement.