
Increasing our efforts to combat violent extremism online

Microsoft is expanding its commitments and financial support to the Christchurch Call, a vital multistakeholder initiative working to eliminate terrorist and violent extremist content online.

These new commitments, aimed at empowering researchers, increasing transparency and explainability around recommender systems, and advancing responsible AI safeguards, are part of our broader efforts to ensure the responsible use of AI. In support of these commitments, we are pledging an additional $500,000 to the Christchurch Call Algorithms Partnership to advance research on privacy-enhancing technologies and on the societal impacts of AI-driven recommendation engines.

Meaningful progress since the Christchurch tragedy

Three years ago, after the devastating attack on two mosques in Christchurch, New Zealand, Prime Minister Jacinda Ardern called on leaders across business, government, and civil society to work together on effective solutions to the spread of terrorist and violent extremist content online. Prime Minister Ardern and French President Emmanuel Macron launched the Christchurch Call to Action two months after the attack, and it has since grown to include 120 governments, online service providers, and civil society organisations advancing this crucial and challenging work.

Although progress has been made, more work remains, as the massacre earlier this year in Buffalo, New York, tragically demonstrated. That is why it is important that industry leaders are attending the Christchurch Call 2022 Leaders’ Summit now underway in New York.

As one of the original supporters of the Christchurch Call, Microsoft committed to the nine steps the industry outlined to address terrorist and violent extremist content. Over the past three years we have worked with industry, civil society, and governments to advance these goals, notably through the Global Internet Forum to Counter Terrorism (GIFCT). Together, we have made progress in addressing these online harms and in demonstrating the effectiveness of multistakeholder models for tackling difficult societal problems. Today’s gathering gives this community a chance to come together, take stock of our progress, and, most importantly, look to the future.

One issue that needs further attention is understanding how technology, particularly AI systems that recommend content, can contribute to the spread of harmful material. These technologies offer considerable benefits, helping people process vast amounts of information in ways that enhance creativity and productivity, whether that is consumers reducing their energy use, children finding helpful information, or farmers predicting weather patterns to improve crop yields. Yet the same technology can also contribute to the spread of harmful content.

Prime Minister Ardern has spoken eloquently in recent months about these challenges and the need for more concerted action. As she has noted, the risks associated with this technology are difficult to define. But given the stakes, we must confront them head-on. The potential harms are numerous and varied, and as the technology advances it interacts in increasingly intricate ways with societal issues. Research must proceed through meaningful multistakeholder partnerships between industry and academia, grounded in part in greater industry transparency about how these systems operate.

To advance these objectives, Microsoft is committing to the following actions:

Empowering researchers

Strong partnerships are needed to enable industry and academia to explore these important questions. To support this critical effort, we are committing to the following:

Supporting the Christchurch Call Algorithms Partnership: We are entering into a new partnership with Twitter, OpenMined, and the governments of New Zealand and the United States to study the impacts of AI content recommendation systems. Beginning with a pilot project as a “proof of function,” the partnership will explore how privacy-enhancing technologies (PETs) can enable greater accountability and understanding of algorithmic outcomes; a minimal sketch of one such technique follows.
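The pilot’s methods have not been detailed publicly, so the sketch below is only a minimal illustration of one widely used PET, differential privacy, under assumed conditions: researchers receive noised aggregate statistics about recommendation outcomes rather than per-user logs. The topic names and counts are hypothetical.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    Adding or removing one user changes a count by at most 1 (sensitivity = 1),
    so Laplace noise with scale 1/epsilon masks any individual's contribution.
    """
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical aggregate: how often a recommender surfaced each topic.
topic_impressions = {"news": 10482, "sports": 7310, "politics": 5127}

# Researchers see only the noised aggregates, never individual interaction logs.
private_view = {topic: dp_count(n, epsilon=0.5) for topic, n in topic_impressions.items()}
print(private_view)
```

Smaller values of epsilon give stronger privacy at the cost of noisier statistics; choosing that trade-off is exactly the kind of question such a pilot can examine.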

Promoting transparency

We are also taking steps to improve transparency and user control for recommender systems Microsoft develops. Specifically, we are:

Releasing new transparency features for Azure Personalizer: To further the understanding of recommender systems, we are introducing new transparency features for Azure Personalizer, a service that gives enterprise customers broadly applicable recommendation and decision-making capabilities to incorporate into their own products. This new functionality will show customers the most important factors that influenced a recommendation, along with the relative weight of each. By passing this capability through to their end users, our customers can help people better understand why, for example, a particular product or article was shown to them and the purposes for which these systems are used.
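For context, here is a brief sketch of Personalizer’s documented Rank/Reward loop using the azure-cognitiveservices-personalizer Python SDK. The endpoint, key, action IDs, and features are illustrative placeholders, and the new per-factor transparency output described above is not modelled here, since its exact response shape is not shown in this post.

```python
from azure.cognitiveservices.personalizer import PersonalizerClient
from azure.cognitiveservices.personalizer.models import RankableAction, RankRequest
from msrest.authentication import CognitiveServicesCredentials

# Hypothetical resource endpoint and key.
client = PersonalizerClient("https://<your-resource>.cognitiveservices.azure.com/",
                            CognitiveServicesCredentials("<your-key>"))

# Candidate items the recommender can choose between, with their features.
actions = [
    RankableAction(id="article-climate", features=[{"topic": "climate", "length": "short"}]),
    RankableAction(id="article-sports", features=[{"topic": "sports", "length": "long"}]),
]
context = [{"timeOfDay": "morning", "device": "mobile"}]  # features of the current session

# Ask the service which action to show in this context.
response = client.rank(rank_request=RankRequest(actions=actions, context_features=context))
print(response.reward_action_id)  # the action Personalizer selected

# Later, report how well the choice worked (e.g., 1.0 if the user engaged).
client.events.reward(event_id=response.event_id, value=1.0)
```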

Increasing transparency at LinkedIn: LinkedIn continues to make important strides toward transparency and explainability in its use of AI recommender systems. This includes regularly publishing explanatory articles about the feed, covering what content appears, how its algorithms work, and how members can tailor and personalise their content experience. LinkedIn has also written on its engineering blog about its approach to responsible AI, how it builds fairness into its AI products, and how it creates transparent and explainable AI systems.

Continuing to develop responsible AI safeguards

The current debate over recommender systems underscores how important it is to approach the development of AI systems thoughtfully. Humans make consequential choices about the use cases and objectives AI systems are put to work on. For instance, the owner of an AI system that recommends content or actions, such as Azure Personalizer, decides which actions to track and reward and how to integrate the system into a product or business process, ultimately determining the system’s potential benefits and harms; a brief illustration follows.
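To make this concrete, the hedged sketch below shows how a system owner’s reward definition encodes the objective a recommender will optimise. It is purely illustrative and not part of any Microsoft API: rewarding raw clicks alone can favour sensational content, while weighting dwell time and penalising user reports encodes a different goal.

```python
def reward_from_session(clicked: bool, seconds_read: float, reported: bool) -> float:
    """Illustrative (hypothetical) reward shaping for a content recommender."""
    if reported or not clicked:   # flagged items and ignored items earn nothing
        return 0.0
    # Scale dwell time into [0, 1]; a 30-second read earns the full reward.
    return min(seconds_read / 30.0, 1.0)

# A quick click-away earns a small reward; a flagged item earns none.
print(reward_from_session(clicked=True, seconds_read=6.0, reported=False))  # 0.2
print(reward_from_session(clicked=True, seconds_read=45.0, reported=True))  # 0.0
```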

Microsoft continues to expand its responsible AI program to help ensure AI systems are used responsibly. We recently published our Responsible AI Standard, along with our Impact Assessment template and guide, to share what we are learning from this approach and to contribute to the broader conversation on responsible AI. In the case of Personalizer, we have published a Transparency Note to help our customers better understand how the technology works, the considerations in choosing a use case, and the key characteristics and limitations of the system. We look forward to continuing this important work so that the benefits of AI can be realised responsibly.

Going forward

We know that more must be done to foster a safe, healthy online ecosystem and to ensure the responsible use of AI and other technologies. The Christchurch Call Leaders’ Summit is an important step in that process and a welcome reminder that no company, government, or group can achieve this goal alone. As today’s dialogue reminded me, we also need to hear from young people. While young people may be at risk from online hate and toxic online ecosystems, the 15 young people on our Council for Digital Good Europe show us that they also have the enthusiasm, idealism, and tenacity to help build healthier online communities. We owe them our best efforts to help create a safer future in a world where the impact of technology is ever more closely tied to the underlying health of democracy.
