AI systems shed light on root cause of religious conflict
Artificial intelligence can help us to better understand the causes of religious violence and to potentially control it, according to a new Oxford University collaboration. The study is one of the first to be published that uses psychologically realistic AI – as opposed to machine learning.
The research, published in the Journal of Artificial Societies and Social Simulation, combines computer modelling and cognitive psychology to create an AI system able to mimic human religiosity, an approach that allows for a better understanding of the conditions, triggers and patterns of religious violence.
The study is built around the question of whether people are naturally violent, or whether factors such as religion can cause xenophobic tension and anxiety between different groups that may or may not lead to violence.
The findings reveal that people are a peaceful species by nature. Even in times of crisis, such as natural disasters, people tend to bond and come together. However, in a wide range of contexts they are willing to endorse violence – particularly when others go against the core beliefs that define their identity.
Conducted by a cohort of researchers from universities including Oxford, Boston University and the University of Agder, Norway, the paper does not explicitly simulate violence but instead focuses on the conditions that enabled two specific periods of xenophobic social anxiety that then escalated to extreme physical violence.
Justin Lane, a DPhil student in the Institute of Cognitive & Evolutionary Anthropology, who is a co-author on the work, and led the design of the model used and data collection, said: ‘Religious violence is not our default behaviour – in fact it is pretty rare in our history.’
Although the research focuses on specific historic events, the findings can be applied to any occurrence of religious violence and used to understand the motivations behind it, particularly events involving radicalised Islam, when people’s patriotic identity conflicts with their religious one, for example the Boston bombing and the London terror attacks. The team hope that the results can be used to support governments in addressing and preventing social conflict and terrorism.
The paper focuses on two cases of extreme violence. The first is the conflict commonly referred to as the Northern Ireland Troubles, regarded as one of the most violent periods in Irish history. The conflict, involving the British army and various Republican and Loyalist paramilitary groups, spanned three decades, claimed the lives of approximately 3,500 people and saw a further 47,000 injured.
Although a much shorter period of tension, the 2002 Gujarat riots in India were equally devastating. The three-day period of inter-communal violence between the Hindu and Muslim communities in the western Indian state of Gujarat began when a Sabarmati Express train filled with Hindu pilgrims stopped in the predominantly Muslim town of Godhra, and ended with the deaths of more than 2,000 people.
Of the study’s use of psychologically realistic AI, Justin said: ‘99% of the general public are most familiar with AI that uses machine learning to automate human tasks, like classifying something – such as whether tweets are positive or negative – but our study uses something called multi-agent AI to create a psychologically realistic model of a human: for example, how do they think, and particularly how do we identify with groups? Why would someone identify as Christian, Jewish or Muslim? Essentially, how do our personal beliefs align with how a group defines itself?’
To create these psychologically realistic AI agents, the team use theories from cognitive psychology to mimic how a human being would naturally think and process information. This is not a new or radical approach, but it is the first time it has been implemented in a working computer model for research. There is an entire body of theoretical literature comparing the human mind to a computer programme, but until now no one had taken that information and actually programmed it into a computer; it had remained an analogy. The team programmed these rules for cognitive interaction into their AI programme to show how an individual’s beliefs match up with a group situation.
They did this by looking at how humans process information against their own personal experiences, combining some AI agents (mimicking people) that had had positive experiences with people from other faiths with others that had had negative or neutral encounters. This allowed them to study the escalation and de-escalation of violence over time, and how it can, or cannot, be managed.
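As a rough illustration of what such a multi-agent design might look like (this is a sketch for demonstration only, not the authors’ published model; the class, fields and update values are assumptions), each agent could carry a group identity, demographic attributes and a running anxiety level that is nudged up or down by the valence of its encounters:

```python
import random

class Agent:
    """Toy agent for a multi-agent sketch. Illustrative only: the fields and
    update rules are assumptions, not the published model."""

    def __init__(self, group, age, ethnicity):
        self.group = group          # group/religious identity the agent aligns with
        self.age = age
        self.ethnicity = ethnicity
        self.anxiety = 0.0          # accumulated social anxiety
        # valence of the agent's past encounters with outgroup members
        self.outgroup_experience = random.choice(["positive", "neutral", "negative"])

    def encounter(self, other):
        """Nudge anxiety after meeting another agent, depending on group
        membership and prior outgroup experience."""
        if other.group == self.group:
            self.anxiety = max(0.0, self.anxiety - 0.05)  # ingroup contact is reassuring
        elif self.outgroup_experience == "positive":
            self.anxiety = max(0.0, self.anxiety - 0.02)
        elif self.outgroup_experience == "negative":
            self.anxiety += 0.10  # encounter read as a challenge to core beliefs
        else:
            self.anxiety += 0.02
```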
To represent everyday society and how people of different faiths interact in the real world, they created a simulated environment and populated it with hundreds, thousands or even millions of the human model agents, the only difference being that these ‘people’ all have slightly different variables, such as age and ethnicity.
The simulated environments themselves have a basic design. Individuals have a space that they exist in, but within this space there is a certain probability that they will interact with environmental hazards, such as natural disasters and disease, and, at some point, with each other.
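Continuing the illustrative sketch above (the step count, probabilities and escalation threshold below are invented for demonstration, not values from the paper), such an environment reduces to a loop in which each agent faces hazards with some probability, occasionally meets a randomly chosen other agent, and tips into a violent outbreak only once its anxiety crosses a threshold:

```python
import random

def simulate(agents, steps=100, hazard_prob=0.01, meet_prob=0.1, violence_threshold=1.0):
    """Toy environment loop using the Agent sketch above. Each step, an agent may
    hit an environmental hazard and may meet another agent; an outbreak is
    recorded when anxiety crosses the threshold. All values are illustrative."""
    outbreaks = 0
    for _ in range(steps):
        for agent in agents:
            if random.random() < hazard_prob:        # natural disaster, disease, etc.
                agent.anxiety += 0.05
            if random.random() < meet_prob:
                other = random.choice(agents)
                if other is not agent:
                    agent.encounter(other)
            if agent.anxiety > violence_threshold:   # escalation past the tipping point
                outbreaks += 1
                agent.anxiety = 0.0
    return outbreaks

# Example: a small mixed population of two groups
population = [Agent(group=random.choice(["A", "B"]),
                    age=random.randint(18, 80),
                    ethnicity=random.choice(["x", "y"]))
              for _ in range(500)]
print(simulate(population))
```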
The findings revealed that the most common conditions enabling long periods of mutually escalating xenophobic tension occur when social hazards, such as outgroup members who deny the group’s core beliefs or sacred values, overwhelm people to the point that they can no longer deal with them. It is only when people’s core belief systems are challenged, or they feel that their commitment to their own beliefs is questioned, that anxiety and agitation occur. However, this anxiety led to violence in only 20% of the scenarios created, all of which were triggered by people, from either outside or within the group, going against the group’s core beliefs and identity.
Some religions have a tendency to encourage extreme displays of devotion to a chosen faith, and this can then take the form of violence against a group or individual of another faith, or against someone who has broken away from the group.
While other research has tried to use traditional AI and machine learning approaches to understand religious violence, those efforts have delivered mixed results, and biases against minority communities in machine learning raise further ethical concerns. The paper marks the first time that multi-agent AI has been used to tackle this question and create psychologically realistic computer models.
Justin said: ‘Ultimately, to use AI to study religion or culture, we have to look at modelling human psychology because our psychology is the foundation for religion and culture, so the root causes of things like religious violence rest in how our minds process the information that our world presents.’
Understanding the root causes of religious violence means the model could be used to contain and minimise these conflicts, but also to inflame them. Used responsibly, however, this research can be a positive tool that supports stable societies and community integration.
Off the back of this research, the team have recently secured funding for a new two-year project with the Center for Modeling Social Systems in Kristiansand, Norway. The work will help the Norwegian government to optimise the refugee integration process by studying demographic shifts related to immigration and integration in Europe, such as the Roma in Slovakia and the resettlement of Syrian refugees from Lesbos to Norway.