Features

Making artificial intelligence ethical

Dr Paula Boddington is a research associate at Oxford’s Department of Computer Science, specialising in developing codes of ethics for artificial intelligence.

What drives you in your field?

I find the philosophical and ethical questions posed by developments in artificial intelligence fascinating.

There are visions of the development of AI that press us to ask questions about the limits and basis of our values – if AI radically changes the nature of work, for instance, perhaps even abolishing it for many, we have to reappraise what we do and don’t value about work, which raises questions about why we value any activity. Such questions about extending human intelligence and agency with AI are, in fact, homing in on the most fundamental questions of philosophy: the nature of human beings, our place in the world and our ultimate sources of value. For me, working in this field is like finding a philosophical Shangri-La.

What are the biggest challenges facing the field?

My work is focused on the implications of the technology, so the challenges include making sure that the power of AI does not simply amplify problems – such as our existing biases. There’s also a big issue in how we apply AI to problems. AI can be very powerful indeed in narrow areas. Whenever such narrow focus happens, there’s a danger that context will be missed, and that once we’ve found a solution we’ll make all our problems fit it. As Abraham Maslow said, when you have a hammer, everything begins to look like a nail.

There are certain tasks at which the AI we have now and in the near future can excel, so we must make sure that as we develop particular applications, we don’t find that our picture of the world starts to mould itself to what we can achieve with AI, especially given the hype that periodically surrounds it. That’s one of the reasons why we need as many people as possible involved in developing and applying AI, and thinking creatively about how it can best be used, and what else we need to achieve real benefits.

Why do you think it is important to encourage more women into the field?

Yes, it’s important that women and men work in AI, but more than this, it’s important that there are people with diverse experiences and varied opinions and viewpoints in AI, for a number of reasons.

We need to develop technology that actually caters to people’s needs, and whose practical applications will really benefit human beings. Tailoring such tech is complex, and it needs really good design that is sensitive to the context of a myriad of different circumstances.

What research are you most proud of?

I try not to really ‘do’ being proud of things; I’ve always been taught that ‘pride comes before a fall’. But I’m most pleased to be involved in work that might have a practical impact in improving people’s lives. For instance, I’m working right now on a project based at Cardiff University, collaborating with a group of medical sociologists and others on the care of people living with dementia – see storiesofdementia.com. This might seem a million miles away from AI and the impact of new technology, but in fact the philosophical and ethical issues overlap considerably: how do we translate abstract ideas such as respect for persons, and humane, dignified care, into making a concrete difference to the lives of people living with dementia, who face challenges such as difficulties in communicating?

This work is aimed at producing practical recommendations to improve lives. We’ve just started a project looking at continence care – a world away from the glamour of AI, but essential work. And I see a great opportunity for technology to address some important and common problems: for example, working towards better detection of pain, which is greatly under-treated in people with dementia, or assisting with access to fluids and access to the toilet, which is often a problem on hospital wards. In the end, it’s this kind of careful, detailed ethnographic work that my colleagues in Cardiff are carrying out – examining what’s really going on and what’s needed – that needs to be married up with developments in tech in order to produce technology that will really benefit people.

Are there any AI research developments that excite you or that you are particularly interested in?

I’m particularly interested in the possibilities for AI in medicine, such as helping with disease diagnosis and the interpretation of medical images, and also its deployment in applications such as in the use of mobile technologies for health management. With these developments people are increasingly able to monitor and learn about their own health conditions. These are particularly exciting for use in remote areas or where medical staff are in short supply, but also simply for increasing the knowledge and control that individuals have over their own conditions and hence over their own wellbeing.

There are, quite understandably, fears that AI will take away jobs, but in the context of medicine, I think that’s unlikely. Think about how overstretched medical staff are at the moment. Helping them to make faster, more accurate diagnoses, tailored to individuals, will not only help patients, it should, hopefully, help to relieve time pressures and other stressors from doctors, if applied thoughtfully.

The evidence so far seems to indicate that AI works best as an addition to the skills of medical practitioners, not as a replacement for them. With all these developments, however, we need to keep looking very carefully at how we can get the best out of such technologies. For example, the early diagnosis of disease can be a big advantage in some conditions – but not such an advantage in others. In any context, and medicine is a good example of this, information is just information. It’s not knowledge, and it’s certainly not wisdom. That’s where the human skills of medical practitioners will always have a vital role.

What drew you towards a career in science?

Our whole family was always really excited about science. As children, my siblings and I were always glued to the television whenever Tomorrow’s World was on.
I came to dislike school a lot and used to bunk off and go to the library and read philosophy instead. I was really interested in how the arts, social sciences and STEM subjects worked together.

I’ve always been focused on applying abstract ideas to concrete reality, and on understanding, say, the science behind developments in genomics. From my work on ethical questions in medical technology it was a short step to working on issues in artificial intelligence.

Who inspires you?

Of the many possible answers, I’d have to say members of my family. My father always told me that I could do anything I wanted in life. His own mother had started out life as the illegitimate daughter of a Victorian barmaid, brought up in Tiger Bay in Cardiff, and she became the headmistress of a girls’ grammar school. So Dad had a great belief in women’s abilities. On my Mum’s side, her grandmother was the first woman in Cardiff to have her own alcohol licence and ran her own pub, also in Tiger Bay. She had six children, and during the Depression, when work was hard to find, she started doing pub lunches to provide income for them – the family always claim that she invented the pub lunch. Whether that’s strictly true or not, ‘get an education, get an education’ was like a mantra breathed in the air: the idea that education was a key to success, that family was crucial too, and that yes, you can get around obstacles and make a go of things.

Dr Boddington is the author of the book Towards a Code of Ethics for Artificial Intelligence.

Learn more about the research referenced in this article.

Find out more about Dr Boddington and her research interests.

Redesigning complex networks with AI

In part three of our women in AI series, Professor Marta Kwiatkowska, a Polish computer scientist at Oxford’s Department of Computer Science, discusses her research specialism: developing modelling and analysis methods for complex systems, including those arising in computational networks (which are applicable to autonomous technology), electronic devices and biological organisms.

Are there any AI research developments that excite you or that you are particularly interested in?

Robotics, including autonomous vehicles, and the potential of neural networks for image and speech recognition technology. For example, developments like the Alexa-controlled Amazon Echo speaker have inspired me to work on techniques to support the design of such systems – specifically their safety assurance and social trust.

What can be done to encourage more women in AI?

I think women should have the same opportunities as men and we should raise awareness of these opportunities, through networking, female role models and the media. AI is embedded in all aspects of our lives and we need all sections of society to contribute to the design and utilisation of AI systems in equal measure, and this includes women as well as men.

What research projects are you currently working on?

I am following several strands of work of relevance to autonomous systems, mobile devices and AI: developing formal safety guarantees for software based on neural networks, such as those applied in autonomous vehicles; formalising and evaluating social trust between humans and robots, where a social trust model is based on the human notion of trust, which is subjective; developing ‘correct by construction’ techniques and tools for safe, efficient and predictable mobile autonomous robots; and building personalised tools for monitoring and regulating affective behaviours through wearable devices.
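
To give a flavour of the first strand: one standard building block for formal guarantees about neural networks is interval bound propagation, which pushes a whole region of inputs through the network and checks that every point in it provably receives the same decision. The sketch below is purely illustrative – an invented toy network in Python, not Professor Kwiatkowska’s actual tools or models.

```python
# Illustrative sketch of interval bound propagation (IBP) for a tiny
# ReLU network; the weights and sizes are invented for the example.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Exact bounds of W @ x + b when x ranges over the box [lo, hi]."""
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def forward(x, hidden, final):
    """Ordinary forward pass: ReLU hidden layers, linear output layer."""
    for W, b in hidden:
        x = np.maximum(W @ x + b, 0)
    Wf, bf = final
    return Wf @ x + bf

def certify_robust(x, eps, hidden, final):
    """True only if every input within L-infinity distance eps of x is
    provably classified the same way as x itself."""
    lo, hi = x - eps, x + eps
    for W, b in hidden:
        lo, hi = interval_affine(lo, hi, W, b)
        lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, *final)
    pred = int(np.argmax(forward(x, hidden, final)))
    # Robust if the predicted logit's lower bound beats every rival's upper bound.
    return all(lo[pred] > hi[j] for j in range(len(lo)) if j != pred)

rng = np.random.default_rng(0)
hidden = [(rng.normal(size=(8, 4)), np.zeros(8))]
final = (rng.normal(size=(3, 8)), np.zeros(3))
x = rng.normal(size=4)
print(certify_robust(x, eps=0.01, hidden=hidden, final=final))
```

Bounds of this kind are sound but incomplete: a negative answer means ‘not proved’, not ‘unsafe’, which is why verification research pairs bound propagation with exact solvers and counterexample search.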

Professor Marta Kwiatkowska

In your opinion what are the biggest challenges facing the field?

Technological developments present my field with tremendous opportunities, but the speed of progress creates challenges around formal verification and synthesis - particularly the complexity of the systems to be modelled. We therefore need to develop techniques that can be accurate at scale, deal with adaptive behaviour and produce effective results quickly.

What motivates you in your field?

I like working on mathematical foundations and gaining new insight from that, but my main motivation is to make the theoretical work applied through developing algorithms and software tools: I refer to this as a "theory to practice" transfer of the techniques.

What research are you most proud of?

I was involved in the development of a software tool called PRISM (www.prismmodelchecker.org), which is a probabilistic model checker. It is widely used for research and teaching and has been downloaded 65,000 times.
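
For readers new to the term: a probabilistic model checker takes a model of a system that involves chance – for instance a Markov chain – and computes quantities such as ‘the probability of eventually reaching a goal state’. PRISM poses such queries in its own modelling language; as a rough illustration of the underlying computation, the Python sketch below solves a reachability probability for a small made-up chain.

```python
# Rough illustration (not PRISM code) of a reachability query on a
# discrete-time Markov chain: Pr[eventually reach the goal state].
import numpy as np

# P[i][j] = probability of moving from state i to state j (made-up chain)
P = np.array([
    [0.0, 0.5, 0.5, 0.0],   # 0: initial
    [0.0, 0.0, 0.2, 0.8],   # 1: intermediate
    [0.0, 0.0, 1.0, 0.0],   # 2: failure, absorbing (probability 0)
    [0.0, 0.0, 0.0, 1.0],   # 3: goal, absorbing (probability 1)
])
goal, transient = [3], [0, 1]   # a real checker finds these sets by graph analysis

# For transient states, x = P_tt @ x + b, where b is the one-step probability
# of jumping straight into the goal; rearranged: (I - P_tt) @ x = b.
P_tt = P[np.ix_(transient, transient)]
b = P[np.ix_(transient, goal)].sum(axis=1)
x = np.linalg.solve(np.eye(len(transient)) - P_tt, b)
print({s: float(p) for s, p in zip(transient, x)})  # {0: 0.4, 1: 0.8}
```

PRISM itself goes far beyond this sketch – handling nondeterminism, rewards and continuous time – but queries of this kind ultimately bottom out in numerical computations like the one above.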

Who inspires you?

I have been inspired by several leading academics in my career, but one particular female scientist and fellow countrywoman has been a role model and an inspiration for me throughout: Maria Sklodowska-Curie, because she combined a successful career with family.

Learn more about Professor Kwiatkowska’s research here and here.

Image credit: Shutterstock

The legal challenges of a robot-filled future

Lanisha Butterfield | 26 Mar 2018

In the second of our 'Women in AI' series, Dr Sandra Wachter, a lawyer and Research Fellow in Data Ethics, AI, robotics and Internet Regulation/cyber-security at the Oxford Internet Institute, discusses her work negotiating the legal pitfalls of algorithm-based decision-making and an increasingly tech-led society.

What drew you towards a career in AI?
I am a lawyer and I specialise in technology law, which has been a gateway into computer science and science in general.

I’ve always been interested in the relationship between human rights and tech, so a law career was a natural fit for me. I am particularly interested in and driven by a desire to support fairness and transparency in the use of robotics and artificial intelligence in society. As our interest in AI increases I think it is important to design technology that is respectful of human rights and benefits society. I work to ensure balanced regulation in the emerging tech framework. 


Dr Sandra Wachter is a lawyer and Research Fellow in Data Ethics, AI, robotics and Internet Regulation/cyber-security at the Oxford Internet Institute. Image credit: OU

What research projects are you currently working on?

The development of AI-led technology for healthcare is a key research interest of mine. I’m also very interested in the future of algorithm-based decision-making, where systems have become increasingly autonomous and complex, and their decisions less predictable. I’m interested in what that means for society.

At the moment I am working on a machine learning and robotics project that addresses the question of algorithmic explainability and auditing. For example, how can we design unbiased, non-discriminatory systems that give explanations for algorithm-led decisions – such as whether individuals should have a right to an explanation of why an algorithm rejected their loan application? I have reviewed the legal framework for any loopholes in existing legislation that need immediate consideration, and then urged policymakers to take action where needed.


What interests you most about the scope of AI?

I am interested in developing research-led solutions that can mitigate the risks that come with an increasingly tech-led society. Supporting transparency, explainability and accountability will help to make machine learning technology something that progresses society rather than damaging it and holding people back.

AI in healthcare has the potential to have a massive positive impact on society, such as the development of products for disease prediction, treatment plans and drug discovery.

It is also an exciting time for healthcare robotics: the emerging fields of surgical robotics for less invasive surgery and assisted-living robotics are fascinating.

What are the biggest challenges facing the field?
On a very basic level, an algorithm is a predetermined set of rules that humans can use to learn something about data and make decisions or predictions. AI is a very complex, more autonomous and less predictable version of a mundane algorithm. It can help us to make more accurate, more consistent, fairer and more efficient decisions. However, we cannot solve all societal problems with technology alone. Technology is about humans and society, and to keep them at the heart of future developments you need a multi-disciplinary approach. To use AI for good you need to collaborate with other sectors and disciplines, such as the social sciences, and consider issues from all angles – particularly ethical and political responsibility – otherwise you get a skewed view.


What research are you most proud of?
I published research on the use of algorithms for decision-making and showed that the law does not guarantee individuals a right to an explanation. It shed light on loopholes and potential problems within the existing structure that will hopefully prevent legal problems in the future. In follow-up work we proposed a new method, ‘counterfactual explanations’, that could give people meaningful explanations even when highly complex systems are used.
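
To make the idea concrete, here is a deliberately simple sketch of a counterfactual explanation: rather than opening the black box, it reports a small change to the person’s data that would have flipped the decision (‘you would have been approved with a higher income’). The linear scoring model, weights and feature names below are hypothetical, and the published method is more careful – it frames the search as an optimisation with constraints on distance and plausibility.

```python
# Hypothetical sketch of a counterfactual explanation for a loan decision.
# The scoring model, weights and features are invented for illustration.
import numpy as np

def approved(x, w, b):
    """The opaque decision rule: approve the loan if the score is positive."""
    return float(x @ w + b) > 0.0

def counterfactual(x, w, b, step=0.01, max_iter=100_000):
    """Nudge a rejected application across the decision boundary along the
    score gradient, returning the first point where the outcome flips."""
    cf = x.astype(float).copy()
    direction = w / np.linalg.norm(w)   # steepest route towards approval
    for _ in range(max_iter):
        if approved(cf, w, b) != approved(x, w, b):
            return cf
        cf += step * direction
    return None

features = ["income_k", "debt_k", "years_employed"]      # hypothetical
w, b = np.array([0.10, -0.30, 0.20]), -3.0
applicant = np.array([25.0, 10.0, 2.0])                  # score < 0: rejected

cf = counterfactual(applicant, w, b)
assert cf is not None
for name, old, new in zip(features, applicant, cf):
    print(f"{name}: {old:.2f} -> {new:.2f}")
```

Real counterfactual search adds sparsity (change as few features as possible) and plausibility constraints (don’t suggest impossible values), and can return several diverse counterfactuals so that the person can act on whichever is feasible.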


As a woman in science and a woman in law how would you describe your experience?
Law is generally a very male-dominated field, and tech-law even more so. People are often surprised when I go to events and they find out that I am the keynote speaker for the day. The general view of what a tech-lawyer ‘is’ is not very diverse or evolved yet, and there is a lot of work to be done to shift this mind-set.

I think it would help to create more opportunities for women to have more visibility, such as speaking at events. People need to see from a young age that something is as much for one sex as it is for another. I still remember when I was at high school, the Design Technology subjects were split by gender, with boys taking woodwork, while girls learned knitting and sewing. I desperately wanted to do woodwork and build a birdhouse with the boys, but my teacher’s response when I asked was simply that ‘girls don’t do that.’ Young girls need to be supported and encouraged instead of told that they can’t do something.

Who inspires you?
I am very lucky: my grandmother was one of the first women to be admitted to a technical university, so I grew up with a maths genius as one of my role models. People need to see that gender isn’t a factor in opportunity – it is about passion, dedication and talent.

It is the University's first AI Expo tomorrow, what would you like the event’s legacy to be?
This event is a very important step forward for the University and I hope that it will inspire more events like it in the future. AI is a rapidly emerging field and it is really important to raise awareness and show the world that Oxford not only takes it seriously, but that we are working to use AI for good and are mindful of the consequences that come with it.

Further information about Dr Wachter and her research interests is available here.

Find out more about our AI Expo showcase 

In part three of the series we meet a computational scientist involved in redesigning complex networks with AI

Multilingualism

In a guest post for Arts Blog, Katrin Kohl, Professor of German Literature and Lead Researcher on the Creative Multilingualism research project, writes about recent calls for all British citizens to be able to speak English.

Should we be for or against British citizens having to be able to speak English? What makes a British artist sing in Cornish when she could be communicating so much more usefully in English – not only the lingua franca of England and the British Isles, but a language that's now spoken and sung across the world? Why is the Irish language such a politicised issue when some claim that there are now more Polish than Irish speakers in Northern Ireland? And where do users of sign language fit into these debates?

The fact is this: the UK has always been, and will remain, multilingual. And this is no more incompatible with everyone being able to speak English in the UK than it would be in India. Many people switch between different languages every day, and we all, at the very least, keep different linguistic registers in play as we move between different spheres and groups of people at home, at work or at school.

Louise Casey recently asserted that the UK should set a date for everyone to speak English. She's surely not wrong when she argues that additional funding should be provided for fostering English language skills, or that building linguistic bridges between communities can promote integration. But integration isn't helped by imposing a single language top-down or assuming that diversity is best eradicated. Languages are neither confined to what is useful nor just about what the majority speaks – we need look no further than the establishment of Welsh as an official language of the UK to appreciate this fact.

Languages are about lives, as the production of Gwenno Saunders’ Cornish album shows us. While her linguistic heritage may be unique (with a Cornish poet and a Welsh language activist as parents), she’s not alone in being able to draw on diverse languages as a personal treasure trove. All across the UK, people cherish the languages that are part of their heritage or that they have come into contact with in other, often very individual ways. Communities pass on their languages in religious practice, supplementary schools and cultural events, and individuals make something linguistically new from cross-cultural marriages and culturally diverse school environments. A language is a special emotional resource, a voice within us that embodies memories of conversations with loved (and hated) ones past and present.

This personal, emotional dimension of languages has been sidelined in the way foreign languages have come to be taught in the UK – if the value of knowing a language is reduced to its practical function, it becomes unclear why we should bother with the hard graft of learning a new language when we can make ourselves understood in English. By the same token, it then seems sufficient to promote English as the sole passport to global success, whatever other languages children might already be familiar with. Many children are made to feel ashamed of knowing another language, and some schools indeed prohibit their speaking anything other than English on the assumption that they are thereby doing the children a favour – English is imposed as part of a lifelong school uniform.

Fortunately, many schools instead embrace the multilingualism of their students, enable them to take qualifications in their home languages, and allow them to discover their own linguistic resources in creative writing that extends beyond linguistic boundaries. Creative Multilingualism has been working with Oxford Spires Academy and with Haggerston School in Hackney to find out how children respond to exploring new language spaces. Modern foreign languages can be taught as part of that process and in interaction with it. This fosters a spirit of community that isn't confined to a single language, but characterised by shared variety and enhanced understanding of the potential that linguistic diversity holds for us all: each language is a subtly different window on to the world and a different link with other groups of people. As a preparation for life in an increasingly global world, this is hard to beat.

The UK rightly takes pride in its exuberantly diverse creative talent, but there's currently little appreciation of the ways in which languages enrich the country's creative identity. The UK music scene isn't just culturally and ethnically tremendously diverse, but linguistically too. Take Punch Records, a company set up to work with emerging Black British and British Asian artists who have grown up in urban contexts where varieties of English routinely mingle with other languages. The Slanguages exhibition project serves to showcase hip-hop, grime and rap as multilingual forms with a political edge. Birmingham school playgrounds have here served as seedbeds for adventurous modes of communication that offer exciting scope for developing new rhythms, speech forms and gestural language.

The UK's extraordinarily varied linguistic heritage is an invaluable national resource. At a time when the country wants to project itself as being more than Little Britain, and more than a country on the edge of Europe, it makes sense to value all those languages that have entered the UK over the decades, centuries and indeed millennia. Each of them has left its audible traces in the population, and together they open up a multitude of living pathways to other parts of the world. We might as well celebrate our flourishing abundance of languages – they're certainly not likely to go away.

Creative Multilingualism is funded by the Arts and Humanities Research Council as part of the Open World Research Initiative.

Image credit: Shutterstock

Meet the women driving Oxford’s AI research

Lanisha Butterfield | 22 Mar 2018

Once upon a time, the concept of machines that could think and act like people was a fantasy – or, more often than not, the recipe for a male-dominated blockbuster movie. Fast forward thirty years, and artificial intelligence is transforming, at pace, both the world around us and the way we live, work and communicate within it.

Despite its topical prevalence, AI is a social hot potato, regarded as a gift or a curse depending on who you talk to, and its purpose is widely debated. Well-documented issues include the technology’s impact on the labour market and concern around the gender gap among the designers behind the scenes – evidenced by the perceived white male bias in the algorithms they generate. However, the field is gradually changing, and more women are not only building a future in tech but driving some of the incredible breakthroughs that are shaping our society.

As the University prepares for its first AI Expo event next week, the women closing the inter-disciplinary AI research gender-gap at Oxford University will discuss their experiences, career highlights, and some of the biggest challenges facing the industry with ScienceBlog.

Professor Marina Jirotka. Image credit: Marina Jirotka

Marina Jirotka is Professor of Human-Centred Computing, Associate Director of the Oxford e-Research Centre and Associate Researcher at the Oxford Internet Institute.

Putting people at the heart of computing

As Professor of Human-Centred Computing, Associate Researcher at the Oxford Internet Institute and governing body fellow at St Cross College, Marina Jirotka’s work focuses on keeping people at the heart of technological innovation. Her research group undertakes projects that aim to enhance understanding of how technology affects human collaboration, communication and knowledge exchange across all areas of society, in order to inform the design and development of new technologies.

What is human-centred computing and how did you come to specialise in it?

Human-centred computing puts people at the heart of computing, so that they have some control over how technology affects their lives. However, as technology has become more advanced, particularly with new developments in AI and machine learning, this becomes harder. I am very keen to keep people at the centre of the drive towards machine learning.

I became interested in computational models of the brain in the 1980s when I was studying anthropology, and my interest in AI and its societal impact grew from there. I took further studies in computing and artificial intelligence after that.

My first research position was on one of the Alvey projects. Alvey was a large UK government-sponsored research programme in IT and AI, which ran from 1983 to 1987. The programme was a reaction to the Japanese fifth-generation computer project and defined a set of strategic priorities for channelling British research into IT improvements. I was involved in building a planner to give people advice about the welfare benefits system. The final product was a great example of early inter-disciplinary collaboration, fusing STEMM technology with social science understanding.


What drew you towards a career in science and AI?

Science has always been a big part of my family; my parents and my grandfather were chemists in the Czech Republic. My mother was actually one of the first women in the country to get a degree at Charles University.

I became interested in computational models of the brain in the 1980s when I was studying social anthropology and psychology. My interest in AI and its societal impact grew from there when I studied computing and artificial intelligence.

As a society we are striving to create artificial intelligence without really understanding what intelligence is, or how to get the most from it, and that is a problem. I personally like seeing where AI can actually go and where it can take us - what it really can do compared with the Hollywood hype, and then using that knowledge to hopefully make a difference.

How has the field changed for you as a woman in AI, and what can be done to encourage more women to join the field?

When I first started I was a real oddball – not only a woman but also a social scientist. But now that there are more of us, I notice it less. I can’t generalise, but I think the human-centred theme could be a big draw. In my experience, women are keen to see the outcome of an application and understand what their work and contribution will actually achieve, whereas some of my male colleagues are more driven by product development and the theoretical side.

What are the biggest challenges facing the field?

It is important to consider the kind of world that we want to live in, build from there and start thinking about the impact that developments will have on society and institutions. At the same time, it is paramount to involve and engage people in those visions, so that human society is taken on the AI journey as well, rather than left behind.

What research are you most proud of?

In the early days it was my contribution to Alvey, but currently I would say it is the Digital Wildfire Project.

The project grew from a desire to understand and address the spread of hate speech and misinformation online – for example, public reaction to events such as the false rumours about the New York Stock Exchange during Hurricane Sandy, and crucially the spread and impact of hate speech.

In everyday society there are safeguards in place to protect people from hate speech, but in an online environment these defences do not exist. As a result, people sometimes feel that they can say things and behave in ways that they wouldn’t in any other area of life. People are subjected to abuse that they would not normally be faced with.

Our research looked at this phenomenon and offered advice to people on how to engage with and control it. We worked with a number of different stakeholders, from those trying to prevent and manage it, such as the police and schools, to those who are most vulnerable: children.

More recently we have worked with policy makers, such as the House of Lords Select Committee on Communications, to advise on and support children’s digital rights. I was specialist advisor to the committee, which produced a report, “Growing Up with the Internet”, making recommendations on how internet policy should involve the participation of multiple stakeholders and promote digital literacy for children. The report was debated in the House of Lords. Following this, the Secretary of State for Digital, Culture, Media and Sport responded to the Committee’s report and announced the launch of the Government’s Green Paper for an Internet Safety Strategy, which proposed a digital literacy programme involving different stakeholders in order to protect children when they are online.

I have learned so much from this project, particularly about how government works. It has also been a great way of engaging with the public. We have worked with technology companies and sponsors like Santander to engage with young people and get them to share their experiences online through art and other channels.


What excites you most about the future of AI?

Given the state of the planet, the ways in which AI is being used in areas that humans have not been able to access – such as extreme environments – and to help wildlife and conservation are really exciting.

I’m equally interested in and worried by transhumanism – the notion of embedding technology into a human in order to give them super-human abilities. There is already research taking place in the US which aims to improve people’s cognitive faculties through neuroscience.

What can be done to help public understanding of AI?

People want to know how things apply to them and how something is going to affect them. We need to convey the current knowledge about machine learning to people so that they understand its potential and capabilities. In many ways this is much more interesting than the current media hype.

What role does interdisciplinary collaboration play in machine learning and AI?

It is imperative, to the point that research councils actively encourage interdisciplinary work now. The challenges that we have to face in the 21st century may not be solved by one discipline alone.

You are chairing the AI & Ethics debate panel at next week's AI Expo, what are your thoughts on the event?

The AI Expo is a great idea that will hopefully serve as a reminder of Oxford’s commitment to supporting well-considered machine learning progression. I hope it will inspire more events of its kind in the future.

Learn more about Professor Jirotka’s research here

Digital Wildfire Project: #TakeCareOfYourDigitalSelf

In part two we meet a tech lawyer, working to support transparency in the use of AI and robotics in society