To give AI-focused women academics and others their well-deserved — and overdue — time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Anna Korhonen is a professor of natural language processing (NLP) at the University of Cambridge. She’s also a senior research fellow at Churchill College, a fellow at the Association for Computational Linguistics, and a fellow at the European Laboratory for Learning and Intelligent Systems.
Korhonen previously served as a fellow at the Alan Turing Institute, and she holds a PhD in computer science and master’s degrees in both computer science and linguistics. She researches NLP and how to develop, adapt and apply computational techniques to meet the needs of AI. She has a particular interest in responsible and “human-centric” NLP that — in her own words — “draws on the understanding of human cognitive, social and creative intelligence.”
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
I was always fascinated by the beauty and complexity of human intelligence, particularly in relation to human language. However, my interest in STEM subjects and practical applications led me to study engineering and computer science. I chose to specialize in AI because it’s a field that allows me to combine all these interests.
What work are you most proud of in the AI field?
While the science of building intelligent machines is fascinating, and one can easily get lost in the world of language modeling, the ultimate reason we’re building AI is its practical potential. I’m most proud of the work where my fundamental research on natural language processing has led to the development of tools that can support social and global good. For example, tools that can help us better understand how diseases such as cancer or dementia develop and can be treated, or apps that can support education.
Much of my current research is driven by the mission to develop AI that can change human lives for the better. AI has huge positive potential for social and global good. A big part of my job as an educator is to encourage the next generation of AI scientists and leaders to focus on realizing that potential.
How do you navigate the challenges of the male-dominated tech industry and, by extension, the male-dominated AI industry?
I’m fortunate to be working in an area of AI where we do have a sizable female population and established support networks. I’ve found these immensely helpful in navigating career and personal challenges.
For me, the biggest problem is how the male-dominated industry sets the agenda for AI. The current arms race to develop ever-larger AI models at any cost is a great example. This has a huge impact on the priorities of both academia and industry, and wide-ranging socioeconomic and environmental implications. Do we need larger models, and what are their global costs and benefits? I feel we would’ve asked these questions a lot earlier in the game if we had better gender balance in the field.
What advice would you give to women seeking to enter the AI field?
AI desperately needs more women at all levels, but especially at the level of leadership. The current leadership culture isn’t necessarily attractive for women, but active involvement can change that culture — and ultimately the culture of AI. Women are infamously not always great at supporting each other. I would really like to see an attitude change in this respect: We need to actively network and help each other if we want to achieve better gender balance in this field.
What are some of the most pressing issues facing AI as it evolves?
AI has developed incredibly fast: It has evolved from an academic field into a global phenomenon in less than a decade. During this time, most effort has gone toward scaling through massive data and computation. Little effort has been devoted to thinking about how this technology should be developed so that it can best serve humanity. People have good reason to worry about the safety and trustworthiness of AI and its impact on jobs, democracy, the environment and other areas. We urgently need to put human needs and safety at the center of AI development.
What are some issues AI users should be aware of?
Current AI, even when seeming highly fluent, ultimately lacks the world knowledge of humans and the ability to understand the complex social contexts and norms we operate with. Even the best of today’s technology makes mistakes, and our ability to prevent or predict those mistakes is limited. AI can be a very useful tool for many tasks, but I would not trust it to educate my children or make important decisions for me. We humans should remain in charge.
What is the best way to responsibly build AI?
Developers of AI tend to treat ethics as an afterthought — something considered after the technology has already been built. The best time to think about it is before any development begins. Questions such as, “Do I have a diverse enough team to develop a fair system?” or “Is my data really free to use and representative of all user populations?” or “Are my techniques robust?” should be asked at the outset.
Although we can address some of this problem via education, we can only enforce it via regulation. The recent development of national and global AI regulations is important and needs to continue to guarantee that future technologies will be safer and more trustworthy.
How can investors better push for responsible AI?
AI regulations are emerging and companies will ultimately need to comply. We can think of responsible AI as sustainable AI truly worth investing in.
Source: Women in AI: Anna Korhonen studies the intersection between linguistics and AI