
Calibrating for gender bias in online data

As companies and governments become more data-driven, there is one big problem: are they designing the world for men?

The gender data gap refers to the fact that men are overrepresented in much of the data that organizations use to make decisions. While Invisible Women (Criado-Perez) demonstrates how this bias affects all walks of life, the effect is certainly present in online data, including the type of data our clients use with Citibeats.

Online platforms are often not representative of the gender distribution of the general population, which in most countries is roughly a 49-51% split (source: Our World In Data). For example, women represent 38% of Twitter users globally, with variations across countries (source: Datareportal, 2020). On top of that, in some contexts men post more than women. While this varies from country to country and topic to topic, in our experience we often find analyses in which around 70% of the Twitter conversation is male and 30% female.

Clearly, there is a discrepancy. So we’ve made it one of our main goals to remove this bias when analyzing people’s opinions, and to calibrate the results before our clients use them. Here is the why and how.


Why the gender data gap is a problem

In the data processed by Citibeats, we see differences between the topics of the opinions shared by women and by men. By calibrating for this effect, we give equal weight to the topics that matter overall.

For example, in one sample of civic opinions from Latin America during the COVID-19 crisis, concerns about the healthcare system, household economy, and civic initiatives were underrepresented. Relative to other topics, women put more focus on these issues than men do, but because more men are talking, the raw data understates their importance.

After calibration we see slight, though not major, differences: the health system, household economy, and citizen initiatives receive a higher weighting when women’s voices are weighted equally to men’s. It should be noted that this is averaged data spanning many countries, and it must also be analyzed country by country. Even so, it is important to carry out this type of analysis and calibration, because in some cases the underrepresentation could be greater, and the calibration would have a higher impact.
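To make the reweighting concrete, here is a minimal sketch of this kind of calibration. The sample posts, topic labels, and 50/50 weighting scheme are our own illustrative assumptions, not Citibeats’ production pipeline:

```python
from collections import Counter

# Hypothetical sample of posts: (inferred_gender, topic).
posts = [
    ("male", "business economy"), ("male", "health system"),
    ("male", "business economy"), ("male", "health system"),
    ("female", "health system"), ("female", "household economy"),
]

def topic_shares(posts, weights=None):
    """Share of the conversation per topic, with each gender's voice
    weighted equally (50/50 by default) instead of by raw post volume."""
    weights = weights or {"male": 0.5, "female": 0.5}
    by_gender = {}
    for gender, topic in posts:
        by_gender.setdefault(gender, Counter())[topic] += 1
    shares = Counter()
    for gender, counts in by_gender.items():
        total = sum(counts.values())
        for topic, n in counts.items():
            # Each gender contributes weights[gender] of the final share,
            # split across its topics in proportion to its own counts.
            shares[topic] += weights[gender] * n / total
    return dict(shares)

print(topic_shares(posts))
# Raw counts give men 4 of the 6 posts, so men's topics dominate.
# Calibrated shares come out as {'business economy': 0.25,
# 'health system': 0.5, 'household economy': 0.25}, lifting the
# topics women focus on to an equal footing.
```

The same function can be run per country to produce the country-level comparisons discussed below.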

We are also observing country-specific gender differences. In the same data sample, we can see that in Brazil, business economy and health system issues make up a higher percentage of men’s discussion, while women put more focus on mental health and education than men do.

In that case, if women’s voices were weighted equally to men’s, mental health concerns would take a higher percentage of the discussion relative to other issues, and business economy issues a lower one.

Visibility into gender differences also helps detect new emerging issues. In another client application of Citibeats, covering consumer protection in three African countries, we found that COVID-19 exacerbated certain differences. In one country, the number of women reporting being victims of fraud increased more after COVID than it did for men; in another, the number of women reporting being mistreated by customer service increased more than it did for men.

By calibrating results to weight male and female voices equally, and by making gender-specific problems as visible as possible in our product, we are limiting the bias of the gender data gap and giving our clients the tools they need to make important decisions.

How we’re bridging the gap with state-of-the-art AI

In order to calibrate results, we’ve been working on state-of-the-art technical approaches to infer gender from online discussion. We set out to understand, at the aggregate level, whether posters on a topic are male or female. You can read a full description of how we built and trained this system from scratch in the technical blogpost: Using Machine Learning to Calibrate Online Opinion Bias.

To estimate the gender of a user, we focus on people’s names and, for Twitter, the bio description. Using deep learning, our system looks for clues and produces a final probability of gender. A name like Esther may have a 100% probability of being female, while a name like Cris might have a 75% probability; from the bio, we may detect other clues, such as “mother of two”, “she/her” or “empresaria” (‘businesswoman’ in Spanish; this last kind of clue appears in Latin languages with gendered nouns). While our technical approach isn’t perfect, we’ve been able to compare our Twitter demographics estimation with Datareportal’s surveys, with very close matches (read the tech post for more details).

We found many interesting clues along the way, such as the fact that women tend to use emojis more than men, or that the female emoji icon is more indicative of being female than the male emoji icon is of being male. All these small discoveries are factored into the probability finally calculated by the algorithm.
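As an illustration of how independent clues can be combined into one probability, here is a minimal sketch using log-odds (naive-Bayes style). The clue lists, per-clue probabilities, and function names are hypothetical assumptions for illustration; the actual Citibeats model is a deep learning system, as described above:

```python
import math

# Hypothetical per-clue probabilities P(female | clue); illustrative only.
NAME_PRIOR = {"esther": 1.00, "cris": 0.75}
BIO_CLUES = {"mother of two": 0.95, "she/her": 0.98, "empresaria": 0.97}
EMOJI_CLUES = {"💁‍♀️": 0.85, "💁‍♂️": 0.35}  # female icon: the stronger signal

def logit(p):
    p = min(max(p, 1e-6), 1 - 1e-6)  # clamp to avoid infinite log-odds
    return math.log(p / (1 - p))

def p_female(name, bio):
    """Combine independent clues in log-odds space into one probability."""
    score = logit(NAME_PRIOR.get(name.lower(), 0.5))  # 0.5 = uninformative
    for clue, p in {**BIO_CLUES, **EMOJI_CLUES}.items():
        if clue in bio:
            score += logit(p)  # each matched clue shifts the log-odds
    return 1 / (1 + math.exp(-score))

print(p_female("Cris", "empresaria, mother of two 💁‍♀️"))  # ≈ 0.9999
```

Here the name alone gives a 75% probability, and each additional bio or emoji clue pushes the combined estimate higher, mirroring how the small discoveries described above feed into the final probability.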

An ethical process for an ethical output

An important consideration in our approach has been the ethical aspect. We want to limit one problem (gender bias) without creating another (violating people’s privacy). With this in mind, we infer gender only at the aggregate level, to calibrate overall results, rather than to profile individual users.

The start of a journey to ethical AI

There is much talk about ‘ethical AI’, and our focus at Citibeats is grounding it in actionable, practical measures. We are trying to be both pragmatists and idealists: pragmatic in that we are already putting concrete measures in place, and idealistic in that we believe we can take this a long way, becoming a leading example of how to apply ethical AI to social good challenges worldwide.

While this research and blogpost have focused on gender bias, there are many more open fronts that we are continuously working on, with the vision of being the social data platform that can be relied on for decisions that matter. It’s often said that when it comes to data, you are what you eat; we hope to provide ‘organic food’ data analytics to our clients.

Ethical tools are best put to use on meaningful problems! If you have an idea for applying analysis of people’s opinions at scale in your organization, please contact us, or download our Impact Report to read the latest case studies from other organizations.
