Blog

High-level impact considerations

Working out what to work on to have the highest impact is a big task. But before we get stuck into the detailed work, it’s good to stand back and think about what kinds of questions we should be asking.

This isn’t an exhaustive list, but here are some things we think you should bear in mind.

What area should I be working in?

Some problems are orders of magnitude more important than others. Technology has given us easy global communication, not just cat gifs. When choosing a problem to work on you should prioritize those that score well on:

  • Scope: the size of the benefit from solving the problem
  • Tractability: the ease of transforming your resources into progress
  • Neglectedness: the more neglected the problem, the higher the value of additional resources and people

High-priority problems are often closely related, or are facets of a larger underlying problem, so you can more efficiently search for problems by looking for high-priority areas or causes. For example, improving health is a cause packed full of tractable, important problems.
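As a rough illustration of how the three factors above can be combined, here is a minimal sketch that scores hypothetical problem areas by multiplying scope, tractability, and neglectedness ratings. The areas and numbers are made up purely for illustration.

```python
# Rough prioritization sketch: multiply scope, tractability, and neglectedness.
# The problem areas and 1-10 ratings below are hypothetical, for illustration only.
problems = {
    "global health": {"scope": 9, "tractability": 7, "neglectedness": 4},
    "pandemic preparedness": {"scope": 8, "tractability": 5, "neglectedness": 7},
    "local arts funding": {"scope": 3, "tractability": 8, "neglectedness": 3},
}

def priority(scores):
    """Simple multiplicative score: higher means more promising on this model."""
    return scores["scope"] * scores["tractability"] * scores["neglectedness"]

for name, scores in sorted(problems.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{name}: {priority(scores)}")
```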

How many people do you benefit and how much?

It may sound obvious, but you should aim to help the greatest number of people as much as you can. For this reason it is important to think carefully about your target beneficiaries, as some groups will be much larger and easier to help than others.

Furthermore, this means that small is not always beautiful - growing the scale of your business can be great for your impact as well as your profit.

Are your most important beneficiaries your customers?

You might help other people even more than you help your customers. One way your company could improve the world is by innovating, which enables future products and companies.

For example, although Tesla produces vehicles for the wealthy, they’re hastening the widespread uptake of electric cars, leading to reduced emissions and a lowered burden of climate change. In this respect, an important class of Tesla’s beneficiaries are the future people who won’t suffer as much from the effects of climate change.

What is your impact mechanism?

As well as identifying who you’re helping, you should think about the mechanism by which you’re helping them and use impact metrics to guide your decisions.

Here are some examples of mechanisms by which your company might have an impact:

  • Direct improvements in the welfare of your customers.
  • Innovation or developments in infrastructure, which enable future products to help people directly.
  • Other positive spillover effects, such as averting environmental damage.

What will happen if you don’t do this project?

When your goal is to improve the world, you care more that good gets done than that it gets done by you. If someone else creates a product before you, then that’s actually a good thing, because the world becomes better, sooner. This is a reason to concentrate on something that’s less likely to happen otherwise. For example, it seems likely that most “machine learning for X” startups will be built by someone in the next 5 to 10 years anyway, unless you pick a really unusual X!

One way to find projects that are unlikely to be done by others is to look for areas where you have an unusual combination of skills, knowledge, and familiarity with a problem domain.

What’s next?

The next step is to start answering some of these questions! If you’d like some help with that, consider booking an advice session to talk to us, or emailing us directly.

Continue reading

Technology for detecting and managing disease outbreaks: an interview with Alex Demarsh

Alex Demarsh

Alex Demarsh is a PhD student at McGill University’s surveillance lab and works as an epidemiologist at the Public Health Agency of Canada.

What are you working on and how did you get involved?

First I did a masters in epidemiology, and found a job in government. Then I got swept up in the 2008-2009 H1N1 pandemic. It was a great start to my career because that’s what public health infrastructure is for - worldwide emergencies. I had an idea about how the infrastructure should work, but the reality was a real surprise. People were sharing data by emailing each other Excel spreadsheets and getting faxes of handwritten forms. This experience got me interested in data issues, what software can do, and the need to improve our infrastructure around data sharing and managing surveillance and critical incident data.

I had already been interested in programming and I built my skills further through Software Carpentry and a coding bootcamp. This taught me how to actually code and manage data properly. I worked for a few years but felt I was missing some skills. I looked around for opportunities and found the surveillance lab at McGill University. They do awesome technology development as well as good epidemiology. They build systems for public health practitioners for epidemic detection, often using medical records data.

What did you learn from your epidemiology training?

We were taught lots of statistics, but not much on software or on data management - how to use data that is not in a form that you can run analysis on. The training used very clean data sets but out in the world, the data is messy.

Do you see your work as more focussed on the risk from catastrophic pandemics or disease more generally?

Pandemics are an obvious catastrophic risk, but I’m focussing on building good infrastructure that could help with any level of public health crisis. There is work to be done that would help with both pandemics and lower level outbreaks - piping data systems together and automating them, removing humans from error-prone steps. There’s really low hanging fruit here because things like emailing Excel spreadsheets are common.

What is biosurveillance?

Surveillance is the ongoing and systematic collection, analysis, and dissemination of health data. All public health institutions do this. It involves going out and collecting data on incidence or prevalence of disease, or on risk factors or precursors to disease. Surveillance systems are useful for early detection of anomalies so that we can act in response. It’s also used for census purposes to look at the stability of disease rates over time and across different regions so that we can decide whether to put more resources into helping one set of people or another.

Within surveillance, biosurveillance is more focussed on computational tools and early detection of outbreaks.

What are the main parts of the infrastructure we need?

We need interoperable systems that can talk to one another. We need to be able to link together e.g. survey data, administrative data, environmental data, road networks, and maybe vector populations and movements. One thing that makes this difficult is that data is often hard to move between administrative units (such as states or provinces). This data infrastructure would help us to predict the spread of disease and would support decision making on how to handle an outbreak.
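As a small illustration of what linking these kinds of datasets can look like in practice, here is a minimal pandas sketch that joins weekly case counts to environmental data by region and week. The file names and columns are hypothetical.

```python
import pandas as pd

# Hypothetical extracts from two separate systems.
cases = pd.read_csv("weekly_case_counts.csv")    # columns: region, week, cases
weather = pd.read_csv("weekly_weather.csv")      # columns: region, week, mean_temp, rainfall_mm

# Linking the two sources on shared keys is the basic building block of an
# interoperable surveillance dataset.
linked = cases.merge(weather, on=["region", "week"], how="left")

# A joined table like this can then feed models of disease spread or dashboards
# for decision-makers.
print(linked.head())
```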

Have there been any cases when better tech infrastructure for surveillance has been put to the test?

Bluedot have done work using data from mass gatherings. There have also been exercises to test the human component of outbreak management. These exercises involve many different parts of the health system, and often use actors for realism. During the exercise the people running it might call you up and say something like “there’s Ebola at Ottawa airport” or “the stockpile of personal protective equipment isn’t available” and you have to handle that situation. Promoting more real-world simulation could be good - it’s underused. There are some similarities here with military exercises, and the military do have some similar aims, in that both they and the public health community have to do emergency response.

What’s the mechanism for this infrastructure helping the world? Will better data infrastructure help with early detection or is it more about better handling of the later stage of an outbreak?

A bit of both, although it’s more about the latter. Early warning is a major goal of surveillance, but in reality the tech improvements we’re talking about are more useful for managing the main body of the outbreak and minimising its impact, rather than preventing it entirely.

What kind of projects would you like to see? Where would you direct a wave of tech talent?

The first thing that comes to mind is influenza. It would be useful to link hospital electronic medical records systems to public health monitoring so that the public health people can see what’s going on more easily. The public health people could then quickly detect an uptick of influenza cases. I’m not sure why this hasn’t happened yet but it’s probably some combination of a lack of technical skills and it not seeming like a priority. There are a few isolated systems that do something similar, but not anything close to national let alone international scale.
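As an illustration of the kind of automatic uptick detection such a link would make possible, here is a minimal sketch that flags weeks where case counts rise well above a recent baseline. The counts, window, and threshold are hypothetical; real surveillance systems use more carefully validated aberration-detection methods.

```python
import pandas as pd

# Hypothetical weekly influenza case counts aggregated from hospital EMR feeds.
counts = pd.Series(
    [120, 130, 125, 140, 135, 128, 132, 260],  # the final week is an uptick
    index=pd.date_range("2016-01-03", periods=8, freq="W"),
)

# Baseline = mean and standard deviation of the preceding weeks (excluding the current one).
baseline = counts.shift(1).rolling(window=6)
threshold = baseline.mean() + 2 * baseline.std()

# Flag weeks whose count exceeds baseline mean + 2 standard deviations.
alerts = counts[counts > threshold]
print(alerts)
```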

In general, influenza is a good disease to work on as a lot of people die of it each year, there’s a risk of an influenza pandemic, and there’s a lot of data on it.

One thing a lot of people are looking at is point of care diagnostics - tools you can use to diagnose without a lab. This is especially useful in areas where you can’t access a lab.

One issue biosurveillance people think about a lot is the fact that doctors are told ‘When you see hoofprints think horses not zebras’. But in biosurveillance we’re looking for those zebras - unusual cases that could be the beginning of something bigger. Given the way the health system works, the main way of diagnosing these rare events will be through a doctor’s brain, but they’re trained to discount these kinds of rare events. With my work I’m trying to create computable case definitions so you can analyse them and pick up things that the doctors may not have. The disease I’m looking at is unusual, so clinicians might not pick up on it.
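As an illustration of what a computable case definition can look like, here is a minimal sketch that encodes a hypothetical definition as a function over a patient record. The criteria, field names, and thresholds are made up; real definitions are developed with clinicians and standard terminologies.

```python
def meets_case_definition(record: dict) -> bool:
    """Hypothetical computable case definition for a rare febrile illness.

    A record is a dict of fields extracted from the medical record, e.g.
    {"temperature_c": 39.1, "lab_result": "positive", "recent_travel": True}.
    """
    has_fever = record.get("temperature_c", 0) >= 38.5
    lab_confirmed = record.get("lab_result") == "positive"
    epi_link = record.get("recent_travel", False) or record.get("known_contact", False)

    # Case: fever plus either laboratory confirmation or an epidemiological link.
    return has_fever and (lab_confirmed or epi_link)


# Running the definition over a batch of records can pick up cases a busy
# clinician might not flag.
records = [
    {"temperature_c": 39.2, "lab_result": "pending", "recent_travel": True},
    {"temperature_c": 37.0, "lab_result": "negative", "recent_travel": False},
]
flagged = [r for r in records if meets_case_definition(r)]
print(flagged)
```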

It would also be good to make it easier to communicate the results of surveillance and analysis to non-technical and non-scientific people who have to make decisions. We need better tools for communicating data effectively to people in government who make decisions on how to respond. Often agencies have to search manually through data when there would be ways to make that process a lot quicker using better software. Decision-makers tend to have medical advisors, but maybe they could have technical advisors to help them with this.

What kinds of data sources are promising, other than medical records?

  • Weather and landscape data when studying vector-borne diseases. This data exists because it’s collected for other purposes.
  • Human movement data. For example, Bluedot’s central data source is data on the global air transport network.
  • Data from the media. For example, GPHIN (the Global Public Health Intelligence Network), developed by Canada’s Public Health Agency, takes feeds of media articles in seven languages, does some automatic translation and keyword detection, and then sends them to human analysts, who pass them on to groups that might be interested. It’s got a lot of potential, and it got people excited around the time of SARS because they detected something was going on in China well before the Chinese admitted it. One of the main things Epidemico does is curate and analyse media feeds to detect epidemic and outbreak information and pharmaceutical adverse events.

Is there a community around using technology for public health?

There is a Journal of Public Health Informatics, but awareness of tech for public health as a field is still relatively small.

Where can we look for more information on biosurveillance?

Continue reading

Tech consultancy for non-profits: an interview with EyeSeeTea

EyeSeeTea

Adrian Quintana Pérez, José García Muñoz, and Ignacio Foche Pérez form EyeSeeTea, a software consultancy focussed on serving the non-profit sector. We talked to them about how non-profits use software, and whether their approach to developing it is changing.

What made you start EyeSeeTea?

The three of us were studying together on a Masters course in telecommunication for developing countries. Afterwards, we talked to see if we could do some work together, maybe publish something. Then a year ago we decided to start the company. That happened in quite an easy way - we found a client who needed us.

There are a lot of people trying to do telecommunications work in developing countries, but they are following a research approach. There aren’t many people working as professional software developers in this area. So we’re doing quite well - for a startup!

Is there a lot of demand for what you have to offer?

There are lots of people who could do what we do, but they need to get paid! There are plenty of skilled people, and people who know what’s needed in the field, and people who have access to money. But there aren’t often people who have all three, and that’s where we fit, in the junction of those worlds.

Small nonprofits don’t usually hire people like us. They are more likely to have one computer specialist who does everything. Bigger nonprofits are more likely to have the money to hire a company. Often we take public contracts to do work for a government, university, or large nonprofit. There is quite a bit of work in the research sector for implementation work.

Big nonprofits, on the other hand, are increasingly hiring internal people to do this kind of job. It’s a new thing for them; they are not yet used to hiring software developers. They often think it should be a volunteer role.

So why do they hire you instead of their own developers?

At the moment, our clients know us and we have a close relationship with them. We tend to know clients in advance. It will be a key challenge to go beyond people we know. Having a few contracts with well known nonprofits will help build our reputation and make it easier to get more clients.

Is there room for more organizations like yours?

The space is growing every year. Technology is becoming more important and people are starting to understand that sometimes you need to hire a professional. There is definitely a gap, but it’s not huge. There are only three of us, and we’re not even working full time yet.

Until this year, the usual way we found to start a development project wasn’t to focus on technology first. In the past, NGOs were not used to hiring a tech company. They might just hire one person, or not think it was important.

For software, nonprofits used to think that they could get a couple of volunteers for free. But now they are beginning to understand that if you want to meet a deadline and get a quality product then you need to pay.

All our work is open source, but there is a difference between open source and free of charge.

Are there other organizations doing this, like you are? What would happen if you didn’t do it?

Sometimes we create the need at the nonprofit, by showing them what they could do with our help. In other cases there is competition for a contract. We tender a bid, and they do the price comparison. For a university that has money for a project, there will usually be competition between projects as well.

Once you know the company - maybe you’ve done a small project for them already - you have a better chance. But you have to get a foot in the door.

That means it’s very important to have contacts in the field. All of us have worked in this field as volunteers before. I think to access this kind of market you really have to start as a volunteer, so that you know the people and the needs. If you can find the need, and the people, and the budget, then maybe you can start something, but that’s not easy.

What skillset do you wish you had? What skills are needed by your customers?

At the moment we say we can do anything, because we are just starting! Everything is changing, new technologies all the time, so you have to be flexible.

Data mining is everywhere, analysing metadata etc. Big data is important, lots of organizations simply have a huge amount of data.

At the ACM Dev conference there were a lot of presentations about data analysis. That is a big trend in this world. Organizations and donors want to know what is happening with funds, how to improve efficiency, etc. There is a gap in data science for this field. Not many universities work in this kind of analysis in the developing world.

Did you consider working directly for a nonprofit?

Ignacio used to work for one, a small NGO, working in Peru and Colombia. And he was a volunteer in a bigger one. But this field depends heavily on the economy. In Spain, for example, all the budget for development was cut suddenly. It can be very unstable.

Adrian came from the research world. His involvement in development was through volunteering in his free time. When he got hired by UCL he sent his CV to a couple of places. But it’s difficult to get hired by a nonprofit. There are a lot of good people applying to few positions. Nonprofits could hire more people if they would pay for it.

What kind of work have you ended up doing so far?

We’ve done a variety of things: network infrastructure, mobile apps, web, technological audits. All of our code is open-source, so you can see it on our website. It is nice that we’ve been able to see many different aspects of telecommunications. We’d like to keep doing that!

Continue reading

ICT in social enterprise: an interview with Lucie Klarsfeld from Hystra

Lucie Klarsfeld

Lucie Klarsfeld is a Senior Project Manager at Hystra, a strategy consultancy focussed on social enterprises targeting the Base of the Pyramid. We talked to her to learn more about the social enterprise sector as a whole, and particularly the role of technology.

Could you give us a brief overview of what Hystra does?

We’re a strategy consultancy company that specialises in social business and so-called Base of the Pyramid (BoP) markets. By “social business” we mean anyone that uses market mechanisms to address social issues.

Historically, this focus comes from our founder, Olivier Kayser, who previously worked for both McKinsey and Ashoka. At the end of his time there, he could see a lot of synergies between the social entrepreneurs that Ashoka was helping - who were good at finding local solutions, but not necessarily so good at (or interested in) scaling up - and multinational corporations - who are less good at (or interested in) social innovation, but whose entire “raison d’être” is to adapt and replicate solutions that work across countries. So the aim was to try and combine these two worlds, and in particular to figure out how large corporations could help social enterprises to scale.

The first report that we did was on access to energy for the poorest. We’ve kept a pretty similar methodology since then, which is to look at what worked. We try to go in with no preconceptions, and look at what business models there are in a sector. Then we select a representative sample of businesses in the sector, and we visit them to understand better how they work and why they haven’t scaled further.

We then use this research to help specific actors to be more efficient, or to scale better. That is either with large corporations to define how they can profitably work with the base of the pyramid, or with social businesses to help them scale their strategy, e.g. we help them write business plans or raise funds. We also work with other organizations like aid agencies or foundations to help them define their strategy to support social businesses or work with the private sector, in order to achieve their development objectives.

To give you an example of what this can look like concretely: at the end of the energy project, we gave some recommendations to the sponsors. To Total we recommended that they start using their brand and network in Africa to start selling solar lanterns. Back in 2010 the market was very nascent, with a few products out there, mostly of poor quality. When Total brought in their brand and distribution skills, they were able to build much more consumer trust in the product, and help the few quality manufacturers get consistent orders and scale up production. They’ve sold 1.4 million to date, which makes them the largest commercial distributor of this product in the developing world. These are the kind of synergies that we’re looking for.

So the focus is not on starting businesses, but rather scaling them?

Yes, although you could argue that in the case I just mentioned, Total didn’t have any presence in that solar light business beforehand. We’re strategy consultants, so we can start from scratch to build a strategy for a new entrant in the sector!

You publish all your research publicly, on your website?

Yes, that’s a condition for the research we do. It’s also why the entrepreneurs that we study agree to participate, and to open their books to us. Publishing it to the world gives them publicity, and they have full control over the material that we actually publish. We often sign an NDA with them, so they usually give us much more information than we put in the case studies, which we use to create derived statistics or benchmarks that get published in aggregate, not revealing their individual confidential data while helping the sector progress.

How do you prioritise which business models you pick for the report?

We have a set of criteria. We want a project that really solves the issue, which isn’t to be taken for granted! The project also has to have a certain scale. For example, when producing the agriculture report on Smallholder Farmers and Businesses, almost all the projects we picked had over 10,000 customers. If it’s below that, we consider it too new a project for us to be able to draw many conclusions on what works. The project also has to be financially sustainable: we don’t want models that are purely philanthropy, though we don’t necessarily require that models have already broken even - otherwise we would not find much. But we do look for models that have the ability to break even at some point, even if they have not already done so.

In the first phase of our research we read reports and interview the authors, and ask them which projects they think are the most successful given our criteria; as well as to refer us to other experts. There comes a point when we keep hearing the same things from experts who start referring the same projects and people to interview, and then we know we’ve covered most of the ground. We also do our own research, and in total we usually find between 150 and 300 projects per sector.

We’re also looking for patterns of similar projects that emerge, so we can pick representative examples as case studies. In the case of ICT it looked like the business models were either one-way information passing, two-way information passing, or crowd-sourcing/crowd-funding solutions. In each of these clusters we select the best projects according to our criteria.

What sources do you use for the research process?

We try to get input from a variety of sources. We try to talk to the large industry players; large institutions like the World Bank; other successful social enterprises; and some academics.

The report mentions that there wasn’t much work on impact, and people weren’t doing cost-effectiveness research. Do you do any of that work, or do you see others doing it?

The ICT report is a few years old, but even returning to it recently there still isn’t much work on cost-effectiveness. ICT in development is still quite new, and evolving very fast. And it’s really only in the last few years that social enterprises have started to get their ICT systems in order, using CRM or sales force management systems, etc.

It’s also quite difficult to assess the impact of ICT. In particular, I think it would be difficult to single out the effect of ICT on the overall impact of a project. If it’s not primarily an ICT project - if ICT is a tool, which it usually is - then it’s very difficult to say whether it was the ICT that mattered or the other parts of the business model.

Even systems that are very heavy on ICT have other important aspects. For example, one of the reasons that mobile money hasn’t taken off as well in places beyond Kenya is because of the critical importance of setting up an adequately sized agent network for cashing in and out. That’s the part that’s proven hard to replicate!

It’s particularly hard to bring in new, unfamiliar ICT systems, as the benefit has to be quite high in order to get people to buy into what may appear to be a new, risky system.

There is, of course, a role for ICT companies to provide tech services that assist other organizations doing more direct development or social business work.

What do you think about the Good Technology Project?

I think it’s needed. Many social businesses would benefit from e.g. ICT volunteers to go and help set up their IT systems. One thing that would be very helpful would be to have some kind of comparison of the available tools. Even now that social enterprises are starting to improve their ICT systems, it’s not easy for them to do it well, design an adequate tool and choose the right partner for it!

What do you think the prospects are for developed world entrepreneurs trying to start social enterprises in the developing world?

I think it’s important to spend a good period of time exploring and learning about what’s already being done on the ground, rather than start from a blank sheet of paper with possibly pre-conceived ideas. Go and work for an existing organization, start to see what the existing models are in developing countries. If they’re entrepreneurial, they’ll spot opportunities to make things better once they have been part of the system.

You can’t start from nothing - there’s no way to avoid understanding the problem and doing your market research! Especially since developing countries are a new environment for developed world entrepreneurs, it’s important to actually be there.

What areas are currently neglected in the ICT space?

ICT is primarily used as a component of a larger system or business model, but in and of itself, it’s proven particularly useful for transferring money in places where that is difficult, and also in bringing information where it is missing and can make a life changing difference (e.g., getting market prices for farmers, or receiving quality medical information in remote communities).

So looking at it from a general perspective, you can look for places where there are information asymmetries, or where there isn’t communication and there should be. Perhaps you can find some opportunities there.

Continue reading

Data science competitions for good: an interview with Isaac Slavitt from DrivenData

The DrivenData team

Isaac Slavitt (right) is a co-founder and data scientist at DrivenData. Also pictured are co-founders Greg Lipstein (left) and Peter Bull (center). DrivenData hosts data science competitions that focus on social impact. The team also works directly with organizations, helping them harness their data in order to work smarter and carry out their missions more effectively. We talked to Isaac to know more about how they are applying data science for doing good.

How did DrivenData get started? What are you trying to do and what are your future plans?

DrivenData actually started off as a grad school project. My co-founders and I wanted to work on a research project in machine learning that was technically interesting but that also had some kind of social impact aspect, and we actually found it really hard to find relevant problems with available data. We knew there were many others in the field who were excited about that type of work, so we had the idea to build a platform specifically targeted at running data science competitions for social good. We started talking to nonprofits early on and the initial response was very positive, so we ended up launching our first for-prize competition soon after the platform was up and running.

With DrivenData, we have two main goals. The first is to get more people interested in applying their data skills to relevant problems, and to provide an easy and fun way to start getting involved. The other goal is direct impact: working with mission-driven organizations to help them frame their data challenges, making those available to the data science community on our platform whenever possible, and helping them integrate the results back into their workflow. Of course, not all problems that nonprofits face fit neatly into a predictive modeling context, so our team also does consulting work for mission-driven organizations directly.

As far as future plans go, on the competition front there is a lot of overhead work in getting a competition ready, particularly if the organization we are working with doesn’t have data scientists on staff. We have been thinking about how to scale the competition preparation part of the platform, and in the medium term we want to figure out how to make this a more repeatable, streamlined process. In terms of the direct consulting work that we do, it has been really rewarding to work with organizations on their data and technology challenges and we plan to continue doing that.

How do you factor in impact in the work you do? Can you give us an example of a high-impact project you have undertaken?

Working on projects with real impact is very important to us. We tend to think about the impact of the competitions we run in two ways:

One is the direct impact of the solution for the organization that we’re partnering with. We take that into account both while refining the problem as well as in finding partners to work with that we think will be able to benefit the most from winning solutions. Sometimes that means that they’re really interested in what insights they can learn from the models in carrying out their mission, and sometimes it means finding organizations who are in a good position to actually implement the winning solutions.

The second way we look at the platform’s impact is in raising awareness about the opportunities that exist in applying data science for good, and by providing people a platform to get involved. One metric for this is the participation level in our competitions. We get participants from all over the world, and the number of competitors and submissions has been increasing with each competition. Wherever possible, we try to look for ways to keep the data and problem open for people to keep working on and learning from even after the main competition has ended. That’s one of the reasons we generally require winners to release their solutions under an open source license.

A good example of a high impact project is our “Keeping It Fresh” competition. The basic setup there is that the City of Boston has a large number of restaurants but only a limited number of food inspectors. The idea of this competition was that there must be a better method for choosing restaurants to inspect than sending the inspectors out at random, so the task was to find predictive signal in a large dataset of Yelp reviews in order to flag restaurants that could be potential food safety violators. One of the fun things about this competition was that, for their final submissions, competitors were actually making food violation predictions about the next few months — so it wasn’t just hold-out data, they were actually testing their models by trying to predict future events.
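To give a flavour of the kind of approach a competitor might take (this is not the winning solution, just a generic sketch with hypothetical data and column names), one could turn each restaurant’s review text into TF-IDF features and fit a simple regression model to past violation counts.

```python
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Hypothetical training table: one row per restaurant and inspection period,
# with all Yelp review text concatenated and the number of violations found.
train = pd.DataFrame({
    "review_text": [
        "great food but the kitchen looked dirty and I saw a mouse",
        "spotless dining room, friendly staff, very clean",
    ],
    "violations": [5, 0],
})

model = make_pipeline(
    TfidfVectorizer(max_features=5000, ngram_range=(1, 2)),
    Ridge(),
)
model.fit(train["review_text"], train["violations"])

# Predict violation counts for upcoming inspections from recent reviews.
upcoming = ["customers mention roaches and old food in several reviews"]
print(model.predict(upcoming))
```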

After the competition, there was a paper published about it by economists at Harvard who concluded that these methods could make cities 30-50% more efficient in their use of food inspectors’ time. That’s a nice win for public health and public sector effectiveness. Even better, the same methodology could be applied to other cities as well, which could act as a multiplier for the impact of this work.

What led you to choose the competition format as opposed to hackathons and/or volunteering?

What interested us about the competition format is that it scales globally because of the way it is structured. This format is particularly well suited for complex problems that could benefit from a large number of people trying out a large number of different approaches. For some really hard problems, particularly involving NLP or computer vision, it’s great to have hundreds of people trying thousands of different approaches to feature selection, model tuning, and so forth—far more than any one team could try in a reasonable amount of time.

The limitation of the format is that not all problems can be framed as competitions. For example, there are tons of really important problems that organizations face where the data doesn’t fit neatly into a statistical modeling question, or where the dataset is not large enough, or where they are looking for more open ended research or systems development. These problems are often better tackled through volunteering, and there are some fantastic organizations out there like DataKind who facilitate working directly with organizations.

What factors do you consider while selecting partners to work with?

There are three main considerations for selecting partners to work with: The first and most important is that the proposed problem should be interesting to participants and have a high potential for impact. The second is the technical and data maturity level of the organization. Third and equally important is having data-savvy people in the organization that could take the winning entry and integrate it into their workflow, or at least develop a plan for extracting value from the end products.

We are open to working with partners throughout the world — in fact our most recent competition centered on an organization based in Morocco.

Based on your knowledge of applying data science for social good, do you see any problem or area that is neglected?

I think just being able to capture and store the right data in general, and then data quality in particular, is a hard problem for many nonprofits. That challenge definitely isn’t specific to the social sector, but in our experience many organizations are struggling to keep up with the private sector in that respect. Even large organizations with permanent IT staff typically lack dedicated software engineers or data scientists, and the existing staff may not have the bandwidth to spend a lot of time thinking about how they generate and warehouse information. Taken together, that means that building systems and processes for data collection and preservation is often not an option.

Those data issues may not seem pressing on any given day because the nonprofit staff are busy working on the primary mission, but at some point they are probably going to want to learn from their data, and the missed opportunities happening right now are going to make that difficult.

The field will continue to get more mature and I think there are going to be more and more data scientists wanting to work with nonprofits, which is great news. But five years down the line, if organizations don’t have the right data, or if the data has serious quality and consistency issues, then they may find themselves starting from scratch. Since many nonprofits could use some help on this front, there’s a big opportunity for data practitioners to get involved right now.

What advice do you have for someone wanting to use data science and software skills for good?

Given the growing popularity of data science in recent years and the resulting surge in demand, nonprofits are finding it difficult to compete in hiring full time staff from that talent pool. That also means there are lots of great opportunities for people with the desire to help, and it isn’t necessary to know the latest machine learning algorithms to contribute — many organizations have a lot to gain even through relatively simple automation of their day to day processes, so there is huge opportunity for data scientists and software engineers of all skill levels. For example: at many nonprofits, basic projects like setting up a data collection pipeline would be a big win since that would enable them to apply more advanced techniques in the future.
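As an example of the “relatively simple automation” that can be a big win, here is a minimal sketch of a data collection pipeline that validates spreadsheet exports and loads them into a single database. The file names, columns, and table name are hypothetical.

```python
import sqlite3
from pathlib import Path

import pandas as pd

REQUIRED_COLUMNS = {"client_id", "visit_date", "service_type"}

def load_exports(export_dir: str, db_path: str = "program_data.db") -> None:
    """Validate each CSV export and append it to one central SQLite table."""
    conn = sqlite3.connect(db_path)
    for csv_file in Path(export_dir).glob("*.csv"):
        df = pd.read_csv(csv_file)
        missing = REQUIRED_COLUMNS - set(df.columns)
        if missing:
            print(f"Skipping {csv_file.name}: missing columns {missing}")
            continue
        df["source_file"] = csv_file.name  # keep provenance for later auditing
        df.to_sql("service_visits", conn, if_exists="append", index=False)
    conn.close()

# Example usage:
# load_exports("monthly_exports/")
```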

The best way to get involved is by volunteering directly with nonprofits, volunteering through organizations such as DataKind or Code For America, or by participating in competitions like the ones on our platform. There is also still plenty of room in this space for new organizations that adopt one or a mix of the prevailing models, whether that is volunteer driven consulting, fee-based consulting, or online crowdsourcing.

Finally, I would encourage people to look locally. There is a lot of activity happening around data science for good in most cities, in the form of Meetups, DataKind chapters, conferences, and so on. But even if there aren’t many events in your area, there is almost certainly a nonprofit nearby that would love some help with their technical or data challenges. Even if you are just getting started in the field, you may be able to make a surprisingly meaningful contribution with your skills.


You can sign up here to be notified about and participate in DrivenData competitions.

Continue reading

Data science for good: an interview with Lauren Haynes of the Center for Data Science and Public Policy

Lauren Haynes

Lauren Haynes is a senior project manager at the Center for Data Science and Public Policy (DSAPP) at the University of Chicago. We talked to her to know more about the potential of data science for doing good.

Can you tell us about your journey so far in applying technology for good?

I have a degree in general engineering from UIUC with a minor in computer science and human-computer interaction. While in college, I was quite involved with Alternative Spring Break and went on seven service trips. I then went on to work at Accenture Technology Labs, where I was involved with cutting-edge research projects. I always had an inclination towards social good - so when one of my colleagues invited me to join the Ounce of Prevention Fund, which does a lot of work in early childhood education, I gladly accepted. As the IT Manager there, I revamped the organization’s IT infrastructure and catered to its technology needs. At the Ounce, I also got a chance to understand the internal dynamics of how an NGO works. I then went to work as the product manager at GiveForward - a crowdfunding platform for compassionate giving. That role was unique and enjoyable since it had both a high technology component and a high social good component.

Since last May, I have been working with the Data Science for Social Good Fellowship (DSSG), where we bring in 42 undergraduate and graduate students each year to do data science projects for nonprofits and government agencies. The project topics run the gamut: education, healthcare, environment, public safety, criminal justice… you name it. Some examples of problems we tackle are prediction questions, e.g. which participants are likely to drop out of a program, and resource allocation questions, e.g. we can inspect only 100 buildings out of 1,000 - which ones are most likely to be noncompliant? I am also serving as the board vice-chair for Break Away, where I provide guidance on using technology to efficiently manage their operations.
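As an illustration of the resource allocation example above (choosing which 100 of 1,000 buildings to inspect), here is a minimal sketch of the generic approach: fit a model on past inspections, score every building, and give inspectors the top of the ranked list. The features and data are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical historical inspections: building features plus whether a
# violation was found.
history = pd.DataFrame({
    "building_age": [80, 12, 45, 60, 5, 90],
    "past_complaints": [4, 0, 1, 3, 0, 6],
    "violation_found": [1, 0, 0, 1, 0, 1],
})

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(history[["building_age", "past_complaints"]], history["violation_found"])

# Score the current stock of buildings and inspect the 100 highest-risk ones.
buildings = pd.DataFrame({
    "building_id": range(1000),
    "building_age": [i % 100 for i in range(1000)],
    "past_complaints": [i % 7 for i in range(1000)],
})
buildings["risk"] = model.predict_proba(
    buildings[["building_age", "past_complaints"]]
)[:, 1]
to_inspect = buildings.nlargest(100, "risk")
print(to_inspect.head())
```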

Continue reading

ICT for international development: an interview with Sam Sudar

Tell us a bit about how you got into ICT for Development (ICTD).

I’m a PhD student at the University of Washington in Seattle. I’m doing my thesis on leveraging web tech in poorly connected regions, and this is the theme throughout everything I do.

I started in the tech for development group working on Open Data Kit - a platform to help researchers and NGOs collect and manage data. The first version, which is widely used, lets you use Excel to specify a form that then runs in a native Android app and looks a bit like a web form. This makes it easy to create a form and collect data with it. It’s used in over 100 countries, and the Red Cross used it extensively after the Haiti earthquake. I was involved in Open Data Kit 2.0, which has the same functionality but in JavaScript and HTML to make it more customizable. I’m not as involved in that now, but the paper on simplifying mobile deployments in low resource settings that I presented at the London ICTD conference was motivated by that work. It came out of dealing with researchers who I often felt tried to overcomplicate things.
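For readers unfamiliar with the spreadsheet-based approach: ODK forms are commonly authored using the XLSForm convention, where a “survey” sheet lists the questions and a “choices” sheet lists answer options. Here is a rough, hypothetical example of such a spreadsheet, generated with pandas purely for illustration.

```python
import pandas as pd

# Hypothetical questions; column names follow the XLSForm "survey" sheet convention.
survey = pd.DataFrame([
    {"type": "text",             "name": "respondent_name", "label": "Respondent name"},
    {"type": "integer",          "name": "household_size",  "label": "How many people live here?"},
    {"type": "select_one yesno", "name": "has_bednet",      "label": "Does the household own a bednet?"},
    {"type": "geopoint",         "name": "location",        "label": "Record the household location"},
])

# Answer options for the select_one question live on a separate "choices" sheet.
choices = pd.DataFrame([
    {"list_name": "yesno", "name": "yes", "label": "Yes"},
    {"list_name": "yesno", "name": "no",  "label": "No"},
])

# Writing to .xlsx requires an Excel engine such as openpyxl.
with pd.ExcelWriter("household_survey.xlsx") as writer:
    survey.to_excel(writer, sheet_name="survey", index=False)
    choices.to_excel(writer, sheet_name="choices", index=False)
```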

Last summer I interned at Google working in the Chrome for emerging regions group.

Continue reading

Persistently neglected causes

A heuristic that we often use when assessing causes is neglectedness. If a cause is neglected, then it is not receiving an amount of attention commensurate with its seriousness. Given that most causes suffer from diminishing marginal returns on investment, if less has been invested then we should expect there to be especially cost-effective projects still available.
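To make the diminishing-returns point concrete, here is a tiny sketch assuming a logarithmic returns curve (an assumption chosen purely for illustration): the less that has already been invested in a cause, the more good an additional unit of resources buys.

```python
import numpy as np

def total_value(invested, k=100.0):
    """Hypothetical diminishing-returns curve: total good done as a function of resources."""
    return k * np.log1p(invested)

def marginal_value(invested, k=100.0):
    """Approximate value of one extra unit of resources at a given funding level."""
    return total_value(invested + 1, k) - total_value(invested, k)

for level in [1, 10, 100, 1000]:
    print(f"already invested: {level:>5}  value of the next unit: {marginal_value(level):.2f}")
```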

But not all neglectedness is created equal. Some causes are only neglected transiently. That is, they aren’t receiving much attention now, but it’s clear that this will change in the relatively near future. An example is mobile services for the poor. This area has been neglected for a long time because the infrastructure and the handset penetration were not there, but that is changing. Hence, interest is picking up, and we should expect to see an increasing amount of resources directed there over the coming years. So it’s likely that many potential projects in this area will happen anyway in the next couple of decades.

Continue reading

LBRY and decentralised apps - an interview with Jeremy Kauffman

Jeremy Kauffman is the CEO of LBRY, a decentralised content-publishing system. We talked to him because we want to know more about the potential of decentralised apps for doing good.

What are you doing?

We want to provide a simple search tool for movies, books, games, or any piece of content that can be published by anyone in the world. The big difference is that this network is completely decentralised - it’s powered by the computers of everyone in the network.

Continue reading

Tech for effective charities - an interview with Ben Clifford from Good Code

What is Good Code?

Good Code is a meetup in London for software developers who work on projects for effective charities.

How did Good Code get started? What are you trying to do?

I was interested in technology entrepreneurship and I wanted to do a lot of good. I thought a good potential customer for this would be poor people or charities that help them, especially as there is little incentive for other organisations to help them out. I didn’t know much about this area though, so I set up Good Code to explore what kinds of problems charities have that could be solved by tech. We approach charities that seem high impact (e.g. from GiveWell recommendations) and then find out what sort of problems they have that could be solved with software. It’s been a good way to start conversations with charities because we’re offering something of use to them.

Continue reading

Portfolios of the Poor book summary

What the book is about

Portfolios of the Poor tries to answer the question ‘How do the poor live on $2 a day?’. It’s based on regular interviews with over 250 families in Bangladesh, India, and South Africa, which allowed the researchers to build up a detailed picture of the financial transactions these families made. Their main argument is that the poor have sophisticated financial lives. They use many different financial tools that together create ‘financial portfolios’.

Chapter 1 is available online if you want to get an overview. If you get the book, the most important chapters to read for an overview are chapters one and seven.

Continue reading

What can a technologist do about climate change?

Climate change is a problem that could badly affect the future of humanity so we’d like to learn more about what technologists can do to tackle it. Fortunately, we recently came across this extremely detailed essay on what technologists and specifically software engineers can do to help.

It’s worth reading both for its insights about tackling climate change and because it suggests technical solutions that could be applied to problems in many different cause areas. Also, we’ve taken it as inspiration for our own work at Good Technology Project - we would love to create resources as good as this for each cause area we think is promising.

Continue reading

Digital financial services for the poor: shallow overview

In a nutshell

What is the problem?

The poorest people in the world (those on around $2 a day or less) mainly use cash. This is costly for them to handle and it’s costly for institutions to handle, meaning that they have less access to ways of handling shocks, exploiting opportunities, and managing risk.

What are possible interventions?

Digital banking and payment systems and then financial products targeted at the poor built on those systems.

Who else is working on it?

Gates Foundation, World Bank, CGAP, several tech startups.

Continue reading

ACM Dev: Impressions and Thoughts

We attended the ACM Dev conference this year, which by good fortune was happening in London! We’ll be putting up our notes shortly[1], but this blog post will discuss some of the higher-level considerations that came up as a result of the conference.

  1. We intend to make all of our notes and materials publicly available, even if they’re provisional or later revised.

Continue reading

The Good Technology Project

It’s become a truism that technology is the most powerful lever that we have to change the world. As with any powerful lever, it’s up to us to actually use that power to make the world a better place. Moreover, we should aim to do the most good we can, rather than just some good.

Technology has great potential for good, both in and out of the for-profit sector. Widespread access to novel technologies can cheaply give whole populations access to capabilities that they previously did not have, or increase the efficiency of common processes.

Continue reading