Your Software, Their Hardware: Stacking the Deck for High-Quality Replication

Your awesome model doesn’t get to serious scale unless others replicate it, too. Here’s how to make it happen.


Say you’ve got a high-impact model that does some wonderful thing—gets primary health care to really poor people, saves coral reefs, or, I dunno, keeps impulsive adolescents from getting tattoos—and you want it to go big. You’ve made it as cheap, simple, and adaptable as you can, but you know that you can’t get it to real problem-solving scale all by yourself. To realize the full potential of your model, you’re going to have to get other “doers” (NGOs, governments, or businesses) to replicate it, and do so well enough to get similar results.

That’s hard. The social sector’s track record for high-quality replication is pretty dismal. If you want to see another doer organization get the same results you did, you have to stack the deck. Think of your model as software: You developed and ran it on your own hardware (your organization’s structure and operations), and now it needs to run on someone else’s hardware (their structure and operations). Software is much easier to scale than hardware, after all. So how do you get those other doers to adopt your software—your model—and run it as well as you did?

Think about how it works with commercial software products. A company has an idea for a product and develops it through progressive stages of validation until it’s ready for a wide market. Then they package it for the user, market and sell it, and provide customer support and updates over time. In other words, the company designs the product with the customer in mind, makes it as easy to use as possible, and continually improves it to make using it more productive.

We can imagine that same arc—from software idea to productive use at scale—in the context of a proven solution that achieves social impact at scale. The arc looks something like this:

[Figure: the arc from software idea (develop, prove, refine) to productive use at scale (package, sell, support)]

The first half of the arc, going from an idea to a proven, replicable model, is pretty familiar to good organizations: You develop your idea into a systematic model, prove that it has real impact, and then refine it and make it fully scalable through your own replication and growth. However, if you’re serious about scaling, you also have to design and iterate on your model with your “replication customer” in mind (the “doer-at-scale”). At the same time you accelerate your own growth, you need to explore and validate your 1.0 ideas about your “replication customers.” As with a real software business, if you get all the way to a mature product without already having a deep understanding of your customer, you’ll be screwed when you try to find them.

With a scalable product/model and a solid sense of your customer/doer-at-scale, you’re ready to take on the three steps that will get you to impact at scale:

1. Package It Well

You need to package your model—your software—so that any capable organization that wants to use it can use it. This is where you begin to stack the deck for successful replication, and, sadly, it’s something we rarely see done systematically or particularly well. Packaging has three components: materials, process, and systems.

  • Materials: This is the combination of things—documents, videos, illustrations, websites, apps—that will effectively communicate how to deliver your model well. Too often, materials consist solely of some crappy manual that was the product of too little thought and not enough iteration. You need to design materials with the user in mind and refine them through continuous iteration. Look around for good examples. Businesses like McDonald’s have to get franchise owners to deliver uniform products; Ikea and Lego had to figure out how to make complex instructions without words; the best-rated “Whatever for Dummies” books provide great ideas for prose and graphics; and there are a host of effective and creative YouTube videos out there that superbly demonstrate complex procedures. Think about how you can template and systematize the activities in your model. Make the whole set of materials as close to DIY as you can. If need be, hire good consultants to help. Be serious about it. Invest enough time, money, and thought.
  • Process: To effectively transmit learning, you need to develop a thoughtful learning process that makes the most of your materials and helps the doer make the most of your model. That’s probably a lot more complex and intense than you think. Real mastery usually requires on-the-job action learning, so you may need to embed some of their people in your organization or send some of your people to theirs. But maybe there’s a case to be made for classroom training. Maybe you create small learning groups that move together through training and reinforce each other’s learning. Maybe you run an academy, where people can learn in batches, or maybe you take a “train the trainers” approach. The point is this: Be intentional, build on approaches that have demonstrably worked elsewhere, and iterate over time.
  • Systems: Whatever systems helped you achieve impact need to accompany your model to the replicating organization. If a digital performance management platform or application helped drive high-quality operations, it belongs in the package. More and more, we see organizations developing apps that guide the user through successful replication. We also see various platforms that bring replicators together to share lessons, best practices, and ideas. I think that “an-app-and-a-platform” is going to be an increasingly important element in any serious effort to scale, so give it some thought. Stack the deck: the goal is to make it easier to do things right than wrong.

2. Actively Sell It

Mostly, I see organizations waiting to see who comes knocking. I think that’s a mistake. I think you need to get out there and sell. But you first need to think about who you are selling to.

Here’s a simple but systematic way to think about the right replicator-customers: “should do, could do, would do.” “Should do” organizations are ones that, by virtue of their mission, mandate, or strategy, should be interested in your model: it would demonstrably help them achieve their stated aims. “Could do” refers to the subset of “should do” organizations you think would have the capacity to do a good job of replication if they wanted to, because they have actually implemented things comparable in complexity to your solution before. Finally, “would do” is the subset of “could do” organizations with a track record and culture that make you believe they are committed to quality replication and real impact. These are your replicator-customers. Don’t waste your time with anyone else.

[Figure: the “should do, could do, would do” funnel, each subset narrowing the last]
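Extending the article’s own software metaphor, the funnel above amounts to three successive filters, each applied to the subset that survived the previous one. A minimal sketch in Python, where the field names and screening criteria are purely hypothetical stand-ins for the judgments described in the text:

```python
# Hypothetical sketch of the "should do, could do, would do" funnel.
# Each filter narrows the previous subset; criteria are illustrative only.

def should_do(org):
    # Mission, mandate, or strategy aligns with the model's aims.
    return org["mission_fit"]

def could_do(org):
    # Has implemented something of comparable complexity before.
    return org["comparable_experience"]

def would_do(org):
    # Track record and culture suggest commitment to quality replication.
    return org["quality_track_record"]

def replicator_customers(orgs):
    candidates = [o for o in orgs if should_do(o)]       # should do
    candidates = [o for o in candidates if could_do(o)]  # could do
    return [o for o in candidates if would_do(o)]        # would do

orgs = [
    {"name": "A", "mission_fit": True,  "comparable_experience": True,  "quality_track_record": True},
    {"name": "B", "mission_fit": True,  "comparable_experience": False, "quality_track_record": True},
    {"name": "C", "mission_fit": False, "comparable_experience": True,  "quality_track_record": True},
]
print([o["name"] for o in replicator_customers(orgs)])  # ['A']
```

The order matters only for efficiency, not correctness: the point of the funnel is that you never spend selling effort on an organization that fails an earlier, cheaper test.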

Once you find the organizations you believe would do a good job of replicating your model, you still have to sell. Their successful replication of your high-impact model is a huge win for both parties, but they may not understand that yet. At Mulago, we bring in world-class sales experts to coach our fellows, and all of them tell us that the key to successful sales is a product that solves the customer’s problem. To get there, you need to first listen and learn everything you can about the customer’s problem. Only then can you help them understand what a win your awesome model can be for them.

If you’re selling computer software to a customer, the price is some amount of cash. But in the world of replication, the “price” is the commitment to deliver your model at high quality, and the “transaction” is the agreement about how that will happen. It may be that charging them for your support has the salutary effect of skin in the game, and it looks good on your balance sheet, but the truth is that high-quality replication by others is such a hugely effective way to increase your own impact that whatever money you put into it is a bargain.

3. Offer Real Support

The replication process is more intense at the beginning than you’d like, but it also goes on longer than you’d hope, more often a tapering off than a clean exit. Organizations often underestimate the amount of support that will be required. But the better you design the package, and the better you choose your customers, the less effort and resources you’ll eventually have to spend on support.

There are three main components to support: mentoring, problem-solving, and quality assurance. Mentoring should be a scheduled activity, with careful selection of the right people on both sides: Those who are in charge of implementation need to be coached by the right experts, and at regular intervals. Problem-solving is responsive: You have to think carefully about how to create reasonable expectations and titrate the right level of availability.

Quality assurance is more complicated, as you need to make sure that they achieve the same kind of impact you did. Shoddy replication tarnishes your brand and wastes resources and potential. But while it would be nice if everyone reliably measured their own impact, mostly they don’t. As the originator, you’ll have to take a lead role in making sure that replicators are accountable. But accountability is not just for the replicator-customers; it’s also for you. After all, you can’t really ascertain the general validity of your own evidence until someone else replicates your model and gets similar results. If, over time, you can reliably correlate high-quality replication with impact, then performance management systems and the careful monitoring of key progress indicators can eventually serve as pretty good proxy indicators of impact.

As is probably obvious, this has been less a how-to piece than a guide to thinking through a critically important process. But the truth is that there aren’t a lot of examples yet, certainly not enough to populate a real manual on the subject. Relatively few organizations have devoted themselves full-on to replication by others, and even the best of those are still exploring what it takes to do it well. I’ve watched enough efforts now that I’m comfortable that this is what it takes to make it work, and I’m confident we’ll soon have solid examples of every part of it.

And a final note to my fellow funders: Fund this, dammit! High-quality replication by others is the impact jackpot. It doesn’t feel as tangible as direct implementation and it costs more to do right than you might hope, but it’s huge bang for the buck if you help them get it right. We all like to pay for innovation—this is what makes all that innovation pay off. Done well, it’s the best investment you can make.

This article was originally published by Stanford Social Innovation Review on August 13, 2020, under the headline “Your Software, Their Hardware.”