Recent case studies

63% estimate of a Covid vaccine by May - Maby Forecast Live


Last week we were joined by UnHerd science writer Saloni Dattani and 41 forecasters on a call to investigate and explore the likely timeline of a coronavirus vaccine.

Before the call, Tom and I narrowed down the scenario we cared about. A vaccine that isn't widely accepted, approved, or available for distribution in large numbers won't count for much, so in selecting an impactful event we settled on an FDA-approved vaccine available in large numbers of doses in the United States - a scenario that likely indicates vaccine success:

Final question:

When will 25 million doses of an FDA approved vaccine for Covid-19 be available in the US?

Then we decided on the scope. Other public forecasts have asked whether there will be a vaccine before May 2021 as a yes/no binary, but we wanted to have an idea about when that might be if it comes sooner than May. So we decided to ask across five bins:

  1. A: Before Nov 4
  2. B: Nov 4 to Dec
  3. C: Jan to Feb
  4. D: Mar to Apr
  5. E: May or later

The call began with us introducing the concepts and taking the group through a quick calibration quiz. This gave us some data about the group's level of calibration - the better calibrated, the more reliable their forecasts.


Salonium calibration curve

Perfect calibration would mean that the outcome frequency and forecast probability were equal, which wasn't quite the case for our group, but they did better than a lot of groups we have seen recently.

Over time it’s typical to see new forecasters become better calibrated, as they become better acquainted with their own internal sense of probability.
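A calibration curve like the one above can be sketched in a few lines of code. This is an illustrative sketch, not Maby's actual scoring code, and the quiz answers below are invented: forecasts are grouped into probability bins, and each bin's average stated probability is compared with the frequency at which those forecasts actually came true.

```python
def calibration_curve(forecasts, outcomes, n_bins=5):
    """For each probability bin, return (mean stated probability,
    observed outcome frequency). Perfectly calibrated forecasts put
    every point on the diagonal where the two values are equal."""
    bins = [[] for _ in range(n_bins)]
    for p, hit in zip(forecasts, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # e.g. 0.72 -> bin 3 of 5
        bins[i].append((p, hit))
    return [
        (sum(p for p, _ in b) / len(b), sum(h for _, h in b) / len(b))
        for b in bins if b
    ]

# Invented quiz data: stated probabilities and whether each claim was true
curve = calibration_curve([0.9, 0.8, 0.9, 0.2, 0.1, 0.3], [1, 1, 0, 0, 0, 1])
```

Here the 80-100% bin contains three forecasts that came true two times out of three, so that point sits below the diagonal - a sign of overconfidence.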

Forecasting - round 1

median forecast initial

We forecast over two rounds. The first round goal was to get the lie of the land among the group, and surface questions for our guest speaker Saloni. The median response for each bin at the end of the first round was as follows:

At the end of the initial round the group gave a 60% probability that the vaccine would arrive before May, with finer-grained forecasts across the periods in advance of that date.
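One simple way to produce a group estimate like this (a sketch only - not necessarily Maby's exact aggregation method, and the individual submissions below are invented) is to take the median probability per bin across forecasters and renormalise so the five bins sum to 1:

```python
from statistics import median

BINS = ["Before Nov 4", "Nov 4 to Dec", "Jan to Feb",
        "Mar to Apr", "May or later"]

def aggregate(forecasts):
    """Median probability per bin across forecasters, renormalised to
    sum to 1 (per-bin medians generally don't sum to 1 on their own)."""
    meds = [median(f[i] for f in forecasts) for i in range(len(BINS))]
    total = sum(meds)
    return [m / total for m in meds]

# Invented submissions from three forecasters, one probability per bin A-E
group = [
    [0.05, 0.10, 0.15, 0.30, 0.40],
    [0.02, 0.08, 0.20, 0.35, 0.35],
    [0.10, 0.15, 0.10, 0.25, 0.40],
]
probs = aggregate(group)
p_before_may = sum(probs[:4])  # everything except "May or later"
```

Summing every bin except "May or later" then gives the headline before-May probability.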

Each forecaster could submit a comment and/or question anonymously, which were voted for by the group as a whole.


Salonium comments

We then opened the discussion, focusing first on the points most voted for by participants, making sure to cover the standard forecast checklist:

  1. Base rate / outside view / reference class (How do vaccines normally arrive? How do big medical projects normally get organised?)
  2. Inside view - breaking the problem down: what are the steps to this happening, and how does the situation in front of us correspond to the base rate / reference class? (What's the specific way that a vaccine of this type might progress?)
  3. Scope and scale (is there a difference between 1m, 10m, 25m, 100m doses? What if the time horizon was over a decade?)
  4. Relevant sources - who has been writing on this? Are they reliable? Have we over-reacted or under-reacted to recent news? (What about the vaccine trial that was halted - is that a big deal?)
  5. Biases - could optimism/pessimism bias play a role? What about political biases? (Does Trump talking about vaccines emotionally affect people's forecasts?)

At the end of this discussion we opened forecasting on the update round.


Salonium median forecast

We asked all forecasters to consider the viewpoints and information they’d heard and submit an estimate again, altering their response if the discussion or other new information and analysis compelled them to.

Salonium dot plot

This time the group was slightly more optimistic about the prospects of a vaccine, giving 63% overall to a vaccine before May, most likely in March or April.

Looking at the full distribution of forecasts, we saw that the first round forecasts were more widely dispersed, and it’s easy to identify where the updated forecast became slightly more optimistic.

Lessons Learned

This was our first public test of some of the new app features, as well as our first time forecasting with a team that was mostly new to forecasting, and new to each other - our typical client team already knows each other and has worked together before.

With 40 or more people simultaneously forecasting we're glad everything worked according to plan, and we learned a tremendous amount about how to make this work even better in the future - it's all being fed into the design for the next session.

We also collected some benchmark forecasts from other platforms last Thursday (including from our former employers!) so it will be especially interesting to see how things resolve.

Help us improve Maby:

We founded Maby to help any organisation build an efficient forecast capability, and I hope we showed a little of that on Thursday. If your organisation, or one that you work with, would like to be able to produce fast and accurate forecasts, we'd love an introduction, even if they’re not looking to buy anything right now - we can only make our forecasting knowledge useful if we understand the problems you're facing, so feedback and information are immensely valuable to us as we build our app and systems.

Case study: Improving government forecasts


Our client, a government department with an existing futures team, wanted to improve their forward range and predictive accuracy with custom training from Maby. Our plan had the following steps:

  1. A detailed consultation with the client lead to identify key risks requiring accurate forecasts.
  2. Pre-training assessment of each team member’s scope sensitivity and calibration, key attributes for accurate forecasting.
  3. A training workshop focused on forecasting the key risk topics, using each risk area as the jumping-off point to explore more advanced forecasting techniques - beginning with assigning probabilities to binary yes/no event questions, then multiple outcomes, and finally constructing probability ranges and conditional forecasts. Each stage uses structured methods to aggregate and share information among the team.
  4. At each stage of the workshop, forecasts and written comments are recorded from all team members.
  5. We provide use of our forecast aggregation software for one year following the workshop to use as an internal tool for the team.
  6. As the forecasts resolve over time, team members receive individual feedback on their forecast accuracy, allowing them to identify lessons learned and improve their next set of estimates.


Team members:

  • Have a better understanding of their strengths and areas to target for improvement.
  • Are upskilled in generating probabilities for scenarios.
  • Have learned techniques to maximise accuracy as a team, balancing individual and collective estimates.
  • Can promptly and confidently deliver probability estimates for a range of policy choices by decision makers.

Decision makers:

  1. Have a better understanding of their futures team’s accuracy and calibration.
  2. Have a strong internal resource to draw on for fast and accurate estimates of risks and outcomes when facing decisions under uncertainty.