Pick a neighbourhood in any New Zealand town or city, and then drop a single, hypothetical case of Covid-19 into it.
Imagine that, from this one point, you could watch the coronavirus snake out into local schools, or into places where the adults living in that neighbourhood are likely to travel to and work.
Imagine that each of those connections goes on to create tens of thousands of new ones, which in turn create millions, as the outbreak spirals into a dense, spaghetti-like web of infection.
Imagine that we could watch how this mind-bogglingly complex picture unfolds in real time – and observe it dramatically change when we simulate a lockdown, or throw in the added power of testing and tracing.
This is just what scientists can now do, using a model so sophisticated and weighty that it requires a supercomputer to load and run.
It’s what’s called a “network contagion” model – and it took a team of data scientists at University of Auckland-based Te Pūnaha Matatini months of feverish work to build in the pandemic’s opening six months.
While Kiwis will have become used to hearing about models since the first case of Covid-19 arrived here, not all of these are the same.
At the very beginning, researchers used a traditional type of model for analysing disease outbreaks, called a deterministic compartment model, to predict that tens of thousands of people might have died in New Zealand without any controls put in place.
While these models offered a general picture of how an infection could travel across a population, they also assumed that any two people in that population – like one person in Hamilton and another in Mosgiel – could meet and infect one another on a given day.
We could think of this “well-mixed” assumption like pouring milk into a cup of coffee and stirring it around so it instantly mixed: a simple but unrealistic pattern that outbreaks didn’t follow.
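The coffee-cup analogy corresponds to a classic SIR (susceptible–infectious–recovered) compartment model. The following is a minimal Python sketch of that well-mixed assumption – a toy illustration with invented parameter values, not the researchers' actual model:

```python
# Toy deterministic SIR compartment model under the "well-mixed" assumption:
# any infectious person can meet any susceptible person on a given day.
# The parameters beta (transmission rate) and gamma (recovery rate) are
# illustrative values only.

def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """Advance the population fractions (susceptible, infectious, recovered) by one day."""
    new_infections = beta * s * i   # the s * i term is the well-mixed assumption
    new_recoveries = gamma * i
    return s - new_infections, i + new_infections - new_recoveries, r + new_recoveries

# Start with one case per thousand people and run the epidemic forward.
s, i, r = 0.999, 0.001, 0.0
for day in range(100):
    s, i, r = sir_step(s, i, r)

print(round(r, 3))  # cumulative fraction of the population ever infected
```

Because the model is deterministic, the same inputs always produce the same curve – there is no chance for an outbreak to fizzle out, which is one reason the team later moved to stochastic approaches.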
Later, Te Pūnaha Matatini experts turned to a more sophisticated form of model, called a stochastic branching process.
Still used because they are lightweight and fast to run, these models rendered a more accurate picture, incorporating random effects so that not every infectious case was assumed to infect the same number of people.
They’ve also been used to run counterfactual scenarios: one recent analysis calculated that delaying last year’s level 4 lockdown by even another three weeks could have resulted in 200 deaths and 12,000 infections, while cutting the prospects of elimination to just 7 per cent.
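The core of a stochastic branching process can be sketched in a few lines: each case infects a random number of others, so chance alone decides whether an outbreak dies out. Everything below – the reproduction number, seed cases, and run counts – is an invented illustration, not the team's calibrated model:

```python
import math
import random

def poisson(lam, rng):
    """Sample a Poisson-distributed count (Knuth's algorithm)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def outbreak_size(r_eff, seed_cases, rng, cap=2_000):
    """Total cases before extinction, capped so surviving outbreaks stop growing."""
    current = total = seed_cases
    while current and total < cap:
        # Each current case produces a random number of secondary cases,
        # with mean r_eff -- not every case infects the same number of people.
        current = sum(poisson(r_eff, rng) for _ in range(current))
        total += current
    return total

# Estimate the chance a small outbreak dies out on its own, even though
# the mean reproduction number is above 1.
rng = random.Random(42)
runs = [outbreak_size(r_eff=1.2, seed_cases=3, rng=rng) for _ in range(300)]
eliminated = sum(size < 2_000 for size in runs) / len(runs)
print(round(eliminated, 2))
```

Re-running the same scenario many times and counting how often it goes extinct is essentially how such models turn out probabilities like a "7 per cent chance of elimination".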
When we hear director-general of health Dr Ashley Bloomfield discuss Te Pūnaha Matatini predictions around the current Delta outbreak, it’s usually those produced from stochastic branching process models.
But the network contagion model was smarter still.
“It assumes that people can only infect someone else through one of the interaction contexts that it simulates,” explained Dr Dion O’Neale, who led the team that built it.
“You might have a certain infected individual. They can infect someone else, but only, say, if they’re connected to the same workplace or dwelling and if that person is not already infected.”
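The constraint O'Neale describes – infection can only pass between people who share an interaction context – can be sketched with a toy network. The people, contexts, and transmission probability below are invented for illustration:

```python
import random

# Toy network-contagion sketch: individuals are linked only through shared
# interaction contexts (dwellings, workplaces), and infection can travel
# only along those links. All names and rates are invented.

contexts = {
    "dwelling_1": ["Ana", "Ben"],
    "dwelling_2": ["Caleb", "Dana"],
    "workplace_A": ["Ben", "Caleb"],  # the only link between the two households
}

def neighbours(person):
    """People reachable from `person` through any shared context."""
    out = set()
    for members in contexts.values():
        if person in members:
            out.update(members)
    out.discard(person)
    return out

def simulate(seed_case, p_transmit=0.5, days=10, rng=None):
    rng = rng or random.Random(0)
    infected = {seed_case}
    for _ in range(days):
        for case in list(infected):
            for contact in neighbours(case):
                # Only an uninfected contact in a shared context can catch it.
                if contact not in infected and rng.random() < p_transmit:
                    infected.add(contact)
    return infected

print(sorted(simulate("Ana")))
```

Notice that the virus can only reach "dwelling_2" via the shared workplace – remove that context (say, in a lockdown) and the second household is protected.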
Take some of the other things that modellers could do with it.
Based on which suburb in New Zealand an infection was detected within, they could make data-informed assumptions about how many generations were likely to live in a household from that suburb, or what we could expect for the ages, ethnicities, and sexes of the people in a dwelling.
These were all factors that influenced infection risk and outcomes for Covid-19.
They could simulate where these people worked, how far they travelled each day, and where they might have spread the virus before contact tracers caught up with them.
And they could guide officials on why lockdowns at postcode or suburb level were unlikely to be effective, or learn whether a case from a given area might face health inequities or a higher risk of hospitalisation.
Critically, the near-individual resolution of this contagion model avoided the data pitfalls that came with assuming that anyone in Aotearoa – anywhere – carried the same individual profile.
Individuals in the model have different attributes that match up not with the “average New Zealander”, but with the range of the five million people in the country and the different interaction patterns they have.
“Importantly, this allows us to use a more equity-focused approach to understanding transmission spread and informing strategies to mitigate outbreaks, such as the vaccine roll-out,” said the University of Auckland’s Dr Steven Turnbull, who wrangled much of the data used.
The model’s main ingredient was an individual-level interaction network built from a Statistics NZ-run dataset covering most of New Zealand’s five million-strong population, called the Integrated Data Infrastructure, or IDI.
This brought together Census data and tax records that detailed what industries people worked in – something particularly useful for predicting movements of essential workers over lockdown – and much more.
Also in the mix was information from local school rolls, electronic payment records from MarketView, commuting data, and insights about people’s long-range movement drawn from a Ministry of Transport report that used cell phone location data.
O’Neale emphasised that all of this information was anonymised – meaning that people’s personal data wasn’t being accessed.
Although the model couldn’t delve down as far as street level, there was a wealth of information modellers could glean from small population blocks called “statistical area 2” units, or SA2s, provided by Stats NZ.
These areas tend to contain between 2000 and 4000 residents in cities, between 1000 and 3000 in towns and fewer than 1000 in rural places.
“In other words, while we can’t model from a single specific house somewhere, within an SA2 we still know there are this certain number of houses of particular types, they’ve got this many people living in them, and they tend to have this sort of household structure,” O’Neale said.
Fellow Te Pūnaha Matatini modeller Dr Emily Harvey said it was this SA2 data that also yielded that local information about how many people in a home went to work or school.
“All of the linking in the network is still probabilistic: so every time you regenerate a network, it’s just another potential representation of the broader patterns.”
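Harvey's point about probabilistic linking can be illustrated with a toy household generator: we know an area's distribution of household sizes, but not who actually lives together, so each regenerated network is just one plausible draw. The size distribution below is invented, not SA2 data:

```python
import random

# Toy sketch of probabilistic network generation: randomly partition an
# area's residents into households whose sizes follow a known distribution.
# The household-size weights here are invented for illustration.

def generate_households(people, size_weights, rng):
    """Randomly partition `people` into households drawn from `size_weights`."""
    sizes, weights = list(size_weights), list(size_weights.values())
    pool = people[:]          # work on a copy so the input list is untouched
    rng.shuffle(pool)
    households = []
    while pool:
        n = rng.choices(sizes, weights=weights)[0]
        households.append([pool.pop() for _ in range(min(n, len(pool)))])
    return households

rng = random.Random(7)
people = [f"person_{i}" for i in range(20)]
# Two regenerations: generally different households, same broad patterns.
nets = [
    generate_households(people, {1: 0.2, 2: 0.35, 3: 0.25, 4: 0.2}, rng)
    for _ in range(2)
]
```

Every draw places all twenty people into households, but which individuals end up together changes from one regeneration to the next – each network is "just another potential representation of the broader patterns".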
When an outbreak was in motion, the research team fed in performance data from sources like the National Contact Tracing Service to ensure they had accurate parameters for the length of time it took to get in touch with close contacts.
Epidemiological case data, too, provided information like the fraction of infected people most likely to be asymptomatic – and therefore tough to find through community testing.
All of this meant officials could receive a highly detailed picture, changing in near-real time, as they raced to contain spread.
“The bit that has really changed a lot in our model is being able to look at what potential interventions to control spread might look like,” O’Neale said.
“For instance, once we turn on alert level 4, it can tell us what happens when 70 per cent of workplaces close and all those workers stay at home, while the rest who are essential workers keep moving about.”
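One way to picture the intervention O'Neale describes is as the removal of workplace links from the interaction network. This is a hypothetical sketch – the closure rate, context names, and `essential` set are invented, not the model's actual mechanics:

```python
import random

# Toy sketch of a level 4-style intervention: closing a share of
# non-essential workplace contexts removes those links from the network,
# so the workers in them stay home. All values are illustrative.

def apply_lockdown(contexts, essential, closure_rate=0.7, rng=None):
    """Return the contexts that stay open when ~closure_rate of non-essential workplaces close."""
    rng = rng or random.Random(0)
    kept = {}
    for name, members in contexts.items():
        is_workplace = name.startswith("workplace")
        if is_workplace and name not in essential and rng.random() < closure_rate:
            continue  # workplace closed: these links disappear from the network
        kept[name] = members
    return kept

contexts = {f"workplace_{i}": [f"w{i}a", f"w{i}b"] for i in range(10)}
contexts["dwelling_1"] = ["w0a", "w1a"]  # dwellings always stay in the network
open_contexts = apply_lockdown(contexts, essential={"workplace_0"})
```

Dwellings and essential workplaces survive the cull, which is why household and essential-worker transmission dominate a simulated lockdown.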
The model was even able to include the effect of interventions in place at essential workplaces – as well as the individual impacts of testing and contact tracing.
“If someone in our model is identified as a case, then it tries to emulate what the contact tracing process looks like,” O’Neale said.
“For example, it might try every day to contact all of the close connections of a person who is a confirmed case, and with some probability it will fail if those people can’t be found or don’t answer their phone.
“But once those people have been contacted, they then have a very high probability of going and getting tested and isolating at home, and the model carries on.”
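The tracing process O'Neale describes – daily attempts to reach each close contact, with some probability of failure, and a high chance of isolation once reached – can be sketched directly. The probabilities below are invented placeholders, not the parameters fed in from the National Contact Tracing Service:

```python
import random

# Toy sketch of emulated contact tracing: each day, tracers try to reach
# every close contact of a confirmed case, failing with some probability
# (can't be found, doesn't answer the phone); once reached, a contact very
# likely gets tested and isolates. All probabilities are illustrative.

def trace_contacts(contacts, p_reach=0.7, p_isolate=0.9, days=5, rng=None):
    rng = rng or random.Random(0)
    unreached, isolating = set(contacts), set()
    for _ in range(days):
        for person in list(unreached):
            if rng.random() < p_reach:        # tracer got through today
                unreached.discard(person)
                if rng.random() < p_isolate:  # gets tested, stays home
                    isolating.add(person)
    return isolating, unreached

isolating, unreached = trace_contacts(["c1", "c2", "c3", "c4", "c5"])
```

In a full simulation, anyone placed in `isolating` would stop transmitting along their network links, which is how tracing performance shapes the modelled outbreak.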
When this outbreak was first discovered, through the positive test of a Devonport man on August 17, O’Neale said the network-contagion model could have been brought to bear to predict spread.
In the event, however, they were able to use a more nimble, separately built transmission-risk model called the Aotearoa Co-incidence Network.
Rather than trying to model the chains of infection in the early stages of an outbreak, when very little information was available, this tool looked at the relative transmission risk between regions, based on estimated patterns of interactions between people who lived in different areas.
“This one is able to tell us how many connections there are between any SA2 area and another, based on people from those two areas who interact through attending the same school or workplace,” he said.
“You can also take, say, the locations of the first 10 confirmed cases, select those regions on a map, and then look at all of those other places that have more than 20,000 potential connections with them.
“Even just by focusing on workplaces and schools, you can see how much of Auckland that covers. You also see pretty quickly that the connections aren’t just staying close to those initial seeding cases.
“If you start with Devonport, for instance, you might well end up with a lot of connections in some other region that’s far away from those initial cases, but which has a lot of links to there.”
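The co-incidence idea – two areas are connected whenever their residents attend the same workplace or school – reduces to counting shared contexts. The toy records below are invented; the real model draws on anonymised IDI data:

```python
from collections import Counter
from itertools import combinations

# Toy sketch of a co-incidence network: count connections between home
# areas created by people who share a workplace or school. The records
# below are invented for illustration.

# Each record: (home area, shared context such as a workplace or school).
records = [
    ("Devonport", "office_X"),
    ("Takapuna", "office_X"),
    ("Takapuna", "school_Y"),
    ("Henderson", "school_Y"),
    ("Henderson", "office_X"),
]

def area_connections(records):
    """Count pairwise connections between home areas via shared contexts."""
    by_context = {}
    for area, context in records:
        by_context.setdefault(context, []).append(area)
    links = Counter()
    for areas in by_context.values():
        for a, b in combinations(areas, 2):
            if a != b:
                links[tuple(sorted((a, b)))] += 1
    return links

links = area_connections(records)
print(links[("Devonport", "Takapuna")])  # → 1 (one shared workplace)
```

Scaled up to every SA2 in the country, the same counting exercise is what lets modellers select the areas of the first confirmed cases and read off everywhere else with, say, more than 20,000 potential connections to them.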
He said the myriad movements within Auckland – especially to workplaces in its ever-busy CBD – meant officials needed to look for spread anywhere within the city, and as far afield as Whangarei and north Waikato.
Interestingly, before the pandemic, O’Neale had been using this kind of complex modelling to try to predict how knowledge spread over employment networks.
“One day, we’d like to use it to get back to this sort of work. Emily already has some great plans about potential economic applications for it.”