Slides for my keynote at Complex Networks 2019

[Cover slide of the Lisbon keynote]

I gave a keynote talk at the Complex Networks 2019 conference in Lisbon—here are the slides, if you are interested.

And if temporal networks in general are your thing, here are some pointers:

Thou Shalt Not Smooth!

This is a very short post for those dabbling in the dark arts of network neuroscience. Everyone else, read this or this, they’re probably more fun anyway.

[Figure: functional brain networks. From Eur. J. Neurosci., doi: 10.1111/ejn.13717]

Q: When building ROI-level functional brain networks from fMRI data, should I apply spatial smoothing to the voxel time series?

A: No, you should not; what were you thinking? See above: smoothing messes up your degrees and links non-uniformly and in general has weird effects. In any case, you already average your voxel time series to get your ROIs, which is brutal enough. For more, see our recent (open-access) paper in the European Journal of Neuroscience, with @TuomasAlakorkko, @eglerean, @hpsaarimaki, and Onerva Korhonen.
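For concreteness, “spatial smoothing” here means convolving each fMRI volume with a 3D Gaussian kernel before any ROI-level analysis. Here is a minimal sketch of that operation with scipy, just to make explicit what is being advised against; the array shape, FWHM, and voxel size are illustrative assumptions, not settings from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_spatially(bold_4d, fwhm_mm=6.0, voxel_size_mm=3.0):
    """Gaussian spatial smoothing of a 4D BOLD array (nx, ny, nz, time).

    The FWHM and voxel size are illustrative; the FWHM is converted to the
    Gaussian sigma (in voxels) that scipy expects.
    """
    sigma_vox = (fwhm_mm / voxel_size_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    smoothed = np.empty_like(bold_4d)
    for t in range(bold_4d.shape[-1]):
        # Each voxel's value becomes a weighted average of its neighbours',
        # including neighbours that belong to a different ROI.
        smoothed[..., t] = gaussian_filter(bold_4d[..., t], sigma=sigma_vox)
    return smoothed
```

The mixing across ROI boundaries in that last step is exactly why smoothing distorts the resulting ROI-level degrees and link weights.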

Functional brain networks: the problem of node definition

Summary: Nodes in brain networks built from fMRI are usually defined using ROIs (Regions of Interest): each ROI node gets a time series that is the average of the BOLD time series of the ROI’s voxels, and links represent correlations between these node time series. Here, we show that this averaging of voxel time series is problematic.

The human brain is a complex network of neurons. The problem is that there are about 10^12 of them with ~10^5 outgoing connections each; mapping out a network of this scale is not possible. Therefore, one needs to zoom out and look at the coarse-grained picture. This coarse-grained picture can be anatomical – a map of the large-scale wiring diagram between parts of the brain – or functional, indicating which parts of the brain tend to become active together under a given task.

But how should this coarse-graining be done in practice? How should the nodes of a brain network be defined – what should brain nodes represent? In functional magnetic resonance imaging (fMRI), the highest level of detail is determined by the imaging technology. In an fMRI experiment, subjects are put inside a scanner that measures the dynamics of blood oxygenation in a 3D representation of the brain, divided into around 10,000 volume elements (voxels). Blood oxygenation is thought to correlate with the level of neural activity in the area. As each voxel contains about 5.5 million neurons, the network of voxels is significantly smaller than the network of neurons. However, it is still too large for many analysis tasks, and further coarse-graining is needed.

A typical approach in the fMRI community is to group voxels into larger brain regions that are, for historical reasons, known as Regions of Interest (ROIs). This can be done in many ways, and there are many pre-existing maps (“brain atlases”) that define ROIs; these maps are based on anatomy, histology, or data-driven methods. It is common to use ROIs as the nodes of a functional brain network. The first step in constructing the network is to assign to each ROI a time series that is the average of the time series of its voxels, measured in the imaging experiment. Then, to get the links, similarities between the ROI time series are calculated, usually with the Pearson correlation coefficient. The correlation between two ROIs becomes the weight of the link between them. Often, only the strongest correlations are retained, and weak links are pruned from the network.
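In code, the whole ROI-level pipeline described above fits in a few lines. A bare-bones numpy sketch, assuming the voxel time series have already been preprocessed and each voxel carries an ROI label (the variable names and the density threshold are mine, not from any particular study):

```python
import numpy as np

def roi_network(voxel_ts, roi_labels, density=0.1):
    """Thresholded ROI-level functional network.

    voxel_ts:   array (n_voxels, n_timepoints) of preprocessed BOLD signals.
    roi_labels: array (n_voxels,) assigning each voxel to an ROI.
    density:    fraction of the strongest links to keep.
    """
    rois = np.unique(roi_labels)
    # 1) Each ROI's time series is the average of its voxels' time series.
    roi_ts = np.array([voxel_ts[roi_labels == r].mean(axis=0) for r in rois])
    # 2) Link weights are Pearson correlations between ROI time series.
    weights = np.corrcoef(roi_ts)
    np.fill_diagonal(weights, 0.0)
    # 3) Keep only the strongest correlations; prune the weak links.
    upper = weights[np.triu_indices_from(weights, k=1)]
    cutoff = np.quantile(upper, 1.0 - density)
    return np.where(weights >= cutoff, weights, 0.0)
```

Step 1, the averaging of voxel time series into ROI time series, is the step whose validity the rest of this post is about.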

If the ROI approach is to work, the ROIs should be functionally homogeneous: their underlying voxels should behave approximately similarly. Otherwise, it is not clear what the brain network represents. Because this assumption hasn’t really been tested properly and because it is fundamentally important, we recently set out to explore whether it really holds.

We used resting-state data – data recorded with subjects who are just resting in the scanner, instructed to do nothing – to construct functional ROI-level networks based on some available atlases. We defined a measure of ROI consistency that has a value of one if all the voxels that make up the ROI have identical time series (making the ROI functionally homogeneous, which is good), and a value of zero if the voxels do not correlate at all (making that ROI a bad idea, in general).
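One way to implement a measure with exactly these properties is the mean pairwise Pearson correlation of the voxel time series within an ROI; here is a sketch of that idea (see the paper for the exact definition we used):

```python
import numpy as np

def roi_consistency(voxel_ts):
    """Mean pairwise Pearson correlation of one ROI's voxel time series.

    voxel_ts: array (n_voxels, n_timepoints) for the voxels of a single ROI.
    Returns ~1 for identical time series and ~0 for uncorrelated ones.
    """
    corr = np.corrcoef(voxel_ts)
    # Average over the off-diagonal entries only (voxel pairs, not self-pairs).
    return corr[np.triu_indices_from(corr, k=1)].mean()
```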

[Figure: distribution of consistency for ROIs as brain network nodes. From our paper in Network Neuroscience]

We found that consistency varied broadly across ROIs: while a few ROIs were quite consistent (values around 0.6), many were not (values around 0.2). All three of the commonly used brain atlases that we looked at contained many low-consistency ROIs.

From the viewpoint of network analysis, the existence of many low-consistency ROIs is a bit alarming. We also observed strong links between low-consistency ROIs – how should these be interpreted? These links may be an artefact, as they disappear if we look at the voxel-level signals. This means that the source of the problem is probably the averaging of voxel signals into ROI time series. While this averaging can reduce noise, it can also remove the signal: at one extreme, if one subpopulation of voxels goes up while another goes down, the average signal is flat. More generally, if an ROI consists of many functionally different subareas, their average signal is not necessarily representative of anything.
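The cancellation argument is easy to check numerically: average two anti-correlated subpopulations of voxels and the resulting “ROI time series” is essentially flat noise. A toy illustration (synthetic signals, not data from the paper):

```python
import numpy as np

t = np.linspace(0, 10 * np.pi, 500)
subpop_a = np.sin(t) + 0.1 * np.random.randn(500)   # one group of voxels
subpop_b = -np.sin(t) + 0.1 * np.random.randn(500)  # an anti-correlated group
roi_signal = (subpop_a + subpop_b) / 2.0             # the averaged "ROI" signal

# The oscillation present in both subpopulations is gone from the average:
print(np.std(subpop_a), np.std(roi_signal))  # ~0.71 vs ~0.07
```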

In conclusion, we would recommend being careful with functional brain networks constructed using ROIs; at least, it would be good to go back to the voxel-level data to verify that the obtained results are indeed meaningful.

For details, see our recent paper in Network Neuroscience.

This post was co-written by Onerva Korhonen, Enrico Glerean & Jari Saramäki.

[PS: The definition of brain network nodes is not the only complicated issue in the study of functional brain networks. Even before one has to worry about node selection, a possible distortion has already taken place: preprocessing of the measurement data. We’ll continue this story soon.]

Ant supercolonies: networks of nests

An ant (F. aquilonia)

Ant colonies are complex systems par excellence. It’s almost as if the colony is the organism, not the ant. Ants follow simple behavioural patterns, depositing pheromones as they go and following trails of scent laid down by others. Because of their collective actions, the colony seems to have a life of its own, sprouting its foraging trails towards food sources much like a slime mould grows its branches along the shortest path to food. The colony appears to have its own reproductive cycle too: queens and males mate during the nuptial flight, and the mated queens then land and found new colonies, like fertilized eggs giving rise to new organisms. Ordinary workers play no role in reproduction; they are outside the germline.

But some species of ants behave in ways that are even more complex: they form supercolonies, networks of interconnected nests with hundreds of reproductive queens. In these supercolonies, queens and workers move freely between nests without eliciting aggression; they cooperate across nest boundaries. Ant supercolonies are the largest cooperative units known in nature: for some ants, they can extend for hundreds of kilometres. They are also among the strangest: their existence is difficult to explain from the point of view of gene-centric evolutionary theory. This has to do with altruism: relatedness among nestmates can be low, and workers will end up helping unrelated individuals that carry a different set of genes. It may even be that ant supercolonies represent an evolutionary dead end.

Recently, I had a chance to have some fun with the genetics of ant supercolonies. My colleagues Eva Schultner and Heikki Helanterä, who work on ants, had collected samples from tens of F. aquilonia nests in southern Finland. As Eva and Heikki wanted to understand the genetic structure of F. aquilonia supercolonies, the sampled ants were genotyped to estimate genetic similarities between the nests (for technical details, scroll down). From a network-science point of view, the nests and their similarities form a weighted spatial network: nests are nodes, and pairwise genetic similarities are mapped to link weights. The resulting similarity network looks like this:

[Figure: the genetic similarity network of nests]

There are two supercolonies, one to the NE and one to the SW – the link weights inside the colonies are higher than between them, much like you would have for two communities in a social network. A closer look inside these two supercolonies (with methods more advanced than bare-bones network thresholding) revealed a faint hint of substructure, of subclusters inside the supercolonies. And because queens, workers, and pupae were genotyped separately and sampled at two time points, we could see that the genetic relationships between nests are not the same for queens as they are for workers, and not the same in spring as they are in summer, when workers have started migrating.
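A bare-bones version of this kind of analysis is easy to sketch: given a matrix of pairwise genetic similarities between nests, build a weighted graph and look for communities. The thresholding and the greedy modularity method below are illustrative choices made for this sketch, not the more advanced methods used in the paper:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def nest_network(similarity, nest_ids, threshold=0.0):
    """Weighted network of nests from a pairwise genetic-similarity matrix."""
    G = nx.Graph()
    G.add_nodes_from(nest_ids)
    for i in range(len(nest_ids)):
        for j in range(i + 1, len(nest_ids)):
            if similarity[i, j] > threshold:
                G.add_edge(nest_ids[i], nest_ids[j], weight=similarity[i, j])
    return G

# Candidate supercolonies then show up as communities of the weighted network:
# communities = greedy_modularity_communities(nest_network(sim, ids), weight="weight")
```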

This means that there may be an extra layer of complexity in the genetics of ant supercolonies – fine structure in time and space, and in terms of caste.

This work was published in Molecular Ecology last year. If you are interested in toying around with ant genetics, the data are available on Datadryad and my Python scripts can be found here: github.com/jsaramak/ants.

[Technical details: the ants were genotyped at 8 polymorphic microsatellite loci; microsatellites are non-coding bits of DNA where a short sequence is repeated some 5-50 times. They do not do anything and are under essentially no selection pressure, and therefore microsatellite alleles are great for just seeing how genetically close or distant two populations are. There are various measures for quantifying this: the simplest would be to see how often the same alleles appear in the two populations. In social-insect studies, the typical measure is the so-called relatedness (Queller & Goodnight 1989), and that is what we used in this work.]
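To make the “simplest measure” mentioned above concrete, here is a sketch that scores two nests by how much their allele frequencies overlap at each locus and averages over loci. This is only that naive allele-sharing idea, not the Queller & Goodnight relatedness estimator used in the actual analysis; the data layout (a dict of allele lists per locus) is an assumption made for this example:

```python
from collections import Counter

def allele_sharing(nest_a, nest_b):
    """Naive allele-sharing similarity between two nests.

    nest_a, nest_b: dicts mapping locus name -> list of alleles observed
    in that nest (alleles of all genotyped individuals pooled together).
    Returns the mean, over loci, of the overlap of allele frequencies:
    1 means identical frequencies, 0 means no shared alleles.
    """
    overlaps = []
    for locus in nest_a:
        freq_a, freq_b = Counter(nest_a[locus]), Counter(nest_b[locus])
        total_a, total_b = sum(freq_a.values()), sum(freq_b.values())
        shared = sum(min(freq_a[al] / total_a, freq_b[al] / total_b)
                     for al in set(freq_a) | set(freq_b))
        overlaps.append(shared)
    return sum(overlaps) / len(overlaps)
```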