Network theory - models III. - 9 mins
The configuration model
The goal is to generate a network of N nodes with a given degree distribution. We draw a degree for each node from the distribution and then set up the links between them.
The hard part is the node-connecting procedure, because it has to be randomized as well.
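The stub-matching step can be sketched as follows (a minimal illustration, not an optimized implementation; the function name is my own, and self-loops/multi-edges are left in, as the bare model allows them):

```python
import random

def configuration_model(degrees, rng=random.Random(42)):
    """Pair up half-edges ('stubs') uniformly at random.

    degrees[i] is the prescribed degree of node i. The result may
    contain self-loops and multi-edges; whether to keep or reject
    those is a matter of convention.
    """
    if sum(degrees) % 2:
        raise ValueError("the degree sum must be even")
    # one stub per half-edge: node i appears degrees[i] times
    stubs = [node for node, k in enumerate(degrees) for _ in range(k)]
    rng.shuffle(stubs)  # randomizing the pairing is the crucial step
    return [(stubs[i], stubs[i + 1]) for i in range(0, len(stubs), 2)]

# example: a fixed degree sequence with even sum
edges = configuration_model([3, 2, 2, 1, 1, 1])
```

By construction every node ends up with exactly its prescribed degree; only the identity of the neighbors is random.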
The hidden variable model
Real networks are not completely random, thus we are interested in how well they are described by their parameters.
Let us assume that ρ(h) is the hidden parameter distribution and r(h, h′) are the linking probabilities. For each of the N nodes we draw its hidden variable h from ρ(h), and then the connection probability of nodes i and j is r(hᵢ, hⱼ).
Given that a node has hidden variable h, what is the probability g(k|h) of it having degree k?
So the degree distribution P(k) can be calculated:

P(k) = Σ_h g(k|h) ρ(h)
Let g_{h′}(k|h) denote the probability that a node with hidden variable h has exactly k connections to the nodes with hidden variable h′. Then g(k|h) can be expressed by combining the contributions of the different h′ classes:
Since links toward the nodes of class h′ are established independently with probability r(h, h′), the number of such links is binomial:

g_{h′}(k|h) = C(N_{h′}, k) r(h, h′)^k (1 − r(h, h′))^{N_{h′} − k}

where N_{h′} = N ρ(h′) is the number of nodes with hidden variable h′. The generating function of this distribution is:

G_{h′}(z|h) = Σ_k g_{h′}(k|h) z^k
And using the binomial theorem this results in:

G_{h′}(z|h) = (1 − (1 − z) r(h, h′))^{N ρ(h′)}
Since the total degree is the sum of the per-class contributions, g(k|h) is the convolution of these distributions, so the generating functions multiply:

G(z|h) = Π_{h′} (1 − (1 − z) r(h, h′))^{N ρ(h′)}
Taking the logarithm of this and using that ln(1 + x) ≈ x for small x (the r(h, h′) are small):

ln G(z|h) ≈ −(1 − z) N Σ_{h′} ρ(h′) r(h, h′)
At z = 1 the generating function satisfies G(1|h) = 1, as required for a normalized distribution. Its derivative at z = 1 is what we need, because the average degree of a node with hidden variable h, previously denoted k̄(h), corresponds to this derivative:

k̄(h) = G′(1|h) = N Σ_{h′} ρ(h′) r(h, h′)
With this, G(z|h) simply takes the form:

G(z|h) = e^{−(1 − z) k̄(h)}
If we want to generate sparse networks then ⟨k⟩ should be independent of N! Thus we should define r(h, h′) = h h′ / (⟨h⟩ N).
With this choice the average degree of a node with hidden variable h is k̄(h) = h, and therefore the generating function is very simple:

G(z|h) = e^{−(1 − z) h}
From previous studies it is known that an exponential generating function corresponds to a Poisson point mass function.
We could move on with this, but I don't see the point of defining more joint probabilities only to prove nothing with them at the end.
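As a sanity check on k̄(h) = h, here is a minimal sketch (names are my own) of the sparse hidden-variable model with r(h, h′) = h h′ / (⟨h⟩ N):

```python
import random

def hidden_variable_graph(N, draw_h, rng=random.Random(0)):
    """Sparse hidden-variable model: draw h_i from the given sampler,
    then connect i and j with probability h_i * h_j / (<h> * N)."""
    h = [draw_h(rng) for _ in range(N)]
    mean_h = sum(h) / N
    edges = []
    for i in range(N):
        for j in range(i + 1, N):
            if rng.random() < min(1.0, h[i] * h[j] / (mean_h * N)):
                edges.append((i, j))
    return h, edges

# hidden variables drawn uniformly from {2, 4, 8}; the average degree
# of the generated network should come out close to <h>
h, edges = hidden_variable_graph(1000, lambda r: r.choice([2, 4, 8]))
```

Nodes with larger h end up with proportionally more links, which is exactly the "expected degree equals hidden variable" property derived above.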
Robustness and percolation
A system is called robust if it can still function properly even after many of its nodes are removed.
We described percolation previously in the Erdős-Rényi model, where at ⟨k⟩ = 1 a giant component forms in the network. We are now removing nodes, approaching the problem from the other end and checking when the system reaches the 'phase transition'.
We already calculated the expected component size ⟨s⟩ in the generating function formalism, which is model independent:

⟨s⟩ = 1 + G₀′(1) / (1 − G₁′(1))
Here G₁ is based on the excess degree distribution q_k: the probability that, arriving at one end of a randomly chosen link, we can proceed on k other links from the end node. We normalized it already:

q_k = (k + 1) p_{k+1} / ⟨k⟩

And we defined its generating function with this as well:

G₁(z) = Σ_k q_k z^k
Differentiating this we can acquire:

G₁′(1) = ⟨k²⟩/⟨k⟩ − 1
At the critical point G₁′(1) = 1, which results in ⟨k²⟩/⟨k⟩ = 2. A general condition for the existence of the giant component (the Molloy-Reed criterion) is:

κ ≡ ⟨k²⟩/⟨k⟩ > 2
For the Erdős-Rényi model ⟨k²⟩ = ⟨k⟩(⟨k⟩ + 1), so this gives back the ⟨k⟩ > 1 condition we derived. In scale-free networks with 2 < γ < 3 the second moment ⟨k²⟩ is diverging, thus they always have a giant component, independently of the further details of the degree distribution.
How can we use this to quantify network damage?
Randomly removing a fraction f of the nodes modifies the degree distribution p_k to a new distribution p′_{k′}; since the removal is random, this is easily computed.
A node with degree k will lose on average kf of its neighbors.
Therefore the new degree k′ of a node that originally had degree k follows a binomial distribution with parameters k and 1 − f.
Therefore the new degree distribution is:

p′_{k′} = Σ_{k = k′}^∞ p_k C(k, k′) (1 − f)^{k′} f^{k − k′}
The new average degree can be calculated as ⟨k′⟩ = (1 − f)⟨k⟩, and similarly the modified second moment is ⟨k′²⟩ = (1 − f)²⟨k²⟩ + f(1 − f)⟨k⟩. The calculations are a bit tricky: the double summations run over a triangle (k′ ≤ k), so their order has to be exchanged instead of calculating both of them to infinity independently.
Substituting this into the giant component formation inequality, one can acquire after some easy simplifications that the critical fraction of removed nodes is:

f_c = 1 − 1/(⟨k²⟩/⟨k⟩ − 1)
In the Erdős-Rényi model this gives f_c = 1 − 1/⟨k⟩: we have to remove nodes until the average degree drops to 1, and then the giant component disappears, so the system collapses. In scale-free networks ⟨k²⟩/⟨k⟩ diverges, therefore f_c → 1 and they are extremely robust against random node removal. Real world networks behave almost like this, thus making them not so vulnerable to random node removal but very much so to targeted attacks: killing the hubs results in failure.
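The contrast between the two cases can be made concrete by plugging degree distributions into f_c = 1 − 1/(⟨k²⟩/⟨k⟩ − 1) (a sketch; the truncated power-law normalization is my own choice):

```python
import math

def critical_fraction(pk):
    """Molloy-Reed estimate f_c = 1 - 1/(kappa - 1), kappa = <k^2>/<k>,
    for a degree distribution given as {degree: probability}."""
    k1 = sum(k * p for k, p in pk.items())
    k2 = sum(k * k * p for k, p in pk.items())
    return 1 - 1 / (k2 / k1 - 1)

# Erdős-Rényi-like: Poisson with <k> = 4  ->  f_c = 1 - 1/<k> = 0.75
poisson = {k: math.exp(-4) * 4**k / math.factorial(k) for k in range(40)}

# scale-free: p_k ~ k^-2.5 truncated at k_max = 10_000  ->  f_c close to 1
Z = sum(k**-2.5 for k in range(1, 10_001))
scale_free = {k: k**-2.5 / Z for k in range(1, 10_001)}
```

Raising the power-law cutoff pushes ⟨k²⟩ and hence f_c even higher, which is the numerical face of the robustness of scale-free networks.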
Spreading on networks - epidemic models
Susceptible - Infected - Removed
Some diseases, like the flu, are modeled as SIS; others, like the plague, as SIR.
The model assumptions for epidemics are the following:
- homogeneous network: everyone has more or less the same number of connections, ⟨k⟩
- every node can be linked to an infected node with the same probability
- spreading rate β, probability of recovery μ; at t = 0 a fraction i₀ of the nodes is infected
A differential equation for the infected fraction i(t) can be formulated:

di/dt = β ⟨k⟩ i (1 − i) − μ i

The number of links pointing to infected nodes at each timestep is ⟨k⟩ i, the disease can spread to the healthy fraction 1 − i of the nodes with probability β, and with probability μ some of the infected can heal.
Solving the equation for small i (where the quadratic term is negligible), and fixing the constant with i(0) = i₀, results in:

i(t) ≈ i₀ e^{(β⟨k⟩ − μ) t}

If β⟨k⟩ > μ, the fraction of infected nodes grows: exponential outbreak. Otherwise there is exponential decay.
The threshold is at β⟨k⟩ = μ. Assuming μ = 1, this becomes very intuitive: an infected node reaches each of its ⟨k⟩ neighbors with the same probability β at each timestep, and it must infect at least one of them on average before recovering.
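The homogeneous mean-field equation can be integrated directly to see the two regimes (a minimal Euler sketch; the parameter values are illustrative):

```python
def sis_trajectory(beta, mu, k_mean, i0=0.01, dt=0.001, t_max=30.0):
    """Euler integration of di/dt = beta*<k>*i*(1-i) - mu*i,
    the homogeneous mean-field SIS equation."""
    i = i0
    for _ in range(int(t_max / dt)):
        i += dt * (beta * k_mean * i * (1 - i) - mu * i)
    return i

# above threshold (beta*<k> = 2 > mu): endemic state at 1 - mu/(beta*<k>) = 0.5
endemic = sis_trajectory(beta=0.5, mu=1.0, k_mean=4)
# below threshold (beta*<k> = 0.4 < mu): the infection dies out
extinct = sis_trajectory(beta=0.1, mu=1.0, k_mean=4)
```

The stationary endemic level 1 − μ/(β⟨k⟩) follows from setting di/dt = 0 in the full (nonlinear) equation.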
Now drop the homogeneity assumption: let Θ denote the probability that a link points to an infected node, and let i_k denote the infected fraction of nodes with k connections. Basically the same differential equation can be deduced as before for this fraction:

di_k/dt = β k (1 − i_k) Θ − μ i_k
Given that we reach a stationary phase (di_k/dt = 0), the equation simplifies and we get:

i_k = β k Θ / (μ + β k Θ)
We know that the probability of a randomly chosen link having a degree-k node at one end is k p_k / ⟨k⟩.
Substituting this back into Θ = Σ_k (k p_k / ⟨k⟩) i_k results in a self-consistent equation for Θ.
This can be solved graphically, and from the condition of having a non-trivial solution (the derivative with respect to Θ at Θ = 0 must exceed 1) we get, with λ = β/μ:

λ_c = ⟨k⟩/⟨k²⟩
For scale-free networks with 2 < γ ≤ 3 this results in a zero epidemic threshold, since ⟨k²⟩ → ∞.
The consequence of λ_c = 0 is that no matter how weak the infection is, it will prevail.
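The vanishing threshold can be seen numerically by computing λ_c = ⟨k⟩/⟨k²⟩ for power-law degree distributions with a growing cutoff (a sketch; the cutoff values are arbitrary):

```python
def epidemic_threshold(gamma, k_max):
    """lambda_c = <k>/<k^2> for p_k ~ k^-gamma, k = 1..k_max."""
    Z = sum(k**-gamma for k in range(1, k_max + 1))
    k1 = sum(k**(1.0 - gamma) for k in range(1, k_max + 1)) / Z  # <k>
    k2 = sum(k**(2.0 - gamma) for k in range(1, k_max + 1)) / Z  # <k^2>
    return k1 / k2

# for gamma = 2.5 the threshold shrinks as the largest hub grows
thresholds = [epidemic_threshold(2.5, kmax) for kmax in (10, 100, 1000, 10_000)]
```

In an infinite scale-free network the cutoff (the largest hub) grows without bound, so λ_c tends to zero.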
Network motifs and communities
A motif is a small, connected subgraph in which the configuration of the links is predefined. Motifs have a key role in information processing and control.
Communities, groups, clusters and modules are more densely interconnected parts of the network, with no widely accepted definition.
There are several community finding methods:
hierarchical clustering : builds a dendrogram, finds groups/clusters/communities based on similarity - it is important where to make the cut based on similarity
Girvan-Newman method : 1) calculate the betweenness of the links 2) delete the ones with the biggest betweenness, and if disconnected parts emerge, update the dendrogram; then re-iterate
How to measure the quality of the communities?
Comparing them to a randomized version of the network seems to be a good idea.
- high quality: more links than expected
- low quality: fewer links than expected
We should compare it to the configuration model. This way the degree distribution is kept fixed.
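This comparison against the configuration model is exactly what the modularity Q measures: the fraction of within-community links minus its expectation when the degrees are kept but the links are randomized. A minimal sketch (function names and the toy graph are my own):

```python
from collections import defaultdict

def modularity(edges, communities):
    """Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) * delta(c_i, c_j):
    within-community link fraction minus the configuration-model
    expectation, which depends only on the community degree sums."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    m = len(edges)
    # observed fraction of links that fall inside a community
    e_in = sum(1 for u, v in edges if communities[u] == communities[v]) / m
    # expected fraction when links are rewired but degrees are kept
    comm_degree = defaultdict(int)
    for node, k in deg.items():
        comm_degree[communities[node]] += k
    expected = sum((s / (2 * m)) ** 2 for s in comm_degree.values())
    return e_in - expected

# two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
good = modularity(edges, {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1})
```

Splitting the toy graph at the bridge gives a positive Q, while lumping everything into one community gives Q = 0, matching the "more links than expected / as many as expected" intuition above.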