# Network theory - models II.


## The Erdős-Rényi model

The Erdős-Rényi model generates a network with $N$ nodes and distributes $M$ links between them uniformly at random; formalised otherwise, every pair of nodes is linked independently with probability $p$.

It was already discussed that two nodes are linked with probability $p$ and not linked with probability $1-p$, resulting in a binomial degree distribution

$$P(k) = \binom{N-1}{k} p^k (1-p)^{N-1-k}$$

and, in the $N \to \infty$ limit, in a Poisson degree distribution

$$P(k) = \frac{\langle k \rangle^k}{k!} e^{-\langle k \rangle}$$

where $\langle k \rangle = p(N-1) \approx pN$.

The variance can be calculated as well. It can be tiresome to go through, but the result is:

$$\sigma^2 = \langle k^2 \rangle - \langle k \rangle^2 = \langle k \rangle$$

For this first see:

$$\langle k^2 \rangle = \langle k(k-1) \rangle + \langle k \rangle$$

Reparametrizing the sum for $\langle k(k-1) \rangle$ (shifting the summation index by two):

$$\langle k(k-1) \rangle = \sum_{k=2}^{\infty} k(k-1) \frac{\langle k \rangle^k}{k!} e^{-\langle k \rangle} = \langle k \rangle^2 \sum_{k=2}^{\infty} \frac{\langle k \rangle^{k-2}}{(k-2)!} e^{-\langle k \rangle} = \langle k \rangle^2$$

Thus the variance is:

$$\sigma^2 = \langle k \rangle^2 + \langle k \rangle - \langle k \rangle^2 = \langle k \rangle$$

In the $N \to \infty$ limit $\langle k \rangle$ is kept constant, so $\sigma = \sqrt{\langle k \rangle}$ stays finite and the relative spread of the degrees becomes negligible. A large Erdős-Rényi graph is extremely homogeneous, with no outliers.

Also, all the nodes have approximately the same clustering coefficient, so $C_i \approx \langle C \rangle$. It can be interpreted as the probability of two neighbors being connected, which corresponds to $\langle C \rangle = p = \langle k \rangle / N$.
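A quick numerical sketch of these claims (the sizes, seed and parameter values below are illustrative choices, not from the notes): generate $G(N, p)$ and check that $\langle k \rangle \approx pN$, $\sigma^2 \approx \langle k \rangle$ and $\langle C \rangle \approx p$.

```python
import random

random.seed(1)
N, p = 2000, 0.005          # <k> = p*(N-1) ~ 10

# Build G(N, p): link every pair independently with probability p.
adj = [set() for _ in range(N)]
for i in range(N):
    for j in range(i + 1, N):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)

degrees = [len(a) for a in adj]
mean_k = sum(degrees) / N
var_k = sum((k - mean_k) ** 2 for k in degrees) / N

def clustering(i):
    """Fraction of neighbor pairs of node i that are themselves linked."""
    nbrs = list(adj[i])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in range(k) for b in range(a + 1, k)
                if nbrs[b] in adj[nbrs[a]])
    return 2 * links / (k * (k - 1))

mean_C = sum(clustering(i) for i in range(N)) / N
print(mean_k, var_k, mean_C)   # expect roughly 10, 10 and 0.005
```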

Giant component in the Erdős-Rényi model:

- Given that the giant component has a relative size $S$, the fraction of the nodes that are not in the giant component is $u = 1 - S$.
- Taking one node $i$ not in the giant component, we can say the following about its connection to any other node $j$:
  - not connected to $j$ directly, with probability $1 - p$
  - connected to $j$, but $j$ is not in the giant component, with probability $pu$
- Therefore we found that $i$ is not in the giant component exactly if, for each of the other $N-1$ nodes, it is either not connected to that node or connected to one outside the giant component, which happens with probability:

$$u = (1 - p + pu)^{N-1}$$

- which should correspond to $u$, the probability of not being in the giant component

From this equation, taking the logarithm and the large-$N$ limit, we get the self-consistency equation $1 - S = e^{-\langle k \rangle S}$; expanding it around $S = 0$ shows that $\langle k \rangle_c = 1$ is the critical average node degree above which a giant component forms.
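The self-consistency equation can be solved numerically; a minimal fixed-point iteration (the starting guess and iteration count are arbitrary choices) shows $S = 0$ below $\langle k \rangle = 1$ and a growing giant component above it:

```python
import math

def giant_component_size(k_avg, iters=10000):
    """Iterate S -> 1 - exp(-<k> S) to its fixed point."""
    S = 0.5                        # start from a nonzero guess
    for _ in range(iters):
        S = 1.0 - math.exp(-k_avg * S)
    return S

for k_avg in (0.5, 1.0, 1.5, 2.0):
    print(k_avg, round(giant_component_size(k_avg), 4))
```

Below the critical point the iteration collapses to $S = 0$; at $\langle k \rangle = 1.5$ it converges to $S \approx 0.58$, at $\langle k \rangle = 2$ to $S \approx 0.80$.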

### Generating function

We have the degree distribution $p_k$ and we assume no degree correlations (we are in a random network); we approach the problem in the regime where the network can be assumed sparse and locally tree-like.

The generating function:

$$G(x) = \sum_{k=0}^{\infty} p_k x^k$$

Where:

$$G(1) = \sum_{k=0}^{\infty} p_k = 1$$

And the $m$-th moment is:

$$\langle k^m \rangle = \sum_k k^m p_k = \left[ \left( x \frac{d}{dx} \right)^m G(x) \right]_{x=1}$$

For independent variables the generating function of the sum is the product of the individual generating functions:

$$G_{k_1 + k_2}(x) = G_{k_1}(x) \, G_{k_2}(x)$$
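The moment identity can be checked numerically; a sketch for a truncated Poisson distribution with $\langle k \rangle = 4$ (the truncation point and finite-difference step are illustrative choices):

```python
import math

k_avg, kmax = 4.0, 60
pk = [math.exp(-k_avg) * k_avg ** k / math.factorial(k) for k in range(kmax)]

def G(x):
    """Generating function of the truncated Poisson distribution."""
    return sum(p * x ** k for k, p in enumerate(pk))

h = 1e-6
G1 = (G(1 + h) - G(1 - h)) / (2 * h)            # G'(1) = <k>
G2 = (G(1 + h) - 2 * G(1) + G(1 - h)) / h ** 2  # G''(1) = <k(k-1)>
second_moment = G2 + G1                          # <k^2> = <k(k-1)> + <k>
print(G1, second_moment)   # for Poisson: <k> = 4, <k^2> = <k>^2 + <k> = 20
```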

$p_k$ is the degree distribution function, $\pi_s$ is the distribution function of a randomly chosen node being in a component of size $s$, while $\rho_s$ is the distribution function of a link being connected to a component of size $s$ at one end.

$\rho_s^{(k)}$ should denote the probability distribution of $k$ randomly chosen links whose components at one end sum up to $s$ in size.

*The idea behind creating these PDFs is that a component can be separated in such a way.*

- Choosing a node with $k$ links, the probability of the $k$ links to lead to components summing up to size $s-1$ is $\rho_{s-1}^{(k)}$. The node together with its neighbors' components therefore sums up to $s$ in size, and that distribution is $\pi_s$:

$$\pi_s = \sum_k p_k \, \rho_{s-1}^{(k)}$$

Taking the generating function of both sides:

$$H_0(x) = \sum_s \pi_s x^s = x \sum_k p_k \sum_s \rho_{s-1}^{(k)} x^{s-1}$$

Where:

$$H_0(x) = \sum_s \pi_s x^s, \qquad H_1(x) = \sum_s \rho_s x^s$$

Since $\rho_s^{(k)}$ was defined as $k$ independently chosen links summing to $s$ in component size, its generating function is $[H_1(x)]^k$; substituting this into $H_0$:

$$H_0(x) = x \sum_k p_k [H_1(x)]^k = x \, G_0(H_1(x))$$

Where $G_0$ is the degree distribution's generating function. From this we can get the mean component size in a random graph:

$$\langle s \rangle = H_0'(1) = \left[ G_0(H_1(x)) + x \, G_0'(H_1(x)) \, H_1'(x) \right]_{x=1} = 1 + G_0'(1) \, H_1'(1)$$

Where $G_0(1) = 1$ and $H_1(1) = 1$ were used, but $H_1'(1)$ is still unknown; for calculating it we should see:

$$\rho_s = \sum_k q_k \, \rho_{s-1}^{(k)}$$

Where $q_k$ is the probability distribution of a randomly chosen link being able to proceed in $k$ further directions at one end.

Taking the generating function on both sides and making the same assumptions as before:

$$H_1(x) = x \sum_k q_k [H_1(x)]^k = x \, G_1(H_1(x))$$

And therefore the derivative at $x = 1$ can be translated into:

$$H_1'(1) = 1 + G_1'(1) \, H_1'(1) \quad \Rightarrow \quad H_1'(1) = \frac{1}{1 - G_1'(1)}$$

So we find that the critical point should be at $G_1'(1) = 1$, since there:

$$\langle s \rangle = 1 + \frac{G_0'(1)}{1 - G_1'(1)} \to \infty$$

We can actually calculate $q_k$ by simply saying that:

There are $N p_{k+1}$ nodes with degree $k+1$, therefore there are $(k+1) N p_{k+1}$ link ends with a degree-$(k+1)$ node attached. $q_k$ denotes the probability of a link having a node at one end with $k$ other connections, so basically the probability of having a degree-$(k+1)$ node at one end:

$$q_k = \frac{(k+1) p_{k+1}}{\sum_j j p_j} = \frac{(k+1) p_{k+1}}{\langle k \rangle}$$

And the generating function of this is simply:

$$G_1(x) = \sum_k q_k x^k = \frac{G_0'(x)}{\langle k \rangle}$$

We are not done yet. Introducing $q_{k_2}^{(k)}$ as the probability that $k$ randomly chosen links at one end sum up to $k_2$ further connections, the distribution of the number of second neighbors can be calculated:

$$P(k_2) = \sum_k p_k \, q_{k_2}^{(k)}$$

Taking the generating function of both sides:

$$G^{(2)}(x) = \sum_k p_k [G_1(x)]^k = G_0(G_1(x))$$

Finally:

$$z_2 = \left[ \frac{d}{dx} G_0(G_1(x)) \right]_{x=1} = G_0'(1) \, G_1'(1) = \langle k^2 \rangle - \langle k \rangle$$

After all we arrive at the concluding formula regarding the giant component: the critical condition $G_1'(1) = 1$ translates to

$$G_1'(1) = \frac{\langle k^2 \rangle - \langle k \rangle}{\langle k \rangle} = 1$$

Therefore if $\langle k^2 \rangle > 2 \langle k \rangle$ a giant component forms; if they are equal the system is critical; otherwise only smaller clusters form.

It was shown that a giant component exists exactly when (the Molloy-Reed criterion):

$$\langle k^2 \rangle > 2 \langle k \rangle$$

*For scale-free networks with exponent $\gamma \le 3$, $\langle k^2 \rangle$ is divergent, therefore they always contain a giant component!*
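The criterion is easy to test on concrete degree distributions. A sketch (both parameter choices are illustrative): a subcritical Poisson distribution fails the criterion, while a power law with $\gamma = 2.5$ passes it easily, because its $\langle k^2 \rangle$ keeps growing with the cutoff:

```python
import math

def moments(pk):
    """First two moments of a (possibly unnormalised) degree distribution."""
    norm = sum(pk.values())
    k1 = sum(k * p for k, p in pk.items()) / norm
    k2 = sum(k * k * p for k, p in pk.items()) / norm
    return k1, k2

# Poisson with <k> = 0.8: below the Erdos-Renyi critical point.
poisson = {k: math.exp(-0.8) * 0.8 ** k / math.factorial(k) for k in range(40)}
# Power law p_k ~ k^(-2.5), truncated at k = 10^4.
power = {k: k ** -2.5 for k in range(1, 10001)}

k1_p, k2_p = moments(poisson)
k1_w, k2_w = moments(power)
print("poisson:", k2_p > 2 * k1_p)     # False: no giant component
print("power law:", k2_w > 2 * k1_w)   # True: giant component
```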

The Erdős-Rényi model does not compare well to real networks: it does not show scale-free behaviour ( *no power-law degree distribution* ), doesn't have a large clustering coefficient ( $\langle C \rangle = \langle k \rangle / N \to 0$ ) and only shows the small-world effect ( small $\langle d \rangle \sim \ln N / \ln \langle k \rangle$ ). **It is an important reference system, but not more.**

## The Watts-Strogatz model

It tries to make the small-world property and local clustering co-exist in a simple random graph by some slight modifications.

### The model

Start from a regular ring of $N$ nodes in which each node is linked to its $K$ nearest neighbors ($K/2$ on each side), and then rewire each link randomly with probability $p$.

When $p = 0$, for large $N$:

$$\ell(0) \approx \frac{N}{2K} \gg 1$$

And:

$$C(0) = \frac{3(K-2)}{4(K-1)} \approx \frac{3}{4}$$

Also, when $p = 1$ we get back the totally random Erdős-Rényi-like graph:

$$\ell(1) \approx \frac{\ln N}{\ln K}, \qquad C(1) \approx \frac{K}{N}$$

There is a range of $p$ values where $\ell$ is already relatively low while $C$ is still very high, meaning **high clustering and small-world properties together**.

```
Takeaway:
It takes a lot of randomness to ruin the clustering, but a very small amount to overcome locality.
```

Number of random links in the system: $pNK/2$. What happens when $pNK/2 \ll 1$? There will not be enough random links to change the graph, thus $\ell \sim N$. In the case when $pNK/2 \gg 1$ the system becomes random and $\ell \sim \ln N$. Approximating the transition at $pNK/2 \approx 1$, i.e. $p^* \sim (NK)^{-1}$!

It can be shown that $\ell$ obeys a scaling function

$$\ell(N, p) = \frac{N}{K} f(pKN)$$

where:

$$f(u) \approx \begin{cases} \text{const} & u \ll 1 \\ \dfrac{\ln u}{u} & u \gg 1 \end{cases}$$

From numerical studies $f$ interpolates smoothly between the two regimes, therefore in the random-dominated regime:

$$\ell(N, p) \sim \frac{\ln(pKN)}{pK} \quad \text{for } pKN \gg 1$$
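A small simulation illustrates the two regimes (the values of $N$, $K$, $p$ and the seed below are illustrative; the graph is kept small so the all-pairs BFS stays cheap):

```python
import random
from collections import deque

random.seed(2)

def watts_strogatz(N, K, p):
    """Ring of N nodes, each linked to its K nearest neighbors (K/2 per
    side); every link is then rewired with probability p."""
    adj = [set() for _ in range(N)]
    edges = [(i, (i + d) % N) for i in range(N) for d in range(1, K // 2 + 1)]
    for i, j in edges:
        if random.random() < p:
            j = random.randrange(N)
            while j == i or j in adj[i]:   # avoid self-loops and doubled links
                j = random.randrange(N)
        adj[i].add(j)
        adj[j].add(i)
    return adj

def avg_clustering(adj):
    total = 0.0
    for nbrs in map(list, adj):
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in range(k) for b in range(a + 1, k)
                    if nbrs[b] in adj[nbrs[a]])
        total += 2 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    """Mean BFS distance over all reachable pairs."""
    N, total, pairs = len(adj), 0, 0
    for s in range(N):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

for p in (0.0, 0.01, 0.1, 1.0):
    g = watts_strogatz(400, 6, p)
    print(p, round(avg_clustering(g), 3), round(avg_path_length(g), 2))
```

Already at $p = 0.01$ the path length drops sharply while the clustering barely moves, matching the takeaway above.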

## The Barabási-Albert model

How to generate scale-free random graphs in a simple way was still an open problem until the Barabási-Albert model.

Motivated by real networks, the network size is *not static*: **the system grows at each time-step**.

A new node can be connected at random, or it can be connected to high-degree nodes with a larger probability. The latter is very logical based on real networks; it is called **preferential attachment**.

```
Generating procedure:
Add one node with m links at each time-step (the initial core should contain at least m nodes) and choose node i as a link target with probability proportional to its degree.
```

For a large time-step $t$:

- $N(t) \approx t$ and the number of links is $\approx mt$
- the probability of choosing node $i$ is:

$$\Pi(k_i) = \frac{k_i}{\sum_j k_j}$$

- with this probability $m$ new links could be added to node $i$, therefore the change in its degree can be approximated by:

$$\Delta k_i \approx m \, \Pi(k_i) = \frac{m k_i}{\sum_j k_j}$$

From which the differential equation:

$$\frac{\partial k_i}{\partial t} = \frac{m k_i}{\sum_j k_j}$$

For large $t$ the sum of the degrees is twice the number of links, $\sum_j k_j = 2mt$, therefore the differential equation is very simple and can be analytically solved:

$$\frac{\partial k_i}{\partial t} = \frac{k_i}{2t} \quad \Rightarrow \quad k_i(t) = C \, t^{1/2}$$

Given that at time-step $t_i$ the degree of node $i$ is $k_i(t_i) = m$, the constant can be eliminated as $C = m / \sqrt{t_i}$, so:

$$k_i(t) = m \sqrt{\frac{t}{t_i}}$$
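The square-root law can be checked by a growth simulation. The sketch below (sizes, seed and the tracked time window are illustrative) grows a network with $m = 2$ and compares the average degree of nodes born around $t_i \approx 50\ldots100$ with the prediction $k_i(t) = m\sqrt{t/t_i}$:

```python
import random

random.seed(3)

def barabasi_albert(T, m):
    """Degrees after T growth steps. Preferential attachment is implemented
    by sampling from a list holding each node once per link end, so a node
    is drawn with probability proportional to its degree."""
    deg = [m] * (m + 1)                  # seed: complete graph on m+1 nodes
    ends = [i for i in range(m + 1) for _ in range(m)]
    for new in range(m + 1, m + 1 + T):
        targets = set()
        while len(targets) < m:          # m distinct targets
            targets.add(random.choice(ends))
        deg.append(m)
        for t in targets:
            deg[t] += 1
            ends.extend((new, t))
    return deg

m, T = 2, 20000
deg = barabasi_albert(T, m)
early = deg[m + 1 + 50 : m + 1 + 100]      # nodes born at t_i ~ 50..100
late = deg[m + 1 + 10000 : m + 1 + 10050]  # nodes born at t_i ~ 10000
observed = sum(early) / len(early)
predicted = sum(m * (T / ti) ** 0.5 for ti in range(50, 100)) / 50
print(round(observed, 1), round(predicted, 1), sum(late) / len(late))
```

Individual degrees fluctuate strongly, but averaged over a window the early nodes track the prediction, and they clearly dominate the late arrivals.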

We want to calculate the degree distribution function and expect it to be scale-free. The probability of finding a node with degree at least $k$ is:

$$P(k_i(t) > k) = P\left( t_i < \frac{m^2 t}{k^2} \right)$$

Where:

$$k_i(t) = m \sqrt{\frac{t}{t_i}} > k \quad \Leftrightarrow \quad t_i < \frac{m^2 t}{k^2}$$

Since the introduction times $t_i$ are uniformly distributed over $[0, t]$, the relative length of the time interval tells us that:

$$P(k_i(t) > k) = \frac{m^2 t}{k^2} \cdot \frac{1}{t} = \frac{m^2}{k^2}$$

From the cumulative distribution function we can get the degree distribution by simply differentiating, not considering that the problem is discrete. Therefore:

$$p(k) = -\frac{\partial}{\partial k} P(k_i(t) > k) = \frac{2 m^2}{k^3}$$

Which is a scale-free distribution with $\gamma = 3$.
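This exponent can be recovered from a simulation. A sketch (sizes, seed and the fitted $k$-range are illustrative choices) estimates $\gamma$ from the cumulative distribution $P(K \ge k) \sim k^{1-\gamma}$, which is much less noisy than the raw histogram:

```python
import math
import random

random.seed(4)

def barabasi_albert(T, m):
    """Same growth process as in the model: m links per new node, targets
    drawn proportionally to degree via a link-end list."""
    deg = [m] * (m + 1)
    ends = [i for i in range(m + 1) for _ in range(m)]
    for new in range(m + 1, m + 1 + T):
        targets = set()
        while len(targets) < m:
            targets.add(random.choice(ends))
        deg.append(m)
        for t in targets:
            deg[t] += 1
            ends.extend((new, t))
    return deg

deg = barabasi_albert(30000, 2)
N = len(deg)
ks = range(4, 41)
xs = [math.log(k) for k in ks]
ys = [math.log(sum(1 for d in deg if d >= k) / N) for k in ks]

# Least-squares slope of log P(K >= k) vs log k; gamma = 1 - slope.
n = len(xs)
xm, ym = sum(xs) / n, sum(ys) / n
slope = sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) \
        / sum((x - xm) ** 2 for x in xs)
gamma = 1 - slope
print(round(gamma, 2))   # should come out close to 3
```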

If links are connected with uniform probability instead of preferential attachment, $\Pi = 1/N(t) \approx 1/t$:

$$\frac{\partial k_i}{\partial t} = \frac{m}{t} \quad \Rightarrow \quad k_i(t) = m \left( 1 + \ln \frac{t}{t_i} \right)$$

Making the same chain of calculations, this results in an exponential degree distribution $p(k) \sim e^{-k/m}$ that is not scale-free; therefore preferential attachment is truly necessary.

To calculate the clustering coefficient in the Barabási-Albert model we should ask the question of what the probability is that node $i$, introduced at $t_i$, is connected to node $j$, introduced at $t_j$:

$$P(i \leftrightarrow j) = m \, \Pi\big(k_j(t_i)\big) = \frac{m \, k_j(t_i)}{2 m t_i} = \frac{m}{2 \sqrt{t_i t_j}}$$

What is the expected number of links between the neighbors of a node $l$, introduced at time-step $t_l$, at the end of the generating process ($t = N$)?

$$n_l = \frac{1}{2} \int_1^N \int_1^N P(i \leftrightarrow l) \, P(l \leftrightarrow j) \, P(i \leftrightarrow j) \, dt_i \, dt_j = \frac{m^3}{16} \frac{(\ln N)^2}{t_l}$$

With words: if $i$ is linked to $l$ and $j$ is also linked to $l$, the probability of $i$ being linked to $j$ is $P(i \leftrightarrow j)$, and this holds for all pairs of links of $l$, hence the summation; taking the continuous limit and actually integrating in the process, one can acquire the result above.

The number of possible links between the neighbors of $l$ is $k_l(k_l - 1)/2 \approx m^2 N / (2 t_l)$. The clustering coefficient is the number of realised links between the neighbors of $l$ divided by all the possible ones:

$$C_l = \frac{n_l}{k_l (k_l - 1)/2} \approx \frac{m}{8} \frac{(\ln N)^2}{N}$$

We can see that it does not depend on $t_l$ at all. This is decaying slower with $N$ than in the Erdős-Rényi model ($C \sim N^{-1}$); however, in real networks there is no decay at all.

Preferential attachment can be measured by simply observing the system for a $\Delta t$ amount of time and recording which nodes the new links attach to as a function of their current degree; this estimates $\Pi(k)$.

Taking the integral of $\Pi(k)$ over all degrees in order to reduce noise in real networks:

$$\kappa(k) = \int_0^k \Pi(k') \, dk'$$

If $\kappa(k)$ is proportional to $k$ then there is no preferential attachment; if it is proportional to $k^2$ then there is.

The problem with the Barabási-Albert model so far is that in real networks $\gamma$ is typically between 2 and 3, while here it is exactly 3. It also results in the oldest nodes having the most connections, which is not true for real networks either.

We can introduce a fitness $A$, an additive parameter that makes $\gamma$ tunable by modifying the preferential attachment probability:

$$\Pi(k_i) \propto k_i + A$$

Going through the same process and taking the large-$t$ limit:

$$\frac{\partial k_i}{\partial t} = \frac{m (k_i + A)}{(2m + A) t}$$

After all, the differential equation results in:

$$k_i(t) = (m + A) \left( \frac{t}{t_i} \right)^{\frac{m}{2m + A}} - A, \qquad \gamma = 3 + \frac{A}{m}$$

This can also be done with not an additive but a multiplicative fitness parameter $\eta_i$, which is drawn from a distribution $\rho(\eta)$, so that $\Pi(k_i) \propto \eta_i k_i$.

- *scale-free*: if all $\eta_i$ are equal, it results in the original Barabási-Albert model
- *fit-gets-rich*: nodes have different $\eta_i$, $k_i$ grows faster for larger $\eta_i$, the network is scale-free, and in the long run the largest hubs are the fittest
- *Bose-Einstein condensation*: winner takes all