2.2 Macroscopics: Entropies for Networks

Consider the microcanonical entropy of an unconstrained network with equal a priori probabilities for the microstates. The entropy can be written – in the spirit of Boltzmann – as the logarithm of the number of possible states

$$ S_c = \ln \binom{N(N-1)/2}{L}, $$

which counts the number of ways to distribute L indistinguishable links (particles) over the N(N−1)/2 distinguishable positions in the symmetric adjacency matrix. If the N involved nodes are also indistinguishable, this has to be accounted for by an additional factor, 1/N!, in the argument of the logarithm. As soon as networks become subject to constraints, such as specific linking rules or probabilities, or through the definition of a Hamiltonian (as will be discussed later), the evaluation of the number of possible states (adjacency matrices) becomes more difficult. Clearly, S_c is a nonextensive quantity, regardless of whether or not nodes are distinguishable. Note that this is in contrast to classical statistical mechanics, where only the case of distinguishable particles leaves room for nonextensivity. If S_c is plotted against the number of nodes N for a fixed density of links (average degree) λ = 2L/N, it is obvious ...
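The nonextensivity of S_c can be checked numerically. The short sketch below (an illustration, not from the original text) evaluates S_c = ln C(N(N−1)/2, L) directly and prints the entropy per node at fixed link density; the function name and the chosen parameter values are ours.

```python
from math import comb, log

def microcanonical_entropy(N: int, L: int) -> float:
    """Boltzmann entropy of an unconstrained network:
    log of the number of ways to place L indistinguishable
    links on the M = N(N-1)/2 distinguishable positions of a
    symmetric adjacency matrix (no self-loops)."""
    M = N * (N - 1) // 2
    return log(comb(M, L))

# Fix the link density L/N = 2 (average degree 4) and vary N.
# For an extensive quantity S_c/N would approach a constant;
# here it keeps growing with N.
for N in (50, 100, 200):
    L = 2 * N
    print(N, microcanonical_entropy(N, L) / N)
```

The per-node entropy grows roughly logarithmically with N at fixed density, which is the nonextensivity the text refers to.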
