Content archived on 2024-05-27

Coevolution and self-organization in dynamical networks

Results

This work package has achieved several scientific results in the understanding of socio-economic networks from a statistical physics perspective. These results stem from basic research, and the work package was not expected to produce results immediately applicable to business or policy making. However, the results suggest several ideas readily applicable to further investigations in economics, management and policy making. Indeed, they have already inspired several collaboration projects with economists and two consulting projects for the French government. We have achieved novel understanding of the following topics:

1) Decision-making processes in the board of an organization: the role of well-connected minorities and lobbies.
2) Decision making across organizations sharing some of their members: characterizing regimes of spreading of decisions, practices and visions through the network of interlocking relationships.
3) Structure of ownership networks in stock markets: ownership networks are scale-free, but they can differ considerably in whether their topological organization forms several groups of interest or one unique cluster.
4) Propagation of failures in production networks: production networks can exhibit spontaneous macroscopic fluctuations both in space and time (see the sketch after this list).
5) Community detection in the WWW.

Details are provided in the descriptions of the deliverables. Two consulting projects with the French Agency of Foreign Investments were based on the results on topics 1, 2 and 3. Collaborations with Prof. Gallegati (Univ. Politecnica delle Marche), Prof. Delli Gatti (Univ. Cattolica of Milan) and Prof. Stiglitz (Columbia Univ.) were also a consequence of the results on topics 1, 2 and 3. This collaboration contributed to our findings on topic 4 and has now resulted in an ongoing project with these economists.
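To make the kind of dynamics studied under topic 4 concrete, the sketch below simulates a simple threshold cascade of failures on a synthetic directed supplier network. The threshold rule, the random graph and all parameter values are illustrative assumptions for this sketch, not the project's actual model.

```python
import random

import networkx as nx

def cascade(g: nx.DiGraph, seed_nodes, threshold=0.5):
    """Fail `seed_nodes`, then propagate: a firm fails once the
    fraction of its failed suppliers exceeds `threshold`.
    (Illustrative rule, not the project's model.)"""
    failed = set(seed_nodes)
    changed = True
    while changed:
        changed = False
        for firm in g.nodes:
            if firm in failed:
                continue
            suppliers = list(g.predecessors(firm))
            if not suppliers:
                continue
            frac = sum(s in failed for s in suppliers) / len(suppliers)
            if frac > threshold:
                failed.add(firm)
                changed = True
    return failed

random.seed(1)
# Directed random graph as a stand-in for a supplier -> customer network.
g = nx.gnp_random_graph(500, 0.01, directed=True, seed=1)
initial = random.sample(list(g.nodes), 5)
print(f"{len(cascade(g, initial))} of {g.number_of_nodes()} firms failed")
```

Even in this toy setting, small initial shocks can either die out or sweep through a macroscopic fraction of the network, depending on the threshold and the connectivity.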
New Models for Scale-free Networks

Modeling the Internet and other networks is a fundamental step toward their better understanding and, in the case of technological networks, their possible improvement. Indeed, since large-scale experiments on the real world are seldom possible, we must resort to synthetic graphs to study the behavior of real networks, their susceptibility to external and internal interventions, and the results of improvement attempts. Possible models must of course take into account some characterization of the microscopic mechanisms that are likely to be at work in shaping real networks. Such mechanisms are of different kinds: technological (bandwidth requirements), economic (providers competing for users), political (governmental regulations), physical (protein-protein interaction strengths) and others. These detailed mechanisms are unfortunately largely unknown, and we must therefore mimic them by simplified model rules that take into account both network growth by the addition of new nodes (the Internet has been growing exponentially; there are more genes and proteins in higher, more evolved organisms than in less evolved ones) and the linking mechanisms between them. During COSIN we have developed various scale-free models with the goal of exploring how different ingredients contribute to reproducing the topological patterns observed in real networks. One of the most important models that we have introduced tries to describe how the intrinsic properties of the nodes of the network, captured at a cartoon level by a single real variable, govern the way nodes link with each other. The results have a broad range of applicability: in the Internet, where it has been suggested that the scale-free nature of the Internet graph could be due to the scale-free size of Autonomous Systems, in turn related to the well-known Zipf's law of company sizes; and in protein interaction networks, where protein interactions can be detected only if they are stronger than a minimal threshold, and where the strength of a protein-protein interaction is due to the intrinsic properties of the two partners. In the near future we plan to use this model to describe in more detail the protein-protein interaction discovery process, so as to understand whether the scale-free nature of the corresponding networks is real or only a distortion introduced by the experimental setup. This is a most important issue, since there is growing evidence for many networks that their real structures could differ from the observed ones because of systematic errors in the measurement technique. Another, related result of COSIN is that although in some cases the scale-free nature of networks is real, the measurement process can change the perceived exponent values: besides quantifying the extent of this effect, we have also shown how to avoid it in the case of the Internet. We have also been able to reproduce the assortativity of social networks (peers are likely to be connected) by introducing a social-distance-dependent modification of the classical "preferential attachment" rule, a further indication that reliable models for scale-free networks need to take better account of the microscopic properties of the systems under scrutiny.
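The following is a minimal sketch of a model in the spirit described above: each node carries one real "intrinsic" variable, and two nodes link whenever their combined value exceeds a threshold. The exponential fitness distribution, the additive threshold rule and the parameter values are illustrative assumptions, not necessarily the exact rules used in the project.

```python
import random

import networkx as nx

def fitness_network(n=2000, z=9.0, seed=0):
    """Build a network where linking is governed by a single real
    'intrinsic' variable per node: i and j connect whenever
    fitness[i] + fitness[j] > z (illustrative threshold rule)."""
    rng = random.Random(seed)
    fitness = [rng.expovariate(1.0) for _ in range(n)]
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if fitness[i] + fitness[j] > z:
                g.add_edge(i, j)
    return g, fitness

g, fitness = fitness_network()
degrees = [d for _, d in g.degree() if d > 0]
print("nodes:", g.number_of_nodes(), "edges:", g.number_of_edges())
print("max degree:", max(degrees))
```

The point of the sketch is that a broad, heavy-tailed degree distribution can emerge without any preferential attachment: the heterogeneity of the intrinsic variable alone, filtered through the threshold, shapes the topology.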
More generally, we hope to have improved awareness of some methodological problems affecting complex network research (insufficient characterization of the measurement methods and an unsatisfying understanding of a system's microscopic behavior), and in the next few years we plan to call for more effort in this direction from the community.
The ability to draw very large networks, such as large computer networks, is of great significance for visualizing the evolution of stochastic models of evolving networks. One focus is on designing and implementing new algorithms and innovative software systems that display a large graph at different abstraction levels. For example, there is an increasing need for systems that show maps of the Web and support users during their navigation, systems that display and monitor traffic on the Internet, and systems that draw portions of the Internet as a graph. Until now, the vast majority of graph drawing algorithms that have been deeply studied and experimentally tested in the literature, for instance for database schemas, can efficiently handle graphs of only hundreds of vertices. We aim at devising general algorithmic techniques for drawing large graphs and at experimenting with their use in new visualization systems, thus contributing to the technology transfer from algorithmic research on graph drawing to its application in network visualization. As part of this goal, we developed analysis-enhancing layouts and, in cooperation with Universite de Paris Sud, created a novel technique that preserves the readability of abstract visualizations while showing all elements. Further, combinations and extensions of well-known graph-drawing techniques have been adapted to cope with specific networks such as the Autonomous System network.
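As a rough illustration of drawing a graph at different abstraction levels, the sketch below coarsens a network into community "super-nodes", lays out the coarse graph, and then places each community's members around its super-node. This is a generic two-level scheme written for illustration; it is not the consortium's actual visualization system.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def two_level_layout(g: nx.Graph):
    """Coarse level: one super-node per community. Fine level: each
    community laid out locally around its super-node's position."""
    communities = list(greedy_modularity_communities(g))
    membership = {}
    coarse = nx.Graph()
    for idx, comm in enumerate(communities):
        coarse.add_node(idx)
        for v in comm:
            membership[v] = idx
    for u, v in g.edges:
        cu, cv = membership[u], membership[v]
        if cu != cv:
            coarse.add_edge(cu, cv)  # any inter-community edge
    coarse_pos = nx.spring_layout(coarse, seed=42)
    pos = {}
    for idx, comm in enumerate(communities):
        pos.update(nx.spring_layout(g.subgraph(comm), seed=42,
                                    center=coarse_pos[idx], scale=0.1))
    return pos

g = nx.barabasi_albert_graph(300, 2, seed=7)
pos = two_level_layout(g)
print(f"computed positions for {len(pos)} nodes")
```

The design choice is the usual one in multilevel drawing: the expensive force-directed computation runs only on small graphs (the coarse graph and each community), which is what makes layouts of very large networks tractable.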
Scale-free networks are characterized by a distribution of connectivities with a power-law shape, meaning that there is no characteristic value for the connectivity of the nodes forming the network. This is, however, a simplistic view of the networks formed in our current world: the role played by a node in a networked society is not characterized simply by the number of neighbours it can influence. There are many other important factors. For instance, nodes are grouped into clusters that can be related to communities, those communities can form larger communities, and so on. Different works in the consortium have been devoted to the analysis and characterization of such communities in many kinds of social, technological and biological networks; in each of these, the communities can have a different meaning, but in general terms we can characterize a network by its community structure. The study of an organization and how it evolves around its community structure can be of great help when reengineering the organization, and the tools provided by the consortium should be taken into account.
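The snippet below shows one common way to extract and score the community structure of a network, using greedy modularity optimization on the standard karate-club benchmark graph. It is a generic illustration of the kind of analysis described above; the consortium's own tools may rely on different algorithms.

```python
import networkx as nx
from networkx.algorithms.community import (greedy_modularity_communities,
                                           modularity)

# Detect communities by greedy modularity optimization and report
# the modularity Q of the resulting partition (Q close to 0 means
# no more structure than a random graph; larger Q, stronger structure).
g = nx.karate_club_graph()
communities = greedy_modularity_communities(g)
q = modularity(g, communities)
print(f"found {len(communities)} communities, modularity Q = {q:.3f}")
for i, comm in enumerate(communities):
    print(f"  community {i}: {sorted(comm)}")
```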
The WWW is one of the easiest ways to disseminate information economically and yet effectively. Since the Web is also the subject of study of this project, it has been natural to devote a large part of the dissemination effort to building a Web site that could help both participants and external people.
The Web graph is the graph whose nodes are the (static) HTML pages and whose (directed) edges are the hyperlinks between pages. It has been the subject of extensive attention because of the many applications that benefit from the analysis of the link structure of the Web, primarily Web mining. One example is represented by algorithms for ranking pages, such as PageRank and HITS. Link analysis is also at the basis of the sociology of content creation and of the detection of structures hidden in the Web (such as bipartite cores of cyber-communities and web rings). The experimental study of its statistical and topological properties is at the core of this discipline and at the basis of the validation of stochastic graph models for the Web. To study and analyse the Web graph we need to deal with massive graphs. In this deliverable we present a collection of algorithms, and related implementations, that are able to generate and measure massive graphs in secondary memory. This work presents the external and semi-external memory algorithms we developed in order to generate and analyse Web graphs. We define a standard file format that we use to represent both the graphs and the results of the measurement processes. The library contains routines for simulating models of stochastic graphs resembling the properties of the Web, for measuring the PageRank and degree distributions, for finding correlations between different measures, for finding connected components and small cliques that are considered seeds of cyber-communities, and for detecting the overall structure of the Web graph. All routines are able to compute such measures on graphs of very large size, even on a medium-sized PC. This library has been used by several research groups in Europe carrying out research on large complex networks, e.g. Helsinki University, the Academy of Sciences of Budapest and the Universita di Milano. To the best of our knowledge, it is currently the only publicly available library containing a complete suite of routines for analysing large Web graphs.
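To illustrate the semi-external approach described above, the sketch below computes PageRank while keeping only the rank vectors in memory and streaming the edge list from disk on every iteration, so graphs far larger than RAM can be processed. The plain-text "src dst" edge-list format used here is an assumption for the sketch; the library's own standard file format is different.

```python
def pagerank_semi_external(edge_file, n_nodes, d=0.85, iters=30):
    """Power-iteration PageRank with damping d; edges are streamed
    from disk, only O(n_nodes) vectors are held in memory."""
    # First pass: count out-degrees.
    outdeg = [0] * n_nodes
    with open(edge_file) as f:
        for line in f:
            src, _ = map(int, line.split())
            outdeg[src] += 1
    rank = [1.0 / n_nodes] * n_nodes
    for _ in range(iters):
        new = [(1.0 - d) / n_nodes] * n_nodes
        with open(edge_file) as f:  # stream edges; nothing else in RAM
            for line in f:
                src, dst = map(int, line.split())
                if outdeg[src]:
                    new[dst] += d * rank[src] / outdeg[src]
        # Redistribute the rank of dangling nodes uniformly.
        dangling = sum(r for r, k in zip(rank, outdeg) if k == 0)
        rank = [x + d * dangling / n_nodes for x in new]
    return rank

# Toy usage: a 4-node cycle, for which all ranks converge to 0.25.
with open("edges.txt", "w") as f:
    f.write("0 1\n1 2\n2 3\n3 0\n")
print(pagerank_semi_external("edges.txt", 4))
```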
The purpose of this book is to provide a unified picture of the results obtained about the Internet in the context of different scientific communities, privileging the use of methods and concepts that have proven to be extremely useful in the analysis of more classical statistical physics systems. We therefore place strong emphasis on the statistical regularities observed in the large-scale structure of the network, the so-called global Internet, and on the importance of dynamics in the formulation of adequate models. In doing this, we have made a special effort to bridge the language gap that might occur among different communities by devoting the two initial chapters to an outline of the Internet's history and an elementary description of its functioning. This allows us to build up a basic Internet glossary and outline the main elements that make the Internet work. We also provide an appendix summarizing the main concepts of graph theory used in the topological description of Internet maps. The book comprises two main parts. The first six chapters are essentially devoted to the physical Internet. In these chapters we review the various experimental projects dealing with data collection, focusing on the different mapping strategies and the level of description achieved with different tools. Next, we present the statistical analysis of the most recent data available, discussing in detail the main topological features characterizing the Internet's large-scale topology. The ensuing chapter contains an overview of models proposed to represent the Internet. Here we emphasize the physicist's point of view by introducing the reader to the modern field of growing network models. Finally, we report in Chapter 6 the analysis of the Internet's resilience to damage by casting the problem in the general framework of phase transitions and percolation phenomena. The second part, consisting of Chapters 7, 8 and 9, is instead focused on the virtual networks hosted by the Internet, such as the World Wide Web, peer-to-peer systems, and other social communities, and on dynamical phenomena that occur on them, such as search processes and epidemic spreading. Finally, Chapter 10 is a short discussion of important features that are likely to represent the main challenges for a full understanding of the Internet in the near future. The systematic study of the large-scale properties of the Internet and its view as a complex evolving network, while a relatively recent field, has generated quite a large number of works and a vast literature on the subject. We have made every effort to account for and mention all the works relevant for a proper understanding of each chapter. It is, however, quite impossible to discuss in detail all the contributions to the field, and we have therefore made some choices based on our perception of what is most relevant to the focus of the present book. We hope that our effort results in a comprehensive and useful presentation of the subject for everybody working in the field and, more especially, for any researcher or student who intends to enter it. In this sense, by conveying the idea that the Internet is a paradigmatic example of a complex system, we believe that the book can be of interest to computer scientists, physicists, and mathematicians alike.
