Feature Stories - Building a smarter internet

The proliferation of smartphones, smart TVs and smart cars could lead to 50 billion internet-connected devices by 2020. EU-funded scientists are working hard to create a smarter internet to cope with this massive growth.

The internet of things - cars, devices and sensors connecting everyday objects - is already far larger than the internet of people and linked pages. In fact, by 2008 the number of things connected to the internet had already exceeded the number of people on earth. Experts predict that these 'connected devices' will continue to grow in number. For example, a Dutch start-up called Sparked is using internet-connected wireless sensors on cattle. There are an estimated 1.3 billion cattle in the world today, so that is already a lot of connections! The auto industry is another major user of sensor technology. Smart cars on the drawing board have location, acceleration, orientation and proximity sensors, all transmitting and gathering data to and from nearby cars and road infrastructure.

'Of course the internet is growing rapidly and many new types of device are being added to it all the time, and that trend is accelerating right now,' explains Philip Eardley, technical coordinator of Trilogy, a large EU-funded project that is, in the words of the project, 'Re-architecting the internet' and developing new technologies to support the emerging Future Internet. 'Our work looked at how you manage that shared network with all the competing demands. We wanted to find a way to best manage the capacity of the network,' says Mr Eardley.

Future Internet technologies are a major focus of the European Union's research agenda. Funding is in place to test new technologies that could enable the network to evolve and adapt over time, because the internet is much more than the communication system it was originally envisaged as; it is now the backbone of modern society. Future Internet initiatives are intended to ensure that society's backbone does not break under the strain of constantly rising demands.

It is vitally important work, which is why Trilogy attracted some of the top talent from around the world to its cause. The UK's BT Innovate & Design group led the project as consortium coordinator, supported by NEC Europe, Roke Manor Research and University College London, all in the UK. The consortium also included leading groups such as Nokia in Finland, Eurescom and Deutsche Telekom in Germany, the Université Catholique de Louvain in Belgium, Universidad Carlos III de Madrid in Spain, the Athens University of Economics and Business in Greece and Stanford Law School in the US.

Trilogy's ambition was to study problems with current internet standards, particularly the 'Transmission control protocol' (TCP). Vint Cerf and Bob Kahn conceived TCP back in 1974. It is a remarkably efficient protocol that has performed well as the internet mushroomed. But it could be better, and this was a central focus of Trilogy's work.

Finding the right balance

Internet technologies must walk a fine line: too little control and the network fails and chaos can ensue; too much control and innovation is stifled. Anybody who wants to create protocols for the internet must make sure that their innovation does not create technological dead ends down the road. One of Trilogy's contributions is Multipath TCP (MPTCP), a protocol that allows a regular TCP connection to use multiple paths at the same time. This boosts the resilience of the network, because the connection continues to work even if one path fails.
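To an application, a Multipath TCP connection is meant to look just like an ordinary TCP connection, with the multipath machinery handled underneath by the operating system. The snippet below is purely illustrative and not part of the Trilogy deliverables: it assumes a recent Linux kernel that exposes MPTCP through the IPPROTO_MPTCP socket protocol, and it falls back to plain TCP where that is unavailable; the address and port are placeholders.

    /* Illustrative sketch: request an MPTCP socket on Linux, fall back to TCP. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    #ifndef IPPROTO_MPTCP
    #define IPPROTO_MPTCP 262        /* provided by recent kernels and libcs */
    #endif

    int main(void)
    {
        /* Ask the kernel for a multipath-capable TCP socket. */
        int fd = socket(AF_INET, SOCK_STREAM, IPPROTO_MPTCP);
        if (fd < 0) {
            perror("MPTCP unavailable, using plain TCP");
            fd = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
            if (fd < 0)
                return 1;
        }

        struct sockaddr_in addr;
        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons(80);
        inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* placeholder address */

        /* From here on the socket is used exactly like ordinary TCP; whether
           several paths are actually used is negotiated by the two kernels. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
            perror("connect");
        close(fd);
        return 0;
    }

If both endpoints support MPTCP, extra subflows over other interfaces (for example WiFi alongside 3G or 4G) can be added without any change to the application.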
The protocol also leads to greater network efficiency by pooling resources; data is sent over several paths simultaneously, and the sender rapidly adapts so as to send more of the traffic over the emptier paths and less over paths that are congested. Multipath TCP could be deployed to enable much better data mobility that adapts to a receiver's location, regardless of the network. For example, MPTCP could start to download a film over 3G or 4G and then draw on WiFi capacity when the user comes within range of a hotspot. 'Multipath TCP offers a more effective use of available capacity and can increase resilience, too, because the data is travelling to its destination by a variety of routes,' Mr Eardley explains. As part of the MPTCP work, Trilogy developed a congestion control algorithm that balances traffic between multiple paths, moving data transmission away from congested paths to exploit unused capacity elsewhere.

Unblocking congestion

Indeed, congestion is a key issue on the internet, so the Trilogy team developed a special protocol to cope with the problem, in addition to its congestion work within MPTCP. 'Congestion exposure' (CONEX) is a new protocol that lets all IP devices along a path see the total, end-to-end level of congestion. CONEX helps the operator by improving the detail of the available information, and so informs bandwidth-management mechanisms. Congestion exposure also helps end users: their operating system can optimise the end-to-end quality of service. During a period of heavy congestion, for example, the user's videoconferencing could continue at full rate while a file download is paused, optimising the available resources.

Another important area of research was mechanisms to help solve other problems, such as source address validation and the transition from IPv4 to IPv6, an important step for the Future Internet because the number of available IPv4 addresses is beginning to run out. The issue stems from the birth of the internet, when a universal addressing protocol was developed to allow any type of computer to identify itself on the internet. IP was conceived in 1971 and v4 was rolled out in 1983. No one envisaged the eventual popularity of the internet; Tim Berners-Lee didn't invent the web until 1989. For its time, IPv4 was a very robust standard, offering over 4 billion addresses in a 32-bit code. The 128-bit addresses of IPv6 provide vastly more, and tools developed by Trilogy will help the transition to the new IP version.

Trilogy placed particular emphasis on the definition and development of standards, and the team has been very active within the Internet Engineering Task Force (IETF), the body that standardises internet technologies. The project was instrumental in the foundation of two new working groups at the IETF, called MPTCP and CONEX. 'We have submitted about 50 different drafts, several of which are now approved as 'Requests for comments' (RFCs), with more in the pipeline. We have also published well over 60 papers,' notes Mr Eardley. An RFC is a standard or an advisory document that has been formally agreed by the IETF.

The Trilogy project carefully considered how to deploy the protocols it developed, so that they can be adopted incrementally. 'We developed a Linux implementation of the Multipath TCP protocol and are hopeful that it will form part of the Linux kernel,' said Mr Eardley.
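The balancing behaviour described above, shifting traffic away from congested paths, rests on coupling the congestion windows of the individual paths; the approach was later published by the IETF as RFC 6356 ('Coupled congestion control for multipath transport protocols'). The toy model below is a simplified sketch of that 'linked increases' idea, not the kernel implementation: each path keeps its own window, window growth is capped by a shared coupling factor, and a loss halves only the affected path's window, so capacity drifts towards the less congested path.

    /* Toy sketch of MPTCP's coupled ('linked increases') congestion control. */
    #include <stdio.h>

    #define NPATHS 2

    struct subflow {
        double cwnd;   /* congestion window, in packets */
        double rtt;    /* round-trip time, in seconds   */
    };

    /* Coupling factor: keeps the multipath flow no more aggressive than
       a single TCP flow on its best path. */
    static double alpha(const struct subflow *s, int n)
    {
        double best = 0.0, sum = 0.0, total = 0.0;
        for (int i = 0; i < n; i++) {
            double r = s[i].cwnd / (s[i].rtt * s[i].rtt);
            if (r > best)
                best = r;
            sum += s[i].cwnd / s[i].rtt;
            total += s[i].cwnd;
        }
        return total * best / (sum * sum);
    }

    /* Per-acknowledgement increase on path i, capped at the rate an
       uncoupled TCP flow would use, so growth favours emptier paths. */
    static void on_ack(struct subflow *s, int n, int i)
    {
        double total = 0.0;
        for (int k = 0; k < n; k++)
            total += s[k].cwnd;
        double coupled = alpha(s, n) / total;
        double uncoupled = 1.0 / s[i].cwnd;
        s[i].cwnd += (coupled < uncoupled) ? coupled : uncoupled;
    }

    /* A loss halves only the affected path's window, as in plain TCP. */
    static void on_loss(struct subflow *s, int i)
    {
        s[i].cwnd /= 2.0;
        if (s[i].cwnd < 1.0)
            s[i].cwnd = 1.0;
    }

    int main(void)
    {
        struct subflow s[NPATHS] = { { 10.0, 0.05 }, { 10.0, 0.05 } };

        /* Path 1 suffers periodic losses (congested); path 0 does not. */
        for (int round = 0; round < 1000; round++) {
            on_ack(s, NPATHS, 0);
            on_ack(s, NPATHS, 1);
            if (round % 20 == 19)
                on_loss(s, 1);
        }
        printf("cwnd path 0 = %.1f, cwnd path 1 = %.1f\n", s[0].cwnd, s[1].cwnd);
        return 0;
    }

Left to run, the uncongested path ends up holding most of the window, which is the resource-pooling behaviour the project was after.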
Experience with new internet developments has shown that it only takes a couple of leading groups to demonstrate the advantages of such technologies for them to take off.

The Trilogy project received EUR 5.82 million (of a EUR 9.82 million total budget) in research funding under the EU's Seventh Framework Programme, sub-programme 'The network of the future'.

Useful links:
- 'Re-Architecting the internet. An hourglass control architecture for the internet, supporting extremes of commercial, social and technical control'
- Trilogy project record on CORDIS

Related articles:
- Europeans making smart device communication even easier
- Commission sets target for IPv6 deployment
- Making sure the internet delivers
- See-through networks
- The Network of Everything