Internet Technology
The Internet has transformed the computer and communications world like nothing before it. It is a widespread information infrastructure that can carry information and support communication between individuals and their computers regardless of geographic location; the World Wide Web, with which it is often confused, is just one of the services that runs on top of it. The Internet began some thirty years ago, and it has proved to be one of the best investments of time and commitment that researchers have ever made. Today millions of people use it. Its history is complicated, and its influence reaches not only the technical field of computer communications but society as a whole, as we move toward ever greater use of online tools for electronic commerce, information acquisition, and community operations.
The History of the Internet
The Internet grew out of the ARPANET. The underlying idea was that there would be more than one independent network, with the ARPANET as the ground-breaking packet-switching network, soon joined by ground-based packet radio networks, packet satellite networks, and other networks. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but could be selected freely by a provider and made to interwork with the other networks. Up until that time there was only one general method for federating networks: the traditional circuit-switching method, in which networks interconnect at the circuit level, passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations.
[1] Cerf, Vinton, pages 10-20.
In 1961 Kleinrock showed that packet switching was a more efficient switching method. Special-purpose interconnection arrangements between networks were another possibility, alongside packet switching; while there were other limited ways to interconnect different networks, they required that one network be used as a component of the other rather than acting as a peer.
In an open-architecture network, the individual networks may be separately designed and developed, each with its own distinctive interface that it can offer to users and to other providers, including other Internet providers. Each network can be designed in accordance with its specific environment and user requirements, and there are typically no constraints on the types of network that can be included.
The idea of open-architecture networking was introduced by Kahn in 1972, shortly after his arrival at DARPA. The work was originally part of the packet radio program but later became a separate program in its own right, known at the time as "Internetting". Kahn first considered developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, while continuing to use NCP (the Network Control Protocol).
However, NCP did not have the ability to address networks (and machines) further downstream than a destination IMP on the ARPANET, so some changes to NCP would have been required. (The assumption was that the ARPANET was not changeable in this regard.) NCP also relied on the ARPANET to provide end-to-end reliability: if any packets were lost, the protocol would come to a halt.
[2] Cerf, Vinton, pages 10-50.
Kahn developed a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new TCP/IP would be more like a communications protocol.
Four rules were critical to Kahn's early thinking:
1. Every separate network would have to stand on its own and no internal changes could be required to any network to connect to the Internet.
2. Communications would be on a best-effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source (a minimal sketch of this behavior follows the list).
3. Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
4. There would be no global control at the operations level.
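Rules 2 and 3 together put recovery at the edges of the network rather than inside it: gateways keep no per-flow state, and it is the sending host that retransmits anything that is lost. The short Python sketch below illustrates the idea only; the loss rate, packet format, and function names are invented for this example and do not come from the source.

    import random

    def gateway(packet):
        # A "black box" gateway: it forwards packets without retaining any
        # information about individual flows, and it may silently drop them
        # (best-effort service).
        if random.random() < 0.3:      # simulated loss inside the network
            return None
        return packet

    def send_best_effort(payload, max_tries=10):
        # Source-side recovery: keep retransmitting until the packet gets
        # through, so loss inside the network never halts the sender.
        for attempt in range(1, max_tries + 1):
            if gateway({"seq": 1, "data": payload}) is not None:
                return attempt         # delivered; treat as acknowledged
        raise RuntimeError("gave up after %d attempts" % max_tries)

    print("delivered after", send_best_effort(b"hello"), "transmission(s)")

In this sketch the gateway's only job is forwarding; all of the retry logic lives at the source, which is exactly what keeps the gateways simple.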
There were also other key issues that needed to be addressed:
1. Algorithms to prevent lost packets from permanently disabling communications and to enable them to be successfully retransmitted from the source.
2. Host-to-host "pipelining", so that multiple packets could be en route from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.
3. Gateway functions to forward packets appropriately, including interpreting IP headers for routing, breaking packets into smaller pieces where necessary, and handling the interfaces to the attached networks.
4. End-to-end checksums (see the sketch after the next paragraph), reassembly of packets from fragments, detection of duplicates, and global addressing.
[3] Cerf, Vinton, pages 45-50.
There were also concerns about techniques for host-to-host flow control and about interfacing with the various operating systems.
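One of the issues listed above, the end-to-end checksum, can be made concrete with a short sketch. The function below computes a 16-bit ones'-complement checksum of the kind later specified for IP and TCP headers (RFC 1071); the function name and the toy payload are invented for this example, and real protocols carry the checksum in a fixed header field rather than appending it to the data.

    def internet_checksum(data: bytes) -> int:
        # Sum the data as 16-bit words, fold any carry bits back into the
        # low 16 bits, and return the ones'-complement of the result.
        if len(data) % 2:
            data += b"\x00"                    # pad odd-length data
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
        while total >> 16:
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    payload = b"example payload."              # even length keeps the words aligned
    checksum = internet_checksum(payload)
    print("sender checksum: %#06x" % checksum)
    # The receiver recomputes the sum over the payload plus the transmitted
    # checksum; a result of zero means no corruption was detected.
    print("receiver check:", internet_checksum(payload + checksum.to_bytes(2, "big")))

The same end-to-end principle applies to reassembly and duplicate detection: the checks are performed by the communicating hosts, not by the gateways in between.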
Kahn had begun work on a communications-oriented set of operating system principles while at BBN, and he realized it was necessary to understand how each operating system performed so that any new protocols could be embedded efficiently. In the spring of 1973 he asked Vint Cerf to work with him on the detailed design of the protocol. Cerf had been deeply involved in the original design and development of NCP and already had the knowledge
...
...