Data flows across a network – any network, including the Internet – using a set of protocols such as TCP/IP, UDP, FTP, and CBR. For the most part, each layer passes data down to the layer below it or up to the layer above. There are special situations in which the layers do interact directly; establishing a connection is one of them.
TCP:
The Transmission Control Protocol (TCP) is one of the core protocols of the Internet Protocol Suite. TCP is one of the two original components of the suite, complementing the Internet Protocol (IP), and therefore the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered delivery of a stream of bytes from a program on one computer to another program on another computer. TCP is the protocol that major Internet applications such as the World Wide Web, email, remote administration and file transfer rely on. Other applications, which do not require reliable data stream service, may use the User Datagram Protocol (UDP), which provides a datagram service that emphasizes reduced latency over reliability.
TCP uses a sequence number to identify each byte of data. The sequence number identifies the order of the bytes sent from each computer so that the data can be reconstructed in order, regardless of any fragmentation, disordering, or packet loss that may occur during transmission. For every payload byte transmitted, the sequence number must be incremented. In the first two steps of the 3-way handshake, both computers exchange an initial sequence number (ISN). This number can be arbitrary, and should in fact be unpredictable to defend against TCP Sequence Prediction Attacks.
TCP primarily uses a cumulative acknowledgment scheme, where the receiver sends an acknowledgment signifying that it has received all data preceding the acknowledged sequence number. The sender sets the sequence number field to the sequence number of the first payload byte in the segment's data field, and the receiver responds with an acknowledgment specifying the sequence number of the next byte it expects to receive. For example, if a sending computer sends a packet containing four payload bytes with a sequence number field of 100, then the sequence numbers of the four payload bytes are 100, 101, 102 and 103.
When this packet arrives at the receiving computer, it sends back an acknowledgment number of 104, since that is the sequence number of the next byte it expects to receive in the next packet. TCP also uses an end-to-end flow control protocol to avoid having the sender transmit data faster than the TCP receiver can receive and process it reliably. A mechanism for flow control is essential in an environment where machines of diverse network speeds communicate. For example, if a PC sends data to a hand-held PDA that processes received data slowly, the PDA must regulate the data flow so as not to be overwhelmed.
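The acknowledgment arithmetic above can be checked with a few lines of Tcl (the scripting language used later for ns-2). Here next_ack is a hypothetical helper written only for illustration, not part of any real TCP stack:

    # Hypothetical helper mirroring the example above: the cumulative ACK
    # for a segment is its first sequence number plus its payload length.
    proc next_ack {seq payload_len} {
        return [expr {$seq + $payload_len}]
    }
    puts [next_ack 100 4]   ;# prints 104: bytes 100-103 received, 104 expected next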
IP:
The Internet Protocol (IP) implements datagram fragmentation, so that packets may be formed that can pass through a link with a smaller maximum transmission unit (MTU) than the original datagram size.
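The fragmentation arithmetic can be sketched in Tcl; the MTU and payload size below are arbitrary example values. Note that every fragment except the last must carry a multiple of 8 payload bytes, because fragment offsets are expressed in 8-byte units:

    # How many fragments does a 4000-byte IP payload need over a 1500-byte MTU link?
    set mtu 1500
    set ip_header 20
    set payload 4000
    set per_frag [expr {(($mtu - $ip_header) / 8) * 8}]        ;# usable payload per fragment
    set nfrags [expr {int(ceil(double($payload) / $per_frag))}]
    puts "$nfrags fragments of up to $per_frag payload bytes each"   ;# 3 fragments of 1480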
IP is responsible for moving packets of data from node to node. IP forwards each packet based on a four-byte destination address (the IP number). The Internet authorities assign ranges of numbers to different organizations, and the organizations assign groups of their numbers to departments. IP operates on gateway machines that move data from department to organization to region and then around the world.
The Internet Protocol was developed to create a Network of Networks (the "Internet"). Individual machines are first connected to a LAN (Ethernet or Token Ring). TCP/IP shares the LAN with other uses (a Novell file server, Windows for Workgroups peer systems). One device provides the TCP/IP connection between the LAN and the rest of the world.
UDP:
UDP-based Data Transfer Protocol (UDT) is a high-performance data transfer protocol designed for transferring large volumetric datasets over high-speed wide area networks. Such settings are typically disadvantageous for the more common TCP protocol.
Initial versions were developed and tested on very high speed networks (1Gbit/s, 10Gbit/s, etc.); however, recent versions of the protocol have been updated to support the commodity Internet as well. For example, the protocol now supports rendezvous connection setup, which is a desirable feature for traversing NAT firewalls using UDP.
UDT has an open-source implementation, which can be found on SourceForge. It is one of the most popular solutions for supporting high-speed data transfer and is part of many research projects and commercial products.
The User Datagram Protocol offers only a minimal transport service -- non-guaranteed datagram delivery -- and gives applications direct access to the datagram service of the IP layer. UDP is used by applications that do not require the level of service of TCP or that wish to use communications services (e.g., multicast or broadcast delivery) not available from TCP.
UDP is almost a null protocol; the only services it provides over IP are checksumming of data and multiplexing by port number. Therefore, an application program running over UDP must deal directly with end-to-end communication problems that a connection-oriented protocol would have handled -- e.g., retransmission for reliable delivery, packetization and reassembly, flow control, congestion avoidance, etc., when these are required. The fairly complex coupling between IP and TCP will be mirrored in the coupling between UDP and many applications using UDP.
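In ns-2 (introduced later in this section), UDP's minimalism is reflected in how a flow is set up: an Agent/UDP is attached to the sending node and an Agent/Null sink, which simply discards what it receives, to the receiver. A minimal sketch, assuming $ns is an existing simulator object and $n0 and $n1 are two nodes:

    # UDP source on node n0, null (discarding) sink on node n1
    set udp [new Agent/UDP]
    $ns attach-agent $n0 $udp
    set null [new Agent/Null]
    $ns attach-agent $n1 $null
    $ns connect $udp $null    ;# no handshake: connect only wires up addresses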
FTP:
File Transfer Protocol (FTP) is a standard network protocol used to transfer files from one host to another host over a TCP-based network, such as the Internet. It is often used to upload web pages and other documents from a private development machine to a public web-hosting server. FTP is built on client-server architecture and uses separate control and data connections between the client and the server. FTP users may authenticate themselves using a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it.
The first FTP client applications were interactive command-line tools, implementing standard commands and syntax. Graphical user interface clients have since been developed for many of the popular desktop operating systems in use today, including general web design programs like Microsoft Expression Web and specialist FTP clients such as CuteFTP.
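In an ns-2 script, bulk file transfer is modeled by an Application/FTP running over a TCP agent, reflecting FTP's reliance on reliable, ordered delivery. A minimal sketch, again assuming $ns, $n0 and $n1 already exist:

    # FTP application over TCP: reliable, ordered bulk transfer
    set tcp [new Agent/TCP]
    $ns attach-agent $n0 $tcp
    set sink [new Agent/TCPSink]
    $ns attach-agent $n1 $sink
    $ns connect $tcp $sink
    set ftp [new Application/FTP]
    $ftp attach-agent $tcp
    $ns at 0.5 "$ftp start"   ;# start sending at t = 0.5 s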
CBR:
Constant Bit Rate (CBR) is an encoding method that keeps the bit rate the same, as opposed to VBR, which varies the bit rate. CBR processes audio faster than VBR due to its fixed bit rate value. The downside to a fixed bit rate is that the files produced are not as well optimized for quality versus storage as with VBR. For example, if there is a quiet section in a music track that doesn't require the full bit rate to produce good-quality sound, CBR will still use the same value, thus wasting storage space. The same is true for complex sounds: if the bit rate is too low, quality will suffer.
Constant bit rate (CBR) encoding is the default method of encoding with the Windows Media Format SDK. When using CBR encoding, you specify the target bit rate for a stream, and the codec uses whatever amount of compression is necessary to achieve it.
Constrained variable bit rate encoding also enables you to know the bit rate prior to encoding, but since the rate is variable, the resulting file cannot be streamed as efficiently as a file encoded in CBR mode. With CBR, the bit rate over time always remains close to the average or target bit rate, and the amount of variation can be specified.
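In ns-2 simulations, constant bit rate appears as the Application/Traffic/CBR traffic generator rather than a codec setting: it emits fixed-size packets at a fixed interval, typically over UDP. A sketch with arbitrary example values, assuming $ns is the simulator and $udp is an existing Agent/UDP as in the UDP sketch above:

    # CBR traffic source: 500-byte packets every 5 ms (800 kbit/s)
    set cbr [new Application/Traffic/CBR]
    $cbr set packetSize_ 500
    $cbr set interval_ 0.005
    $cbr attach-agent $udp
    $ns at 1.0 "$cbr start"
    $ns at 4.0 "$cbr stop"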
NS2 Simulator:
Ns is a discrete event simulator targeted at networking research. Ns provides substantial support for simulation of TCP, routing, and multicast protocols over wired and wireless (local and satellite) networks.
While we have considerable confidence in ns, ns is not a polished and finished product, but the result of an on-going effort of research and development. In particular, bugs in the software are still being discovered and corrected. Users of ns are responsible for verifying for themselves that their simulations are not invalidated by bugs. We are working to help the user with this by significantly expanding and automating the validation tests and demos.
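A minimal ns-2 script illustrates the basic workflow: create the simulator object, build a topology, schedule events, and run. The names and values below (n0, n1, bandwidth, delay, times) are arbitrary examples:

    # Minimal ns-2 skeleton: two nodes joined by a 1 Mb/s, 10 ms link
    set ns [new Simulator]
    set n0 [$ns node]
    set n1 [$ns node]
    $ns duplex-link $n0 $n1 1Mb 10ms DropTail
    $ns at 5.0 "exit 0"       ;# stop the simulation at t = 5 s
    $ns run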
OTcl Linkage:
Ns is an object-oriented simulator, written in C++, with an OTcl interpreter as a frontend. The simulator supports a class hierarchy in C++ (also called the compiled hierarchy in this document), and a similar class hierarchy within the OTcl interpreter (also called the interpreted hierarchy in this document). The two hierarchies are closely related to each other; from the user's perspective, there is a one-to-one correspondence between a class in the interpreted hierarchy and one in the compiled hierarchy. The root of this hierarchy is the class TclObject. Users create new simulator objects through the interpreter; these objects are instantiated within the interpreter and are closely mirrored by a corresponding object in the compiled hierarchy. The interpreted class hierarchy is automatically established through methods defined in the class TclClass.
Ns uses two languages because the simulator has two different kinds of things it needs to do. On one hand, detailed simulations of protocols require a systems programming language which can efficiently manipulate bytes and packet headers, and implement algorithms that run over large data sets. For these tasks run-time speed is important, and turn-around time (run simulation, find bug, fix bug, recompile, re-run) is less important. On the other hand, a large part of network research involves slightly varying parameters or configurations, or quickly exploring a number of scenarios. In these cases, iteration time (change the model and re-run) is more important. Since configuration runs once (at the beginning of the simulation), the run-time of this part of the task is less important.
Ns meets both of these needs with two languages, C++ and OTcl. C++ is fast to run but slower to change, making it suitable for detailed protocol implementation. OTcl runs much slower but can be changed very quickly (and interactively), making it ideal for simulation configuration. Ns (via tclcl) provides glue to make objects and variables appear in both languages. Having two languages raises the question of which language should be used for what purpose.
Our basic advice is to use OTcl:
• for configuration, setup, and “one-time” tasks
• if you can do what you want by manipulating existing C++ objects
and to use C++:
• if you are doing anything that requires processing each packet of a flow
• if you have to change the behavior of an existing C++ class in ways that weren’t anticipated
For example, links are OTcl objects that assemble delay, queuing, and possibly loss modules. If your experiment can be done with those pieces, great. If instead you want to do something fancier, then you’ll need a new C++ object. There are certainly grey areas in this spectrum: most routing is done in OTcl. We’ve had HTTP simulations where each flow was started in OTcl and per-packet processing was all in C++. This approach worked well until we had hundreds of flows starting per second of simulated time. In general, if you ever have to invoke Tcl many times per second, you probably should move that code to C++.
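The linkage is visible whenever a script tweaks a bound variable: the OTcl object is a thin mirror, and the set lands in the compiled C++ instance. For example, with the standard Agent/TCP class:

    # Agent/TCP is an interpreted class mirroring a compiled C++ class;
    # window_ and packetSize_ are C++ member variables bound into OTcl.
    set tcp [new Agent/TCP]
    $tcp set window_ 20           ;# changes the C++ object's window field
    $tcp set packetSize_ 1000     ;# reconfigured from the script, no recompilation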
Trace files:
Upon the completion of the simulation, trace files that capture the events occurring in the network are produced. The trace files capture information that can be used in a performance study, e.g. the number of packets transferred from source to destination, packet delay, packet loss, etc. However, the trace file is just a block of ASCII data in a file, and is quite cumbersome to analyze without some form of post-processing.
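Each line of a standard wired ns-2 trace records one event. The fields are: event type (+ enqueue, - dequeue, r receive, d drop), timestamp, source node, destination node, packet type, packet size, flags, flow id, source address, destination address, sequence number, and unique packet id. For example:

    r 1.3556 2 3 cbr 500 ------- 1 0.0 3.1 29 199

A small Tcl post-processing sketch, assuming the trace was written to a file named out.tr, might count received CBR packets:

    # Count packets of type cbr that were received (event "r") in out.tr
    set f [open out.tr r]
    set received 0
    while {[gets $f line] >= 0} {
        set fields [split $line]
        if {[lindex $fields 0] eq "r" && [lindex $fields 4] eq "cbr"} {
            incr received
        }
    }
    close $f
    puts "received $received cbr packets"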
nam files:
NAM stands for Network Animator. It is the visualization tool used in NS2 simulation. NAM replays the events recorded in a nam trace file. The nam trace file can be huge when the simulation time is long or events happen intensively.
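Nam animation is enabled from the simulation script by registering a nam trace file and, typically, launching nam when the run finishes. A sketch assuming $ns is the simulator and the output file name out.nam is an arbitrary choice:

    # Write nam events to out.nam during the run
    set nf [open out.nam w]
    $ns namtrace-all $nf
    proc finish {} {
        global ns nf
        $ns flush-trace           ;# flush buffered trace events to disk
        close $nf
        exec nam out.nam &        ;# replay the animation
        exit 0
    }
    $ns at 5.0 "finish"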