Wednesday, December 25, 2013

Maximum Power Point Tracking

Maximum Power Point Tracking, frequently referred to as MPPT, is an electronic system that operates the Photovoltaic (PV) modules in a manner that allows the modules to produce all the power they are capable of. MPPT is not a mechanical tracking system that “physically moves” the modules to make them point more directly at the sun. MPPT is a fully electronic system that varies the electrical operating point of the modules so that the modules are able to deliver maximum available power. Additional power harvested from the modules is then made available as increased battery charge current. MPPT can be used in conjunction with a mechanical tracking system, but the two systems are completely different.

To understand how MPPT works, let’s first consider the operation of a conventional (non-MPPT) charge controller. When a conventional controller is charging a discharged battery, it simply connects the modules directly to the battery. This forces the modules to operate at battery voltage, typically not the ideal operating voltage at which the modules are able to produce their maximum available power. The PV Module Power/Voltage/Current graph shows the traditional Current/Voltage curve for a typical 75W module at standard test conditions of 25°C cell temperature and 1000W/m² of insolation. This graph also shows PV module power delivered vs. module voltage. For the example shown, the conventional controller simply connects the module to the battery and therefore forces the module to operate at 12V. By forcing the 75W module to operate at 12V, the conventional controller artificially limits power production to ~53W.

Rather than simply connecting the module to the battery, the patented MPPT system in a Solar Boost™ charge controller calculates the voltage at which the module is able to produce maximum power. In this example the maximum power voltage of the module (VMP) is 17V. The MPPT system then operates the modules at 17V to extract the full 75W, regardless of present battery voltage. A high efficiency DC-to-DC power converter converts the 17V module voltage at the controller input to battery voltage at the output. If the whole system (wiring and all) were 100% efficient, battery charge current in this example would be V_MODULE ÷ V_BATTERY × I_MODULE, or 17V ÷ 12V × 4.45A = 6.30A. A charge current increase of 1.85A, or 42%, would be achieved by harvesting module power that would have been left behind by a conventional controller and turning it into usable charge current. But nothing is 100% efficient, and the actual charge current increase will be somewhat lower, as some power is lost in wiring, fuses, circuit breakers, and in the Solar Boost charge controller itself.
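As a rough illustration of the arithmetic above, here is a small Python sketch; the 95% converter efficiency figure is an assumed illustrative value, not a Solar Boost specification:

```python
# Sketch of the MPPT charge-current gain described above.
# The 0.95 converter efficiency is an assumed illustrative value.
def mppt_charge_current(v_mp, i_mp, v_battery, efficiency=0.95):
    """Estimate battery charge current from an MPPT controller."""
    module_power = v_mp * i_mp          # power harvested at the maximum power point
    return module_power * efficiency / v_battery

def pwm_charge_current(i_mp):
    """A conventional (PWM) controller simply passes module current to the battery."""
    return i_mp

if __name__ == "__main__":
    v_mp, i_mp, v_batt = 17.0, 4.45, 12.0
    mppt_a = mppt_charge_current(v_mp, i_mp, v_batt)
    pwm_a = pwm_charge_current(i_mp)
    print(f"MPPT: {mppt_a:.2f} A, PWM: {pwm_a:.2f} A, "
          f"gain: {100 * (mppt_a / pwm_a - 1):.0f}%")
```

With the efficiency set to 1.0 this reproduces the ideal 17V ÷ 12V × 4.45A ≈ 6.30A figure; with realistic losses the gain drops toward the 30–40% range described below.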

Actual charge current increase varies with operating conditions. As shown above, the greater the difference between PV module maximum power voltage VMP and battery voltage, the greater the charge current increase will be. Cooler PV module cell temperatures tend to produce higher VMP and therefore greater charge current increase. This is because VMP and available power increase as module cell temperature decreases as shown in the PV Module Temperature Performance graph. Modules with a 25°C VMP rating higher than 17V will also tend to produce more charge current increase because the difference between actual VMP and battery voltage will be greater. A highly discharged battery will also increase charge current since battery voltage is lower, and output to the battery during MPPT could be thought of as being “constant power”.

What most people see in cool, comfortable temperatures with typical battery conditions is a charge current increase of between 10–25%. Cooler temperatures and highly discharged batteries can produce increases in excess of 30%. Customers in cold climates have reported charge current increases in excess of 40%. What this means is that the current increase tends to be greatest when it is needed most: in cooler conditions when days are short, the sun is low on the horizon, and batteries may be more highly discharged. In conditions where extra power is not available (highly charged battery and hot PV modules) a Solar Boost charge controller will perform as a conventional PWM type controller. Home Power Magazine has presented RV Power Products (now Blue Sky Energy, Inc.) with two Things-That-Work articles: Solar Boost 2000 in HP#73, Oct./Nov. 1999, and Solar Boost 50 in HP#77, June/July 2000. Links to these articles can be found on the Blue Sky Energy, Inc. web site at www.blueskyenergyinc.com

FTP

File Transfer Protocol (FTP) is the protocol for exchanging files over the Internet. FTP works in the same way as HTTP, which transfers Web pages from a server to a user's browser, and SMTP, which transfers electronic mail across the Internet: like these technologies, FTP uses the Internet's TCP/IP protocols to enable data transfer.

FTP is most commonly used to download a file from a server using the Internet or to upload a file to a server (e.g., uploading a Web page file to a server).

File Transfer Protocol (FTP) is a standard network protocol used to transfer files from one host to another host over a TCP-based network, such as the Internet. It is often used to upload web pages and other documents from a private development machine to a public web-hosting server. FTP is built on a client-server architecture and uses separate control and data connections between the client and the server. FTP users may authenticate themselves using a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it.
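As a concrete illustration, here is a minimal FTP session using Python's standard ftplib; the host name and file path are hypothetical placeholders, and the example assumes anonymous access is allowed:

```python
# Minimal FTP download sketch using Python's ftplib.
# "ftp.example.com" and the file path are hypothetical placeholders.
from ftplib import FTP

with FTP("ftp.example.com") as ftp:
    ftp.login()                     # anonymous login over the control connection
    ftp.set_pasv(True)              # passive mode: client opens the data connection
    ftp.cwd("/pub")                 # change to a directory on the server
    with open("readme.txt", "wb") as fh:
        # RETR transfers the file over a separate data connection
        ftp.retrbinary("RETR readme.txt", fh.write)
```

The separation of the control connection (commands such as LOGIN, CWD, RETR) from the data connection (the file bytes themselves) is visible even in this tiny sketch.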

The first FTP client applications were interactive command-line tools, implementing standard commands and syntax. Graphical user interface clients have since been developed for many of the popular desktop operating systems in use today, including general web design programs like Microsoft Expression Web, and specialist FTP clients such as CuteFTP.

FTP may run in active or passive mode, which determines how the data connection is established. In active mode, the client sends the server an IP address and port number and then waits until the server initiates the TCP connection. In situations where the client is behind a firewall and unable to accept incoming TCP connections, passive mode may be used. In this mode, the client sends a PASV command to the server and receives an IP address and port number from the server, which the client then uses to open a data connection to the server. Both modes were updated in September 1998 to support IPv6. Further changes were introduced to the passive mode at that time, updating it to extended passive mode. 

Where FTP access is restricted, an FTPmail service can be used to circumvent the problem. An e-mail containing the FTP commands to be performed is sent to an FTPmail server, which parses the incoming e-mail, executes the requested FTP commands and sends back an e-mail with any downloaded files as attachments. This service is less flexible than an FTP client, as it is not possible to view directories interactively or to issue any modify commands. There can also be problems with large file attachments not getting through mail servers. The service was used in the days when some users' only internet access was via e-mail through gateways such as a BBS or online service. As most internet users these days have ready access to FTP, this procedure is no longer in widespread use.

Types of TCP Congestion Control

To avoid congestion collapse, TCP uses a multi-faceted congestion control strategy. For each connection, TCP maintains a congestion window, limiting the total number of unacknowledged packets that may be in transit end-to-end. This is somewhat analogous to TCP's sliding window used for flow control. TCP uses a mechanism called slow start to increase the congestion window after a connection is initialized and after a timeout. It starts with a window of two times the maximum segment size (MSS). Although the initial rate is low, the rate of increase is very rapid: for every packet acknowledged, the congestion window increases by 1 MSS, so that the congestion window effectively doubles for every round trip time (RTT). When the congestion window exceeds a threshold, ssthresh, the algorithm enters a new state, called congestion avoidance. In some implementations (e.g., Linux), the initial ssthresh is large, and so the first slow start usually ends after a loss. However, ssthresh is updated at the end of each slow start, and will often affect subsequent slow starts triggered by timeouts.
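A toy sketch of the window growth described above (slow start doubling per RTT, then linear growth after ssthresh); losses and all other real-stack details are ignored, and the initial window of 2 MSS and ssthresh of 64 MSS are illustrative values:

```python
# Toy model of slow start and congestion avoidance, in units of MSS.
def window_growth(rtts, initial_cwnd=2, ssthresh=64):
    """Return cwnd (in MSS) after each round-trip time."""
    cwnd = initial_cwnd
    history = []
    for _ in range(rtts):
        if cwnd < ssthresh:
            cwnd *= 2          # slow start: cwnd doubles every RTT
        else:
            cwnd += 1          # congestion avoidance: +1 MSS per RTT
        history.append(cwnd)
    return history

print(window_growth(10))  # [4, 8, 16, 32, 64, 65, 66, 67, 68, 69]
```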

As long as non-duplicate ACKs are received, the congestion window is additively increased by one MSS every round trip time. When a packet is lost, the likelihood of duplicate ACKs being received is very high (it's possible, though unlikely, that the stream just underwent extreme packet reordering, which would also prompt duplicate ACKs). The behavior of Tahoe and Reno differs in how they detect and react to packet loss:
  • Tahoe: Triple duplicate ACKs are treated the same as a timeout. Tahoe will perform a "fast retransmit", reduce the congestion window to 1 MSS, and reset to the slow-start state. 
  • Reno: If three duplicate ACKs are received (i.e., four ACKs acknowledging the same packet, which are not piggybacked on data and do not change the receiver's advertised window), Reno will halve the congestion window, perform a fast retransmit, and enter a phase called Fast Recovery. If an ACK times out, slow start is used as it is with Tahoe. 
Fast Recovery (Reno only): in this state, TCP retransmits the missing packet that was signaled by three duplicate ACKs, and waits for an acknowledgment of the entire transmit window before returning to congestion avoidance. If there is no acknowledgment, TCP Reno experiences a timeout and enters the slow-start state. Both algorithms reduce the congestion window to 1 MSS on a timeout event.
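The different reactions can be summarised in a small sketch; this is a simplification of the real algorithms, keeping only the window adjustments in units of MSS:

```python
# Simplified reaction of Tahoe and Reno to loss signals, in units of MSS.
def on_loss(variant, cwnd, signal):
    """signal is 'triple_dup_ack' or 'timeout'; returns (cwnd, ssthresh, state)."""
    new_ssthresh = max(cwnd // 2, 2)
    if signal == "timeout":
        # Both variants collapse the window and re-enter slow start.
        return 1, new_ssthresh, "slow_start"
    if variant == "tahoe":
        # Tahoe treats three duplicate ACKs like a timeout (after fast retransmit).
        return 1, new_ssthresh, "slow_start"
    # Reno: halve the window and enter fast recovery.
    return new_ssthresh, new_ssthresh, "fast_recovery"

print(on_loss("reno", 32, "triple_dup_ack"))   # (16, 16, 'fast_recovery')
print(on_loss("tahoe", 32, "triple_dup_ack"))  # (1, 16, 'slow_start')
```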

TCP Vegas:
Until the mid 1990s, all of TCP's set timeouts and measured round-trip delays were based upon only the last transmitted packet in the transmit buffer. University of Arizona researchers Larry Peterson and Lawrence Brakmo introduced TCP Vegas, in which timeouts were set and round-trip delays were measured for every packet in the transmit buffer. In addition, TCP Vegas uses additive increases in the congestion window. This variant was not widely deployed outside Peterson's laboratory.

However, TCP Vegas was deployed as the default congestion control method for DD-WRT firmware v24 SP2.

TCP New Reno:
TCP New Reno improves retransmission during the fast recovery phase of TCP Reno. During fast recovery, for every duplicate ACK that is returned to TCP New Reno, a new unsent packet from the end of the congestion window is sent, to keep the transmit window full. For every ACK that makes partial progress in the sequence space, the sender assumes that the ACK points to a new hole, and the next packet beyond the ACKed sequence number is sent.

Because the timeout timer is reset whenever there is progress in the transmit buffer, this allows New Reno to fill large holes, or multiple holes, in the sequence space - much like TCP SACK. Because New Reno can send new packets at the end of the congestion window during fast recovery, high throughput is maintained during the hole-filling process, even when there are multiple holes, of multiple packets each. When TCP enters fast recovery it records the highest outstanding unacknowledged packet sequence number. When this sequence number is acknowledged, TCP returns to the congestion avoidance state.
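A sketch of the partial-ACK logic described above; the helper names (retransmit, send_new) are hypothetical callbacks, and real stacks track considerably more state:

```python
# Sketch of NewReno's handling of cumulative ACKs during fast recovery.
def on_ack_in_fast_recovery(ack, recover, retransmit, send_new):
    """ack: cumulative ACK just received; recover: highest sequence number
    outstanding when fast recovery began; retransmit/send_new are callbacks."""
    if ack >= recover:
        # Full ACK: everything outstanding at the start of recovery is acknowledged.
        return "congestion_avoidance"
    # Partial ACK: assume the next hole starts right after the ACKed data,
    # retransmit it immediately and stay in fast recovery.
    retransmit(ack)
    send_new()          # keep the pipe full with a new, previously unsent segment
    return "fast_recovery"
```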

A problem occurs with New Reno when there are no packet losses but instead, packets are reordered by more than 3 packet sequence numbers. When this happens, New Reno mistakenly enters fast recovery, but when the reordered packet is delivered, ACK sequence-number progress occurs and from there until the end of fast recovery, every bit of sequence-number progress produces a duplicate and needless retransmission that is immediately ACKed. New Reno performs as well as SACK at low packet error rates, and substantially outperforms Reno at high error rates.

Thursday, December 19, 2013

Network Function Of Data Transfer Protocol

TCP provides a communication service at an intermediate level between an application program and the Internet Protocol (IP). That is, when an application program desires to send a large chunk of data across the Internet using IP, instead of breaking the data into IP-sized pieces and issuing a series of IP requests, the software can issue a single request to TCP and let TCP handle the IP details.

IP works by exchanging pieces of information called packets. A packet is a sequence of octets and consists of a header followed by a body. The header describes the packet's destination and, optionally, the routers to use for forwarding until it arrives at its destination. The body contains the data IP is transmitting.

Due to network congestion, traffic load balancing, or other unpredictable network behavior, IP packets can be lost, duplicated, or delivered out of order. TCP detects these problems, requests retransmission of lost data, rearranges out-of-order data, and even helps minimize network congestion to reduce the occurrence of the other problems. Once the TCP receiver has reassembled the sequence of octets originally transmitted, it passes them to the application program. Thus, TCP abstracts the application's communication from the underlying networking details.

TCP is utilized extensively by many of the Internet's most popular applications, including the World Wide Web (WWW), E-mail, File Transfer Protocol, Secure Shell, peer-to-peer file sharing, and some streaming media applications.

TCP is optimized for accurate delivery rather than timely delivery, and therefore, TCP sometimes incurs relatively long delays (in the order of seconds) while waiting for out-of-order messages or retransmissions of lost messages. It is not particularly suitable for real-time applications such as Voice over IP. For such applications, protocols like the Real-time Transport Protocol (RTP) running over the User Datagram Protocol (UDP) are usually recommended instead. 

TCP is a reliable stream delivery service that guarantees delivery of a data stream sent from one host to another without duplication or loss of data. Since packet transfer is not reliable, a technique known as positive acknowledgment with retransmission is used to guarantee reliability of packet transfers. This fundamental technique requires the receiver to respond with an acknowledgment message as it receives the data. The sender keeps a record of each packet it sends and waits for an acknowledgment before sending the next packet. The sender also keeps a timer from when the packet was sent, and retransmits a packet if the timer expires. The timer is needed in case a packet gets lost or corrupted. 
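A minimal sketch of positive acknowledgment with retransmission in its simplest stop-and-wait form; send_packet and ack_received are hypothetical I/O functions, and packet.seq is an assumed attribute, not a real API:

```python
import time

# Stop-and-wait positive acknowledgment with retransmission (sketch).
# send_packet and ack_received are hypothetical I/O callbacks.
def reliable_send(packet, send_packet, ack_received, timeout=1.0, max_tries=5):
    for _ in range(max_tries):
        send_packet(packet)                 # transmit and start the timer
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if ack_received(packet.seq):    # receiver acknowledged this packet
                return True
        # Timer expired: the packet (or its ACK) was lost, so retransmit.
    return False                            # give up after max_tries attempts
```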

TCP consists of a set of rules, used together with the Internet Protocol (IP), for sending data in the form of message units between computers over the Internet. While IP handles the actual delivery of the data, TCP keeps track of the individual units of data transmission, called segments, into which a message is divided for efficient routing through the network. For example, when an HTML file is sent from a Web server, the TCP software layer of that server divides the sequence of octets of the file into segments and forwards them individually to the IP software layer (Internet Layer). The Internet Layer encapsulates each TCP segment into an IP packet by adding a header that includes (among other data) the destination IP address. Even though every packet has the same destination address, they can be routed on different paths through the network. When the client program on the destination computer receives them, the TCP layer (Transport Layer) reassembles the individual segments and ensures they are correctly ordered and error free as it streams them to an application.
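The segmentation step can be pictured as slicing the byte stream into MSS-sized pieces, each tagged with the sequence number of its first byte; this is a simplification (real stacks segment incrementally), and the 1460-byte MSS is an assumed typical value:

```python
# Split an application byte stream into (sequence_number, payload) segments.
def segment_stream(data: bytes, start_seq: int, mss: int = 1460):
    segments = []
    for offset in range(0, len(data), mss):
        payload = data[offset:offset + mss]
        segments.append((start_seq + offset, payload))  # seq of first payload byte
    return segments

# A 4000-byte "HTML file" becomes three segments of 1460, 1460 and 1080 bytes.
print([(seq, len(p)) for seq, p in segment_stream(b"x" * 4000, start_seq=100)])
```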

TCP segment structure:
The data section follows the header. Its contents are the payload data carried for the application. The length of the data section is not specified in the TCP segment header. It can be calculated by subtracting the combined length of the TCP header and the encapsulating IP header from the total IP datagram length (specified in the IP header).

  • Source port (16 bits) – identifies the sending port
  • Destination port (16 bits) – identifies the receiving port
  • Sequence number (32 bits) – has a dual role:
  • If the SYN flag is set (1), then this is the initial sequence number. The sequence number of the actual first data byte and the acknowledged number in the corresponding ACK are then this sequence number plus 1.
  • If the SYN flag is clear (0), then this is the accumulated sequence number of the first data byte of this packet for the current session.
  • Acknowledgment number (32 bits) – if the ACK flag is set then the value of this field is the next sequence number that the receiver is expecting. This acknowledges receipt of all prior bytes (if any). The first ACK sent by each end acknowledges the other end's initial sequence number itself, but no data.
  • Data offset (4 bits) – specifies the size of the TCP header in 32-bit words. The minimum size header is 5 words and the maximum is 15 words thus giving the minimum size of 20 bytes and maximum of 60 bytes, allowing for up to 40 bytes of options in the header. This field gets its name from the fact that it is also the offset from the start of the TCP segment to the actual data.
  • Reserved (3 bits) – for future use and should be set to zero
  • Flags (9 bits) (aka Control bits) – contains 9 1-bit flags
  • NS (1 bit) – ECN-nonce concealment protection (added to header by RFC 3540).
  • CWR (1 bit) – Congestion Window Reduced (CWR) flag is set by the sending host to indicate that it received a TCP segment with the ECE flag set and had responded in congestion control mechanism (added to header by RFC 3168).
  • ECE (1 bit) – ECN-Echo indicates
  • If the SYN flag is set (1), that the TCP peer is ECN capable.
  • If the SYN flag is clear (0), that a packet with Congestion Experienced flag in IP header set is received during normal transmission (added to header by RFC 3168).
  • URG (1 bit) – indicates that the Urgent pointer field is significant
  • ACK (1 bit) – indicates that the Acknowledgment field is significant. All packets after the initial SYN packet sent by the client should have this flag set.
  • PSH (1 bit) – Push function. Asks to push the buffered data to the receiving application.
  • RST (1 bit) – Reset the connection
  • SYN (1 bit) – Synchronize sequence numbers. Only the first packet sent from each end should have this flag set. Some other flags change meaning based on this flag, and some are only valid for when it is set, and others when it is clear.
  • FIN (1 bit) – No more data from sender
  • Window size (16 bits) – the size of the receive window, which specifies the number of bytes (beyond the sequence number in the acknowledgment field) that the sender of this segment is currently willing to receive (see Flow control and Window Scaling)
  • Checksum (16 bits) – The 16-bit checksum field is used for error-checking of the header and data
  • Urgent pointer (16 bits) – if the URG flag is set, then this 16-bit field is an offset from the sequence number indicating the last urgent data byte
  • Options (Variable 0-320 bits, divisible by 32) – The length of this field is determined by the data offset field. Options have up to three fields: Option-Kind (1 byte), Option-Length (1 byte), and Option-Data (variable). 
  • Padding – The TCP header padding is used to ensure that the TCP header ends and data begins on a 32 bit boundary. The padding is composed of zeros. 
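A sketch of how these fields pack into the 20-byte minimum header (no options), using Python's struct module; the checksum is left at zero here rather than computed over the pseudo-header:

```python
import struct

# Pack a minimal 20-byte TCP header (no options); checksum left as 0 for brevity.
def tcp_header(src_port, dst_port, seq, ack, flags, window,
               checksum=0, urgent_ptr=0):
    data_offset = 5                      # 5 x 32-bit words = 20 bytes, no options
    # 16-bit field: 4-bit data offset, 3 reserved bits, 9 flag bits.
    offset_flags = (data_offset << 12) | (flags & 0x1FF)
    return struct.pack("!HHIIHHHH", src_port, dst_port, seq, ack,
                       offset_flags, window, checksum, urgent_ptr)

SYN, ACK = 0x02, 0x10
header = tcp_header(12345, 80, seq=0, ack=0, flags=SYN, window=65535)
print(len(header))   # 20
```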


Protocol operation:
TCP protocol operations may be divided into three phases. Connections must be properly established in a multi-step handshake process (connection establishment) before entering the data transfer phase. After data transmission is completed, the connection termination closes established virtual circuits and releases all allocated resources.

A TCP connection is managed by an operating system through a programming interface that represents the local end-point for communications, the Internet socket. During the lifetime of a TCP connection it undergoes a series of state changes: 

  • LISTENING: In case of a server, waiting for a connection request from any remote client.
  • SYN-SENT: waiting for the remote peer to send back a TCP segment with the SYN and ACK flags set. (usually set by TCP clients)
  • SYN-RECEIVED: waiting for the remote peer to send back an acknowledgment after having sent back a connection acknowledgment to the remote peer. (usually set by TCP servers)
  • ESTABLISHED: The port is ready to receive/send data from/to the remote peer.
  • FIN-WAIT-1 : Indicates that the client has sent a FIN segment to start closing the connection and is waiting for the server to acknowledge it.
  • FIN-WAIT-2 : Indicates that the client is waiting for the server's FIN segment (which indicates the server's application process is ready to close and the server is ready to initiate its side of the connection termination).
  • CLOSE-WAIT : Indicates that the server is waiting for the application process on its end to signal that it is ready to close; once the local application signals that it is done, the server sends its FIN to the client.
  • LAST-ACK : Indicates that the server has sent its own FIN segment (which indicates the server's application process is ready to close and the server has initiated its side of the connection termination) and is waiting for the final acknowledgment.
  • TIME-WAIT : Represents waiting for enough time to pass to be sure the remote peer received the acknowledgment of its connection termination request. According to RFC 793 a connection can stay in TIME-WAIT for a maximum of four minutes, i.e. twice the maximum segment lifetime (2 MSL).
  • CLOSED : Connection is closed
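One compact way to picture the transitions between these states is a small table; this sketch covers only connection establishment, not the teardown states listed above:

```python
from enum import Enum, auto

class TcpState(Enum):
    CLOSED = auto()
    LISTEN = auto()
    SYN_SENT = auto()
    SYN_RECEIVED = auto()
    ESTABLISHED = auto()

# (current state, event) -> next state, for connection establishment only.
HANDSHAKE_TRANSITIONS = {
    (TcpState.CLOSED, "passive open"):               TcpState.LISTEN,
    (TcpState.CLOSED, "active open / send SYN"):     TcpState.SYN_SENT,
    (TcpState.LISTEN, "recv SYN / send SYN+ACK"):    TcpState.SYN_RECEIVED,
    (TcpState.SYN_SENT, "recv SYN+ACK / send ACK"):  TcpState.ESTABLISHED,
    (TcpState.SYN_RECEIVED, "recv ACK"):             TcpState.ESTABLISHED,
}
```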


Connection establishment:
To establish a connection, TCP uses a three-way handshake. Before a client attempts to connect with a server, the server must first bind to a port to open it up for connections: this is called a passive open. Once the passive open is established, a client may initiate an active open. To establish a connection, the three-way (or 3-step) handshake occurs:

  • SYN: The active open is performed by the client sending a SYN to the server. The client sets the segment's sequence number to a random value A.
  • SYN-ACK: In response, the server replies with a SYN-ACK. The acknowledgment number is set to one more than the received sequence number (A + 1), and the sequence number that the server chooses for the packet is another random number, B.
  • ACK: Finally, the client sends an ACK back to the server. The sequence number is set to the received acknowledgement value i.e. A + 1, and the acknowledgement number is set to one more than the received sequence number i.e. B + 1. At this point, both the client and server have received an acknowledgment of the connection.
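The sequence and acknowledgment numbers exchanged in those three steps can be traced in a few lines; A and B are random initial sequence numbers as described above, and wraparound at 2^32 is ignored for clarity:

```python
import random

# Trace the sequence/acknowledgment numbers of a three-way handshake.
A = random.getrandbits(32)          # client's initial sequence number
B = random.getrandbits(32)          # server's initial sequence number

syn     = {"flags": "SYN",     "seq": A}
syn_ack = {"flags": "SYN-ACK", "seq": B,     "ack": A + 1}
ack     = {"flags": "ACK",     "seq": A + 1, "ack": B + 1}

for segment in (syn, syn_ack, ack):
    print(segment)
```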


Selective acknowledgments:
Relying purely on the cumulative acknowledgment scheme employed by the original TCP protocol can lead to inefficiencies when packets are lost. For example, suppose 10,000 bytes are sent in 10 different TCP packets, and the first packet is lost during transmission. In a pure cumulative acknowledgment protocol, the receiver cannot say that it received bytes 1,000 to 9,999 successfully, but failed to receive the first packet, containing bytes 0 to 999. Thus the sender may then have to resend all 10,000 bytes.

To solve this problem TCP employs the selective acknowledgment (SACK) option, defined in RFC 2018, which allows the receiver to acknowledge discontinuous blocks of packets that were received correctly, in addition to the sequence number of the last contiguous byte received successively, as in the basic TCP acknowledgment. The acknowledgement can specify a number of SACK blocks, where each SACK block is conveyed by the starting and ending sequence numbers of a contiguous range that the receiver correctly received. In the example above, the receiver would send SACK with sequence numbers 1000 and 9999. The sender thus retransmits only the first packet, bytes 0 to 999.
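A sketch of how a receiver could derive SACK blocks from the out-of-order data it is holding; sequence numbers are in bytes, and real stacks also limit how many blocks fit into the options field:

```python
# Build SACK blocks from byte ranges received beyond the cumulative ACK point.
def sack_blocks(received_ranges, cum_ack):
    """received_ranges: list of (first_byte, last_byte) received out of order."""
    blocks = []
    for first, last in sorted(received_ranges):
        if last < cum_ack:
            continue                       # already covered by the cumulative ACK
        if blocks and first <= blocks[-1][1] + 1:
            blocks[-1] = (blocks[-1][0], max(blocks[-1][1], last))  # merge contiguous
        else:
            blocks.append((first, last))
    return blocks

# The example above: bytes 0-999 lost, bytes 1000-9999 received in later packets.
print(sack_blocks([(1000, 1999), (2000, 9999)], cum_ack=0))  # [(1000, 9999)]
```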

An extension to the SACK option is the duplicate-SACK (D-SACK) option, defined in RFC 2883. Out-of-order packet delivery can often falsely indicate to the TCP sender that a packet was lost; in turn, the TCP sender retransmits the suspected-to-be-lost packet and slows down the data delivery to prevent network congestion. The TCP sender undoes the slow-down, i.e. recovers the original pace of data transmission, upon receiving a D-SACK indicating that the retransmitted packet was a duplicate.

The SACK option is not mandatory and it is used only if both parties support it. This is negotiated when connection is established. SACK uses the optional part of the TCP header (see TCP segment structure for details). The use of SACK is widespread — all popular TCP stacks support it. Selective acknowledgment is also used in Stream Control Transmission Protocol (SCTP).

TCP Protocol

The Transmission Control Protocol (TCP), documented in RFC 793, makes up for IP's deficiencies by providing reliable, stream-oriented connections that hide most of IP's shortcomings. The protocol suite gets its name because most TCP/IP protocols are based on TCP, which is in turn based on IP. TCP and IP are the twin pillars of TCP/IP. TCP adds a great deal of functionality to the IP service it is layered over: 


  • Streams. TCP data is organized as a stream of bytes, much like a file. The datagram nature of the network is concealed. A mechanism (the Urgent Pointer) exists to let out-of-band data be specially flagged. 
  • Reliable delivery. Sequence numbers are used to coordinate which data has been transmitted and received. TCP will arrange for retransmission if it determines that data has been lost. 
  • Network adaptation. TCP will dynamically learn the delay characteristics of a network and adjust its operation to maximize throughput without overloading the network. 
  • Flow control. TCP manages data buffers, and coordinates traffic so its buffers will never overflow. Fast senders will be stopped periodically to keep up with slower receivers. 


Full-duplex Operation:
No matter what the particular application, TCP almost always operates full-duplex. The algorithms described below operate in both directions, in an almost completely independent manner. It's sometimes useful to think of a TCP session as two independent byte streams, traveling in opposite directions. No TCP mechanism exists to associate data in the forward and reverse byte streams. Only during connection start and close sequences can TCP exhibit asymmetric behavior (i.e. data transfer in the forward direction but not in the reverse, or vice versa). 

Sequence Numbers:
TCP uses a 32-bit sequence number that counts bytes in the data stream. Each TCP packet contains the starting sequence number of the data in that packet, and the sequence number (called the acknowledgment number) of the last byte received from the remote peer. With this information, a sliding-window protocol is implemented. Forward and reverse sequence numbers are completely independent, and each TCP peer must track both its own sequence numbering and the numbering being used by the remote peer. 

TCP uses a number of control flags to manage the connection. Some of these flags pertain to a single packet, such as the URG flag indicating valid data in the Urgent Pointer field, but two flags (SYN and FIN) require reliable delivery as they mark the beginning and end of the data stream. In order to ensure reliable delivery of these two flags, they are assigned spots in the sequence number space. Each flag occupies a single byte.

DATA TRANSFER PROTOCOL

Data flows across a network (any network, including the Internet) using a number of different protocols, including TCP/IP, UDP, FTP, CBR, etc. For the most part, each layer passes data down to the layer below it or up to the layer above. There are special situations in which the layers interact more directly; establishing a connection is one of them.

TCP:
The Transmission Control Protocol (TCP) is one of the core protocols of the Internet Protocol Suite. TCP is one of the two original components of the suite, complementing the Internet Protocol (IP), and therefore the entire suite is commonly referred to as TCP/IP. TCP provides reliable, ordered delivery of a stream of bytes from a program on one computer to another program on another computer. TCP is the protocol that major Internet applications such as the World Wide Web, email, remote administration and file transfer rely on. Other applications, which do not require reliable data stream service, may use the User Datagram Protocol (UDP), which provides a datagram service that emphasizes reduced latency over reliability.

TCP uses a sequence number to identify each byte of data. The sequence number identifies the order of the bytes sent from each computer so that the data can be reconstructed in order, regardless of any fragmentation, disordering, or packet loss that may occur during transmission. For every payload byte transmitted, the sequence number must be incremented. In the first two steps of the 3-way handshake, both computers exchange an initial sequence number (ISN). This number can be arbitrary, and should in fact be unpredictable to defend against TCP Sequence Prediction Attacks.

TCP primarily uses a cumulative acknowledgment scheme, where the receiver sends an acknowledgment signifying that the receiver has received all data preceding the acknowledged sequence number. The sender sets the sequence number field to the sequence number of the first payload byte in the segment's data field, and the receiver sends an acknowledgment specifying the sequence number of the next byte they expect to receive. For example, if a sending computer sends a packet containing four payload bytes with a sequence number field of 100, then the sequence numbers of the four payload bytes are 100, 101, 102 and 103. 

When this packet arrives at the receiving computer, it sends back an acknowledgment number of 104, since that is the sequence number of the next byte it expects to receive in the next packet. TCP uses an end-to-end flow control protocol to avoid having the sender send data too fast for the TCP receiver to receive and process it reliably. Having a mechanism for flow control is essential in an environment where machines of diverse network speeds communicate. For example, if a PC sends data to a hand-held PDA that is slowly processing received data, the PDA must regulate the data flow so as not to be overwhelmed.
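The arithmetic of the example is easy to check in code; this is just a sketch, and sequence-number wraparound at 2^32 is ignored:

```python
# Cumulative acknowledgment for a segment carrying payload bytes.
def expected_ack(seq_of_first_byte, payload_len):
    """The receiver ACKs the sequence number of the next byte it expects."""
    return seq_of_first_byte + payload_len

# Segment with sequence number 100 and 4 payload bytes (100, 101, 102, 103):
print(expected_ack(100, 4))   # 104
```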

IP:
The Internet Protocol (IP) implements datagram fragmentation, so that packets may be formed that can pass through a link with a smaller maximum transmission unit (MTU) than the original datagram size.
IP is responsible for moving packets of data from node to node. IP forwards each packet based on a four-byte destination address (the IP number). The Internet authorities assign ranges of numbers to different organizations. The organizations assign groups of their numbers to departments. IP operates on gateway machines that move data from department to organization to region and then around the world. 
The Internet Protocol was developed to create a Network of Networks (the "Internet"). Individual machines are first connected to a LAN (Ethernet or Token Ring). TCP/IP shares the LAN with other uses (a Novell file server, Windows for Workgroups peer systems). One device provides the TCP/IP connection between the LAN and the rest of the world. 
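A sketch of the fragmentation rule mentioned above: the payload is cut into pieces whose sizes are multiples of 8 bytes (except possibly the last), and each fragment carries its offset in 8-byte units plus a more-fragments flag. A fixed 20-byte header with no options is assumed:

```python
# Fragment an IP payload for a link MTU, assuming a fixed 20-byte header.
def fragment(payload: bytes, mtu: int, header_len: int = 20):
    max_data = ((mtu - header_len) // 8) * 8     # data per fragment, multiple of 8
    fragments = []
    for offset in range(0, len(payload), max_data):
        chunk = payload[offset:offset + max_data]
        more_fragments = offset + max_data < len(payload)
        fragments.append({"offset_units": offset // 8,   # offset in 8-byte units
                          "MF": more_fragments,
                          "data_len": len(chunk)})
    return fragments

# A 4000-byte payload over a 1500-byte MTU becomes fragments of 1480, 1480, 1040 bytes.
print(fragment(b"x" * 4000, mtu=1500))
```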

UDP:
UDP-based Data Transfer Protocol (UDT), is a high performance data transfer protocol designed for transferring large volumetric datasets over high speed wide area networks. Such settings are typically disadvantageous for the more common TCP protocol.
Initial versions were developed and tested on very high speed networks (1Gbit/s, 10Gbit/s, etc.); however, recent versions of the protocol have been updated to support the commodity Internet as well. For example, the protocol now supports rendezvous connection setup, which is a desirable feature for traversing NAT firewalls using UDP.

UDT has an open source implementation, which can be found on SourceForge. It is one of the most popular solutions for supporting high speed data transfer and is part of many research projects and commercial products.

The User Datagram Protocol offers only a minimal transport service -- non-guaranteed datagram delivery -- and gives applications direct access to the datagram service of the IP layer. UDP is used by applications that do not require the level of service of TCP or that wish to use communications services (e.g., multicast or broadcast delivery) not available from TCP.

UDP is almost a null protocol; the only services it provides over IP are checksumming of data and multiplexing by port number. Therefore, an application program running over UDP must deal directly with end-to-end communication problems that a connection-oriented protocol would have handled -- e.g., retransmission for reliable delivery, packetization and reassembly, flow control, congestion avoidance, etc., when these are required. The fairly complex coupling between IP and TCP will be mirrored in the coupling between UDP and many applications using UDP.
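The "almost null" nature of UDP is visible in how little code a datagram exchange needs; a local loopback sketch using Python's standard socket module:

```python
import socket

# Minimal UDP exchange over loopback: no connection, no acknowledgments,
# no retransmission -- just datagrams multiplexed by port number.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))              # let the OS pick a free port
port = receiver.getsockname()[1]

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"hello", ("127.0.0.1", port)) # fire and forget

data, addr = receiver.recvfrom(1024)         # would block forever if the datagram were lost
print(data, addr)

sender.close()
receiver.close()
```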

FTP:
File Transfer Protocol (FTP) is a standard network protocol used to transfer files from one host to another host over a TCP-based network, such as the Internet. It is often used to upload web pages and other documents from a private development machine to a public web-hosting server. FTP is built on client-server architecture and uses separate control and data connections between the client and the server. FTP users may authenticate themselves using a clear-text sign-in protocol, normally in the form of a username and password, but can connect anonymously if the server is configured to allow it.

The first FTP client applications were interactive command-line tools, implementing standard commands and syntax. Graphical user interface clients have since been developed for many of the popular desktop operating systems in use today, including general web design programs like Microsoft Expression Web, and specialist FTP clients such as CuteFTP. FTP is a client/server protocol.

CBR:
Constant Bit Rate (CBR) is an encoding method that keeps the bit rate the same, as opposed to VBR, which varies the bit rate. CBR processes audio faster than VBR due to its fixed bit rate value. The downside of a fixed bit rate is that the files produced are not as well optimized for quality vs. storage as VBR. For example, if there is a quiet section in a music track that doesn't require the full bit rate to produce good quality sound, CBR will still use the same value, thus wasting storage space. The same is true for complex sounds: if the bit rate is too low, quality will suffer. 

Constant bit rate (CBR) encoding is the default method of encoding with the Windows Media Format SDK. When using CBR encoding, you specify the target bit rate for a stream, and the codec uses whatever amount of compression is necessary to achieve it.

Constrained variable bit rate encoding (described in the following section) also enables you to know the bit rate prior to encoding, but since the rate is variable, the resulting file cannot be streamed as efficiently as a file encoded in CBR mode. With CBR, the bit rate over time always remains close to the average or target bit rate, and the amount of variation can be specified.
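The storage trade-off of a fixed bit rate is simple arithmetic; a small sketch comparing file sizes for a track encoded at two illustrative CBR target rates:

```python
# File size for constant bit rate audio: size = bit_rate * duration.
def cbr_file_size_bytes(bit_rate_kbps: int, duration_s: float) -> float:
    return bit_rate_kbps * 1000 * duration_s / 8   # bits -> bytes

# A 4-minute track at 128 kbps vs 256 kbps CBR (illustrative rates):
for rate in (128, 256):
    size_mb = cbr_file_size_bytes(rate, 240) / 1_000_000
    print(rate, "kbps ->", round(size_mb, 1), "MB")
```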

NS2 Simulator:
Ns is a discrete event simulator targeted at networking research. Ns provides substantial support for simulation of TCP, routing, and multicast protocols over wired and wireless (local and satellite) networks. 
While we have considerable confidence in ns, ns is not a polished and finished product, but the result of an on-going effort of research and development. In particular, bugs in the software are still being discovered and corrected. Users of ns are responsible for verifying for themselves that their simulations are not invalidated by bugs. We are working to help the user with this by significantly expanding and automating the validation tests and demos.

OTcl Linkage:
Ns is an object oriented simulator, written in C++, with an OTcl interpreter as a frontend. The simulator supports a class hierarchy in C++ (also called the compiled hierarchy in this document), and a similar class hierarchy within the OTcl interpreter (also called the interpreted hierarchy in this document). The two hierarchies are closely related to each other; from the user’s perspective, there is a one-to-one correspondence between a class in the interpreted hierarchy and one in the compiled hierarchy. The root of this hierarchy is the class TclObject. Users create new simulator objects through the interpreter; these objects are instantiated within the interpreter and are closely mirrored by a corresponding object in the compiled hierarchy. The interpreted class hierarchy is automatically established through methods defined in the class TclClass.

Ns uses two languages because the simulator has two different kinds of things it needs to do. On one hand, detailed simulations of protocols require a systems programming language which can efficiently manipulate bytes and packet headers, and implement algorithms that run over large data sets. For these tasks run-time speed is important and turn-around time (run simulation, find bug, fix bug, recompile, re-run) is less important. On the other hand, a large part of network research involves slightly varying parameters or configurations, or quickly exploring a number of scenarios. In these cases, iteration time (change the model and re-run) is more important. Since configuration runs once (at the beginning of the simulation), the run-time of this part of the task is less important. Ns meets both of these needs with two languages, C++ and OTcl. C++ is fast to run but slower to change, making it suitable for detailed protocol implementation. OTcl runs much slower but can be changed very quickly (and interactively), making it ideal for simulation configuration. Ns (via tclcl) provides glue to make objects and variables appear in both languages. Having two languages raises the question of which language should be used for what purpose. 

Our basic advice is to use OTcl:

• for configuration, setup, and “one-time” stuff
• if you can do what you want by manipulating existing C++ objects

and to use C++:

• if you are doing anything that requires processing each packet of a flow
• if you have to change the behavior of an existing C++ class in ways that weren’t anticipated

For example, links are OTcl objects that assemble delay, queuing, and possibly loss modules. If your experiment can be done with those pieces, great. If instead you want to do something fancier, then you’ll need a new C++ object. There are certainly grey areas in this spectrum: most routing is done in OTcl. We’ve had HTTP simulations where each flow was started in OTcl and per-packet processing was all in C++. This approach worked OK until we had hundreds of flows starting per second of simulated time. In general, if you’re ever having to invoke Tcl many times per second, you probably should move that code to C++.

Trace files:
Upon the completion of the simulation, trace files that capture the events occurring in the network are produced. The trace files capture information that can be used in a performance study, e.g. the number of packets transferred from source to destination, the delay in packets, packet loss, etc. However, the trace file is just a block of ASCII data in a file and is quite cumbersome to access without some form of post-processing technique.

nam files:
NAM stands for Network Animator. It is the visualization tool used in NS2 simulation. NAM files replay events from trace files. The nam trace file can be huge when the simulation time is long or events happen intensively.

ADHOC NETWORKS

A wireless ad-hoc network is a decentralized type of wireless network. The network is ad hoc because it does not rely on a preexisting infrastructure, such as routers in wired networks or access points in managed (infrastructure) wireless networks. Instead, each node participates in routing by forwarding data for other nodes, and so the determination of which nodes forward data is made dynamically based on the network connectivity. In addition to the classic routing, ad hoc networks can use flooding for forwarding the data.
An ad hoc network typically refers to any set of networks where all devices have equal status on a network and are free to associate with any other ad hoc network devices in link range. Very often, ad hoc network refers to a mode of operation of IEEE 802.11 wireless networks.

It also refers to a network device's ability to maintain link status information for any number of devices within a one-link (aka "hop") range, and thus this is most often a Layer 2 activity. Because this is only a Layer 2 activity, ad hoc networks alone may not support a routable IP network environment without additional Layer 2 or Layer 3 capabilities.

The earliest wireless ad-hoc networks were the "packet radio" networks (PRNETs) from the 1970s, sponsored by DARPA after the ALOHA net project. On wireless computer networks, ad-hoc mode is a method for wireless devices to directly communicate with each other. Operating in ad-hoc mode allows all wireless devices within range of each other to discover and communicate in peer-to-peer fashion without involving central access points (including those built in to broadband wireless routers).

To set up an ad-hoc wireless network, each wireless adapter must be configured for ad-hoc mode versus the alternative infrastructure mode. In addition, all wireless adapters on the ad-hoc network must use the same SSID and the same channel number.
An ad-hoc network tends to feature a small group of devices all in very close proximity to each other. Performance suffers as the number of devices grows, and a large ad-hoc network quickly becomes difficult to manage. Ad-hoc networks cannot bridge to wired LANs or to the Internet without installing a special-purpose gateway.

Ad hoc networks make sense when you need to build a small, all-wireless LAN quickly and spend the minimum amount of money on equipment. Ad hoc networks also work well as a temporary fallback mechanism if normally available infrastructure mode gear (access points or routers) stops functioning.

COMPUTER NETWORKS


          A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow sharing of resources and information. When at least one process in one device is able to send data to or receive data from at least one process residing in a remote device, the two devices are said to be in a network.

           Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope.
Communications protocols define the rules and data formats for exchanging information in a computer network, and provide the basis for network programming. Well-known communications protocols include Ethernet, a hardware and Link Layer standard that is ubiquitous in local area networks, and the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, as well as host-to-host data transfer and application-specific data transmission formats.
Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of these disciplines.

             Some form of network software is required. This network protocol software is installed through the network preferences. NetBIOS was commonly used, though more recently a secure TCP/IP protocol has been developed. The network protocol determines how computers become part of the network and how they are recognized. The network must have a name, and you can use some creativity at this point. Each computer must also have a unique name by which other computers on the network can access it.

If all goes well this is all you need, but often there will be a conflict that can be resolved by establishing exactly how the computers will communicate. To solve these types of conflict your network needs a set DNS server address, and each computer in the network needs to be assigned a unique IP address.

                 A computer with an Internet connection can also share that connection with other computers on the network, but you should check with your ISP what their policy on sharing Internet connections is. Sharing an Internet connection also raises some serious security issues. Many cable high speed Internet connections use the Network Neighborhood settings to create the Internet connection, and connecting your home or office network to this existing system can cause problems.

Routing:

                  Routing is the process of moving data from one network to another by forwarding packets via gateways. With IP based networks, the routing decision is based on the destination address in the IP packet's header. Routing also occurs in networks running other protocols, which are not discussed here. It is the process of learning all the paths through the network (routes) and using routes to forward data from one network to another. A protocol is a standardized way to perform a task. So, a routing protocol would be a standardized way of learning routes and moving data from one network to another. 

Routing protocols:

                 A routing protocol is a standardized process by which routers learn and communicate connectivity information, called routes, each of which describes how to reach a destination host and network. Routers that wish to exchange routing information must use the same routing protocol to communicate routing information. Routing protocols are used by routers to dynamically learn all paths through a set of networks and forward data between the networks. Routers are specialized computer devices designed to perform routing.

TRIAC

      A triac is equivalent to two thyristors connected back to back, as shown in the figure. Thus, it is a bidirectional switching device, in contrast to the thyristor, which is a unidirectional device with a reverse blocking characteristic that prevents the flow of current from cathode to anode. So, when the triac is in conduction mode, current can flow in both directions (forward and reverse). This switching device is called a TRIAC (TRIode AC switch), with the circuit symbol shown in the figure. The three terminals of the triac are designated MT2, MT1 and gate (G). These are similar to the terminals A (Anode), K (Cathode) and G (Gate) of the thyristor. The terminal MT1 is taken as the reference point for the measurement of the voltages and currents at the other two terminals, G (gate) and MT2, and the gate (G) is near to the terminal MT1. The thyristor conducts with current flowing from anode to cathode (positive direction) when a positive pulse is fed to the gate terminal with respect to the cathode while a positive voltage is applied between the anode and cathode terminals, the device being connected in series with the load. The triac conducts in the positive direction, from MT2 to MT1, when a positive pulse is applied at the gate (G) terminal with respect to MT1 and, at the same time, a positive voltage is applied between the two terminals MT2 (+) and MT1 (−). Similarly, the triac conducts in the negative direction, from MT1 to MT2, when a negative pulse is applied at the gate (G) terminal with respect to MT1 and, at the same time, a positive voltage is applied between the two terminals MT1 (+) and MT2 (−); note that the voltage between MT2 and MT1 is negative in this case. So, the triac can conduct in both directions (positive and negative), whereas the thyristor conducts in one (positive) direction only. Only one triac is needed where two thyristors would otherwise have to be used, with a consequent change in the control circuit. The V-I characteristics of both the thyristor and the triac are shown in the figure. A thyristor turns off (non-conducting mode) if the current through it falls below the holding current; similarly, a triac turns off if the magnitude of the current, irrespective of its direction, falls below the holding current. As a triac is connected in an ac circuit, if the load in the circuit is resistive, the triac turns off at the zero crossing points of the voltage in each half cycle (the supply (input) voltage reaches zero at the end of each half cycle). This is nearly valid if the load inductance is small, though in that case the triac turns off as the current through it goes to zero, shortly after the zero crossing point of the voltage in each half cycle. The case of higher inductance in the load has been discussed in detail in lesson #26 (module 3). The triac is a low power device, used in voltage control circuits such as light dimmers and speed control for single-phase fan motors. Some of the advantages and disadvantages of the triac follow from the description above.

Friday, June 14, 2013

Something something Movie Review

Something Something


Genre: Romance/Comedy
Type: Dubbed/bilingual (Theeya Velai Seiyyanum Kumaru in Tamil)
Banner: Lakshmi Ganapathi Films
Cast: Siddharth, Hansika Motwani, Brahmanandam, Venu Madhav, Sudha etc.
Music: Satya
Camera: Gopi Amarnath
Art: Guru Raj
Editing: Praveen KL & Srikanth NB
Dialogues: Veligonda Srinivas
Story - screenplay - dialogues - direction: Sundar C
Producer: Subhramanyam B, Suresh S
Release date: 14 June 2013


            Story:


Kumar (Siddharth) is a shy guy who works for a software company. Kumar has already faced some problems with girls, but he falls in love with Sanjana (Hansika Motwani) at first sight. However, she is close to the most handsome guy in the company (Ganesh Venkatraman). Kumar doesn't have the confidence to speak with Sanjana, so he seeks training from love guru Premji (Brahmanandam). With Premji's assistance, he plays a strategic mind game and diverts Sanjana's attention away from Venkatraman. The rest of the story is all about how Premji teaches the tricks, with the plot taking many turns and trouble arriving from an unexpected quarter. Brahmanandam appears in a full-length character and is the lifeline of the movie.



The direction department does well; director Sundar C handles the comedy portions well. Music by C. Sathya is good, and cinematography by Gopi Amarnath is good. The dialogues and punch dialogues for Premji's character are very good. Editing by Praveen KL and NB Sreekanth and the dubbing/production quality are what you would expect for a film with a major hero like Siddharth.


Rating: 4/5

Wednesday, June 5, 2013

Sanjana got problems with sreesanth's IPL spot fixing

Sanjana is going to face questioning by the Mumbai police. She has done a lot of films in Kannada, Tamil and Telugu. There have long been gossips around Sanjana and Sreesanth, and she has been close to Sreesanth for a long time. Sreesanth was caught by the police a while back, and since the investigation has taken a number of turns, it has shaken the Telugu, Tamil and Kannada industries, including Bollywood. "Mumbai police initially discovered details about Sanjana. As soon as possible, there will be a chance to question Sanjana," an official said. At a discotheque in Goa in 2009, Sreesanth and Sanjana were caught on camera dancing.

On Tuesday the court refused to grant bail to Sreesanth. His judicial custody expired on Tuesday. Once again the court has ordered an extension of Sreesanth's judicial custody, now extended to June 18.

Monday, May 27, 2013

INDUCTION MOTOR

     
    An induction motor, also called an asynchronous motor, is an alternating current motor in which the current in the rotor winding needed to produce torque is induced by electromagnetic induction from the magnetic field of the stator winding. Induction motors therefore do not require moving electrical contacts, such as a commutator or slip rings, which are necessary to transfer current to the rotor winding in other types of motor such as the universal motor. Rotor windings consist of short-circuited loops of conductors and come in two types: the wound rotor and the squirrel-cage rotor.

  Industry mostly depends on three-phase squirrel-cage induction motors for rotating force; they are widely used in industrial drives because they are rugged, dependable and efficient. Single-phase induction motors are used extensively for small loads, such as household appliances like fans. The simple induction motor is essentially a fixed-speed device; with variable-frequency drive systems, the speed can be varied. Variable-frequency drives offer especially important energy-saving opportunities for existing induction motors in variable-torque centrifugal fan, pump and compressor load applications. Squirrel-cage induction motors are very widely used in both fixed-speed and variable-frequency drive applications.

Principle of the induction motor:


As a general rule, conversion of electrical power into mechanical power takes place in the rotating part of an electric motor. In d.c. motors, the electric power is conducted directly to the armature through brushes and a commutator. In this sense, a d.c. motor can also be called a conduction motor. However, in a.c. motors the rotor does not receive electric power by conduction but by induction, in exactly the same way as the secondary of a two-winding transformer receives its power from the primary. That is why such motors are known as induction motors. An induction motor can be treated as a rotating transformer, i.e. one in which the primary winding is stationary but the secondary is free to rotate. Of all the a.c. motors, the polyphase induction motor is the one most widely used for various categories of industrial drives. 

Monday, May 20, 2013

RIGHTS OF AN UNPAID SELLER


 A seller of goods is deemed to be an unpaid seller when:      

  1. The whole of the price has not been paid or tendered; or
  2.  A bill of exchange or other negotiable instrument has been received as a conditional payment,and the condition on which it was received has not been fulfilled by reason of the instrument or otherwise.

The following conditions must be fulfilled before a seller of goods can be deemed to be an unpaid seller:

  1. He must be unpaid and the price must be due.
  2. He must have an immediate right of action for the price.
  3. A bill of exchange or other negotiable instrument was received but the same has been dishonoured.

     When payment is made by a negotiable instrument, it is usually a conditional payment, the condition being that the instrument shall be duly honoured. If the instrument is not honoured, the seller is deemed to be an 'unpaid seller'. A seller who has obtained a money decree for the price is still an unpaid seller if the decree has not been satisfied.

      'Seller' here means not only the actual seller, but also any person who is in the position of a seller, such as an agent of the seller to whom a bill of lading has been endorsed, or a consignee or agent who has himself paid for the goods or is directly responsible for the price.  

Monday, May 13, 2013

CHILD AND WOMAN LABOUR

By Chintu varma

"Child is the father of man-but we do precarious little for our children,for whom ostensibly all the great modern shrines are put". But in industries we have millions  of said faces mainly because child labour is prevalent. These child workers present melancholy picture of our modern industrialism, At present, we are although in age of scientific humanism, but even then our society still tolerates the abuses of child labour.

EFFECTS OF CHILD LABOUR:- Work in childhood can be a good thing and a national gain, but the conditions under which child labour is prevalent make it a social evil and a national waste. Child labour is directly related to child health and exerts a negative effect upon it. It tends to interfere with normal family life and to encourage the breakdown of the social control that is largely depended upon to preserve the existing social order. Further, it seriously interferes with children's education and precludes the most productive participation in the privileges and obligations of citizenship.

Sunday, May 12, 2013

The importance of human relation in modern industry

By Chintu Varma
Need of human relations:- Before the advent of modern industrialism, business enterprises were very small. They were mostly operated by their owners personally. They employed small numbers of workers from the local area. The owner and the manager were the same person, and business management was carried on according to his personal will. The workers and employers were personally known to each other. Thus the role of the employer was like that of a patriarch surrounded by a family to whom he owed some consideration and from whom he might expect respectful behaviour. Under these circumstances, the relations between the employer and the workers were direct and personal. Even today, businessmen of small means and local operation still retain some of these characteristics.

     The paternalistic relation between employers and workers was modified after the emergence of big business. Developed means of transport produced mass markets; new scientific inventions created new products. Steam and electricity provided power for industry. The structure of industries was therefore greatly modified. Management of industries became impersonal and created many social and human problems. In fact, modern industry is not merely a factory or work place. It is a complex system which involves patterns of interaction between people who are responding to each other in terms of their roles in the work organisation. Modern industry has become a social world in itself. According to Moore, "industry is social as well as mechanical, an organised group of workers as well as an efficient grouping of machines". Therefore the study of human relations with reference to industry is of great importance.

     The pattern of modern industrial organisation is of a bureaucratic type which involves various management problems. Modern industry reveals the characteristics of a social system. Its maintenance requires adequate diagnosis and understanding of both individuals and groups within the factory. Therefore, in response to the various problems which arise within industry, a new approach has evolved, known as 'human relations'.

     The human relations approach:- The human relations approach to management is of recent origin. The Hawthorne studies in the USA largely stimulated the rise of this new approach. The approach is obviously based on knowledge regarding human beings. Although the old approach to management organisation is still an important part of modern industry, the development of the human relations approach is of considerable importance. The defects and failures of bureaucratic management organisation have compelled people to think about the new approach.

The main objectives of the human relations approach in business organisation are:
(i) productivity
(ii) worker satisfaction
As such, the approach is oriented towards achieving these twin objectives. Under this approach, much importance is given to the interpersonal and organisational circumstances of the job.

     The various studies conducted with regard to human relations demonstrate that a properly motivated group of workers can produce more even under adverse conditions. Therefore, in order to increase production and worker satisfaction, informal organisation and human attitudes must be considered vital parts of industry. In this regard, Charles B. Spaulding has pointed out the demerits of the older system as under:-

      "While many of techniques of the order management theory have continued to be found useful, as a series of the research endeavor and theoretical analyses has pointed out that its highly formalised approach often fails to function in accordance to the original intent; the workers of often deliberately limit production; incentive plans based on time and motion studies are exceeding difficult to administer; conflicts develop between line and staff officers rules are some times, ignored, frequently failed to motivate workers, and many actually perpetuate the conditions they we designed to overcome; specialized bureaucrats fail to see beyond their own specialities is poorly disturbed in the organisation as career ladder become fixed, and promotion by seniority is established the organisation may fail to adjust to a changing environments simply cannot deal with the size and complexity involved,"                                                                               

