The most widely used protocol for the ring topology is the token ring. This is probably the oldest ring control technique, originally proposed in 1969 and referred to as the Newhall ring. This has become the most popular ring access technique in the United States.
The token ring technique is based on the use of a small token packet that circulates around the ring. When all stations are idle, the token packet is labeled as a "free" token. A station wishing to transmit must wait until it detects a token passing by. It then changes the token from "free token" to "busy token" by altering the bit pattern. The station then transmits a packet immediately following the busy token. There is now no free token on the ring, so other stations wishing to transmit must wait. The packet on the ring will make a round trip and be purged by the transmitting station. The transmitting station will insert a new free token on the ring when both of the following conditions have been met:

1. The station has completed transmission of its packet.
2. The busy token has returned to the station after completing its round trip.
The use of a token guarantees that only one station at a time may transmit.
When a transmitting station releases a new free token, the next station downstream with data to send will be able to seize the token and transmit.
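The basic cycle described above can be sketched in a few lines of code. This is a simplified simulation, not the 802 frame format: station names, the one-packet-per-token policy, and the function names are all illustrative.

```python
# Sketch of the basic token ring cycle: a free token circulates, a station
# with queued data seizes it, transmits one packet, purges it after the
# round trip, and reissues a free token for the next station downstream.

from collections import deque
from dataclasses import dataclass, field

@dataclass
class Station:
    name: str
    queue: deque = field(default_factory=deque)   # packets waiting to be sent

def run_ring(stations, token_passes):
    """Move the token downstream one station per pass and log transmissions."""
    log = []
    for i in range(token_passes):
        s = stations[i % len(stations)]
        if s.queue:                              # free token arrives; seize it
            pkt = s.queue.popleft()              # busy token + packet circulate,
            log.append(f"{s.name} sends {pkt}")  # are purged by the sender...
        # ...and a free token passes to the next station downstream
    return log

ring = [Station("A", deque(["a1"])), Station("B"), Station("C", deque(["c1", "c2"]))]
print(run_ring(ring, token_passes=6))
# ['A sends a1', 'C sends c1', 'C sends c2']
```

Note that station C cannot send both of its packets in one turn here: after each transmission the token moves on, which is exactly the round-robin fairness the token mechanism provides.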
As with token bus, token ring requires fault management techniques. The key error conditions are no token circulating and a persistently busy token. To address these problems, one station is designated as the active token monitor. The monitor detects the lost-token condition by using a timeout greater than the time required for the longest frame to traverse the ring completely. If no token is seen during this time, it is assumed to be lost. To recover, the monitor purges the ring of any residual data and issues a free token. To detect a circulating busy token, the monitor sets a monitor bit to 1 on any passing busy token. If it sees a busy token with the monitor bit already set, it knows that the transmitting station failed to purge its packet, and it changes the busy token to a free token. The other stations on the ring act as passive monitors. Their primary job is to detect failure of the active monitor and assume that role; a contention-resolution algorithm determines which station takes over.
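The active monitor's two checks can be summarized as a small decision function. This is a hypothetical sketch of the logic just described, with field and action names chosen for illustration; it is not the 802 state machine.

```python
# Sketch of the active monitor's checks: a valid-token timer catches the
# lost-token condition, and the monitor bit catches a persistently busy token.

def monitor_step(frame, timer_expired):
    """frame: dict with 'busy' and 'monitor_bit' fields, or None if nothing
    has passed. Returns the action the active monitor takes."""
    if timer_expired:
        # No token seen within the timeout: purge the ring, issue a free token.
        return "purge_and_issue_free_token"
    if frame and frame["busy"]:
        if frame["monitor_bit"]:
            # Second sighting: the transmitter failed to purge its packet,
            # so convert the busy token to a free token.
            return "convert_to_free_token"
        frame["monitor_bit"] = 1   # first sighting: mark it and pass it on
        return "pass_marked"
    return "pass"
```

A healthy busy token makes at most one full pass of the monitor; seeing it a second time (monitor bit already set) is the signal that recovery is needed.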
The token ring technique shares many of the advantages of token bus. Perhaps its principal advantage is that traffic can be regulated, either by allowing stations to transmit differing amounts of data when they receive the token, or by setting priorities so that higher-priority stations have first claim on a circulating token.
The principal disadvantage of token ring is the requirement for token maintenance. Loss of the free token prevents further utilization of the ring. Duplication of the token can also disrupt ring operation. One station must be elected monitor to assure that exactly one token is on the ring and to reinsert a free token if necessary.
The IEEE 802 token ring standard follows the general scheme outlined above. The major addition is a scheme for capacity allocation using priorities. The 802 specification provides for eight levels of priority. It does this by providing two three-bit fields in each packet and token: a priority field and a reservation field.
For clarity, let us define three variables:
Pm = priority of message to be transmitted by a station;
Pr = received priority; and
Rr = received reservation.
The scheme works as follows:
A station wishing to transmit must wait for a free token with Pr ≤ Pm.
While waiting, a station may reserve a future token at its priority level (Pm).
If a busy token (a packet) goes by and the reservation field is less than the station's priority (Rr < Pm), the station may set the reservation field to its priority (Rr <- Pm). It does the same for a passing free token that it cannot seize. This has the effect of pre-empting any lower-priority reservations.
When a station seizes a token, it sets the reservation field to 0 and leaves the priority field unchanged.
Following transmission after seizing a token, a station issues a new token with the priority set to the maximum of Pr, Rr, and Pm, and a reservation set to the maximum of Rr and Pm.
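The rules above translate directly into a few small functions. This is a sketch using Pm, Pr, and Rr as defined earlier; the function names are illustrative, not from the 802 specification.

```python
# Sketch of the 802 priority/reservation rules listed above.
# Pm = station's message priority, Pr = received priority, Rr = received reservation.

def can_seize(Pm, Pr):
    """A station may seize a free token only when Pr <= Pm."""
    return Pr <= Pm

def bid(Pm, Rr):
    """While waiting, raise the reservation field to Pm if Rr < Pm."""
    return Pm if Rr < Pm else Rr

def on_seize(Pr):
    """Seizing a token: reservation cleared to 0, priority unchanged."""
    return {"priority": Pr, "reservation": 0}

def new_token(Pm, Pr, Rr):
    """After transmitting: priority = max(Pr, Rr, Pm), reservation = max(Rr, Pm)."""
    return {"priority": max(Pr, Rr, Pm), "reservation": max(Rr, Pm)}
```

For example, a station with Pm = 3 cannot seize a token circulating at priority 5 (`can_seize(3, 5)` is false), but it can bid for the next one by raising a reservation of 1 to 3 (`bid(3, 1)` returns 3).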
The effect of the above steps is to sort out competing claims and allow the waiting transmission of highest priority to seize the token as soon as possible. A station having a higher priority than the current busy token can reserve the next free token for its priority level as the busy token passes by. When the current transmitting station is finished, it issues a free token at that higher priority. Stations of lower priority cannot seize the token, so it passes to the requesting station of equal or higher priority with data to send.
A moment's reflection reveals that, as is, the algorithm has a ratchet effect on priority, driving it to the highest used level and keeping it there. To avoid this, the station that upgraded the priority level is responsible for downgrading it to its former level when all higher-priority stations are finished. When that station sees a free token at the higher priority, it can assume that there is no more higher-priority traffic waiting, and it downgrades the token before passing it on.
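The downgrade rule can be sketched with a small stack of remembered priority levels, in the spirit of the mechanism just described. This is a simplification with hypothetical class and method names, not the standard's exact procedure.

```python
# Sketch of the downgrade rule: the station that raised the token's priority
# remembers the level it replaced and restores it when a free token comes
# back at the raised level (meaning no higher-priority traffic remains).

class UpgradingStation:
    def __init__(self):
        self.owed = []   # (old_priority, raised_priority) pairs to undo

    def raise_priority(self, old_priority, reserved_priority):
        """Issue the free token at the reserved (higher) level, remembering
        the level it replaced."""
        self.owed.append((old_priority, reserved_priority))
        return reserved_priority

    def on_free_token(self, token_priority):
        """A free token seen again at the raised level means all
        higher-priority stations are finished: downgrade it."""
        if self.owed and self.owed[-1][1] == token_priority:
            old, _ = self.owed.pop()
            return old
        return token_priority
```

The stack allows nested upgrades: if the token is raised from 1 to 4 and later from 4 to 6, the downgrades unwind in reverse order, which is what prevents the ratchet effect.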
The figure below depicts an example of the operation of the priority mechanism.
Rajeev Kumar is the primary author of How2Lab. He is a B.Tech. from IIT Kanpur with several years of experience in IT education and software development. He has taught a wide spectrum of people, including fresh young talents, students of premier engineering colleges and management institutes, and IT professionals.

Rajeev has founded Computer Solutions & Web Services Worldwide. He has hands-on experience building a variety of websites and business applications, including SaaS-based ERP and e-commerce systems, and cloud-deployed operations management software for healthcare, manufacturing, and other industries.