11

I'm an engineer in the automation field and have slowly been switching to networking. I started in the automation world 12 years ago.

With an automation mindset, we use rings (like MRP rings) everywhere. However, I've noticed that IT network engineers hate rings and never use them, and I'm wondering why.

I mean it's a great way of providing redundancy with low cost.

Are there specific reasons why rings are (almost) never used in IT networks?


Thank you all for your answers. As a conclusion drawn from all of them:

  1. Rings typically require the same speed and the same type of physical links among the ring nodes. That's an undesirable limitation in IT networks.
  2. Each link in the ring needs enough capacity for the traffic of all nodes on the ring. Again... a limitation.
  3. In a tree network, if one or more nodes fail (as long as it's not the backbone), the rest of the nodes are unaffected.
  4. A tree structure offers far fewer hops (two) compared to the potentially very high number of hops in a ring.
  5. Rings are sometimes used in IT, typically in the backbone.

Therefore, rings in the automation world offer basic redundancy for what are usually static, simple, low-bandwidth networks.

Ron Maupin
AhmedWas
  • There are some ring structures, like the internal bus in the Sony Cell CPU, but these are in places where individual links are not expected to go down. – Simon Richter Oct 22 '22 at 11:51
  • You should accept an answer that helped you. – Ron Maupin Oct 25 '22 at 16:13
  • @RonMaupin That's what I usually do. But here, all of them helped me :) I mean each answer gave one or more reasons, but not a full answer. I'll try to find the answer that helped the most and accept it. – AhmedWas Oct 26 '22 at 06:02
  • Has any answer solved your question? Then please accept it or your question will keep popping up here forever. Please also consider voting for useful answers. – Zac67 Oct 29 '23 at 19:00

5 Answers

25

Historically, there were several competing technologies using ring structures like Token Ring or FDDI, but due to higher cost, lower performance, or simply slower development they've all vanished.

Modern, ubiquitous Ethernet uses switches to bridge all network ports together, so any ring or other looped topology creates a bridge loop, bringing down the network unless it is explicitly dealt with. Redundant links require a means to mitigate the bridge loops they form, most prominently by blocking redundant ports through a spanning tree protocol (MSTP, RSTP, obsolete STP or proprietary RPVST+) or by using routing algorithms with switching (Shortest Path Bridging or TRILL).
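
To make the bridge-loop point concrete, here is a minimal sketch in plain Python (switch names invented, computed centrally for illustration only - RSTP reaches the same result distributedly by exchanging BPDUs): it builds a six-switch ring and keeps only a loop-free subset of its links, leaving exactly one link "blocked".

```python
# Illustrative only: compute a loop-free subset of a ring's links, which is what
# a spanning tree protocol converges to (RSTP does this distributedly via BPDUs).
switches = ["sw1", "sw2", "sw3", "sw4", "sw5", "sw6"]        # invented names
ring_links = [(switches[i], switches[(i + 1) % len(switches)])
              for i in range(len(switches))]                 # 6 switches, 6 links -> a loop

parent = {sw: sw for sw in switches}                         # union-find to detect loops

def root(x):
    while parent[x] != x:
        x = parent[x]
    return x

forwarding, blocked = [], []
for a, b in ring_links:
    if root(a) == root(b):          # this link would close a loop...
        blocked.append((a, b))      # ...so the spanning tree leaves it out ("blocking")
    else:
        parent[root(a)] = root(b)
        forwarding.append((a, b))

print("forwarding links:", forwarding)   # 5 links
print("blocked link(s): ", blocked)      # exactly 1 link
```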

Accordingly, the 'native' topology for Ethernet is a tree (sometimes called a 'multi-star'). Advantages of a tree over a ring include a smaller network diameter, higher efficiency, and lower latency.

Using a fat tree, with increasing bandwidth towards the root, that topology can also be scaled extremely well. (Just imagine 100 switches in a three-tier tree - or in a ring...)

A ring network with one of the links blocked by xSTP:

[Diagram: ring of six switches with one link blocked by spanning tree]

If one of the switches fails, the blocked link changes to forwarding and five switches continue. If another switch dies, the network breaks in half.

The diameter of that network is five hops, with delays to match. All traffic between non-adjacent switches needs to cross all intermediate switches and links - if there's no ample bandwidth, congestion is more likely than in a tree. Adding more switches makes that problem worse. Also, more than seven switches can exceed STP's design limit (which assumes a maximum bridge diameter of seven) and the network might not (re)converge.

A tree network (collapsed core):

[Diagram: collapsed-core tree - one core switch with access switches attached]

If one of the access switches fails, nothing else happens. If the single core switch dies, everything is offline. If there's a redundant core switch then that network is hard to bring down.

Note the diameter of just two hops. That diameter doesn't even change when you add some more switches. If you need to add more switches than the core can connect, then you add a distribution tier. That way, the tree can easily grow to more than 1500 switches (of 40+ ports) with a diameter of just four.
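
To put numbers on the diameter argument, here's a small pure-Python sketch (node labels invented) that measures worst-case hop counts with a breadth-first search: the six-switch ring above, with one link blocked, behaves like a chain with a five-hop diameter, while the collapsed-core tree stays at two hops no matter how many access switches hang off the core.

```python
# Hop-count comparison of the two topologies discussed above (standard library only).
from collections import deque

def diameter(adj):
    """Longest shortest path (in hops) between any two nodes of the graph."""
    def hops_from(src):
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist
    return max(max(hops_from(n).values()) for n in adj)

# 6-switch ring after xSTP has blocked one link: effectively a chain.
chain = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}

# Collapsed core: one core switch (0) with five access switches hanging off it.
star = {0: [1, 2, 3, 4, 5], **{i: [0] for i in range(1, 6)}}

print("ring with blocked link:", diameter(chain), "hops")  # 5
print("collapsed-core tree:   ", diameter(star), "hops")   # 2
```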

[Diagram: three-tier hierarchical network, from the Cisco Networking Academy Connecting Networks Companion Guide: Hierarchical Network Design]

PS: The way that question was asked made me think of the data link layer only. On the network layer, routers can make much better use of meshed networks than bridges/switches.

While on the network layer you can more easily use whatever topology makes sense for you, the diameter, efficiency and latency arguments from above still apply. However, there may be more important aspects when designing your network - a ring of (a few) core routers can make a lot of sense for a large network.

Zac67
  • It should be stressed... STP breaks the loop by turning off ("blocking") links creating redundant paths. SPB and TRILL attempt to address that shortcoming. – Ricky Oct 21 '22 at 15:43
  • There is also the somewhat niche protocol ERPS, which is a bit like STP but for ring topology only. (The advantage is much faster failover time) – user253751 Oct 21 '22 at 17:24
  • @user253751 Have considered to include that as well but I've actually never seen it in real life - and it sure does have the disadvantages pointed out above. – Zac67 Oct 21 '22 at 18:39
  • Historically and at the physical layer rings made sense. But we've moved on such that modern physical networks are all point to point links with smarter management protocols. I'm not a fan of most of what is written here as it is very inaccurate (perhaps imprecise is a better word), but the central concept is right. We've replaced sprawling physical networks where rings worked with smart protocols. Given that a modern Ethernet physical layer has just two nodes, it could be argued semantically that it is a degenerate ring. – Doug Oct 21 '22 at 19:37
  • @Ricky: We built this in college where you could do this without links turning off. It was somewhat annoying because you had to load the topology by hand into all the switches. They would then load balance. (Host discovery worked correctly; only switches had to have their topology loaded.) – Joshua Oct 23 '22 at 04:34
  • @Doug Most networks are P2P. Bus networks like CAN or RS-485 are still very much alive and well, just not in the type of settings that IT people are likely to be working with them. – Austin Hemmelgarn Oct 24 '22 at 02:08
  • @Joshua What you are describing is the instances in MST - multiple spanning-tree. The blocking happens per instance/vlan. It's cumbersome to manage, while it's a "smaller hammer", it's still a sub-optimal mechanism. – Ricky Oct 25 '22 at 18:12
  • @Doug No, this is absolutely accurate. Ethernet was never designed for loops ("rings"). If you have A-B-C-A loop, a broadcast will propagate around the ring in both directions forever. There's no ID or tracking of frames, and there's no hop count / TTL. Once A sends that broadcast to B and C, it will never know if that frame comes back. (and it will) Likewise B and C won't know that frame came from A and thus don't need to send it along. STP fixes this by breaking the loop, by keeping track of BPDUs. – Ricky Oct 25 '22 at 18:20
  • @Ricky: It didn't block. It used both paths and increased its performance. We didn't set up any VLANs. – Joshua Oct 25 '22 at 18:27
  • @joshua What you're describing is possible but entirely unclear. If you're using SPB or TRILL the switches would organize themselves. Possibly some proprietary stacking/fabric protocol was used but they're also pretty self organizing. Does sound like MSTP (which just uses multiple spanning tree instances, so there's never really a ring). – Zac67 Oct 25 '22 at 19:01
  • @Ricky You've misunderstood my comment. A lot of what is here is an imprecise blend of bits and pieces across multiple layers. It's not wrong, but I'm not a fan of how it's strung together (pun intended). Might be just me... I'm thinking of this strongly in terms of IT physical networks, so 10Base5 w/ vampire taps broadcasting across coax, TokenRing passing, Ethernet hubs allowing mixed-media, TokenRing switching hubs, etc. up to modern speed store and forward Ethernet switching. What I am getting at is that the "network" today is just a single P2P cable connecting a switch and a NIC. – Doug Nov 18 '22 at 11:57
22

The downside of ring networks, as opposed to for example star networks, is that in a ring topology every link needs enough capacity to carry the traffic of all nodes on that ring in order to handle the outage of any single link. So as your ring grows, the required capacity on each and every link in the ring needs to grow as well. This scales very badly.
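
A rough sketch of that scaling (illustrative numbers only, every node assumed to exchange one unit of traffic with every other node): after the worst single link failure a ring degenerates into a chain, and its busiest remaining link has to carry roughly N²/4 flows, while an access link in a star only ever carries the flows of the one node behind it.

```python
# Illustrative sketch: worst-case link load with all-to-all unit traffic,
# comparing a ring that has lost one link (now a chain) against a star.

def worst_ring_link_load(n):
    # The link between position i and i+1 of the chain carries every flow whose
    # endpoints sit on opposite sides of it: (i + 1) * (n - i - 1) flows.
    return max((i + 1) * (n - i - 1) for i in range(n - 1))

def worst_star_link_load(n):
    # Each access link only carries the flows of the single node behind it.
    return n - 1

for n in (6, 12, 24, 48):
    print(f"{n:>3} nodes: busiest ring link carries {worst_ring_link_load(n):>4} units, "
          f"a star access link carries {worst_star_link_load(n):>3}")
```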

Teun Vink
10

Historically, we did use ring topologies - e.g. token ring. It didn't have redundancy. Whenever the ring broke, the whole network went down.

Bus topologies (Ethernet over coaxial) were a bit better in that there was only a single cable to care about. Still, the whole network went down when something went wrong anywhere along it.

The data rates are generally much higher than in automation, and one has to take care of things like signal propagation, termination and reflections.

Star-like and tree-like topologies like Ethernet over twisted pair came with much more fail-safety and much more graceful failure: if something on the network breaks, the rest keeps running as usual, and it's obvious what to fix.

Another bonus of the star-like networks that came with more intelligent (switching) hubs is that they don't require all devices to have the same properties, as bus/ring topologies do. Devices can have different data rates (10, 100, 1000, 2500, 5000 or 10000 Mbit/s) and even different media (copper and optical fiber mix very well). The switching hub makes all these differences transparent.


Where we DO have ring-like and grid-like topologies in modern networking is in the backbone connections. A campus-wide network backbone is usually at least a ring. A cell tower quite often has more than one backhaul connection. Internet service providers usually have city-wide grids. Etc, etc, etc...

Those redundant links are either used for reliability (to route around failures) or for bigger capacities, or both.

Of course, complex topologies require complex routing protocols as well.

fraxinus
  • Your first paragraph is complete nonsense. When a Token Ring MAU (the original relay-technology hub) loses phantom (voltage) from a station, the station is bypassed and the ring self-heals. Thinwire Ethernet just breaks completely due to the reflections from the unterminated cable at the break. FDDI had dual counter-rotating rings for redundancy (but also existed in single-ring form). Carrier technologies SONET and SDH are often configured in rings. – grahamj42 Oct 21 '22 at 19:07
  • @grahamj42 I think the opening sentence was referring to literal line-breaks (i.e. with a pair of scissors): a single snip in the ring cables will definitely bring down a token-ring network, wouldn't it? – Dai Oct 21 '22 at 21:24
  • @Dai - no, if the transmit pair from the station is cut, the ring heals in a fraction of a second. If the receive pair to the station is cut, the station will wait for a token, send a beacon (a frame signalling an error) and if it doesn't receive a beacon or a new token within a set time it knows the ring is broken and removes the phantom, thus cutting itself out of the ring. So there will be a brief loss of service, and the station with the faulty cable may try again to enter the ring after a number of seconds. The address of the faulty station is visible to a management app. – grahamj42 Oct 21 '22 at 22:43
  • @grahamj42 - a break in the ring, not a break between the ring and a station. – brhans Oct 23 '22 at 02:57
  • @brhans - the ring is always between stations; the MAU just uses relays to connect stations together. Between MAUs there are two circuits (same wiring as the station cable). When an empty MAU B is plugged into MAU A with one cable, the ring extends from A to B and back again, using both pairs. When the second connection is made from B to A, all the traffic flows in the first pair between the MAUs and the second pair is a hot standby. The active pair is protected by phantom like the station cable. If you cut a "backbone ring" in two places, you have two working rings. – grahamj42 Oct 23 '22 at 08:56
7

I mean it's a great way of providing redundancy with low cost.

That is only true as long as multiple faults don't happen at the same time - that is, as long as each fault is fixed before the next one occurs.

Computers come and go all the freaking time, people move to different offices, people have laptops which they pack up at the end of the day. Some people turn their computers off at night.

In a typical office situation, any network technology where faults with, or the removal of, end systems can disrupt the network will be far more fragile than one built around a central infrastructure device, even if that central infrastructure device is a single point of failure.

This is why twisted pair Ethernet with its hubs and point-to-point links was a breath of fresh air compared to prior technologies like coaxial Ethernet and token ring. At the end-user level rings suck.

Rings at the infrastructure level have less risk from multiple simultaneous faults, but then you get into the capacity issues that Zac and Teun mention.

Doug Deden
Peter Green
3

Are there specific reasons why rings are (almost) never used in IT networks?

They are stable enough without it. There's simply no big push for more redundancy. Remember that a ring only protects you against some fairly specific types of failure - which are rare. MTBF for switches is on the order of tens of years: MTBF Cisco Switches
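
As a back-of-the-envelope illustration (the 30-year MTBF is an assumed figure, not a vendor number, and failures are modelled as memoryless/exponential): a single switch rarely fails within a year, a chain of switches that all must work fails noticeably more often, and a redundant pair hardly ever loses both members - ignoring repair, which makes that last number pessimistic.

```python
# Back-of-the-envelope sketch with assumed numbers: turn an MTBF into a yearly
# failure probability, assuming memoryless (exponentially distributed) failures.
import math

MTBF_YEARS = 30.0                              # assumed, not a vendor figure
p_single = 1 - math.exp(-1 / MTBF_YEARS)       # P(one switch fails within a year)

# A chain of switches that all must work, vs. a redundant pair (ignoring repair,
# which makes the pair number pessimistic - in practice a failed unit gets replaced).
p_chain_of_6 = 1 - (1 - p_single) ** 6
p_redundant_pair = p_single ** 2

print(f"single switch, one year   : {p_single:.1%}")          # ~3.3%
print(f"any of 6 in a chain       : {p_chain_of_6:.1%}")      # ~18%
print(f"both of a redundant pair  : {p_redundant_pair:.2%}")  # ~0.11%
```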

And in a dynamic environment, ring networks such as MRP increase the risk of downtime in my experience. MRP is great for a static setup, not so much if you have changing components in it.

So TL;DR: there's no demand for it. Switches don't fail often enough.

vidarlo
  • MTBF means there's a 50% chance of failure after that time - it's not a guarantee that a given device lasts that long. But you're absolutely correct, that chaining devices significantly increases the overall failure chance. If some device's failure rate is 50% in a given time period, two of those devices in sequence have a failure rate of 75%. – Zac67 Oct 22 '22 at 13:10
  • @Zac67 In addition the failure rate is a bathtub curve. But ultimately it boils down to risk tolerance. – vidarlo Oct 22 '22 at 13:27
  • Note that those MTBF numbers do not include e.g. "whoops the sprinkler above that switch failed". – TLW Oct 23 '22 at 03:37
  • @Zac67 Certainly chaining increases the probability of failure, but generally in a situation where you're chaining things you're also able to create redundant links. With edge switches having two connections to two different core switches (a not uncommon situation IIRC, though I've not wired large offices and buildings for decades) your failure probability is basically just that of the edge switch. – cjs Oct 24 '22 at 07:59
  • For the record, in 50+ years, I've never seen a sprinkler "fail". I've seen a few offices flooded after one is hit with a ladder, etc. But that tends to be in hallways where there aren't any switches, or in data rooms during a move when the rack(s) don't have any hardware in them. HOWEVER, I did once see a rack drenched in coolant when a top-of-rack chiller failed. – Ricky Oct 25 '22 at 18:30
  • It's been my experience switches fail more often from software than hardware. Eventually, and that may be years, the bits get out of order and the switch needs a reboot. – Ricky Oct 25 '22 at 18:35