Why Latency is a Huge Challenge for both Edge MEC and 5G Applications

by Vamsi Chemitiganti

As we have seen in many blogs (https://www.vamsitalkstech.com/tag/edge/) over the past few months, edge use cases have started to emerge across various industries that need to combine cloud resources with local processing and storage of data based on business needs. One of the key challenges that edge applications must address is latency.

Five Technical Challenges for Edge Applications

Edge computing brings processing and analysis closer to data sources, enabling real-time responsiveness and reducing bandwidth consumption. Key drivers for industrial edge computing include:

  1. Low-latency requirements: Immediate processing minimizes delays critical for real-time applications.
  2. High data volume and bandwidth constraints: Processing data locally minimizes network usage and associated costs.
  3. Autonomous or disconnected operation: Edge computing ensures continued functionality even when offline or with limited connectivity.
  4. Privacy, security, and data sovereignty concerns: Local processing reduces reliance on external data centers and potentially improves data control.
  5. Network cost considerations: Edge processing reduces reliance on expensive long-distance data transfers.

The First Challenge – Latency

Latency is the time delay between the initiation of a command or input on one side and its reception on the other. It is usually measured in milliseconds (ms). High latency means a larger delay between sending and receiving; low latency means a small one.
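To make the definition concrete, here is a minimal sketch of measuring one round trip in Python. It times a send/echo/receive cycle over a local socket pair, so the number it reports is loopback latency only; the function name and payload are illustrative choices, not part of any standard API.

```python
import socket
import time

def loopback_rtt_ms(payload: bytes = b"ping") -> float:
    """One send/echo/receive round trip over a local socket pair, in ms."""
    a, b = socket.socketpair()
    try:
        start = time.perf_counter()
        a.sendall(payload)      # "client" sends the probe
        b.sendall(b.recv(64))   # "server" side echoes it straight back
        a.recv(64)              # "client" receives the echo
        return (time.perf_counter() - start) * 1000.0
    finally:
        a.close()
        b.close()
```

Measuring across a real network path (for example, timing a TCP connect to a remote host) follows the same pattern, but the result then includes propagation, queuing, and processing delays along the way.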

Latency is typically affected by distance: the farther data must travel from sender to receiver, the longer it takes to arrive, while shorter distances yield lower latency. Latency can also be affected by software and hardware elements in the network path, along with network congestion at traffic exchange points, a situation analogous to driving on a congested freeway.
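The distance contribution can be estimated directly from physics: light in optical fiber travels at roughly two-thirds the speed of light in a vacuum, about 200,000 km/s. The sketch below uses that round figure to compute the best-case one-way propagation delay, which is a floor that no amount of hardware or software optimization can remove.

```python
def fiber_delay_ms(distance_km: float) -> float:
    """Best-case one-way propagation delay over optical fiber, in ms.

    Assumes light travels at roughly two-thirds of c in glass
    (~200,000 km/s); real paths add routing detours and equipment delay.
    """
    SPEED_IN_FIBER_KM_PER_S = 200_000
    return distance_km / SPEED_IN_FIBER_KM_PER_S * 1000.0

# Example: New York to London is ~5,600 km as the fiber runs,
# giving roughly 28 ms one way, or ~56 ms round trip, before any
# processing or queuing delay is added.
```

This is why a sub-10 ms target is physically impossible against a data center a continent away, and why edge computing moves processing closer to the data source.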

Navigating Latency in Edge and 5G Applications

Edge computing and 5G hold immense promise for transforming industries with real-time, data-driven applications. However, these advancements dance on a tightrope, with latency posing a significant challenge. Let’s delve into the technical reasons why latency matters in these cutting-edge technologies.

  1. Real-time Demands – Milliseconds Matter:
  • Edge applications: Autonomous vehicles, industrial automation, and remote surgery rely on low latency for immediate decision-making. Delayed responses in these scenarios can have catastrophic consequences.
  • 5G use cases: Augmented reality, ultra-high-definition video streaming, and cloud gaming necessitate minimal latency for a seamless, responsive user experience. High latency breaks immersion, and latency variation (jitter) further degrades the experience.
  2. Distance Adds Latency:
  • Edge computing: Processing data locally reduces travel distance, minimizing latency. However, computational limitations at the edge often necessitate sending data to the cloud for deeper analysis, introducing additional network hops and potential delays.
  • 5G networks: While 5G promises significantly lower latency than previous generations, factors like backhaul network congestion, radio access protocols, and device processing power can still contribute to delays, especially over longer distances.
  3. Network Resource Contention:
  • Competing traffic: Both edge and 5G networks share resources with regular internet traffic. High network congestion can significantly increase latency for specific applications, impacting their performance and reliability.
  • Security measures: Encryption and intrusion detection, while crucial, can add processing overhead, introducing slight latency increases. Balancing security with real-time needs requires careful optimization.
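The factors above compose into an end-to-end latency budget: radio access, backhaul, processing, and security overhead each consume part of the total an application can tolerate. The sketch below illustrates that bookkeeping; the per-component figures are hypothetical round numbers chosen for illustration, not measurements of any real network.

```python
# Illustrative latency budget for one edge request/response cycle.
# Every value below is a hypothetical placeholder, not a measurement.
BUDGET_MS = {
    "5G radio access (uplink)": 4.0,
    "backhaul to edge site": 2.0,
    "edge processing and security checks": 3.0,
    "return path (downlink)": 6.0,
}

def total_latency_ms(budget: dict) -> float:
    """Sum the per-component delays to get the end-to-end latency."""
    return sum(budget.values())
```

Working through a budget like this makes trade-offs explicit: if the application target is 10 ms and the radio and backhaul already consume 6 ms, only 4 ms remain for processing, encryption, and the return path combined.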

Conclusion

Latency remains a crucial hurdle in realizing the full potential of edge computing and 5G. Understanding the technical factors at play and employing targeted optimization strategies are essential for ensuring these technologies deliver on their promise of real-time intelligence across industries. Latency matters in many domains, including finance, satellite communications, and long-haul fiber networks: even a few milliseconds can mean lost money in trading, missed opportunities in space operations, or potentially dangerous delays in navigation.

Edge computing aims to tackle this issue by processing data closer to its source, rather than relying on a single, centralized location. This can help relieve congested servers and reduce the distance data travels, minimizing latency. Think of it like having a local processing center instead of sending everything across the country.

This decentralized approach benefits various applications, including Internet of Things (IoT) devices and smart cameras. With processing happening nearby, these devices can react in real-time, unlike scenarios with high latency delays. Studies show business leaders prioritize low latency for their applications, often aiming for speeds below 10ms or even 5ms for critical edge initiatives.

However, it’s important to avoid over-focusing on the lowest possible latency numbers. A recent analysis found a mismatch between enterprise needs and what current technology can reliably deliver in edge computing. Additionally, maintaining consistent low latency across complex networks can be challenging.

In conclusion, while edge computing is a promising solution for reducing latency, a well-considered approach that balances business needs with realistic expectations is crucial for success.

The next blog will discuss technical approaches to overcome latency.
