
Key Lessons From “Part 2/3 of the DISH 5G Architecture”

by Vamsi Chemitiganti

We continue our discussion from Part 1 (https://www.vamsitalkstech.com/5g/dish-networks-5g-reference-architecture-on-aws-1-3/). DISH Networks is pioneering the deployment of a greenfield 5G network. The implementation follows O-RAN standards and consists of a Radio Unit (RU) deployed on cell towers, plus a Distributed Unit (DU) and a Centralized Unit (CU) deployed in an AWS Local Zone. These components combine to provide a full RAN solution that handles radio-level control and subscriber data traffic. Let us consider the key architecture lessons from this second blog.

Illustration 1 – Data center logical architecture for 5G

      1. The partitioning of the monolithic datacenter into multiple pools of compute – NDC (National Data Center), RDC (Regional Data Center) and BEDC (Breakout Edge Data Center) – is shown in the above illustration. DISH is but one example of how to design data centers based on latency requirements and data-processing considerations. Long-term readers of the blog will remember the series on Software Defined Datacenters (https://www.vamsitalkstech.com/cloud/financial-services-it-begins-to-converge-towards-software-defined-data-centers/). 5G and Edge applications will now change how these datacenters are designed and disaggregated across locations based on these considerations.
      2. DISH uses VPCs (Virtual Private Clouds) to represent the NDCs, RDCs and BEDCs, and runs the appropriate CNFs within each. A VPC is a logically isolated network with the self-contained compute, storage and networking needed to host datacenter workloads. The VPCs are interconnected using the AWS Transit Gateway (as shown below).

        Illustration 2 – Transit Gateway

        The Transit Gateway is an AWS service that acts as a central hub interconnecting VPCs while encrypting data between connections. It also provides a single point to manage and monitor all traffic flowing across VPCs. Network isolation is achieved using routing tables in the Transit Gateway, which matters when 5G core functions such as the UPF, SMF and ePDG are partitioned according to their needs for advanced routing both within and across VPCs.
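
A minimal sketch of this interconnection, assuming Python with boto3; the resource IDs, the two attachments and the single route-table domain are illustrative placeholders, not DISH's actual configuration:

```python
# Hedged sketch: interconnect data-center VPCs through a Transit Gateway,
# using an explicit TGW route table for isolation. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Disabling the default association/propagation forces every attachment
# into an explicitly chosen route table, which is how isolation is scoped.
tgw = ec2.create_transit_gateway(
    Description="Inter-DC hub (illustrative)",
    Options={
        "DefaultRouteTableAssociation": "disable",
        "DefaultRouteTablePropagation": "disable",
    },
)["TransitGateway"]

# Attach each data-center VPC (placeholder VPC/subnet IDs).
attachments = {}
for name, vpc_id, subnets in [
    ("ndc", "vpc-0aaa0000000000000", ["subnet-0aaa0000000000000"]),
    ("rdc", "vpc-0bbb0000000000000", ["subnet-0bbb0000000000000"]),
]:
    resp = ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw["TransitGatewayId"],
        VpcId=vpc_id,
        SubnetIds=subnets,
    )
    attachments[name] = resp["TransitGatewayVpcAttachment"][
        "TransitGatewayVpcAttachmentId"
    ]

# One route table per traffic domain; only the attachments associated
# with it can reach each other. (Real automation would wait for each
# attachment to leave the 'pending' state before associating it.)
rt = ec2.create_transit_gateway_route_table(
    TransitGatewayId=tgw["TransitGatewayId"]
)["TransitGatewayRouteTable"]["TransitGatewayRouteTableId"]

for att in attachments.values():
    ec2.associate_transit_gateway_route_table(
        TransitGatewayRouteTableId=rt,
        TransitGatewayAttachmentId=att,
    )
```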

      3. Such functions need to support BGP for route exchange and failover. DISH deploys virtual routers (vRTRs) on EC2 to provide connectivity within and between the VPCs, as well as back to the on-prem network. GRE is then used to encapsulate traffic across the vRTRs to create an “Overlay Network”. The Overlay network sits on the Underlay described above and uses the IS-IS routing protocol in conjunction with Segment Routing Multi-Protocol Label Switching (SR-MPLS) to distribute routing information and establish network reachability between the vRTRs. Multi-Protocol BGP (MP-BGP) over GRE provides reachability from on-prem to the AWS Overlay network, as well as reachability between different AWS Regions. The EC2-side plumbing a vRTR needs is sketched below.
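
A minimal sketch of that EC2-side plumbing, assuming boto3; the instance, ENI and route-table IDs and the on-prem prefix are placeholders, and the GRE/IS-IS/SR-MPLS overlay itself is configured inside the router software rather than through AWS APIs:

```python
# Hedged sketch: the two EC2 settings a virtual router typically needs.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

VRTR_INSTANCE = "i-00000000000000000"      # placeholder vRTR instance
VRTR_ENI = "eni-00000000000000000"         # placeholder vRTR interface
VPC_ROUTE_TABLE = "rtb-00000000000000000"  # placeholder VPC route table

# 1. A router forwards traffic that is neither from nor to itself, so
#    EC2's source/destination check must be disabled on the instance.
ec2.modify_instance_attribute(
    InstanceId=VRTR_INSTANCE,
    SourceDestCheck={"Value": False},
)

# 2. Steer overlay-bound prefixes (an illustrative on-prem range here)
#    at the vRTR's ENI so traffic enters the GRE-encapsulated overlay.
ec2.create_route(
    RouteTableId=VPC_ROUTE_TABLE,
    DestinationCidrBlock="10.200.0.0/16",
    NetworkInterfaceId=VRTR_ENI,
)
```
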
      4. The takeaway and benefit of this design is that it enables DISH to meet requirements such as traffic isolation and to route traffic efficiently between on-prem, AWS and third parties (e.g., voice aggregators, regulatory entities).
      5. Now onto datacenter design – the DISH datacenters map onto AWS constructs such as Regions and the Availability Zones (AZs) within them. Each Region hosts one NDC and three RDCs. The Underlay network is provided by EC2 and AWS networking. The architecture described above is software defined, including the Transit Gateway, and is set up using CI/CD pipelines driving the AWS APIs (a sketch of that flavor of automation follows below).
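
A hedged sketch of that automation flavor, assuming boto3; the CIDR scheme and the one-RDC-per-AZ planning loop are illustrative assumptions rather than DISH's pipeline, which would more likely drive Terraform or CloudFormation:

```python
# Hedged sketch: enumerate a Region's AZs and plan one RDC VPC per AZ,
# with the NDC spanning all three. CIDRs are illustrative placeholders.
import boto3

REGION = "us-east-1"
ec2 = boto3.client("ec2", region_name=REGION)

azs = [
    az["ZoneName"]
    for az in ec2.describe_availability_zones(
        Filters=[{"Name": "state", "Values": ["available"]}]
    )["AvailabilityZones"]
][:3]

layout = {"ndc": {"region": REGION, "azs": azs}}
for i, az in enumerate(azs, start=1):
    layout[f"rdc-{i}"] = {"az": az, "cidr": f"10.{i}.0.0/16"}

# A pipeline stage would now feed 'layout' into VPC creation calls.
for name, spec in layout.items():
    print(name, spec)
```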

        Illustration 3 – DISH Data center physical architecture

      6. The design of the data centers and their geographic layout (shown above) depends on the network functions (NFs) running within them, as we have previously discussed. Using automation and IaC (Infrastructure as Code) frameworks, these NFs can be placed appropriately for latency, performance and data-processing requirements – a toy placement helper is sketched below. Another goal DISH aims to satisfy is nationwide 5G coverage.
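
A toy illustration of that placement logic; the latency thresholds and tier assignments below are assumptions for illustration, not DISH's actual policy:

```python
# Hedged sketch: map an NF's latency budget to a data-center tier.
def place_nf(latency_budget_ms: float) -> str:
    if latency_budget_ms <= 10:
        return "BEDC (Local Zone)"     # e.g., UPF, O-RAN CU
    if latency_budget_ms <= 50:
        return "RDC (in-Region AZ)"    # e.g., AMF, SMF
    return "NDC (multi-Region)"        # e.g., subscriber DB, OSS/BSS

for nf, budget in [("UPF", 5), ("AMF", 30), ("OSS", 200)]:
    print(f"{nf}: {place_nf(budget)}")
```
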
      7. The DISH deployment architecture is summarized below for easy consumption.
National Data Centers (NDC)
      Type of workload: Nationwide global services – the subscriber database, IMS (IP Multimedia Subsystem, which carries voice calls), OSS (Operations Support System) and BSS (Business Support System). The NDCs are deployed in three AWS Regions (us-west-2, us-east-1 and us-east-2), each spanning three AZs to maximize availability. These Regions are also chosen for their delay budgets – for instance, us-east-1 to us-east-2 is within a 15 ms delay budget, while us-east-1 to us-west-2 is within a 75 ms delay budget.
      High availability & geo-redundancy design: For HA, the same NF is deployed redundantly across two AZs within the same VPC; the overlay and underlay constructs described above are used to recover traffic within the Region by failing over to the standby NF. For geo-redundancy, two redundant NFs are deployed in two AZs across Regions; the Transit Gateway provides the interconnection and the vRTRs provide the overlay networking, forming a mesh that enables service continuity across NDCs in other Regions during outage scenarios (e.g., the workloads and the datacenters within them – Markets, BEDCs, RDCs – in us-east-1 will fail over to the NDC deployed in us-east-2 in the event of a failure in us-east-1).

Regional Data Centers (RDC)
      Type of workload: 5G subscribers' signaling processes, such as authentication and session management, as well as voice for 5G subscribers. These workloads can operate with relatively high latencies, which allows for a centralized deployment throughout a Region, resulting in cost efficiency and resiliency.
      High availability & geo-redundancy design: Three RDCs are deployed in each Region, each in a separate Availability Zone (AZ). An AZ is one or more discrete data centers with redundant power, networking and connectivity in an AWS Region; all AZs in a Region are interconnected with high-bandwidth, low-latency networking over fully redundant, dedicated metro fiber. CNFs deployed in the RDC utilize this high-speed backbone to fail over between AZs for application resiliency. CNFs like the AMF and SMF remain accessible from the BEDC in the Local Zone in case of an AZ failure: the backup CNF in the neighboring AZ takes over and services the requests from the BEDC. Both high availability and geo-redundancy are achieved by having NFs fail over between VPCs (across multiple AZs) within the Region where the RDCs are deployed. The Transit Gateway interconnects the RDCs with the vRTR-based overlay, and route policies ensure that traffic only flows to backup RDCs when the primary RDC fails (a sketch of such a route update follows this table).

Breakout Edge Data Centers (BEDC)
      Type of workload: Hosted in 16 Local Zones across the US, running the 5G components with strict latency budgets – the core UPF and the O-RAN centralized component, the CU.
      Design rationale: Local Zones are chosen for the BEDCs because they enable the low-latency workloads that let both enterprise customers and 5G end users (gaming, video streaming, etc.) leverage 5G speeds.

Passthrough Edge Data Centers (PEDC)
      Role & design: A PEDC serves as an aggregation point for all Local Data Centers (LDCs) and cell sites in a given location, implemented with Local Zone redundancy. The RAN network is connected through the PEDC to two different AWS Direct Connect locations – one for the Region and the other for the Local Zone. This allows RAN DU traffic to be rerouted from a BEDC (in, say, Local Zone A) to a backup BEDC (deployed in Local Zone B) should Local Zone A fail.
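
As referenced in the RDC row above, here is a hedged sketch of a failover route update, assuming boto3; the prefix, route-table and attachment IDs are placeholders, and real automation would be triggered by health checks rather than run by hand:

```python
# Hedged sketch: repoint an RDC prefix at the backup RDC's attachment
# once the primary is declared down. All IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.replace_transit_gateway_route(
    DestinationCidrBlock="10.1.0.0/16",                        # RDC prefix
    TransitGatewayRouteTableId="tgw-rtb-00000000000000000",    # TGW table
    TransitGatewayAttachmentId="tgw-attach-0000000000000000",  # backup RDC
)
```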

Conclusion

As the DISH case study illustrates, building greenfield 5G networks presents tremendous business opportunities, but it also comes with significant architectural complexity. Chief among the complexities are datacenter design and networking; we covered the DISH design choices for both in this blog. The disaggregated nature of 5G platforms, built on Kubernetes and containers, enables operators to be strategic in their choice of data center locations, the types of applications to run in them, and the networking architecture best suited to implementing O-RAN-based platforms. Long-time readers of this blog will remember the range of Edge computing topics we have discussed (handy link here – https://www.vamsitalkstech.com/?s=edge). Expect 5G and Edge to intersect in the years to follow.

The next blog post will cover Part 3/3 of the DISH implementation.

Photo by Brett Sayles

