Calculation Method for Wavelength Path Planning in Arrayed Waveguide Grating–STAR Network with Loopback Function
Abstract
The Internet arose from the Advanced Research Projects Agency NETwork (ARPAnet), a packet communications network funded by the U.S. Department of Defense that began as a research project in 1967. ARPAnet began operations in 1969 by connecting four universities and research institutes in the United States. The number of Internet users has increased dramatically since the 1990s, as Internet subscription restrictions have been lifted, new Internet service providers have emerged to provide Internet access services, and commercial Internet services have been launched [1]. Since then, broadband connection services such as the asymmetric digital subscriber line and "fiber to the home" have become popular, along with Internet services such as those for e-mail and social networking. At the end of 2018, 51.2 percent of individuals were using the Internet, and the global penetration rate of active mobile broadband subscriptions was 69.3 percent [2]. Internet communications have continued to show strong growth; Asia-Pacific's average fixed broadband speed is expected to reach 157.1 Mbps by 2023, representing 2.5-fold growth from 2018 (62.8 Mbps) [3].
Therefore, to expand the transmission capacity, research and development has been promoted for multiplexing technologies such as time division multiplexing (TDM), wavelength division multiplexing (WDM), data rate acceleration, and subcarrier densification. However, if the capacity is increased, the transmission quality deteriorates, and the transmission distance is reduced because of the Shannon limit. Hence, technologies such as adaptive modulation and coded modulation have been studied to achieve high-quality transmission (approaching the Shannon limit), and to select a modulation format that provides an appropriate transmission distance and capacity [4] [5] [6]. Moreover, form factors of transceivers for realizing high-capacity transmissions have also been developed. Transceivers have reached higher capacities and smaller sizes and support higher-capacity Ethernet, and in recent years, transceivers have supported WDM signals or digital coherent transmission by incorporating a digital signal processor. Application-specific integrated circuit chips specialized for packet routing have also been developed to process high-capacity packets at high speeds, and various protocols and functions have been supported.
Wavelength path switching has also been studied following the development of optical fiber communications; various optical devices such as arrayed waveguide gratings (AWGs) were developed [7], along with the reconfigurable optical add drop multiplexer (ROADM) system. The ROADM changes the wavelength path by using optical devices such as a wavelength-selective switch (WSS) or AWG. Wavelength resources are used efficiently, as each wavelength path is multiplexed and routed according to its source and destination. These devices have allowed researchers to construct WDM networks. In a WDM network, additional wavelength bandwidths are used and additional signals can be sent, but to use many wavelengths, it is necessary to divide a certain wavelength band into narrow wavelength intervals. Narrow wavelength intervals require high-precision components such as laser wavelengths and filters for separating each wavelength, thereby increasing the price. In contrast, if the wavelength interval is wide, the number of signals that can be sent is reduced, but the system becomes cheaper. The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) has established two types of wavelength spacing for WDM so that network systems suitable for each application can be used: one is called dense wavelength division multiplexing (DWDM), which has a narrow wavelength spacing and is suitable for large-capacity long-distance transmission, and the other is called coarse wavelength division multiplexing, which has a wide wavelength spacing and is suitable for transmission over 50 to 80 km, i.e., where the capacity is not so large and the transmission distance is not so long. The frequency grid for DWDM defined by ITU-T supports a variety of fixed channel spacings ranging from 12.5 GHz to 100 GHz and wider (integer multiples of 100 GHz) as well as a flexible grid. Uneven channel spacings using the fixed grids are also allowed [8].
The current steps in channel spacing for the fixed grids have historically evolved by subdividing the initial 100 GHz grid by successive factors of two. Colorless, directionless, and
contentionless (CDC) functions have also been developed to realize a more flexible ROADM system. Colorless denotes a function in which any port can be configured with optical transceivers of any wavelength. Directionless denotes a function in which any local service can be configured to be sent in any direction, or all of the services in any direction can be configured to be dropped locally. Contentionless denotes a function that enables multiple services of the same wavelength to be added or dropped at the same local node, thereby simplifying network design and improving port utilization. A ROADM node with CDC functions is called a CDC ROADM. CDC functions play important roles in achieving greater flexibility in terms of wavelength routing and wavelength assignment in WDM networks, because the combination of the CDC functions makes it possible to output optical signals to any wavelength and transmission channel, enabling efficient network operation. Like the ROADM, the optical cross connect (OXC) is an optical transmission device that can exchange optical signals between different optical paths. In a sense, the ROADM is a special implementation of the OXC, and the OXC includes the ROADM.
On the terminal side, mobile devices have enhanced their computing power and incorporated camera modules and various sensors. Online services such as video streaming, cloud storage, electronic books, and electronic payments have become widespread through these mobile devices. These services are provided by service providers referred to as over-the-top (OTT) providers, such as Google, Amazon, Meta, and Microsoft, and are realized by hyperscale data centers (DCs) owned by the OTTs. They provide services with an improved user experience by analyzing the data stored in these DCs using machine learning and artificial intelligence. It is expected that nearly everything will ultimately be connected to the Internet owing to the downsizing and capability improvements of electric equipment. As a result, there are various requirements for networks, including not only capacity enhancement but also massive connectivity and ultra-reliable low-latency communications (URLLC) for real-time services. Edge computing, in which distributed processing platforms are placed near end-users and Internet of Things (IoT) devices, is being discussed as an option for meeting network requirements [9]. To achieve URLLC, user plane function (UPF) devices and multi-access edge computing (MEC) servers are deployed near users; URLLC data is routed to the MEC server by the UPF devices and processed in real time at the MEC server, while other data is routed to and processed in a cloud server [10]. A DC network is constructed, operated, and managed to carry the traffic of the various types of internal services exchanged between multiple servers. Therefore, in order to communicate with other DCs, the traffic processed in a DC flows to the datacenter interconnect (DCI), i.e., a direct connection between DCs, and the DCI traffic increases. Thus, the DCI market has also grown [11].
As the transmission capacity of DCIs has expanded, OTTs have been introducing optical transport equipment and internet protocol (IP) equipment, and building networks [12] [13].
Within the DC, servers and storage are virtualized to use physical resources flexibly and efficiently, and services are constructed and operated by linking virtualized components. Network virtualization is required to ensure service cooperation and independence when multiple services need to be efficiently accommodated, e.g., by increasing the number of logical servers dedicated to virtualization. Thus, complex virtual networks have been created on physical networks. Virtual networks need to be flexibly changed and managed as the number of virtual servers changes, as virtual servers can be added, deleted, and migrated easily. Various functions and high performance are required for the switches as well as the servers, and capital expenditures (CAPEX) and operating expenses (OPEX) become higher owing to the development of proprietary network appliances to meet the requirements. Software-defined networking (SDN) [14] and network function virtualization (NFV) [15] [16] have been developed to solve these issues [17]. In these approaches, network equipment conventionally siloed in vertically integrated systems is disaggregated by SDN and NFV, and devices conventionally sold as dedicated products are realized as software running on general-purpose servers and switches. By combining general-purpose hardware and software components, end-to-end systems can be configured flexibly, enabling the rapid provision of network services to meet various needs. This enables the provision of network functions with the required performance and traffic load on a general-purpose server in the cloud, i.e., realizing network functions previously provided by a hardware appliance as software running on a general-purpose server. SDN and NFV enable the realization of various network functions on servers and switches, and applications and networks can be scrapped, built, and relocated easily by disaggregating physical resources and logical functions.
In general, horizontal (east-west) traffic is increasing, as many functions are being distributed in DCs and cooperating. As a result, network architectures such as the leaf-spine Clos fabric, which is superior in regard to scalability in the horizontal direction, have been studied for constructing an efficient network in the DC [18] [19]. Overlay networks such as a virtual extensible local area network are built on an underlying physical network, so as to build multiple services independently in the DC. These settings and monitoring control operations are complicated because several virtual networks are built on many devices; thus, the management plane is separated from the data plane, and the network elements are controlled by a controller. SDN and overlay network technologies have enabled the technology of network slicing, in which a single network infrastructure is virtually divided (sliced) and operated as multiple logical networks to provide services to meet various needs and applications [20]. The slices divided by this technology are independent of each other, and the load or problems in one slice do not affect the other slices. Even in the event of a network failure, the impacts can be limited to a specific slice. In addition, each slice can be customized according to its use and security policies. This allows infrastructure providers to provide customized networks for various services, while third parties can use network slicing to provide an optimized network infrastructure to end users as part of their services. These architectures have been adopted in DCs [21]. As a result, networks with various requirements, including not only capacity enhancement but also massive connectivity and URLLC, can be constructed logically on the same physical network.
Network disaggregation using SDN and NFV is also being adopted in DCIs and core transport networks (as well as in the DCs). SDN and NFV provide the disaggregation among the data plane (D-plane), control plane (C-plane), and management plane (M-plane), but the components of the D-plane, such as the transponder, amplifier, and ROADM, are disaggregated in an optical transport network. In optical transport networks, there are two typical disaggregation models [22]: one concerns partial disaggregation, and the other concerns full disaggregation. A partial disaggregation model consists of open optical terminal devices and an open optical line system (OOLS). The open terminal devices are similar to transponders and muxponders. The OOLS consists of a multiplexer (MUX) / demultiplexer (DEMUX), WSS, and amplifier (AMP), and these components are controlled by an OOLS controller. In a full-disaggregation model, each component within the open terminal device and OOLS is individually controlled. The disaggregation technologies are D-plane functional disaggregation, indicating vertical disaggregation, and C- and M-plane functional disaggregation, indicating horizontal disaggregation. As interfaces are defined independently for each disaggregated component, the requirements for operations and controllers are complicated; thus, open interfaces, i.e., common application programming interfaces (APIs) among different devices, have been proposed [23], and integration technologies have been studied. For example, a fabric-switching network architecture for providing control as a router has been considered [24], and service deployments have been demonstrated for IP and optical networks [25].
Furthermore, IP and optical integration devices installed with optical-coherent modules have been developed [26], and are expected to unify IP and optical transport devices. An efficient hybrid architecture using DWDM-capable interfaces and optical switching has also been proposed, and as intra- and inter-DC capacities have continued to scale, transceiver interfaces are increasing in data rate, often employing WDM signals [27] [28]. The importance of AWG routers (AWGRs) has been increasing in these networks, as underlay networks such as a full-mesh network able to connect a greater number of nodes are required to construct a flexible overlay network. Previous studies have proposed reconfiguring a topology by establishing a direct path between top-of-rack switches to mitigate hotspots in DCs providing large-scale social media services. c-Through [29] and Helios [30] are hybrid methods for reconfiguring path topologies using both electrical and optical switching. Helios provides functions for path reconfiguration and traffic arrangement on the switch side, and c-Through has these functions on the server side. REACToR [31] is another hybrid method that enables flexible traffic control by synchronizing packet switching and optical burst switching. ProjecToR [32] uses free-space optics with a digital micromirror device to enable high-capacity communication and high-speed switching.
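The full-mesh property of an AWGR mentioned above follows from its cyclic wavelength-routing behavior: in an N×N AWGR, the output port is determined jointly by the input port and the wavelength index, so N wavelengths suffice to connect every input to every output. The following minimal sketch illustrates this with the commonly cited convention that input port i on wavelength index w exits port (i + w) mod N; actual devices may use a different port/wavelength numbering, so the function names and the convention are assumptions for illustration.

```python
# Minimal sketch of the cyclic wavelength-routing property of an
# N x N AWGR. Convention assumed here: a signal entering input port i
# on wavelength index w exits output port (i + w) mod N.

def awgr_output_port(input_port: int, wavelength_index: int, n_ports: int) -> int:
    """Output port reached from input_port on the given wavelength index."""
    return (input_port + wavelength_index) % n_ports

def full_mesh_wavelength(src: int, dst: int, n_ports: int) -> int:
    """Wavelength index that routes src -> dst under the assumed convention."""
    return (dst - src) % n_ports

# With N = 4, every (src, dst) pair is reachable using only N wavelengths,
# i.e., the AWGR realizes a passive full mesh:
N = 4
for src in range(N):
    for dst in range(N):
        w = full_mesh_wavelength(src, dst, N)
        assert awgr_output_port(src, w, N) == dst
```

Because the routing is fixed and passive (no switching state), such a full-mesh underlay provides direct any-to-any paths over which a flexible overlay topology can be configured, which is the role of the AWGR discussed in this thesis.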
DC networks that transfer large amounts of traffic will have a major impact on future network architectures, and architectures for converged IP and optical networks are being considered. Network virtualization and openness will continue to grow, as networks separated by SDN are expected to be integrated and operated in ways suitable for each network usage in different frameworks. SDN and NFV have been applied to DC networks by OTTs, and they are also being applied by telecom operators. As carriers face challenges concerning accelerating traffic demands and the rapid delivery of new services requiring high performance and various features, they need to be as economical and agile at the edges of their networks as cloud providers. Carriers have vast closed networks of their own; they can build local DCs near end users using their networks, connect these DCs with their networks to build an end-to-end cloud, and manage the network and cloud in an integrated manner [33]. Carriers and OTTs each have a different approach to SDN and NFV, but both are making progress in deploying them. Currently, telecom operators, vendors, and OTTs in various countries have joined standardization organizations responsible for developing technical standards and corporate alliances responsible for certification and awareness-raising, thereby promoting implementation in an integrated manner through role sharing and collaboration. Thus, the technological trends of SDN and NFV are expected to continue, so network architectures suitable for SDN and NFV should be considered; in this context, the study of AWGRs is important for constructing a full-mesh underlay network on which a flexible overlay network can be built.