How to Start Data Center Networking Projects Using NS2
To start a Data Center Networking (DCN) project in Network Simulator 2 (NS2), we need to replicate the network infrastructure built for data centers, in which thousands of servers are interconnected through high-throughput links and high-performance switches to offer services to users and applications. DCNs are frequently used in cloud computing, enterprise IT infrastructure, and large-scale applications that must manage huge volumes of data.
A typical data center network includes servers, switches, routers, and the communication protocols that handle data flow among these devices. In NS2 we can replicate the communication patterns found in data centers: server-to-server traffic and the routing behaviour often used in advanced DCNs.
The following are sequential steps to get started with a Data Center Networking (DCN) project using NS2.
Key Components of Data Center Networking
- Servers: Physical machines or virtualized instances that offer computational resources and storage to users.
- Switches and Routers: Network devices that connect servers within the data center and can link to external networks.
- Links: Communication channels between the servers and switches.
- Topologies: For scalability and redundancy, DCNs normally use fat-tree, Clos, or spine-leaf topologies.
- Protocols: Protocols such as Ethernet, TCP/IP, and Data Center Bridging (DCB) are frequently used.
- Traffic Patterns: Server-to-server communication, east-west traffic within the data center, and north-south traffic (data center to external network).
Step 1: Install NS2
Before executing the project, make sure that NS2 is installed on the machine. It can be downloaded and installed from the official NS2 website.
Verify the installation with the following command:
ns
If NS2 is correctly installed, this command drops into the NS2 prompt.
Step 2: Define the Data Center Network Topology
Data centers use specific topologies to handle traffic effectively:
- Spine-Leaf Topology: A common topology in which every spine switch connects to every leaf switch, and servers connect to the leaf switches.
- Fat-Tree Topology: A hierarchical network structure that offers scalability and fault tolerance.
For this project we will build a simple spine-leaf (or fat-tree) topology.
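As a sketch of how such a topology can be built before writing the full script, the node and link creation can be made parametric with Tcl loops (assuming standard NS2 wired nodes and duplex links; the variable names and counts here are illustrative, not part of the example below), which makes it easy to scale the switch and server counts later:

```tcl
# Hypothetical parametric spine-leaf builder -- a sketch only
set ns [new Simulator]
set num_spine 2
set num_leaf  2
set servers_per_leaf 2

# Create the spine and leaf switch nodes
for {set i 0} {$i < $num_spine} {incr i} { set spine($i) [$ns node] }
for {set j 0} {$j < $num_leaf}  {incr j} { set leaf($j)  [$ns node] }

# Full mesh between the spine and leaf layers
for {set i 0} {$i < $num_spine} {incr i} {
    for {set j 0} {$j < $num_leaf} {incr j} {
        $ns duplex-link $spine($i) $leaf($j) 10Mb 10ms DropTail
    }
}

# Attach servers to each leaf switch
for {set j 0} {$j < $num_leaf} {incr j} {
    for {set k 0} {$k < $servers_per_leaf} {incr k} {
        set srv($j,$k) [$ns node]
        $ns duplex-link $leaf($j) $srv($j,$k) 1Gb 10ms DropTail
    }
}
```

Increasing num_spine, num_leaf, or servers_per_leaf then grows the fabric without rewriting the topology by hand.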
Step 3: Create the Simulation Script
Below is a basic example that replicates a simple data center network using NS2. It simulates a small spine-leaf topology in which servers are connected to leaf switches and leaf switches are connected to spine switches.
Example: Spine-Leaf Data Center Network Simulation
# Create the simulator object
set ns [new Simulator]
# Set up trace files to collect data
set tracefile [open datacenter_network.tr w]
$ns trace-all $tracefile
# Create nodes: spine switches, leaf switches, and servers
# Creating 2 spine switches, 2 leaf switches, and 4 servers
set spine_switch1 [$ns node]
set spine_switch2 [$ns node]
set leaf_switch1 [$ns node]
set leaf_switch2 [$ns node]
set server1 [$ns node]
set server2 [$ns node]
set server3 [$ns node]
set server4 [$ns node]
# (Node positions are not needed for a wired topology; NS2 routes over the links below)
# Define links between spine and leaf switches
$ns duplex-link $spine_switch1 $leaf_switch1 10Mb 10ms DropTail
$ns duplex-link $spine_switch1 $leaf_switch2 10Mb 10ms DropTail
$ns duplex-link $spine_switch2 $leaf_switch1 10Mb 10ms DropTail
$ns duplex-link $spine_switch2 $leaf_switch2 10Mb 10ms DropTail
# Define links between leaf switches and servers
$ns duplex-link $leaf_switch1 $server1 1Gb 10ms DropTail
$ns duplex-link $leaf_switch1 $server2 1Gb 10ms DropTail
$ns duplex-link $leaf_switch2 $server3 1Gb 10ms DropTail
$ns duplex-link $leaf_switch2 $server4 1Gb 10ms DropTail
# (node-config is only required for wireless simulations; the wired duplex links above are sufficient)
# Create traffic flows between servers (simulating east-west traffic within the data center)
set tcp1 [new Agent/TCP]
set tcp2 [new Agent/TCP]
set tcp3 [new Agent/TCP]
set tcp4 [new Agent/TCP]
# Attach TCP agents to the sending servers
$ns attach-agent $server1 $tcp1
$ns attach-agent $server2 $tcp2
$ns attach-agent $server3 $tcp3
$ns attach-agent $server4 $tcp4
# Create TCP sinks on the receiving servers and connect the flows
# (server1<->server3 and server2<->server4 cross the spine, i.e. east-west traffic)
set sink1 [new Agent/TCPSink]
set sink2 [new Agent/TCPSink]
set sink3 [new Agent/TCPSink]
set sink4 [new Agent/TCPSink]
$ns attach-agent $server3 $sink1
$ns attach-agent $server4 $sink2
$ns attach-agent $server1 $sink3
$ns attach-agent $server2 $sink4
$ns connect $tcp1 $sink1
$ns connect $tcp2 $sink2
$ns connect $tcp3 $sink3
$ns connect $tcp4 $sink4
# Set packet size and interval for CBR traffic
$cbr1 set packetSize_ 512
$cbr1 set interval_ 0.5
$cbr2 set packetSize_ 512
$cbr2 set interval_ 0.5
$cbr3 set packetSize_ 512
$cbr3 set interval_ 0.5
$cbr4 set packetSize_ 512
$cbr4 set interval_ 0.5
# Start the traffic generators at time 1.0 second
$ns at 1.0 "$cbr1 start"
$ns at 1.0 "$cbr2 start"
$ns at 1.0 "$cbr3 start"
$ns at 1.0 "$cbr4 start"
# Stop the traffic generators at time 4.0 seconds
$ns at 4.0 "$cbr1 stop"
$ns at 4.0 "$cbr2 stop"
$ns at 4.0 "$cbr3 stop"
$ns at 4.0 "$cbr4 stop"
# Finish the simulation at 5.0 seconds
$ns at 5.0 "finish"
# Define finish procedure
proc finish {} {
    global ns
    $ns flush-trace
    exit 0
}
# Run the simulation
$ns run
Key Aspects of the Script:
- Node Creation:
- Spine switches and leaf switches are created to replicate the spine-leaf topology.
- Servers are linked to the leaf switches.
- Link Configuration:
- Links between the spine and leaf switches use 10 Mbps bandwidth and 10 ms delay.
- Gigabit links (1 Gbps) between the leaf switches and servers replicate high-speed server access.
- Traffic Simulation:
- CBR (Constant Bit Rate) traffic is generated between the servers to mimic east-west traffic within the data center.
- TCP agents and sinks model reliable communication between servers.
- Routing: In this wired example NS2 computes routes automatically; ECMP (Equal-Cost Multi-Path) or custom routing protocols used in real data center networks can be approximated separately.
- Traffic Control: Traffic starts at 1.0 seconds and stops at 4.0 seconds.
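Real data center fabrics spread flows across the equal-cost spine paths. NS2 does not implement ECMP as such, but as a rough approximation its classic multiPath_ option together with dynamic routing lets packets use multiple equal-cost routes; this is a sketch of the idea, not true per-flow ECMP hashing:

```tcl
# Approximate ECMP: enable NS2 multipath routing BEFORE creating any nodes
# (a rough stand-in for per-flow hashing; assumes the spine-leaf links have equal cost)
Node set multiPath_ 1
set ns [new Simulator]
$ns rtproto DV    ;# distance-vector routing maintains the equal-cost routes
```

With this enabled, traffic between leaf switches can be balanced across both spine switches instead of always taking a single shortest path.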
Step 4: Run the Simulation
Save the script as datacenter_network.tcl and then run the simulation with NS2:
ns datacenter_network.tcl
Step 5: Analyze the Results
Once the simulation has run, NS2 produces a trace file named datacenter_network.tr. From it, we can extract and examine performance metrics using AWK scripts:
- TCP congestion window
- Latency
- Throughput
- Packet delivery ratio (PDR)
For instance, to extract all received packets:
awk '{ if ($1 == "r") print $0 }' datacenter_network.tr > received_packets.txt
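Beyond pulling out individual events, short AWK one-liners can summarize the whole trace. The helper below is a sketch (the function name pdr is our own, and it assumes the standard NS2 wired trace format, where column 1 is the event type s/r/d):

```shell
# pdr: packet delivery ratio (%) from an NS2 trace file (hypothetical helper name)
# Counts "s" (send) and "r" (receive) events in column 1 of the trace.
pdr() {
  awk '$1 == "s" { s++ } $1 == "r" { r++ } END { if (s > 0) printf "%.2f\n", 100 * r / s }' "$1"
}
```

For example, pdr datacenter_network.tr prints the percentage of sent packets that were received.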
Step 6: Extend the Simulation
To extend the simulation, we can consider:
- Network Scalability: Increase the number of servers and switches to model larger data centers.
- Traffic Patterns: Simulate north-south traffic (data center to external network) or more complex application-level traffic.
- Advanced Routing: Implement and simulate advanced routing techniques such as ECMP (Equal-Cost Multi-Path) or Data Center Bridging (DCB) for efficient forwarding.
- Fault Tolerance: Model network failures, link congestion, and recovery strategies for fault tolerance in the data center.
- Energy Consumption: Model power-efficient data centers by considering energy-aware routing protocols or low-power devices.
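For the fault-tolerance direction, NS2's rtmodel-at command can take a link down and bring it back up mid-simulation. A sketch, assuming node handles like those in the example script and dynamic routing so that paths are recomputed:

```tcl
# Fail the spine1-leaf1 link at t=2.0s and restore it at t=3.0s;
# with DV routing enabled, traffic reroutes via spine_switch2 while the link is down
$ns rtproto DV
$ns rtmodel-at 2.0 down $spine_switch1 $leaf_switch1
$ns rtmodel-at 3.0 up   $spine_switch1 $leaf_switch1
```

Comparing throughput and packet loss in the trace before, during, and after the outage shows how well the topology tolerates the failure.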
In this guide, we illustrated how to start and simulate a Data Center Networking project, analyze its performance, and extend the simulation in the NS2 environment. The project can be extended further as required.