Wednesday, July 26, 2006

Update

So, my advisor is doing this research project with Siemens, which I've joined. The project concerns carrying LAN traffic transparently across a WAN, not by using VPNs or the like. One approach is to use MPLS and carry Ethernet traffic inside MPLS packets. Another approach is to use GMPLS-Controlled Ethernet Label Switching (GELS). Yet another approach is to use Ethernet as-is in the core, with point-to-point Ethernet links between the routers.
The problem with the third approach is that it runs STP, which prunes the partially or fully meshed topology down to a spanning tree; this reduces the available bandwidth, so fewer traffic demands can be met. Siemens is interested in GELS (which, by the way, is still at the IETF draft stage). But we wanted to quantify the bandwidth loss incurred by STP, a loss GELS obviously avoids since it doesn't prune the topology, and it brings other benefits as well.
So, in the first phase of the project, which is already complete, we compared the link utilization and traffic demand acceptance characteristics of STP and GELS, quantifying them using traffic matrices and topologies provided to us by Siemens. We used the TOTEM simulator for this and added our own protocol variant of CSPF to it.
The new protocol was needed to provide common ground for comparison. The problem is that with GELS/CSPF we set up LSPs based on traffic demands, whereas with STP everything is contention based and there is no concept of an LSP. So we implemented a variant of CSPF that accepts and sets up an LSP if the required bandwidth is available; if the requested bandwidth isn't available, it allocates the highest possible fraction of it, which is (at some level) what one would get with STP. I'll see if I can upload the code some time.
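To make the admission logic concrete, here's a minimal sketch of the partial-allocation idea in Python. All names here are illustrative; the actual implementation was a CSPF variant added to the TOTEM simulator, not this code.

```python
# Sketch of the partial-allocation admission logic described above.
# 'residual' maps each link on the path to its remaining capacity.

def admit_lsp(path_links, requested_bw, residual):
    """Reserve bandwidth for an LSP along path_links.

    If the full requested bandwidth fits, reserve it; otherwise reserve
    the largest fraction the bottleneck link allows, roughly mimicking
    the share one would end up with under STP's contention-based
    forwarding.
    """
    bottleneck = min(residual[link] for link in path_links)
    granted = min(requested_bw, bottleneck)
    if granted <= 0:
        return 0.0  # nothing left on this path
    for link in path_links:
        residual[link] -= granted
    return granted

residual = {"A-B": 100.0, "B-C": 60.0}
print(admit_lsp(["A-B", "B-C"], 80.0, residual))  # grants 60.0, the bottleneck
```

An LSP requesting 80 units over a path whose bottleneck has 60 left is thus admitted at 60 rather than rejected outright, which is what makes the comparison with STP fair.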
Now, in the second phase, we are evaluating failure scenarios to see how the two protocols compare in recovering from failures. We are simulating single-link failure scenarios for the given topologies on bridgesim. I have modified a C++ program developed by my fellow research student Atif Nazir and written a shell script that simulates every possible single-link failure with each node in the network selected as the root bridge in turn, extracting the convergence time from the simulator's output log file. The results then go into the Gnumeric spreadsheet program, where the computations are done.
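The loop structure of that automation looks roughly like the sketch below. This is illustrative only: the real automation is a shell script driving the modified C++ bridgesim program, and the log line format shown here is a hypothetical stand-in for whatever bridgesim actually emits.

```python
import re

def extract_convergence_time(log_text):
    """Pull the convergence time (in seconds) out of a simulator log.

    Assumes a line like 'convergence time: 3.42 s'; the actual
    bridgesim log format may differ.
    """
    match = re.search(r"convergence time:\s*([0-9.]+)", log_text)
    return float(match.group(1)) if match else None

def scenarios(links, roots):
    """Enumerate every (root bridge, failed link) pair to simulate."""
    for root in roots:
        for link in links:
            yield root, link

# In the real script, each pair becomes one bridgesim run whose log is
# then parsed with something like extract_convergence_time().
print(sum(1 for _ in scenarios(range(82), range(50))))  # 4100 runs
```

Each (root, link) pair is one simulator run, so the full sweep is the cross product of root choices and failed links.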
Each single-link failure scenario takes about 50 seconds to run in the Virtual PC environment where I set up Red Hat Linux 9.0, and with 82 links in the topology and 50 root nodes, the entire simulation set will take about 57 hours to complete. Fun stuff, huh? Good thing I wrote the code to automate all of this, so all we need to do is make sure the code does what we want (which I have already verified in testing) and copy and paste the results.
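For the curious, the 57-hour figure is just the cross product times the per-run cost:

```python
# Back-of-the-envelope check of the runtime quoted above.
links, roots, secs_per_run = 82, 50, 50
total_hours = links * roots * secs_per_run / 3600
print(round(total_hours))  # about 57 hours
```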