Click the graphs for more time ranges (day, week, month, year, 3-year).
We have run a globally announced 6to4 and Teredo relay since 2010-02-17. Better graphs distinguishing this traffic from that of our local users will come at some point.
At 2010-02-28 14:30 CET the IPv4 Teredo measurements were adjusted to include the port(s) that our local relay runs on.
At 2010-03-29 12:00 CEST, our peer Bahnhof (AS8473) started accepting the 2001::/32 prefix we announce to them.
At 2010-03-29 18:00 CEST, we went from 1 running Miredo process to 8 running Miredo processes with load balancing between them. It seems to work well. The drop in traffic between these two points in time was due to the IPv4 packet measurements at the border routers temporarily not capturing the new Miredo processes. (Big thanks to Bernhard Schmidt!)
2010-03-30: After a physical relocation of the server, the Miredo processes do not operate correctly. Two of the 8 processes, the ones that relayed a significant share of the traffic before the move, choke badly on ksoftirqd when they come up. Pending time for a proper investigation, the Teredo announcement has been pulled.
2010-04-03: Some observations: 1) If Miredo and a sit 6to4 tunnel run on the same machine, and the sit tunnel is incorrectly configured, Miredo can suffer greatly (d'uh). 2) If your sysctls net.ipv6.neigh.default.gc_thresh{1,2,3} are too low, like the default of gc_thresh3=1024, your Miredo tunnel will not perform well; see the sketch below.
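
Raising the neighbour-table limits is a small sysctl change. A minimal sketch, assuming a 2010-era Linux kernel; the values below are illustrative, not a recommendation, so size them to your relay's actual neighbour-cache load:

    # /etc/sysctl.conf -- illustrative limits for the IPv6 neighbour cache
    net.ipv6.neigh.default.gc_thresh1 = 4096
    net.ipv6.neigh.default.gc_thresh2 = 8192
    net.ipv6.neigh.default.gc_thresh3 = 16384

    # apply without rebooting
    sysctl -p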
2010-04-19: After an internal vote in the project, the experiment has now ended and we no longer export the 6to4 and Teredo prefixes to the rest of the world. If you care about your users' experience, set up your own local relays. Feel free to contact us if you want help.
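
As a starting point for a local 6to4 relay on Linux, something along these lines works with iproute2. This is a hedged sketch, not our exact setup: 192.0.2.1 is a placeholder for your public IPv4 address (2002:c000:201:: is the matching 6to4 prefix), and announcing the 192.88.99.0/24 anycast range into your IGP/BGP is left to your routing setup:

    # 6to4 tunnel interface over the public IPv4 address 192.0.2.1
    ip tunnel add tun6to4 mode sit remote any local 192.0.2.1 ttl 64
    ip link set tun6to4 up
    ip -6 addr add 2002:c000:201::1/64 dev tun6to4
    # send all 6to4 destinations out the tunnel
    ip -6 route add 2002::/16 dev tun6to4
    # to act as a relay for others, also answer on the 6to4 anycast address
    ip addr add 192.88.99.1/24 dev lo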

If you have any questions, feel free to contact us by email with LHS=staff, RHS=csbnet.se, or drop by #nvg @ EFnet (IRC).


IPv6 traffic types.


There used to be a log-scale graph here, but we've removed it since it is no longer very useful to look at now that traffic levels have increased substantially. You can still see it here, though.