Sunday, June 19, 2011
It seems that although I've written posts, I haven't actually published them! I apologise for this!
Posts will be going out shortly, so please stay tuned!
Tuesday, April 5, 2011
New Transit Service - IPv6
I've just turned up another transit feed from WAIA today, thanks to the wonderful help of Joe Wooller. On the plus side, that includes both IPv4 and IPv6, so Xenion clients should expect native IPv6 soon!
Saturday, November 13, 2010
WAIX upgrade - gigabit!
I've just upgraded the WAIX link from 100mbit to 1000mbit. Clients won't notice the change (unless, of course, they happen to do a lot of WAIX traffic), but there's now a lot more room to grow in the future.
Since Xenion doesn't charge extra for WAIX traffic - it's up to me to make sure it's fair and equitable! - no-one should have any problems now using or providing local services.
Tuesday, June 8, 2010
A bit of a change - working on Atheros / 802.11
One of my current contracts has me working on FreeBSD's 802.11 support, specifically for Atheros chipsets, for a local company. I've been adding some missing features to the later chipset support and working out kinks in general.
This has involved spending a lot of time knee-deep in the Atheros driver code. With these chipsets, what you would traditionally find in firmware on wireless NICs is implemented in the host driver. The hardware handles packet transmission and reception, collision avoidance, encryption/decryption, and the radio encoding and decoding; the driver handles pretty much everything else. That includes keeping per-station state, handling authentication and roaming, and even things like radio and baseband calibration.
The most difficult part of this has been the distinct lack of documentation. So far I've made some good contacts in the Linux developer community, including a number of developers employed by Atheros (who have access to documentation!), and I've made quite significant inroads in making the FreeBSD Atheros/wireless code a lot more stable.
The wireless/embedded development work can be found at my FreeBSD Wiki site. The majority of the improvements will make it into FreeBSD-9.0.
Xenion is still providing on-going Squid/Lusca/CDN support and development services to a variety of clients. This is not going to change any time soon!
Saturday, September 19, 2009
I should've mentioned this a while ago - Xenion status updates are posted on the @xenion_pty_ltd twitter feed. Please feel free to subscribe and post messages/queries!
Monday, August 3, 2009
Squid docs
I've started writing up some of my notes from Squid consulting into something (mostly) fit for public consumption.
This is partly to aid myself and partly to save others from having to find and fix the same mistakes.
The fledgling documentation dump is here. I'll be adding more to it as I type up more notes and complete more work!
Wednesday, July 15, 2009
Installing Proxy Cache Servers for Fun and Profit...
One of my current contracts involves setting up a web cache farm for an ISP on the end of a whole lot of full-duplex satellite IP. They initially specced out five rather large servers (at least $10,000 each); I think they had a minor heart attack when I reduced that to one server. But then, the cost of the hardware (and Xenion's contracting/support rates!) is very minor in the long run compared to the bandwidth savings.
In any case, it has been a resounding success. I'll summarise how things look at the moment; I'll do up a proper press release sometime later next month.
There are about 15,000 users sitting behind the single proxy cache server, with around 100mbit of aggregate satellite IP bandwidth. The service uses a slightly modified FreeBSD-7 setup to support fully transparent HTTP interception (both client and server-side IP address spoofing), with a Cisco 3750 providing the WCCPv2 interception.
Tuning the FreeBSD stack (and Linux too, for those Linux people out there!) to scale effectively for satellite IP is no easy feat. It took some time, but I have a lot of experience in this area, so the tuning was quite successful. The trick is finding the right balance between throughput, scaling and link efficiency. A little first-year college mathematics helped me predict some decent settings, and they work as expected.
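For the curious, here's the sort of sums involved - a minimal sketch assuming a roughly 550ms geostationary round trip, which is an illustrative figure, not the exact settings deployed:

```python
# Back-of-the-envelope bandwidth-delay product (BDP) calculation, the
# "first year college mathematics" in question. The RTT and per-flow
# rate below are assumptions for illustration.

def bdp_bytes(bandwidth_mbit, rtt_ms):
    """Bytes that must be in flight to keep the pipe full: bandwidth x RTT."""
    return (bandwidth_mbit * 1e6 / 8) * (rtt_ms / 1000.0)

# One bulk transfer using a 20mbit share of the link over a 550ms path:
print("%.0f KiB" % (bdp_bytes(20, 550) / 1024))  # ~1343 KiB

# So per-socket send/receive buffers (and the kernel-wide socket buffer
# limits, multiplied by the expected connection count) need to be far
# larger than the LAN-oriented defaults of 32-64 KiB.
```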
The software is Lusca-HEAD (a very recent version as of this post) - this gives me all the useful Squid-2.7 features, stability and performance, plus my extras (twiddles for satellite IP, TPROXY support, etc.).
The box itself is a dual dual-core AMD Opteron 270 at 2GHz, 16gig of RAM, an Intel PRO/1000 NIC, and a 3ware 9000-series SATA controller with 12 x 500gig 7200rpm disks of some sort. The disks are all mounted individually - no RAID at all. Ten disks are for storage, one for the OS and one for logging.
The box pushes around 80 to 120mbit at peak, with a byte hit rate between 20 and 40%. The request rate sits between 300 and 600 requests a second, sometimes peaking to 800 or more. This translates into significant traffic savings (and a whole lot of money - satellite transponder space is expensive!) and much improved performance for clients.
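To put the byte hit rate into perspective, here's a rough sketch of what it means in saved transponder capacity (illustrative figures taken loosely from the numbers above, not billing data):

```python
# A byte hit rate of X% means X% of the bytes delivered to clients never
# crossed the satellite link at all.

def satellite_mbit_saved(client_side_mbit, byte_hit_rate):
    return client_side_mbit * byte_hit_rate

for rate in (0.20, 0.30, 0.40):
    print("%d%% byte hit rate on 100mbit: ~%.0f mbit of transponder capacity saved"
          % (rate * 100, satellite_mbit_saved(100, rate)))
```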
It also handles between 10,000 and 20,000 concurrent connections with peaks over 40,000. Yes. 40,000 concurrent connections. I'm not making this up.
The cache size at the moment is around 2TB and 20,000,000 objects. I'm absolutely, positively not filling the disks to capacity, for a whole lot of very good reasons. (Hint - don't do it.) I'll be happy to increase the storage to 4TB and beyond once I've deployed COSS for the small objects and tidied up some of the memory usage. The Lusca process is around 4 gigabytes at present, and 75% of that is the storage index and related bits.
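A quick back-of-the-envelope sketch of what those figures imply per object:

```python
# What a ~4GB process that is ~75% storage index implies per object,
# given ~20,000,000 objects in the store.
process_bytes = 4 * 2**30
index_fraction = 0.75
objects = 20_000_000

per_object = process_bytes * index_fraction / objects
print("~%.0f bytes of RAM per cached object" % per_object)  # ~161 bytes

# Doubling the store towards 4TB roughly doubles the object count - and
# hence the index footprint - which is why the memory tidy-up and COSS
# come first.
```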
Just for interest's sake: out of the 20,000,000 objects, around 300,000 of them are larger than 256 kilobytes. The rest are small objects. It's actually quite scary how much of the cache directory is small objects.
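Here's a hedged back-of-the-envelope split - the 16KB average small-object size is an assumed figure for illustration, not a measurement:

```python
# Sanity-checking the size distribution: ~300,000 of ~20,000,000 objects
# are larger than 256KB. Assuming an average small object of ~16KB,
# what does the 2TB store look like by bytes?
total_objects = 20_000_000
large_objects = 300_000
small_objects = total_objects - large_objects

avg_small_bytes = 16 * 1024    # assumption, not measured
store_bytes = 2 * 2**40        # ~2TB store from the post

small_bytes = small_objects * avg_small_bytes
large_bytes = store_bytes - small_bytes

print("small objects: %.1f%% of count, ~%.0f%% of bytes"
      % (100.0 * small_objects / total_objects, 100.0 * small_bytes / store_bytes))
print("implied average large object: ~%.1f MB" % (large_bytes / large_objects / 2**20))
```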
I've included some preliminary Windows Update caching which is providing a 100% hit rate for the update files themselves. It's actually quite scary how simple it was to implement. Shame on you, Microsoft, for -almost- but not quite getting HTTP caching "right" for Windows updates.
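The gist of it, sketched in Python rather than the actual Lusca configuration (which amounts to a handful of refresh_pattern overrides); the URL pattern here is illustrative, not the deployed rule:

```python
# The core idea: update payloads are immutable, versioned files, so
# anything matching these patterns can be cached aggressively even when
# the origin's headers discourage it. This just illustrates the matching
# logic; the pattern is an assumption, not the production rule.
import re

UPDATE_PAYLOAD = re.compile(
    r"windowsupdate\.com/.*\.(cab|exe|msi|msu|psf)$", re.IGNORECASE)

def force_cacheable(url):
    """True if the URL looks like an immutable update payload."""
    return UPDATE_PAYLOAD.search(url) is not None

print(force_cacheable("http://download.windowsupdate.com/v9/foo.cab"))   # True
print(force_cacheable("http://update.microsoft.com/selfupdate/check"))   # False
```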
All in all, the client in question is extremely happy with the support, installation and performance of the cache. There's a shortlist of items still to do, including Lusca improvements and reporting tools, so the client can show his boss just how effective this all is.