Vendors: No API? Go away.

It’s halfway through 2017.

Cloud infrastructure is all the rage. We’re running code, billed in 100ms increments, without provisioning servers. We’re spinning up Docker swarms, automating infrastructure, and building our own systems to meet the needs of the businesses we represent. Our whoozwits are talking to our dinglehoppers, and our widgets exchange data with our gookinschpinkles.

If the product you are trying to sling at me doesn’t have a working, well-documented API, you are wasting my time. I don’t care how good it is. If I can’t integrate it with everything else I’m doing, now and for the life of the product, without buying another whatsadoozit or grazzlerazzel from you, then you might as well be doing your well-oiled sales pitch into the mirror.

It’s like that cute girl or boy you dated for a while: they were super smart, cute, had all kinds of talent, but they simply didn’t apply themselves to anything and there was no clear end to their pattern of behavior in sight. They were simply a drain on your emotions, your wallet, and your time.

“Ain’t nobody got time for dat.”

(PS – I’m no developer, but even I can use a REST API. Hashtag funnywords)
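And that’s the point: a decent REST API means even a non-developer can script against the product. Here’s a minimal sketch of what that looks like in Python, using only the standard library. The endpoint, the `Bearer` token scheme, and the `devices`/`hostname` field names are all made up for illustration; a real vendor’s API docs would define the actual shapes.

```python
import json
from urllib.request import Request

# Hypothetical vendor REST API. The URL and field names below are
# invented for illustration; check your vendor's API docs for the real ones.
BASE_URL = "https://vendor.example.com/api/v1"

def build_request(resource: str, token: str) -> Request:
    """Build an authenticated GET request for a vendor resource."""
    return Request(
        f"{BASE_URL}/{resource}",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "application/json"},
    )

def parse_device_list(payload: str) -> list[str]:
    """Pull device hostnames out of a JSON response body."""
    return [d["hostname"] for d in json.loads(payload)["devices"]]

# A canned response, standing in for what the API might return:
canned = '{"devices": [{"hostname": "sw-core-01"}, {"hostname": "sw-edge-02"}]}'
print(parse_device_list(canned))  # ['sw-core-01', 'sw-edge-02']
```

If a product can’t be driven by a dozen lines like these, that’s the red flag.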

Whitebox to create new era of open protocols


The alphabet soup of proprietary (Cisco) network protocols, EIGRP, DMVPN, and friends, is just a sample of the protocols that, for one reason or another, can lock your organization into one vendor’s products for many, many years.

Hardly any vendor is innocent when it comes to this. Sometimes protocols need to be created before the industry bodies (IEEE, IETF, ISO, ITU-T) can sluggishly convene, digest, banter, and finally agree or disagree on the merits of a given proposal. Other times, protocols are designed primarily as a means to bind you to a vendor’s line of equipment. That can be an incredibly effective tactic for the sales and marketing teams, as long as the protocol works.

“Couple your proprietary protocols together with various levels of certifications and halfway competent marketing and sales teams and you too are halfway to becoming your own networking equipment vendor!”

All kidding aside, the idea of companies getting stuck on one vendor for primarily technical reasons is very real. Oftentimes a vendor protocol is simply the best available solution to a problem that can’t be fixed by a change in network topology or by using an existing protocol. In this regard, proprietary protocols are like a drug for your organization: once you start one, it can be hard to quit.

Q: So why not just use an open protocol?
A: The right one often doesn’t exist

Protocols are tools used to solve problems. Sometimes you need a box wrench because a socket won’t fit. Maybe you need a pry bar but all you have is a flathead screwdriver. Without the right tools, it’s hard to build things the way they need to be built. Once you’ve gone down that path, however, it can be hard to add another vendor’s product to your network, even one that might offer more value to your organization.

Whitebox, whitebox, whitebox. If you’ve been paying any attention whatsoever to the networking industry over the past few years, this has become an increasingly ubiquitous term. As we’ve discussed before, whitebox is the decoupling of network hardware and software: you pick your hardware based on the needs of your organization, then you pick the software with the best set of protocols, features, APIs, etc. that supports the hardware you purchased. À la carte network equipment.

So what the heck does whitebox have to do with EIGRP, DMVPN, etc?

As organizations wean themselves off Cisco, Juniper, Extreme, Arista, HP, etc., they will be moving to platforms (often using the same hardware SoC / ASIC / FPGA) where they add the software on top. That software will have a very detailed API to make the stack viable for all kinds of projects: custom integration, automation, and SDN / IBN solutions. That routing/switching software stack will use open protocols (MSTP, OSPF, BGP, etc.) and normally runs Linux underneath, with a full suite of on-device programming language support.
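To give a flavor of that API-driven management plane: NETCONF is one of the standard protocols these stacks expose. In practice you’d drive it with a library like ncclient over SSH, but the RPC itself is just structured XML in a well-known namespace. Here’s a sketch that builds a standard `<get-config>` request with nothing but the Python standard library (the `message-id` value is arbitrary, as the protocol allows):

```python
import xml.etree.ElementTree as ET

# Standard NETCONF 1.0 base namespace (RFC 6241 lineage)
NS = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_get_config(source: str = "running") -> bytes:
    """Build a NETCONF <get-config> RPC asking for the given datastore."""
    ET.register_namespace("", NS)  # serialize with a default xmlns
    rpc = ET.Element(f"{{{NS}}}rpc", attrib={"message-id": "101"})
    get_config = ET.SubElement(rpc, f"{{{NS}}}get-config")
    src = ET.SubElement(get_config, f"{{{NS}}}source")
    ET.SubElement(src, f"{{{NS}}}{source}")  # e.g. <running/>
    return ET.tostring(rpc)

print(build_get_config().decode())
```

The point isn’t this snippet; it’s that the whole configuration surface of the device is reachable this way, which is what makes custom integration and automation practical.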

So now we’ve been plucked away from our vendor safe space and tossed into a garage. This garage has everything a professional racing-level mechanic would need to make the most amazing car the planet has ever seen. In the center of the garage, there’s a massive dual-turbo V8 mounted to a tube frame with basic racing seats, racing tires, a 10-speed transmission, etc. It’s a formidable vehicle for the track, but what if we needed it to do something slightly different? Maybe tow something offroad? It’s a good thing we have this garage and access to all the tools…

I think what we are going to see is an explosion in the development of new open source protocols. I think they’re going to come about due to business need and fear of vendor lock-in now that people have been pulled away from proprietary solutions, and I think they’re going to get developed and implemented faster than the standards bodies (IEEE, IETF, etc.) can keep up. We have the hardware, we have the APIs, we have Linux shells on the hardware, we often have programmable silicon, and we have the business need. Add in some experienced developers, and I think that as adoption of whitebox solutions continues, we’re going to see some amazing things in this space over the next few years.

So what do you think? Post your comments below and I’ll try to address them as I get the chance.

PS – Work on your Python, and NETCONF



Why does my WiFi suck, and how do I fix it?

I hear complaints from people all the time about their home WiFi situation. In this post, I’ll address some of the common complaints, explain a bit about how wireless works, and provide a couple of examples and solutions to better enable your decision making when it comes to outfitting your home or small business with WiFi.



My wireless doesn’t reach all the corners of my house! I’ve tried $40 routers from Wal-Mart, and $300 routers from Amazon/Newegg. They’re all virtually the same. Help!


Stop buying wireless routers. You’re much better off going with several shorter-range, lower-power wireless access points (WAPs) than one giant multi-antenna routing monstrosity.


First of all, you’re paying for quite a few features you’re not using: the actual routing, the DHCP server, NAT, etc. Those are all things that are important at the edge of your network (where your internet provider connects to you), but not so much in the rest of the house. This can also cause problems: unless you put the device in bridge mode, you end up having to create another network segment (a separate subnet from the rest of your network, for example). Often this will break applications or hardware features that rely on a single layer 2 broadcast domain, or on multicast traffic. Certain speaker systems, smartphone applications for home automation, etc. often won’t work if they’re not on the same layer 2 network.

Secondly, just because your wireless may be more powerful does not mean it’s better. Something many people don’t realize is that a wireless connection, like any other type of multi-party communication, is a conversation. Cranking up the power on your router is akin to using a bullhorn to talk to someone on the other side of the parking lot. Sure, you may hear me fine if I speak through it, but unless you also have a bullhorn, I won’t be able to hear your reply.

In the wireless and microwave world, we use a term called EIRP (effective isotropic radiated power). You don’t need to learn that acronym in detail for the sake of this post; in short, it’s the combination of the transmit power used and the strength (gain) of the antenna. Increasing output power only increases the strength of the signal in one direction. Using a larger antenna, on the other hand, increases the strength of the signal both ways, send and receive (tx and rx). The problems we’ll often run into with larger antennas are cost, aesthetics, impracticality, or, in extreme cases, legality (the FCC, or Federal Communications Commission, puts limits on the EIRP allowed on a given frequency). Something else to consider: a larger antenna means you’ll hear more. More of what? Well, everything. Not just your device, but the devices and wireless access points of everyone around you.
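The EIRP arithmetic itself is simple, because everything is in decibels: transmit power plus antenna gain, minus whatever is lost in the cabling. A quick sketch (the 36 dBm figure in the comment is the commonly cited FCC cap for 2.4GHz point-to-multipoint links; check the actual rules for your band and use case):

```python
def eirp_dbm(tx_power_dbm: float, antenna_gain_dbi: float,
             cable_loss_db: float = 0.0) -> float:
    """EIRP is what effectively leaves the antenna: transmit power
    plus antenna gain, minus loss in cabling and connectors."""
    return tx_power_dbm + antenna_gain_dbi - cable_loss_db

# A typical home router: 20 dBm radio into a 5 dBi antenna
print(eirp_dbm(20, 5))       # 25.0 dBm
# Crank the radio to 30 dBm and bolt on a 9 dBi antenna:
print(eirp_dbm(30, 9, 1.0))  # 38.0 dBm -- past the oft-cited 36 dBm
                             # FCC point-to-multipoint cap at 2.4GHz
```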

So, if increasing power only works in one direction, and increasing the antenna size has diminishing returns, what should you do to increase the coverage of your home or business?

Use more wireless access points at lower output power

Keeping the output power similar to that of the devices you use has a dramatic effect on the quality of communication.


What about those wireless mesh systems? Should I use those?




There are a couple of different communication systems used in wireless mesh design. The oldest, cheapest, and most common is based on the principle of “store and forward” repeating. These are single-radio, single-frequency designs, and a frequency is a shared medium. In the simplest terms, in a store-and-forward design, anything your WAP hears is stored in memory; the radio waits for the frequency to clear, then transmits that data up to whatever other WAP it’s uplinked with. Because of this, your maximum throughput is effectively halved for each wireless mesh hop. With two mesh hops on a store-and-forward design, your maximum throughput would effectively be one quarter of what you would get communicating with a single wired WAP. This problem is compounded when multiple people are connected to the same mesh system and are all trying to send and receive data!

Better “mesh” designs have slowly started to work their way into the market

Multi-radio, multi-frequency designs largely eliminate the shared-medium problem, since they use two (or more) frequencies: one to talk to the client, and one or more to uplink to another WAP. You can still run into the same problem here, though: if multiple WAPs are uplinked to the same WAP on the same frequency, you’ve simply shifted the contention to the uplink frequency instead of the access frequency your devices use to talk to the nearest mesh WAP.

The best design for a distributed WiFi system isn’t even technically mesh in the sense of uplinking to the rest of the network wirelessly: the WAPs are simply wired. These systems all wire back to one or more switches, often using the same wireless network name (SSID) and often the same password. This eliminates the throughput restrictions imposed by the width of the frequency (I’ll explain this later), the strength and quality of the uplink frequencies to each mesh WAP, and so on.

In short, if you need the maximum amount of coverage and throughput, use multiple WAPs all wired to one or more switches.


I have great signal strength, but my older (802.11b, 802.11g, 802.11n) router or WAP just can’t meet my throughput needs. Should I upgrade?


Most likely.


802.11ac, the newest mass-adopted WiFi standard (yes, I know 802.11ad is out, but that’s a very different situation), brings with it many performance enhancements while remaining backward compatible with existing 802.11a/b/g/n equipment.

Before I go any further, let me point out that the 802.11ac standard itself is mainly about increasing performance in the 5GHz band. 802.11ac routers and WAPs will often be dual-radio designs that support older 2.4GHz gear as well, but the vast majority of the “pound for pound” performance increases come either from the wider channel bandwidths usable in 802.11ac or from the number of spatial streams. In 802.11b and 802.11g, we were using 20MHz-wide channels (aside from certain vendor-specific tweaks). In 802.11n, the possible channel size increased to 40MHz. In 802.11ac, we are now looking at 80MHz- and 160MHz-wide (80+80) channels.

 See the chart below (courtesy of Cisco):

As you can see, even keeping to one spatial stream (SISO), which is what most cell phones and IoT devices have, there’s a possible ten-fold increase in bandwidth going from a 20MHz-wide channel up to the 160MHz-wide channels that 802.11ac (wave 2 and later) supports. The possible maximum throughput is then doubled again for each additional spatial stream. Note that this is only possible in the 5GHz band, which has far more available spectrum than the 2.4GHz band. The 2.4GHz band, as allocated in most countries, only supports a maximum channel width of 40MHz: one “fat” 40MHz (20+20) channel.
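You can ballpark these PHY rates yourself from the channel parameters: data subcarriers times bits per subcarrier times coding rate, divided by the symbol time, scaled by spatial streams. The sketch below uses the 802.11ac figures for 256-QAM at the highest coding rate with the short guard interval; it reproduces the familiar “433 Mbps” single-stream 80MHz number (this is a raw PHY rate, not real-world throughput):

```python
# Data subcarriers per 802.11ac channel width (MHz)
DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}

def vht_phy_rate_mbps(width_mhz: int, streams: int = 1,
                      bits_per_symbol: int = 8,      # 256-QAM
                      coding_rate: float = 5 / 6,    # highest VHT coding rate
                      symbol_time_s: float = 3.6e-6  # short guard interval
                      ) -> float:
    """Rough 802.11ac PHY rate: subcarriers x bits x coding rate
    per symbol time, scaled by spatial streams."""
    sc = DATA_SUBCARRIERS[width_mhz]
    return sc * bits_per_symbol * coding_rate * streams / symbol_time_s / 1e6

print(round(vht_phy_rate_mbps(80), 1))              # 433.3 (the "433 Mbps" spec)
print(round(vht_phy_rate_mbps(160, streams=2), 1))  # 1733.3
```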

One final note: each time you double the channel bandwidth, you also cut receiver sensitivity in half (roughly a 3dB hit). This is something to keep in mind if you design a coverage model around the more common 20MHz-wide channel, only to turn on 80MHz- or 160MHz-wide channels and find that your coverage is now terrible. Also, 5GHz attenuates twice as much as 2.4GHz, or worse: it’s a higher frequency, and the higher the frequency, the harder it is to penetrate structures and environmental factors (humidity, rain, snow, etc.).
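The sensitivity hit comes straight from the thermal noise floor, which rises with bandwidth as -174 dBm/Hz plus 10·log10(bandwidth). A quick sketch:

```python
import math

def noise_floor_dbm(bandwidth_hz: float) -> float:
    """Thermal noise floor at room temperature:
    -174 dBm/Hz + 10*log10(bandwidth in Hz)."""
    return -174 + 10 * math.log10(bandwidth_hz)

for mhz in (20, 40, 80, 160):
    print(f"{mhz} MHz: {noise_floor_dbm(mhz * 1e6):.1f} dBm")
# Each doubling of channel width raises the noise floor ~3 dB,
# so the receiver needs ~3 dB more signal for the same SNR.
```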


So I took your advice and bought a couple of WAPs, and now I’m trying to figure out where to put everything. Is there a method I should use to go about this?


Absolutely! In professional terms, we call it a “site survey”.


A site survey entails placing as many WAPs as you think you will need in the places you think will give the best coverage, and then walking around that coverage area with some kind of tool to check the signal level at various locations. This can be as simple or as technical as you want. There’s an Android app called “WiFi Analyzer” that can show you the signal level for each SSID (wireless network) in the area. That’s good for a baseline. Or you can go further and heat-map the area.

A wireless heat map uses a combination of hardware (a WiFi device) and software (including a floor plan) to map the coverage area at various locations. In a typical heat map, areas with stronger signal are marked in red, and areas with weaker signal are marked in green and blue. Depending on the settings in the software, those colors correspond to different signal level thresholds (-50dBm, -60dBm, etc.). As a rule of thumb, I want no less than -74dBm anywhere in my home, so I aim for the number of wireless APs needed to make that coverage possible at a low or medium power level.
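If you log readings by hand instead of heat-mapping, even a few lines of Python can flag the spots that fall below that rule-of-thumb floor. The locations and dBm values here are made up for illustration:

```python
# Survey readings (dBm) at a few spots; values invented for illustration.
readings = {"kitchen": -58, "garage": -77, "bedroom": -66, "porch": -81}

THRESHOLD_DBM = -74  # rule-of-thumb minimum signal level from above

# Anything weaker (more negative) than the threshold needs another AP
# nearby, or a placement tweak.
weak_spots = {spot: rssi for spot, rssi in readings.items()
              if rssi < THRESHOLD_DBM}
print(weak_spots)  # {'garage': -77, 'porch': -81}
```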

Hopefully this blog post was informative for those of you out there who may be having problems with your WiFi, or who simply had questions about certain topics. Wireless is a very deep field, spanning antenna designs, log scales, frequency “personalities”, radio designs, indoor vs. outdoor, modulation levels, noise filters, PHY technology, SoC capabilities, and more. I tried to keep this informative without being too technical. If you’d like me to expand on a certain topic in the future, feel free to send me an email and I’ll try to make that happen.


SDN is dead, long live SDN

SDN, or Software Defined Networking, was supposed to be the Next Big Thing(tm) in how networks were built and designed.  In reality, it often creates more problems than it solves, and never really addresses the business needs it was conceived to meet.


One of the problems, as others have pointed out, is that SDN, almost from conception, became all about vendor lock-in. You were tied to their software, which was tied to their hardware, which was tied to their reference designs, methodologies, features, upgrade cycles, and support systems. All that money you would save (hah!) on expensive network engineers was suddenly diverted into a recurring revenue model with a vendor. You wouldn’t actually save any money at all; instead you would hemorrhage cash flow into companies like Cisco.

Another problem was that for most vendors, their “fancy” SDN software was really only about “solving” one very specific problem, or a small set of problems with limited possibilities.  Sure, your SDN vendor might throw you a web page with more graphs than a demo Grafana dashboard, but under the hood it was no better than a bunch of QBASIC GOTO statements managing the load balancing and traffic predestination of various WAN links.

At the end of the day, none of this really came together to solve real problems other than propping up $vendor’s quarterly balance sheet.


… and then came Whitebox Networking. No, that’s not a vendor, but a term for taking reference hardware designs from companies like Broadcom, slapping on the network operating system of your choice (Cumulus Networks, Free Range Routing, Snaproute, Vyatta, etc.), and walking out the door with extra cash in hand and virtually unlimited capabilities in the equipment you just purchased. No longer are we tied to a specific all-encompassing vendor to meet our organization’s networking requirements in terms of hardware and software.

Okay, so this helped break the nefarious umbilical cord between organizational IT and $vendor. But we still lacked ways to combine network designs, equipment, software, and documentation to meet business needs in real time, without months or years of methodical planning, documentation, and change windows, only to realize that something was missed and the needs of the business still aren’t being met.


Enter Intent-Based Networking, or IBN.

Ohhh look, another markety thing!  Shiny!

All <snark></snark> aside, this is what SDN was really designed to do.  In theory, in this model one designs not the network, but the needs of a service or application and lets the software design or redesign the existing network in realtime to meet that need.

Let that sink in for a bit.  Seriously.  Go grab a cup of coffee, tea, water, Kentucky bourbon, or whatever tickles your fancy while you think on this.

Let me say it again:

“… in this model one designs not the network, but the needs of a service or application and lets the software design or redesign the existing network in realtime to meet that need.”

So, going a bit deeper into the concept: you might tell your IBN software that you intend to deploy an application (X) using storage resource (Y), along with any particular quality-of-service or SLA requirements you have, and the software will reconfigure your mixed-vendor network to create an environment that meets those needs.  This is just one example, but it’s an amazingly powerful concept with real promise.
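To make the idea concrete, here’s a toy sketch of the declare-intent-then-reconcile loop. Everything here, the field names, the latency-only check, the link data, is invented for illustration; real IBN products (Apstra and friends) have far richer models and actually render device configuration:

```python
from dataclasses import dataclass

@dataclass
class Intent:
    """What the operator declares: outcomes, not device configs.
    Field names are invented for illustration only."""
    application: str
    storage: str
    min_bandwidth_mbps: int
    max_latency_ms: float

def reconcile(intent: Intent, link_latency_ms: dict[str, float]) -> list[str]:
    """Toy 'reconciler': keep only the links that can satisfy the intent.
    A real IBN system continuously re-checks and re-renders device config."""
    return sorted(link for link, lat in link_latency_ms.items()
                  if lat <= intent.max_latency_ms)

intent = Intent("app-X", "array-Y", min_bandwidth_mbps=1000, max_latency_ms=5.0)
print(reconcile(intent, {"spine1": 2.1, "spine2": 7.9, "spine3": 4.4}))
# ['spine1', 'spine3']
```

The interesting part is what this sketch leaves out: the continuous loop where the software notices drift between declared intent and observed network state, and corrects it.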


We’ve all heard stuff like this before in IT: tech that seems too good to be true.  “This world-changing tech is a paradigm shift that overlays and merges business acumen with network zeitgeist to create synergy of idea and symmetry in execution.”  Or something.  Blah, blah, blah.  Twelve months later and a few million dollars in the hole, nobody is any happier except for the salesmen (sales-women, sales-people, sales-you, and so on) who made commission on your folly.  You get stuck trying to implement their crap, and they take off to the airport in their second or third Ferrari, headed for the Maldives.

… but this time, it seems to be legit.  Much like how x86/x64, the iPhone, Linux, VMware, and, to a lesser extent, Ubiquiti (with outdoor wireless and UniFi) changed segments of IT in massive ways, Intent-Based Networking has actual promise and real customers.

As covered on a recent podcast, Apstra has a real product that does this NOW.  Yes, it’s young technology, but it’s rapidly developing. Other vendors are getting in on this as well: new, hip software companies and the usual suspects among traditional vendors. Both have skin in the game, of course, and this will be a long, ugly battle before the fog rolls off.

Until then, keep brushing up on your Python and your cloud certifications.  Automation in the network is just the start.

“Stay woke” 😉