Thursday, October 29, 2015

Selecting a Solution that Fits: Right-Sizing Your Storage

We are all familiar with the claim that “One Size Fits All,” and most of us have at one time fallen prey to buying an item only to find that advice was a bald-faced lie. So when it comes to the numerous choices for housing your business’s data, how do you “size” and select the ideal fit for your environment?

For some, it may be as easy as scanning Gartner’s Magic Quadrant for this year’s top performers or the breakthrough up-and-comers. What about transport methods: will you favor iSCSI, Fibre Channel, Fibre Channel over Ethernet, or SAS? Not to mention you must have solid state drives; it’s the hot technology, everyone is using it, and you don’t want to fall behind!

If, on the other hand, you are truly interested in investing in only what you need to get the job done efficiently, then selecting the right storage should start with intimately knowing the I/O patterns of the applications that will access the data. The most important metrics to consider are reads, writes, and I/O size. Data can be collected with various performance tools, such as perfmon on Windows or htop on Linux; these two are among a crowd, and everyone has their favorite. Be sure to measure across all workloads, since peak and off-peak periods may have different I/O characteristics. Pay attention to the following counters: Disk Reads/sec, Disk Writes/sec, disk latency, the size of the I/Os being issued, and disk queue length. If you are analyzing a database, also include Checkpoint pages/sec and Page Reads/sec.
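To make that concrete, here is a minimal sketch of one way to sample reads/sec, writes/sec, and average I/O size over time. It assumes the third-party psutil package and is only an illustration; perfmon, htop, or your monitoring suite of choice will give you the latency and queue-depth counters this simple loop does not.

```python
# Rough cross-platform sampler for the metrics discussed above (reads/sec,
# writes/sec, average I/O size). Minimal sketch using the third-party psutil
# package; it does NOT capture latency or queue depth.
import time
import psutil

INTERVAL_SECONDS = 5   # sample window; run during peak AND off-peak periods
SAMPLES = 12           # one minute of 5-second samples

prev = psutil.disk_io_counters()
for _ in range(SAMPLES):
    time.sleep(INTERVAL_SECONDS)
    cur = psutil.disk_io_counters()

    reads_per_sec = (cur.read_count - prev.read_count) / INTERVAL_SECONDS
    writes_per_sec = (cur.write_count - prev.write_count) / INTERVAL_SECONDS
    total_ios = (cur.read_count - prev.read_count) + (cur.write_count - prev.write_count)
    total_bytes = (cur.read_bytes - prev.read_bytes) + (cur.write_bytes - prev.write_bytes)
    avg_io_kb = (total_bytes / total_ios / 1024) if total_ios else 0.0

    print(f"reads/s={reads_per_sec:7.1f}  writes/s={writes_per_sec:7.1f}  "
          f"avg I/O size={avg_io_kb:6.1f} KB")
    prev = cur
```

Run a loop like this (or the equivalent perfmon data collector set) over a full business cycle so that end-of-month or backup-window spikes show up in the numbers.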

Once you have a solid idea of how your applications perform, you can move on to sizing the physical disks. Typical IOPS per spindle range from 100-130 on 10K RPM drives, 150-180 on 15K RPM drives, and 5,000+ on solid state drives. Keep in mind that disks filled to less than 50-70% of capacity will deliver better IOPS than disks filled to 80%, so spread the data out! You will also want to consider the write penalty your RAID choice imposes on I/O issued to the disks. Read and write cache on the storage controllers can improve performance and should be taken into consideration if the applications lean heavily one way or the other.
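As a rough illustration of the math, the sketch below turns measured read and write IOPS into a spindle count using the rule-of-thumb per-drive figures above and common RAID write penalties. The specific numbers and the example workload are assumptions for illustration, not vendor specifications.

```python
# Back-of-the-envelope spindle sizing from measured workload numbers.
# Per-spindle IOPS and RAID write penalties are rules of thumb, not specs.

RAID_WRITE_PENALTY = {"RAID0": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4, "RAID6": 6}
IOPS_PER_SPINDLE = {"10K": 115, "15K": 165, "SSD": 5000}

def spindles_needed(read_iops, write_iops, raid_level, drive_type):
    """Return (drive count, backend IOPS) needed to absorb the workload."""
    backend_iops = read_iops + write_iops * RAID_WRITE_PENALTY[raid_level]
    per_drive = IOPS_PER_SPINDLE[drive_type]
    # Round up -- you cannot buy a fraction of a disk.
    return -(-backend_iops // per_drive), backend_iops

# Example: a workload measured at 2,000 reads/sec and 1,000 writes/sec.
drives, backend = spindles_needed(2000, 1000, "RAID5", "15K")
print(f"Backend IOPS: {backend}  ->  ~{drives} x 15K spindles (before capacity headroom)")
```

Notice how the same front-end workload lands very differently on the back end depending on RAID level: the 1,000 writes/sec become 4,000 back-end writes on RAID 5 but only 2,000 on RAID 10.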

Sizing storage systems is much more than just sizing the disks. Every component in the path to the physical drives has a throughput limit and is a potential bottleneck. As more solid state disks are deployed, the configuration of these ancillary components becomes even more critical. To avoid pitfalls, analyze the potential throughput of each of the following (a quick sanity check is sketched after the list):
  • Connectivity: HBAs, NICs, switch ports (and if they are shared by multiple servers), array ports, and the number of paths between servers and storage.
  • The number of service processors in the array and how the LUNs are balanced across them.
  • The capacity of the backend busses on the array and how the physical disks are balanced across them.
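The sketch below compares the throughput a measured workload demands (IOPS × average I/O size) against per-component limits. The link speeds shown are illustrative placeholders; substitute the real figures for your HBAs, NICs, switch ports, array ports, and backend busses.

```python
# Quick sanity check: does the workload's throughput fit through each hop
# in the path? The component limits below are assumptions for illustration.

WORKLOAD_IOPS = 6000          # measured peak IOPS
AVG_IO_KB = 32                # measured average I/O size
workload_mbps = WORKLOAD_IOPS * AVG_IO_KB / 1024   # MB/s required

# Rough usable throughput per component (MB/s) -- replace with your numbers.
path = {
    "8 Gb FC HBA (single port)": 800,
    "10 GbE iSCSI NIC": 1100,
    "Array front-end port": 800,
    "Backend bus (shared)": 1600,
}

print(f"Workload requires ~{workload_mbps:.0f} MB/s")
for component, limit in path.items():
    headroom = limit - workload_mbps
    status = "OK" if headroom > 0 else "BOTTLENECK"
    print(f"{component:30s} limit {limit:5d} MB/s  headroom {headroom:6.0f}  {status}")
```

Remember to divide shared components (switch ports, array ports, backend busses) by the number of servers or LUNs contending for them before comparing.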
Other considerations that can affect sizing decisions are advanced features offered by today’s latest generation of storage arrays. Examples are thin provisioning, snapshots/clones, compression, deduplication, and storage based replication. In addition to these, some new generation arrays utilize technology that throws all those media IOPS estimates out the window!
 
The moral of the story is this: arming yourself with in-depth knowledge of your application performance gives you the ability to quantify different array features and purchase only what you really need. Arrays that tout doing everything come with a hefty price tag. If they don’t benefit your applications, they are worthless, and saving money where it makes sense has never been a bad business decision.

Thursday, October 22, 2015

Why Every Business Needs a Storage Assessment

Let’s pretend you’re in the market to buy a new house. You’re confident your current house is going to sell in the next few months, and you’d like to move as soon as possible. 

I am currently going through this process, and at times the number of questions to work through feels daunting. What is on the list of attributes you’d like your new home to have? How are they prioritized? How much does price affect your decision? Here are some common filters when searching popular real estate sites:

1. Listing type
2. Price range
3. Location
4. Number of beds
5. Home Type
6. Number of bathrooms

And the list goes on.

Maybe you’d like to sort by square footage or by how many days a listing has been on the market. As I run through the filters and questions, I start to pick candidates and save them to a list to research. The plot of land and location may be beautiful, but does that river raise the cost of insurance? By how much? Is there a flood risk?

Now it’s time to start visiting and walking through places… creepy Michigan basement – no thanks.  Oil heat – Seriously? Water damage – pass. Sketchy roof – next house, please. I walked through one place where the tenants were present. They were really nice and after talking to them a bit, they mentioned that shortly after moving in they noticed that they would find the refrigerator in the middle of the kitchen. Foundation issues – can’t run fast enough.

It would be so much easier if all this information was provided honestly up front.

Approaching a storage purchase for your datacenter can feel just as daunting. Now, more than ever, there is a myriad of options in the storage market, and almost all of them have a valid play and use case. But before you can choose a new storage array, shouldn’t you know what you have first? Just like any other major purchase, it helps to take stock of what you have and how it has served you over the time you’ve used it. When attempting to do this with your storage environment, you’re often limited to the capabilities of the arrays or the hosts connecting to them. Just like choosing a new place to live, a number of factors often come up:

1. Connectivity
2. SSD Support
3. Data efficiency
4. Ease of use
5. Vendor Support
6. Performance

There are many, many more storage features out there, and each manufacturer often has its own unique implementation of them. So how does one decide which factors should be given more weight in the purchase decision? Too often we rely on a “best guess,” and this can lead to misuse of a large portion of your budget, especially when you consider that storage is often the most expensive piece of the pie compared to compute and networking. For this reason, it’s important to take stock of what we have and the problem we’re looking to solve, and to make sure that moving forward we have the tools in place to make this decision easier. Once we can report on what we have and how we’re using it, we can confidently make an informed decision on how to add to, or replace portions of, our storage infrastructure. That’s the beauty of it: if we have this information in hand first, suddenly we’re able to quickly filter out all the vendors with solutions that don’t fit our problem. The flooded list of storage vendors suddenly becomes a manageable list of two or three.


A storage assessment will produce a report that inventories what you have, shows how you’re using it, and helps identify areas of improvement. You can improve performance and availability and reduce operating costs – who doesn’t want to do all of that?! With this knowledge, you can now start to answer other initiatives with certainty: Should we entertain cloud storage? What about a colocation facility? What kind of storage should we have at our DR site? How much can we save in power and cooling by replacing legacy storage? With clear direction, you can now accomplish more than you thought possible.

- Adam P.

Adam P. is a sales engineer with experience in servers, storage, and virtualization with a focus on data management. He started working with VMware ESX 3.5 and has worked with numerous companies assisting with physical to virtual initiatives. When not talking storage or virtualization he’s binge watching shows on Netflix or enjoying some Short’s brewery beverages and playing board games with friends. His favorite movie of all time is: The Big Lebowski.

Friday, October 16, 2015

Securing the Castle (Part 1)

While the concept of firewalling all the way down to the access layer may seem a bit unorthodox, and is certainly not a practice suitable for every occasion, the question is: could it be a feasible design for a network? In short, it absolutely can be. That is not to say it wouldn’t take some planning and engineering work up front, but if we look at any secure facility of worth throughout history, we find that a significant amount of engineering and planning took place before any construction started. Why should the foundations of the IT infrastructure supporting a legitimate business be any different? The truth is that they should not be. However, in my experience, a comprehensive and holistic view of the IT infrastructure in its entirety is actually quite rare among organizations. Typically, it is a patchwork of separate teams who rarely collaborate with each other on the best way to achieve a particular goal, but I digress…

So the questions that need to be answered, before we even hop into the architectural aspects of such a design, become:

1. Firewalls are expensive. How could an organization possibly be able to afford that many firewalled ports?

Answer: The truth is that unless you’ve bound yourself to a particular vendor or technology, there are a number of very good options on the market that can meet this constraint. In some cases, the price per firewalled port vs. non-firewalled port can actually work out to be less than your typical switch port - if you have an open mind.

2. I don’t know of any firewalls that don’t have the port density that switches do. Are there firewalls that do?

Answer: While high port density firewalls aren’t going to be part of your typical firewall portfolio, they are out there. Vendors such as Juniper Networks and Fortinet currently have products in their portfolios that fit the bill. Even Cisco had a product, the FWSM (Firewall Services Module) for the 6500, that gave your Catalyst switch the ability to firewall hundreds of ports.

3. Firewalls add a significant amount of overhead when it comes to managing policies. How could I possibly manage a large environment with firewalls all over the place?

Answer: Central management is much more common than it used to be. Many vendors now offer the ability to manage their products via the cloud or through an on-premises appliance. One of the features these management platforms usually include is the ability to create device templates and push policies out to device groups. In some situations, this can even be faster than making changes in a switch environment.

4. Firewalls are not as fast as switches. Are there firewalls that can give me the throughput and performance that I need?

Answer: Absolutely. Your typical user isn’t in a switching environment that needs nanosecond latency like Wall Street. Many of the firewalls you see today add only a few microseconds of latency to process firewalled traffic. That is millionths of a second to process a firewall policy, whereas ping latency is typically measured in milliseconds, thousandths of a second. It’s pretty quick and impressive, to say the least.

Based on those answers, we can see that architecting a network where security is at the forefront of the design has become much more feasible than it once was. However, it comes at the cost of allocating time and resources up front and getting all of the required teams to meet and collaborate for such a design to be successful. I’ll discuss more of the architectural components of making such a design a reality in my next blog.

Thursday, October 8, 2015

Don’t Just Leap to the Latest Product Release – Look First!

When new software and hardware versions are released, we nerds love to jump to the latest and greatest. Unfortunately, I have witnessed the catastrophes that can occur when due diligence is not performed first.

VMware publishes two independent, but equally important, compatibility lists: the Hardware Compatibility Guide and the Product Interoperability Matrixes.

 
The VMware Hardware Compatibility Guide is the de facto guide for determining hardware support with VMware products and versions. Everything from server models to I/O devices, along with the ESXi versions they are supported with, can be found there. Need to know which SSD options are supported with Virtual SAN 6.0? Yep, that is on there. I have walked into environments where host servers were upgraded to an ESXi version newer than what the Compatibility Guide lists as supported, and sure, ESXi installed successfully, but that is only the first hurdle in a marathon. What happens after installation when you receive a purple screen of death at 2 AM? It means it is time to troubleshoot and fix something that was never verified as supported.

VMware’s Product Interoperability Matrixes are definitely the lesser-acknowledged sibling of VMware’s compatibility lists. Often overlooked once the hardware checks out, the Product Interoperability Matrixes help navigate the increasingly complex web of intertwined VMware products. Once you progress beyond basic server consolidation and improve disaster recoverability with Site Recovery Manager, you need to consider which versions of Site Recovery Manager are supported with new versions of ESXi and vCenter. And what about virtualizing the network with NSX? You most definitely should verify support before upgrading ESXi and vCenter following a new release. What happens if the newest version of ESXi does not yet support the older version of NSX that you have not gotten around to upgrading? That can lead to large, unplanned downtime that may not be possible to test in a lab environment without spare time and evaluation licenses.
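One lightweight way to enforce that discipline is to keep a hand-maintained snapshot of the matrix entries that matter to you and gate every upgrade against it. The sketch below shows the idea; the product names and version pairings are illustrative assumptions, not data copied from VMware’s matrixes, so always confirm against the official Hardware Compatibility Guide and Product Interoperability Matrixes before acting.

```python
# Pre-upgrade gate: check the products you actually run against a
# hand-maintained snapshot of the interoperability data. The version
# pairings below are ILLUSTRATIVE ONLY -- verify them in VMware's
# official matrixes before any upgrade.

# (product, version) pairs recorded by hand as verified for a target ESXi release.
SUPPORTED_WITH_ESXI = {
    "6.0 U1": {
        ("vCenter Server", "6.0 U1"),
        ("Site Recovery Manager", "6.1"),
        ("NSX for vSphere", "6.2"),
    },
}

# What is actually deployed today.
current_environment = [
    ("vCenter Server", "6.0 U1"),
    ("Site Recovery Manager", "5.8"),   # not yet upgraded
    ("NSX for vSphere", "6.2"),
]

target_esxi = "6.0 U1"
blockers = [p for p in current_environment if p not in SUPPORTED_WITH_ESXI[target_esxi]]

if blockers:
    print(f"Do NOT upgrade to ESXi {target_esxi} yet. Unverified products:")
    for name, version in blockers:
        print(f"  - {name} {version}")
else:
    print(f"All recorded products verified against ESXi {target_esxi}.")
```

The same list is a handy place to note your backup software, replication appliances, and other third-party products whose support matrixes also need a look before you upgrade.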

As much as dotting each I and crossing each T can be a pain and seem unnecessary, especially after things have been stable for so long, I must urge everyone to verify not just hardware support, but also support with intertwined products before jumping to the latest, greatest new version with the sparkling bells and shiny whistles. VMware has released two fantastic resources with the Hardware Compatibility Guide and the Product Interoperability Matrixes, but do not forget to consult your other intertwined products (e.g. backup solution, replication appliance, etc.).

Thursday, October 1, 2015

Challenges in the Unlicensed Wireless Spectrum...

It seems that many of the wireless manufacturers are always seeking ways to improve security on a medium that seems very easy to compromise. For the most part they have done a good job. However, depending on what vertical you are in, utilizing these tools can land you in some hot water!

An international hotel chain was recently fined by the FCC for blocking personal WiFi hotspots. The practices used to accomplish this are very common among enterprise wireless deployments today, and they exist mainly to keep unauthorized clients from associating with APs they shouldn’t. The problem shows up when that de-association practice is taken a step further and clients are de-associated from their own hotspots. As you can imagine, it left end users trying to access the Internet with one undesirable choice: pay the hotel to use theirs. That didn’t fly too well in the eyes of the FCC. Since what we use every day is unlicensed spectrum, it should be shared by all.

Ultimately, it is the responsibility of the end user to use the equipment appropriately. IPS controls can be used effectively to maintain the integrity of an enterprise wireless network. Making sure your wireless network has proper rogue containment is always good practice, along with a host of other tweaks based on your environment. Just make sure you’re not the next one in line to write a fat check to the government!