Thursday, December 10, 2015

Toys for Tots: Effects of the VTech Hack

Before all of you parents go running out to your closest toy store (if you haven’t already) to get your child one of the latest tech toys from VTech, there are a couple of things you may want to be aware of.  First and foremost is the fact that in November VTech was hacked and was found to be storing the personal data of roughly 5.2 million people, mostly adults but children too. The second is that some of the information accessed included a significant amount of chat logs and pictures from VTech’s Kid Connect service, which allows parents to text or chat with their children on a VTech tablet via a smartphone app.  Many might be thinking, “Chat logs and pictures? What’s the big deal?”  Well, I can think of many mischievous ways our cyber-connected world could use this data. 
What immediately comes to mind is the hack with the highest rate of success: social engineering. Social engineering is “a non-technical method of intrusion hackers use that relies heavily on human interaction and often involves tricking people into breaking normal security procedures” (social engineering definition). Having a wealth of information about the potential target increases the chances of success exponentially because you already have plenty of conversation starters to craft that “trust relationship” through small talk.  The other thing that comes to mind is identity theft.  Not in the near future, but further down the road.  If the hackers who gleaned that information wanted, they could have more than enough identities to defraud for years to come.  All it would take is patience: hold onto the children’s information for a few years until they come of age, then cleverly start down the list of potential targets who will have long since forgotten, and perhaps never even knew, that a significant amount of their personal data was compromised years back.

Then again, maybe it’s not that big of a deal when, in today’s day and age, it’s the social norm for people’s personal lives to be on display for the world to see on Facebook, Twitter, or Instagram.  So is it really any wonder that we have all of the cybercrime that we do?  I don’t think so.  If anything, I’m surprised there isn’t more of it.  I say we’ve created a Cyber Cedar Point for hackers, where our lives are the main amusement of the park.  It’s not a matter of if the hackers will take a spin; it’s a matter of when the line dwindles down enough for them to get on board.  I honestly wonder sometimes whether it’s a lack of security awareness or whether people just don’t care. 

Thursday, December 3, 2015

The Case Against Unified Storage

Unified storage, a “single” storage solution that handles both file-level and block-based storage, has become more common in data sheets in recent years as manufacturers compete to complete every checkbox on the speeds and feeds charts. I see the advantage behind the reduced device count and simplified management interface; however, I believe that unified storage only serves to place ink in a checkbox.

Most storage solutions that offer “unified storage” are the same block-based storage with a software component bolted on to present a volume on the network using NFS or CIFS/SMB. With this scenario, it is not uncommon to get a block-based storage array with a NAS head-unit that provides the NAS features; while this typically brings integration of the two within the management interface, they are still two separate devices—with the NAS head-unit leveraging a block-based volume on the array.

Now the integrated management of the block-based and file-level components is pretty awesome. Who does not dream of that mystical Single Pane of Glass? The downside is the limited NAS features typically offered with a unified storage solution. Your corporate environment is most likely heavy with Windows devices. What serves CIFS/SMB shares to Windows clients better than a Windows Server? Storage manufacturers are forced to lag behind on features and fault resolution as they attempt to play catch-up as Microsoft releases new features into the Windows File Services. Alternatively, some storage manufacturers offer their NAS head-units as Windows Storage Server devices - is this still “unified”?

Windows Server integrates much better with your backup solution than a unified storage solution does. In fact, to protect your file-level data, a unified storage solution requires the Network Data Management Protocol (NDMP). NDMP is a networking protocol, and as such, errors can occur. Troubleshooting faults in NDMP is a nightmare. Many backup vendors have built proprietary versions of NDMP that mask the original error message. Scouring online discussions turns up frequent posts of sysadmins trying to resolve an error, only to end with, “had to reboot the server to resolve the issue.” Maybe I am a little conservative on this front, but I need to trust my backup solution and be able to easily verify the restorability of my data.
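Verifying restorability does not have to be complicated. Here is a minimal sketch, with hypothetical paths, that spot-checks a test restore by comparing checksums against the source share.

```python
import hashlib
from pathlib import Path

def sha256(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_restore(source_root, restore_root):
    """Compare every file under source_root against its restored copy."""
    mismatches = []
    for src in Path(source_root).rglob("*"):
        if src.is_file():
            restored = Path(restore_root) / src.relative_to(source_root)
            if not restored.exists() or sha256(src) != sha256(restored):
                mismatches.append(str(src))
    return mismatches

# Hypothetical paths -- point these at a share and a test restore location.
if __name__ == "__main__":
    bad = verify_restore(r"\\fileserver\finance", r"D:\restore-test\finance")
    print("All files verified" if not bad else f"{len(bad)} mismatches: {bad[:5]}")
```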

A final thought before I ramble on about this all day …

Virtualization is a given nowadays; there are no valid excuses to not be virtualized. What if we virtualized our file servers to increase availability and reduce maintenance? Why would we not deploy file servers, be it Windows or Linux based, as virtual machines that leverage block-based storage? Now, for the crazy bit, what if we clustered these virtual machines to create always online network shares for our users?

Many dedicated NAS solutions include unique features that provide much sought-after capabilities, but unified storage solutions only over-promise and under-deliver with their “jack of all trades” design.

-Ryan M. 

Ryan M. has over six years of experience architecting and implementing SMB and enterprise data center solutions. Currently a Solutions Architect at Great Lakes Computer, Ryan is focused on using modern virtualization and storage technologies to reduce OpEx, increase business continuity, and improve performance for customers.  

Friday, November 20, 2015

Speeds of the Data Center: What's Out There & What's Coming

The term “data center” is something many people are familiar with.  Data centers by nature tend to be hungry for bandwidth and are demanding more throughput than ever before.  It wasn't long ago that the vast majority of network engineers couldn't even imagine filling a 10 GbE link.  The tables have indeed turned with the introduction of cloud computing and virtualization, and demand for bandwidth has increased tenfold.

Providers are starting to make the switch over to 100 GbE on their backbone connections to support the workloads that their customers are demanding.  In the data center itself, most are seeing that 100 GbE and, in a lot of cases, even 40 GbE is overkill for the workloads they are currently serving up.

So, looking at the various speeds available today and what is coming in the very near future, you might be left wondering why there are not more options available.  Today, 10 and 40 GbE speeds are available and widely used in the edge data center.  So, where does 25 GbE come in? 

Let's take a look at how 25 GbE is derived.  Today's 100 GbE network devices utilize four channels of 25 GbE each; 25 GbE simply runs one of those channels on its own.  Using a single channel has multiple effects on device and environment sizing: it decreases the amount of heat the device gives off, which in turn decreases the amount of power and cooling required.  This allows for a much more cost-effective upgrade in the data center when 10 GbE is not enough and 40 GbE is way too much.  Down the road, this can be extremely beneficial when network operators realize that they need to double or even quadruple their current speeds.
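To make the lane math concrete, here is a quick sketch; the single-lane rate comes straight from the four-lane 100 GbE design above, and the loop shows how 25, 50, and 100 GbE fall out of the same serializer.

```python
LANE_RATE_GBPS = 25  # one SerDes lane, as used in 100 GbE (4 x 25)

for lanes in (1, 2, 4):
    print(f"{lanes} lane(s) x {LANE_RATE_GBPS} Gb/s = {lanes * LANE_RATE_GBPS} GbE")
# 1 lane(s) x 25 Gb/s = 25 GbE
# 2 lane(s) x 25 Gb/s = 50 GbE
# 4 lane(s) x 25 Gb/s = 100 GbE
```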

The IEEE standard for 25 GbE is not expected to be ratified until sometime in 2016.  Its arrival is being anxiously awaited.  Until then, manufacturers will not be quick to develop products that support those speeds, as their profitability would be low.  It’s all about the Benjamins!

The adoption of the 40 GbE and 100 GbE standards by the IEEE just a few short years back has already spawned focus groups to begin development of 400 GbE and even Terabit Ethernet.  Seeing how the edge network is rapidly growing, the demand for these faster speeds will only continue to gain momentum.

There is a lot to look forward to in the coming years in the world of Ethernet, especially if you are a speed junkie.  The beauty behind this push is that enterprises will want choices and will in turn push manufacturers to produce 25/50/100 GbE NICs, assuring your data pipe keeps flowing like a well-oiled machine!

Wednesday, November 11, 2015

Fixing the Weak Link: The Human Element in Network Security

We’ve all heard the age-old adage, “you’re only as strong as your weakest link.” Although the phrase originated in organized team sports, we use it in business as well. An Enterprise will experience success or failure based on the sum of the whole and, if a certain team or team member isn’t pulling his / her weight, failure is imminent. This statement also applies to network security.

We deploy network security devices in an attempt to secure our network. We place firewalls at the Internet edge and datacenter edge. We have intrusion detection and intrusion prevention hardware or software components running alongside these firewalls to inspect for malicious traffic patterns. We filter our users’ web content to try to prevent access to malicious web sites or code. We run endpoint security software that does everything from scanning for viruses to sandboxing applications. We implement multi-factor authentication. Some of us are finally inspecting application traffic and identifying the malicious traffic running over allowed ports. Fewer still are taking the application whitelisting approach and defining what CAN run on a device and blocking everything else. All of this is done with the best of intentions: to create the most secure network environment we can and to protect against attacks and attempts to access the data or systems we hold sacred.

And yet, we’re all failing. We’re failing because we’re addressing areas of perceived strength and ignoring the weakest link. “Our latest vulnerability assessment shows that we’re at risk because we have several unpatched servers and one of our web servers is vulnerable to a cross-site scripting attack.” Because of this vulnerability assessment, we now have approval to spend time and money to resolve these vulnerabilities. Unfortunately, this vulnerability assessment doesn’t show that Pat in our finance department has no idea what a phishing email looks like and has just clicked on the link in the “reset your password” email, logging into the company’s online banking portal for a 5th time to reset the password for the account that we use to process payroll each week… unsuccessfully, I might add.  Pat’s phone call to the help desk goes something like this: 

“Hey, are we having Internet problems? I can’t seem to get our online banking page to load.”

Help desk guru responds with “I can’t see any issues with our Internet. Seems to be working fine for me, so try it again in a few minutes. Maybe reboot your computer.”

By now, the fraudulent wire transfer of this week’s payroll has already been started using Pat’s credentials that were typed into the fake password reset form from the emailed link.  Pat is able to log into the account, post reboot, because Pat uses the favorite that was created in Firefox rather than clicking on the link in the email.

This story illustrates one of the many ways that an attacker can get what they want by exploiting the weakest link. At present, we view our network security systems, our firewalls, our IPS, our WAF, and our AV systems as our strongest links because they are configurable and do what we want them to do. People are the variables and are, inherently, our weakest links.  But they don’t have to be.

Some of the most secure networks, and the biggest targets for attackers, I might add, do not appear to have those perceived weak links. The people are still there, as are the weak links, but they are constantly being educated on the ever-changing threat landscape. Their employers perform routine Security Awareness Training. They perform in-house testing to reinforce that training and then do more training. Rinse and repeat.

They create policies that lock down the network and only allow those things which are necessary to perform core business functions. At the end of the day, your business exists to make widgets or provide a service to consumers. Unless your business IS Facebook or Twitter, what reason could you possibly have for being on those pages during the course of normal business? Obviously, there are exceptions to every rule, but it seems that we, as entitled members of society, have decided that we are all the exception and should have the right to access what we want, when we want, from wherever we want, even if it’s not relevant to the task or job function we are employed to perform.

If you truly want to protect your network, investing in the technology used to do so is only half of the battle. Education, policy creation and enforcement, and regular testing against new and emerging threat types are where the weak links need to be addressed. Let’s face facts - we’re behind the curve when it comes to protecting ourselves from attackers simply because we are always in a reactive mode. If we can effectively educate our users and reinforce the fact that our business network is used to conduct BUSINESS, that’s going to shorten the curve dramatically. As a business owner, network manager, CIO, or whatever your title might happen to be, you may not be able to implement the necessary changes to make this happen in your organization, but I’ll bet you can exert some sort of influence over them. You wouldn’t be reading this if you couldn’t.

*The thoughts and opinions in this blog post are my own and do not reflect the thoughts and opinions of Great Lakes Computer or any of its vendors, clients, or partners.

-Chris C

Chris C has over 15 years of experience designing, implementing, documenting, and supporting networks and infrastructure from SMB through Enterprise level in a multitude of verticals. Currently Sr. Network Engineer at Great Lakes Computer focused on designing and implementing secure network solutions in the datacenter and service provider space. 

Thursday, November 5, 2015

What We Can Learn from Japanese Efficiency in IT

I personally don’t use Twitter very much, but yesterday I was tempted to create a “hashtag” topic to see if it would gain traction and begin trending. What was that topic? #spoiledbyjapan.

I’ve recently returned from a short trip to Nagoya for my brother’s wedding, and I’m still aglow from the experience. This was my first trip to Japan, and I’m already hoping I will have another opportunity to return. I think the best single word I can come up with is “satisfying.” You know that feeling you get when you peel off the plastic protective layer from a new smartphone, or when a box fits exactly into another box? That’s what a lot of Japan feels like. Where there is an opportunity for something to work efficiently and effortlessly, they are the undisputed masters of implementation. Visual attentiveness to detail is of the utmost importance, and all of Japan’s citizens seemed to contribute to that same robotic mantra of proficiency and cleanliness.

You can imagine my chagrin when I returned back to the United States and visited a popular chain clothing store in a shopping mall. There were plenty of clothes on the floor that had fallen off racks, unfolded jeans and shirts hastily strewn on shelves and tables, and large dust bunnies visible to the naked eye everywhere. It was an absurd and frustrating wakeup from my Japanese dreamland of all things visually appealing. Needless to say, I walked out without buying anything. 

We all know a picture is worth a thousand words. Think of the phrases that come to mind when you see the first picture, a messy rack of datacenter cabling, regardless of whether you understand what’s going on in it:

  • The persons responsible for this do not care about how it looks, as long as it works.
  • The persons responsible do not properly manage their time to make something right. 
  • The persons responsible for this do not know what they are doing.
Now, take a look at this picture:
What are you thinking now?
  • The persons responsible for this understand that others that see this will appreciate efficiency, even if they don’t understand how it works.
  • The persons responsible for this take pride in their work.
  • The persons responsible for this take time to make things correct.
Now, put on your C-level hat and ask yourself which you would rather have in your datacenter. Try to stop yourself from reaching for the same excuses; we have all heard them before. I’ve been in Information Technology for a long time, too, and I can guess what you are thinking.
“It requires downtime and overtime to keep a datacenter organized. It is not cost efficient to make things ‘look nice.’”
Not correct. Though it can be an arduous task to “clean up” a datacenter cabling rack from the state of Picture 1 to Picture 2, that does not mean a return to the previous state is inevitable. It requires a consistent mantra of disciplined attention across your team. If there is something new to be added or removed, it takes far less time to do that single thing right than it would to take the shortcut and let things slide. Once one person sees that it’s OK to take a shortcut, others will likely follow suit. This is how things slowly become disorganized. Remember: laziness pays off now… but hard work now pays off later. 
Unfortunately, I didn’t get to visit any datacenters on my trip to the Far East, but it’s a safe bet that they would look like Picture 2. Organization eliminates the errors that disorganization invites, and in this industry, you simply can’t afford to have it any other way. It requires discipline and attention to detail across all members of your team. When everyone subscribes and contributes, everyone wins together. 

-Jason S.
Jason S. has been in Infrastructure Technology consulting for 17 years, and has an extensive background in various methods of business application delivery, hardware and virtualization, storage infrastructures, and enterprise communication processes.  In his spare time he reads tabletop game instruction manuals and chases his lifelong dream of finding the perfect guacamole recipe. He is married with two children and hopes to move to the Ozark Plateau someday.

Thursday, October 29, 2015

Selecting a Solution that Fits: Right-Sizing Your Storage

We are all familiar with the common myth that “One Size Fits All,” and most of us have at one time fallen prey to buying an item only to find that claim was a bald-faced lie. So when it comes to the numerous choices for housing your business’s data, how do you “size” and select the ideal fit for your environment?

For some, it may be as easy as scanning Gartner’s Magic Quadrant for this year’s top performers or the breakthrough up-and-comers. What about transport methods: will you favor iSCSI, Fibre Channel, Fibre Channel over Ethernet, or SAS? Not to mention you must have solid-state drives; they’re the hot technology, everyone is using them, and you don’t want to fall behind! 

If, on the other hand, you are truly interested in investing in only what you need to get the job done efficiently, then the key to selecting the right storage is to intimately know the I/O patterns of the applications that will access the data. The most important metrics to consider are reads, writes, and I/O size. Data can be collected through various performance tools such as perfmon for Windows or htop for Linux; these two are among a crowd, and everyone has a favorite.  Be sure to measure across all workloads; peak and off-peak periods may have different I/O characteristics. Attention should be given to the following: Disk Reads/sec, Disk Writes/sec, disk latency, the size of I/Os being issued, and disk queue length. If you are analyzing a database, also include Checkpoint pages/sec and Page Reads/sec.
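If you export a perfmon capture to CSV, summarizing it takes only a few lines. Below is a minimal sketch; the counter names and the disk_perf.csv filename are examples and will vary by disk instance and collection setup.

```python
import csv
import statistics

# Counter names are examples -- adjust to match the columns in your export.
COUNTERS = [
    r"\PhysicalDisk(_Total)\Disk Reads/sec",
    r"\PhysicalDisk(_Total)\Disk Writes/sec",
    r"\PhysicalDisk(_Total)\Avg. Disk sec/Transfer",
]

def summarize(csv_path):
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))
    for counter in COUNTERS:
        # perfmon prefixes each column with \\COMPUTERNAME, so match on the suffix.
        col = next((c for c in rows[0] if c.endswith(counter)), None)
        if col is None:
            print(f"missing counter: {counter}")
            continue
        values = [float(r[col]) for r in rows if r[col].strip()]
        p95 = sorted(values)[int(0.95 * (len(values) - 1))]
        print(f"{counter}")
        print(f"  avg={statistics.mean(values):.2f}  p95={p95:.2f}  max={max(values):.2f}")

summarize("disk_perf.csv")  # hypothetical CSV export from perfmon / relog
```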

Once you have a solid idea of how your applications perform, you can move on to sizing the physical disks. Typical IOPS per spindle range from 100-130 for 10K RPM drives, 150-180 for 15K RPM drives, and 5,000+ for solid-state drives. Keep in mind that disks filled to less than 50-70% of their capacity will deliver better IOPS than disks written at 80% capacity, so spread the data out! You will also want to consider the write penalty your RAID choice imposes on I/O issued against the disks. Read and write cache on the storage controllers can improve performance and should be taken into consideration if the applications lean heavily one way or the other.
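A rough spindle calculation looks like the sketch below; it uses the commonly cited RAID write penalties (2 for RAID 10, 4 for RAID 5, 6 for RAID 6) and the per-spindle IOPS ranges above, with the workload numbers as placeholders.

```python
import math

# Commonly cited RAID write penalties (back-end I/Os per front-end write).
WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def spindles_needed(front_end_iops, read_pct, raid, iops_per_spindle):
    """Rough spindle count: reads pass through, writes are multiplied by the penalty."""
    reads = front_end_iops * read_pct
    writes = front_end_iops * (1 - read_pct)
    backend_iops = reads + writes * WRITE_PENALTY[raid]
    return math.ceil(backend_iops / iops_per_spindle)

# Example workload: 5,000 IOPS at 70% read on 15K drives (~170 IOPS each).
for raid in ("RAID10", "RAID5", "RAID6"):
    print(raid, spindles_needed(5000, 0.70, raid, 170), "drives")
```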

Sizing storage systems is much more than just sizing the disks. Every component in the path to the physical drives has a throughput limit and can be a potential bottleneck. As more solid state disks are implemented, the configuration of these ancillary components will become more critical. To avoid pitfalls, analyze the potential throughput of each of the following (a quick sketch follows the list):
  • Connectivity: HBAs, NICs, switch ports (and if they are shared by multiple servers), array ports, and the number of paths between servers and storage.
  • The number of service processors in the array and how the LUNs are balanced across them.
  • The capacity of the backend busses on the array and how the physical disks are balanced across them.
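To make the “weakest component” idea concrete, here is a minimal sketch; the components and MB/s figures are illustrative line rates, not measurements from any particular environment.

```python
# The slowest component in the path caps effective throughput.
# Figures are illustrative aggregate line rates in MB/s, not measurements.
path = {
    "2 x 8Gb FC HBA":        2 * 800,
    "switch ports (shared)": 1600,
    "2 x array front ports": 2 * 800,
    "array backend bus":     2400,
    "disk shelf aggregate":  1800,
}

bottleneck = min(path, key=path.get)
print(f"Path ceiling: {path[bottleneck]} MB/s, set by {bottleneck}")
for component, mbps in sorted(path.items(), key=lambda kv: kv[1]):
    print(f"  {component:<24} {mbps} MB/s")
```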
Other considerations that can affect sizing decisions are advanced features offered by today’s latest generation of storage arrays. Examples are thin provisioning, snapshots/clones, compression, deduplication, and storage based replication. In addition to these, some new generation arrays utilize technology that throws all those media IOPS estimates out the window!
The moral of the story is this: arming yourself with in-depth knowledge of your application performance gives you the ability to quantify different array features and purchase only what you really need. Arrays that tout doing everything come with a hefty price tag. If they don’t benefit your applications, they are worthless, and saving money where it makes sense has never been a bad business decision.

Thursday, October 22, 2015

Why Every Business Needs a Storage Assessment

Let’s pretend you’re in the market to buy a new house. You’re confident your current house is going to sell in the next few months, and you’d like to move as soon as possible. 

I am currently undergoing this process, and at times, the number of questions to go over feels daunting. What is on your list of attributes you’d like your new home to have? How are they prioritized? How much does price affect your decision? Here are some common filters when searching for real estate using common sites:

1. Listing type
2. Price range
3. Location
4. Number of beds
5. Home Type
6. Number of bathrooms

And the list goes on.

Maybe you’d like to view according to square feet or how many days it’s been listed. As I run through the filters and questions, I start to pick candidates and save them on a list to research. The plot of land and location may be beautiful, but does that river raise the cost of insurance? How much? Is there a flood risk?

Now it’s time to start visiting and walking through places… creepy Michigan basement – no thanks.  Oil heat – Seriously? Water damage – pass. Sketchy roof – next house, please. I walked through one place where the tenants were present. They were really nice and after talking to them a bit, they mentioned that shortly after moving in they noticed that they would find the refrigerator in the middle of the kitchen. Foundation issues – can’t run fast enough.

It would be so much easier if all this information was provided honestly up front.

Approaching a storage purchase for your datacenter can feel just as daunting. Now, more than ever, there are a myriad of options in the storage market and most all of them have a valid play and use case. But, before you can choose a new storage array, shouldn’t you know what you have first? Just like any other major purchase, it’s helpful to take stock of what you have and how it’s served you over the time you’ve used it. When attempting to do this with your storage environment, you’re often limited to the capabilities of the arrays or hosts connecting to it. Just like choosing a new place to live, there are a number of factors that often come up.

1. Connectivity
2. SSD Support
3. Data efficiency
4. Ease of use
5. Vendor Support
6. Performance

There are many, many more storage features out there, and each manufacturer often has its own unique implementation of them. So, how does one choose which factors should be given more weight in the purchase decision? Too often we use a “best guess,” and this can lead to misuse of a large portion of your budget, especially when you consider that storage is often the most expensive piece of the pie when compared to compute and networking. For this reason, it’s important that we take stock of what we have and the problem we’re looking to solve, and make sure that moving forward we have the tools in place to make this decision easier. Once we’re able to report on what we have and how we’re using it, we can confidently make an informed decision on how to add onto, or replace portions of, our storage infrastructure. This is the beauty of it: if we have this information in hand first, suddenly we’re able to quickly filter out all the vendors whose solutions don’t fit our problem. The flooded list of storage vendors suddenly becomes a manageable list of two or three.
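One simple way to turn that inventory into a shortlist is a weighted scoring pass over the factors listed above. The sketch below is illustrative only; the weights, vendor names, and scores are placeholders you would fill in from your own assessment data.

```python
# Weights and scores are placeholders -- yours come from the assessment data.
weights = {"connectivity": 2, "ssd_support": 3, "data_efficiency": 4,
           "ease_of_use": 3, "vendor_support": 4, "performance": 5}

vendors = {
    "Vendor A": {"connectivity": 4, "ssd_support": 5, "data_efficiency": 3,
                 "ease_of_use": 4, "vendor_support": 3, "performance": 5},
    "Vendor B": {"connectivity": 5, "ssd_support": 3, "data_efficiency": 5,
                 "ease_of_use": 3, "vendor_support": 4, "performance": 3},
}

def score(ratings):
    """Weighted sum of a vendor's ratings across all factors."""
    return sum(weights[k] * ratings[k] for k in weights)

for name, ratings in sorted(vendors.items(), key=lambda kv: -score(kv[1])):
    print(f"{name}: {score(ratings)}")
```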

A storage assessment will produce a report providing an inventory of what you have and how you’re using it, and it will help identify areas of improvement. You can improve performance and availability and reduce operating costs – who doesn’t want to do all of that?! With this knowledge, you can now start to answer other initiatives with certainty: Should we entertain cloud storage? What about a colocation facility? What kind of storage should we have at our DR site? How much can we save in power and cooling by replacing legacy storage? With clear direction, you can now accomplish more than you thought possible.

- Adam P.

Adam P. is a sales engineer with experience in servers, storage, and virtualization with a focus on data management. He started working with VMware ESX 3.5 and has worked with numerous companies assisting with physical to virtual initiatives. When not talking storage or virtualization he’s binge watching shows on Netflix or enjoying some Short’s brewery beverages and playing board games with friends. His favorite movie of all time is: The Big Lebowski.

Friday, October 16, 2015

Securing the Castle (Part 1)

While the concept of firewalling all the way down to the access layer may seem a bit unorthodox, and is certainly not a practice suitable for all occasions for a number of reasons, the question is: could it be a feasible design for a network? In short, it absolutely can be. That is not to say it wouldn’t take some planning and engineering work up front, but if we look at any secure facility of worth throughout history, we find that a significant amount of engineering and planning took place before any construction ever started. Why should the foundations of the IT infrastructure that supports any legitimate business be any different? The truth is that they should not be. However, from my experience, a comprehensive and holistic view of the IT infrastructure in its entirety is actually quite rare among organizations. Typically, it is a patchwork of separate teams who rarely collaborate with each other on the best way to achieve a particular goal, but I digress…..

So, the questions that need to be answered before we even hop into the architectural aspects of such a design are:

1. Firewalls are expensive. How could an organization possibly be able to afford that many firewalled ports?

Answer: The truth is that unless you’ve bound yourself to a particular vendor or technology, there are a number of very good options on the market that can meet this constraint. In some cases, the price point for firewalled ports vs. non-firewalled ports could actually work out to be less than your typical switch port - if you have an open mind.

2. I don’t know of any firewalls that have the port density that switches do. Are there firewalls that do?

Answer: While I will say that high port-density firewalls aren’t going to be part of your typical firewall portfolio, they are out there. Vendors such as Juniper Networks and Fortinet currently have products in their portfolios that fit the bill. Even Cisco had a product in the FWSM (Firewall Services Module) for the 6500 that gave your Catalyst switch the ability to firewall hundreds of ports.

3. Firewalls add a significant amount of overhead when it comes to managing policies. How could I possibly manage a large environment with firewalls all over the place?

Answer: Central management for products is much more common than it used to be. Many vendors are even starting to offer the ability to manage their products via the cloud or through an on-premises device. One of the features these platforms usually have is the ability to create device templates and push policies out to device groups.  In some situations, this could even be faster than making changes in a switched environment.

4. Firewalls are not as fast as switches. Are there firewalls that can give me the throughput and performance that I need?

Answer: Absolutely. Your typical user isn’t going to be in a switching environment where they need nanosecond latency like on Wall Street. Many of the firewalls you see today add only a few microseconds of latency to process firewalled traffic; a microsecond is one millionth of a second. Typical ping latency, by comparison, is measured in milliseconds, thousandths of a second. It’s pretty quick and impressive, to say the least.
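To put those units side by side, here is a quick sketch; the figures are illustrative, not measurements from any particular firewall.

```python
# Quick unit check: firewall processing in microseconds vs. a typical LAN ping.
firewall_added_us = 5          # a few microseconds of added processing (illustrative)
typical_ping_ms = 1.0          # ~1 ms round trip on a healthy LAN (illustrative)

added_ms = firewall_added_us / 1000
print(f"Added latency: {added_ms:.3f} ms "
      f"({added_ms / typical_ping_ms:.1%} of a {typical_ping_ms} ms ping)")
# Added latency: 0.005 ms (0.5% of a 1.0 ms ping)
```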

Based on those answers, we start to see that being able to architect a secure network solution where security is at the forefront of the architecture and design has become much more feasible than it once was. However, it comes at the cost of being able to allocate the time and resources up front and get all of the required teams collaboratively meeting with each other in order for such a design to be successful. I’ll discuss more of the architectural components of making such a design a bit more of a reality in my next blog.  

Thursday, October 8, 2015

Don’t Just Leap to the Latest Product Release – Look First!

When new software and hardware versions are released, we nerds love to jump to the latest and greatest. Unfortunately, I have played witness to the catastrophes that can occur when due diligence is not performed first.

VMware publishes two independent, but equally important, compatibility lists: the Hardware Compatibility Guide and the Product Interoperability Matrixes.

The VMware Hardware Compatibility Guide is the de facto guide to determining hardware support with VMware products and versions. Everything from server models to I/O devices supported with ESXi versions can be found there. Need to know which SSD options are supported with Virtual SAN 6.0? Yep, that is on there. I have walked into environments with host servers upgraded to an ESXi version newer than the Compatibility Guide lists as supported, and sure, ESXi installed successfully, but that is only the first hurdle in a marathon. What happens after installation when you receive a purple screen of death at 2 AM? It means it is time to troubleshoot and fix something that is not verified as supported.

VMware’s Product Interoperability Matrixes are definitely the lesser-acknowledged sibling among VMware’s compatibility lists. Often overlooked once the hardware is confirmed as supported, the Product Interoperability Matrixes help navigate the increasingly complex web of intertwined VMware products. After you progress beyond basic server consolidation and improve disaster recoverability with Site Recovery Manager, you need to take into consideration which versions of Site Recovery Manager are supported with new versions of ESXi and vCenter. Moreover, what about virtualizing the network with NSX? You most definitely should verify support before upgrading ESXi and vCenter following a new release. What happens if the newest version of ESXi does not yet support the older version of NSX that you have not gotten around to upgrading? This could lead to lengthy, unplanned downtime that may not be possible to test in a lab environment without spare time and evaluation licenses.
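Even a simple script can act as a pre-upgrade gate. The sketch below is illustrative only; the SUPPORTED table and version strings are placeholders you would populate from VMware’s published Product Interoperability Matrixes, not authoritative data.

```python
# Gate an ESXi upgrade on a hand-maintained interoperability table.
# The entries below are placeholders, not authoritative compatibility data.
SUPPORTED = {
    ("ESXi 6.0", "vCenter 6.0"): True,
    ("ESXi 6.0", "SRM 5.8"): False,
    ("ESXi 6.0", "NSX 6.1"): False,
}

def upgrade_blockers(target_esxi, deployed_products):
    """Return deployed products not verified as supported with the target version."""
    return [p for p in deployed_products
            if not SUPPORTED.get((target_esxi, p), False)]

blockers = upgrade_blockers("ESXi 6.0", ["vCenter 6.0", "SRM 5.8", "NSX 6.1"])
print("OK to upgrade" if not blockers else f"Hold the upgrade; verify: {blockers}")
```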

As much as dotting each I and crossing each T can be a pain and seem unnecessary, especially after things have been stable for so long, I must urge everyone to verify not just hardware support, but also support with intertwined products before jumping to the latest, greatest new version with the sparkling bells and shiny whistles. VMware has released two fantastic resources with the Hardware Compatibility Guide and the Product Interoperability Matrixes, but do not forget to consult your other intertwined products (e.g. backup solution, replication appliance, etc.).

Thursday, October 1, 2015

Challenges in the Unlicensed Wireless Spectrum...

It seems that many of the wireless manufacturers are forever seeking ways to improve security on a medium that seems very easy to compromise. For the most part, they have done a good job of it. However, depending on what vertical you are in, utilizing these tools can land you in some hot water!

An international hotel chain was recently fined by the FCC for blocking personal WiFi hotspots. The practices used to accomplish this are very common among enterprise wireless deployments today; they exist primarily to keep unauthorized clients from associating to APs they shouldn't. The problem shows up when that de-association practice is taken a step further by de-associating clients from their own hotspots. As you can imagine, it left end users trying to access the Internet with an undesirable choice: pay the hotel to use its Internet. That didn't fly too well in the eyes of the FCC. Since what we use every day is unlicensed spectrum, it should be shared by all.

Ultimately, it should be the responsibility of the end user to use the equipment appropriately. IPS controls can be used effectively to maintain the integrity of an enterprise wireless network. Making sure your wireless network has proper rogue containment is always going to be good practice, along with a host of other tweaks based on your environment. Just make sure you're not the next one in line to write a fat check to the government!

Thursday, September 17, 2015

Gartner: Trusted Vendor Consultant or Pure Research Firm?

If you’ve ever been in the market for a new solution for your datacenter, chances are high that you’ve attended at least one sales presentation from a vendor that touts their membership in the elite ”Gartner Magic Quadrant.“ In my experience, this is one of the first things mentioned or focused upon in the intro PowerPoint slide of any C-level presentation on why you should consider that vendor better than its competitors.

But what does it really mean? Are you supposed to be impressed, and immediately consider that vendor to be part of an exclusive club? Should you immediately introduce your Accounts Payable Team and declare, “Shut up and take my money!” and save yourself from the rest of the sales presentation? Let’s find out.

So who is Gartner?

Since 1979, Gartner, Inc. has been providing objective third-party market analysis for technology corporations, government agencies, and investment communities. They’ve grown to over 5,300 employees across 85 countries, and employ over 1,000 expert analysts whose “…rigorous research process and proven methodologies provide the foundation for unbiased, pragmatic, and actionable insight.”  

What the heck is the Magic Quadrant?

The Gartner Magic Quadrant (MQ) is the brand name for a series of market research reports that Gartner publishes every 1-2 years across several specific technology industries. According to Gartner, the Magic Quadrant aims to provide a qualitative analysis into a market and its direction, maturity and participants. They first rate vendors upon two main criteria, which are ”completeness of vision” and ”ability to execute.“ Then, they use an undisclosed proprietary methodology to create vendor scores across four quadrants which include Leaders, Challengers, Visionaries, and Niche Players. 

Sounds great! I’m a technology vendor. Count me in!

Well, it’s not that simple. First of all, vendors cannot choose to opt in or opt out. Gartner chooses you, not the other way around. Vendors are included in the research only if they meet the market definition and inclusion criteria established by the Gartner analyst. A vendor may choose not to participate in the process or respond to research requests for information, in which case the analysts will gather as much current information as possible from publicly available sources and will indicate this in the disclaimer of the published document.  

Well, that sounds fishy.  

You’re not alone. Gartner has been criticized for the lack of disclosure of vendors’ component scores and the lack of transparency in the methodology used to derive a vendor’s position on the Magic Quadrant map.  However, to ensure consistency in ratings and placements, a formal process is used. Research proposals go through several internal review levels and, eventually, a review from a senior research board. If you’re still not convinced, remember that Gartner employs about 1,280 research analysts who are probably much smarter than you.

Hey, I got an e-mail from a Gartner Analyst! I’m somebody! 

Maybe, but now you’re on the hook to provide them with information – and you had better be ready to provide lots of it. It is said that a typical "landscape report" can take up to 150 to 200 hours to produce for each vendor. Unless you have a dedicated Analyst Relations team, you’ll be working through some holidays.

Sweet! We’re mentioned in their report! But wait a minute… I vehemently disagree with where we are ranked.

Though Gartner assures us that their data collection and review process is the exact same for all vendors, contention does occur. The first point of escalation is the analyst who created the research being questioned. The second point of escalation is the analyst’s manager, whose role is to verify that all required methodologies and processes were followed by the analyst(s), and that all research positions have been appropriately supported. The third is the Office of the Ombudsman.


Sure, the Office of the Ombudsman. Despite the governmental-sounding name, it is Gartner’s own internal escalation office, whose principles pride themselves on Independence, Neutrality and Impartiality, Confidentiality, and Informality.

Johnson, get that PowerPoint slide deck ready! Let’s go sell!  

Fine. Just be sure to include this disclaimer, which few vendors do:

Gartner does not endorse any vendor, product, or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Thursday, September 10, 2015

Got Cloud?

Clouds, those white billowy puffs of moisture, challenge our imaginations by delivering perceived images to our brains. They have existed since the beginning of time, constantly moving and changing, and can appear different to each individual person. Some people would look at the picture on the right and see a heart; I look at it and see a heart on fire with flames shooting out the back.
So, you can imagine how surprised I was when I started receiving calls from my customers asking if I had a cloud they could use.

While the concept of cloud computing as we know it today dates back to the ‘90s, the actual adoption of that concept into mainstream business practice has been slow. In recent years, however, the cloud has matured from an emerging technology into a proven delivery platform used widely throughout enterprises. Customers are clamoring to get their heads in the cloud, as well as their applications, their data, and sometimes even their workforce!

Simply put, my favorite definition of the cloud is that it’s just someone else’s computer. But here comes the tricky part: deciphering what being in the cloud means to each customer. Just like with the cloud image, we both see the same heart but have slightly different interpretations of it.

Virtualization technology is at the heart of the cloud, giving customers the ability to use a shared pool of compute resources that allows them to move away from solely footing the bill for expensive CAPEX investments to using more agile OPEX dollars and sharing the expense of owning hardware assets with other businesses. However, handing over control of any aspect of your business to another entity should not be taken lightly. After all, nobody cares about your business as much as you do. In order to successfully utilize cloud computing, companies need to have serious conversations between the management, finance, and IT teams to lay out each crucial piece of the puzzle before shopping for a cloud provider, or even for a business partner to assist them in utilizing this type of resource.

Security is the primary hurdle in cloud adoption, along with privacy, compliance, and business SLAs. Some considerations to discuss when implementing a cloud strategy are:

1. Rate the applications and functions of the business in terms of their cost, manpower to manage, critical uptime, and security or privacy risk. This will give you a clear picture of what could be put outside of your datacenter and into someone else’s hands.

2. Understand what internal products are in place, or would need to be purchased, in order for the business to do its part to secure sensitive data before sending it outside its doors. Organizations should not automatically assume that a cloud deployment will be any more or less secure than their own internal data center, and they need to proactively assist their provider in managing security.

3. Keep in mind that access to your applications and data is dependent on your internet connection. Given the items you have identified as candidates to put with an MSP or cloud provider, how will your business function if the connection is unavailable for an hour? 12 hours? 24 hours? What is the loss to the business (a quick back-of-the-envelope sketch follows this list)? Customers who implement cloud computing on a large scale should consider purchasing a backup internet circuit.

4. Putting data into the cloud is relatively easy, but what SLAs need to be met if you need to bring your data back? And what will the cost of doing that look like? This factor is especially important if you are using cloud resources in a disaster recovery plan versus just housing offsite copies of your backups for long-term retention, which most likely won’t need to be accessed often, if ever.
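Item 3 above is easier to answer with even rough numbers in hand. Here is a minimal back-of-the-envelope sketch; the headcount, hourly cost, and revenue figures are placeholders to swap for your own.

```python
# Back-of-the-envelope outage cost -- all figures are placeholders.
employees_idle = 40
loaded_hourly_cost = 45.0       # $/hour per idle employee
revenue_per_hour = 1200.0       # revenue that stalls while the connection is down

for outage_hours in (1, 12, 24):
    cost = outage_hours * (employees_idle * loaded_hourly_cost + revenue_per_hour)
    print(f"{outage_hours:>2} hr outage ~= ${cost:,.0f}")
```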

Obviously, this list is not all encompassing. It’s meant to drive thought and conversation internally, and to prepare you for the questions that partners like Great Lakes Computer will need answered when working with you to develop the best possible use of cloud computing for your business.    

Thursday, September 3, 2015

“Nothing is trivial”: The Unsung Heroes of Nimble Storage OS 2.3.4

Here’s a bit of movie trivia: name the movie with this quote:  “Nothing is trivial”

Nimble Storage recently released their latest operating system, 2.3.4, to the masses and I have to say, I’m very happy with the subtleties they baked into their interface.  I’ve always been a fan of looking beyond the major features, past the ‘industry disruptive’ tech, and finding the little nuances that make your life easier without you even noticing.  I’d like to highlight a couple of these features that I’ve come to appreciate with the latest Nimble update.

First of the two features is a simple search box:

Some might not even notice it; some may have noticed and just shrugged, thinking, “I only have 10 Volumes, no need to search.”  Trust me on this; you will be doing yourself a disservice by overlooking this feature.  Type in a letter, just one character, and you’re immediately returned categorized results containing that character.  Volumes, Users, Initiator Groups - just about every object is searched and returned in the results immediately.  This may not seem valuable unless you have a large number of volumes in your Nimble Group, and I tend to agree, but think for a moment about a recent VMware feature: Virtual Volumes.  Once implemented, this will create a Nimble Volume for each file associated with a virtual machine (vmdk, vmx, vswp, etc.).  This means you can say goodbye to your handful of Volumes and welcome a much longer list of Volumes in your Group.  The benefits of VVols are a topic for another blog series, but they’re not going away.  Starting to see the value of this Search feature?

The second feature is more subtle: a Hotfix checker built into the Nimble Windows Toolkit (NWT).  It is compatible with Windows Server 2008 R2 SP1 onwards and checks for recommended storage-stack hotfixes prior to installing the NWT.  If you’ve installed previous versions of NWT, then you may be familiar with the recommended hotfixes - it’s a pretty extensive list.  In addition to the initial check, a hotfix monitor service is set up that will continue to validate the presence of the hotfixes on every reboot.  You can imagine this will be updated to check for new hotfixes as they’re needed.
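To illustrate the general idea of a hotfix checker (this is not Nimble’s implementation, just a sketch), the snippet below asks Windows for its installed hotfix list and compares it against a recommended set; the KB numbers are placeholders for the list in Nimble’s documentation.

```python
import subprocess

# Placeholder KB numbers -- substitute the recommended list from the vendor docs.
RECOMMENDED = {"KB0000001", "KB0000002"}

def installed_hotfixes():
    """Return the set of hotfix IDs Windows reports as installed."""
    output = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    ).stdout
    return {line.strip() for line in output.splitlines() if line.strip().startswith("KB")}

missing = RECOMMENDED - installed_hotfixes()
print("All recommended hotfixes present" if not missing else f"Missing: {sorted(missing)}")
```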

Seeing these included with the latest release from Nimble brought that movie quote to mind.  Any guess on the movie?  Need a hint?  Here’s the full quote from the movie:

“It’s funny, little things used to mean so much to Shelly.  I used to think they were kind of trivial, believe me, nothing is trivial.”

Sometimes I think there’s so much noise and so many buzzwords created in our industry that the little things go unappreciated.  So, what are your favorite subtle features?  What ‘small’ feature would you like to take from one product and put into another to make it better?

Thursday, August 27, 2015

“You Can’t Have Your Cake and Eat It Too!”

Executive: I’ll be at an offsite meeting with the head of Human Resources and Accounting for the next couple days to go over our staffing strategy for the next year. Can you get the personnel information for everyone in the department, compensation plans, and the performance reviews from the past year and put them into Dropbox for me?

Assistant:  I can, but at the company briefing last week the gal from security said that, with all of the data breaches lately, if any sensitive data is going to leave the company site, they’re providing an encrypted USB drive for transporting it. Would you like me to do that instead?

Executive: I was hoping to travel light on the tech since I’m going to be taking my golf clubs to go shoot a few rounds after the meeting. Besides, I’ve already got my USB mouse, the extra laptop battery, and the power brick for the laptop, on top of my phone charger and the USB cable.  All the different ends on the cables get confusing and who needs the hassle of another device to lug around? Just put them into Dropbox please. They’ll be fine. The only people with my Dropbox account info is me, you, and my wife. She likes to upload pictures of the kids and share them with people sometimes. Anyway, it’ll be a lot more convenient for me since I can just pull them down from the cloud whenever I need them and not have to worry about it.

Assistant: I’ll have them uploaded. Enjoy your meeting!

There’s an old adage that I feel describes the relationship between security and convenience quite well. The adage I’m referring to is the good ‘ole, “You can’t have your cake and eat it too!” saying that has been around since, well, before the time of technology. In essence, the adage demonstrates that you can’t have it both ways: if you were to eat your cake, you would no longer have it as a possession, and if you were to keep it as a possession, you wouldn’t be able to eat it. It’s one or the other. On top of that, when you look at what security and convenience mean at their most basic and fundamental level, you’ll realize that they’re almost exact opposites. By nature, the principle of security is to make something more difficult, whereas convenience makes something easier.

While one might think the situation above is an exaggeration just to illustrate a point, I can honestly say that it is not. In fact, the situation described has actually occurred.  One of the most common reasons situations like this occur is that in many organizations, security only applies when it’s convenient for a user or group of users. In most of these situations, the user groups that tend to have the least regard for the company’s security policies are the ones that wield some sort of decision-making power.

On the flip side, the organizations I’ve found that don’t look at security from the “when it’s convenient” perspective are those where security is an initiative that flows from the CEO on down, and they also tend to take security very seriously. This means the CEO adheres to the same security policies as the common end user. It just goes to illustrate the power of leading by example. At the end of the day, security will not be the major inconvenience it is sometimes painted to be if expectations are managed and flow from the top down.

Thursday, August 20, 2015

VMware vSphere ESXi Host Web Client

Not everyone is a fan of the current VMware vSphere Web Client provided with vCenter Server, but, one thing is for sure, it’s here to stay. I was not always a fan of the vSphere Web Client, but as improvements were made and my exposure grew, I have become quite fond of it (less reliance on Microsoft Windows? No complaints here).

Unfortunately, one thing still holds me back from ditching the Windows laptop for my Mac or a Linux distribution: ESXi host management. Thanks to Etienne Le Sueur and George Estebe (apologies to others who have contributed; I am going from the list of engineers), we now have a VMware Fling that brings browser-based management to our vSphere ESXi hosts.

Warning! Flings are experimental and VMware recommends that they not be run on production systems.

Description from the Fling:
This version of the ESXi Embedded Host Client is written purely in HTML and JavaScript, and is served directly from your ESXi host and should perform much better than any of the existing solutions. Please note that the Host Client cannot be used to manage vCenter. Currently, the client is in its development phase, but we are releasing this Fling to elicit early feedback from our users to help guide the development and user experience that we are creating.

Features available at the time of posting:

• VM operations (Power on, off, reset, suspend, etc.).
• Creating a new VM, from scratch or from OVF/OVA (limited OVA support)
• Configuring NTP on a host
• Displaying summaries, events, tasks, and notifications/alerts
• Providing a console to VMs
• Configuring host networking
• Configuring host advanced settings
• Configuring host services

I am not sure if I am more excited for the ability to manage vSphere ESXi hosts without the need for a Windows installable client, or for a preview of what the vSphere Web Client could be without the dependence on Adobe Flash…

I recommend all virtualization engineers and administrators check it out and contribute feedback.

Note: When using this with an ESXi host that was upgraded from ESXi 5.x, a workaround is required to resolve a browser 503 error. William Lam has detailed the workaround on his blog post, New HTML5 Embedded Host Client for ESXi.

Wednesday, August 12, 2015

Is 802.11ac Wave 2 the Real Deal?

As the demand for corporate traffic is ever-growing, the demand for infrastructure is growing with it.  So, the question remains, why haven’t you upgraded to 802.11ac Wave 2 yet?  Well, more than likely, it’s because you are hesitant, just upgraded, or feel your Wi-Fi is running fine.  Whatever the reason, it’s probably time to at least think about utilizing Wave 2.  Let’s be honest, data is not going to stop.  In fact, users are only going to consume more!  Can your infrastructure handle the demand?

Making sure your environment can handle the demand, whether wired or wireless, can sometimes feel like a daunting task.  With the first go-around of 802.11ac (Wave 1), it was possible to reach speeds of up to 1.3 Gbps, and for most that may have been a stretch, as loads on the access point (AP) and the environment come into play.  This doesn’t even start to take into account dense environments - which is where Wave 2 can level the playing field. 

So, how does Wave 2 differ from its predecessor?  Primarily, there are two factors at play: four spatial streams and, more significantly, the support of MU-MIMO or Multi-User Multiple Input Multiple Output.  This should prove to be extremely beneficial to very dense environments.  As APs will have the ability to utilize multiple streams that can reach multiple clients simultaneously, this will greatly improve data transmit times, not to mention freeing up our beloved bandwidth.

In order to accommodate the improved bandwidth and functionality, a fourth spatial stream (i.e., an additional antenna) must be used.  That, bundled with support for 160 MHz-wide channels, doubles the available channel bandwidth and completes the package for Wave 2. 
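The headline numbers fall out of the standard 802.11ac rate formula. Here is a minimal sketch using the top MCS9 parameters (256-QAM, rate-5/6 coding, short guard interval); it reproduces the 1.3 Gbps Wave 1 figure and the roughly 3.5 Gbps Wave 2 maximum.

```python
# 802.11ac PHY rate = data subcarriers * bits/subcarrier * coding rate * streams / symbol time
DATA_SUBCARRIERS = {80: 234, 160: 468}   # per channel width in MHz
BITS_PER_SUBCARRIER = 8                  # 256-QAM (MCS9)
CODING_RATE = 5 / 6                      # MCS9
SYMBOL_TIME_US = 3.6                     # short guard interval

def phy_rate_mbps(channel_mhz, spatial_streams):
    bits = DATA_SUBCARRIERS[channel_mhz] * BITS_PER_SUBCARRIER * CODING_RATE * spatial_streams
    return bits / SYMBOL_TIME_US

print(f"Wave 1 (80 MHz, 3 streams):  {phy_rate_mbps(80, 3):.0f} Mbps")   # ~1300
print(f"Wave 2 (160 MHz, 4 streams): {phy_rate_mbps(160, 4):.0f} Mbps")  # ~3467
```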

The reality is that clients are not yet able to take advantage of Wave 2.  Devices with Wave 2 technology are expected to be released sometime in 2016.  If you’re thinking of replacing or upgrading your current wireless infrastructure, now may be a great time.  Typically, the replacement cycle on a wireless infrastructure is 18 to 24 months.  Wave 2 APs will cost about 10% more than Wave 1, and investing in this extra cost now will allow you to skip the next cycle - increasing the ROI and your bottom line. 

Planning your wireless infrastructure is also a crucial step toward optimal performance in your wireless network.  A wireless site survey will provide you with several pieces of information: AP orientation, channel utilization, and power adjustments, to name a few.  This information greatly improves overall wireless performance and can have long-lasting effects on your wireless environment.  Even if you are not looking to upgrade your wireless infrastructure in the near future, a wireless site survey can provide detailed information about your wireless network so you can provide an optimal configuration for your users.  The combination of the advancements offered by Wave 2 and the information provided by a wireless site survey will enable your organization to provide your users with a high-performing wireless network now and into the future.

Thursday, August 6, 2015

‘Doveryai, no Proveryai’: Why Corporate Networks Need to Verify First, Trust Later

Firewalls are often thought of as protecting your network from bad people and things on the Internet.  While this is technically true, we need to change the way we think about where a firewall should be deployed and how.  With the proliferation and evolution of threats like malware and Advanced Persistent Threats (APTs), the Internet is no longer the only source of malicious activity.  Think about this for a second - when you deploy a firewall, there are two basic security zones preconfigured on the box: an Internet or untrusted zone / interface, and an Inside / Internal or trusted interface.  Therein lies the fundamental issue with network security. 

Just because your machine is physically connected to the network and sits behind the corporate firewall doesn't mean it should be trusted.  Attackers put malicious code on websites, distribute it via email, and find various other ways to get their application onto your machine.  Corporations spend a lot of time and money making sure their Internet edge is secure so that attackers can't get through the corporate firewall, but they focus far less on making sure that user devices can't access systems that aren't necessary for their job function.  If an attacker were able to install malware on the Financial Controller's PC, what servers and systems does that PC have access to?  On what ports?  99% of the time, those answers are very simple: that PC usually has access to ALL of the corporate financial systems and will usually have unfettered access to the Internet as well.


If we're thinking about this from a logical standpoint, there is absolutely no reason to trust devices inside the firewall any more than we trust unknown machines on the Internet.  After all, our corporate PCs connect to those unknown machines on the Internet every day.  Sure, we use the latest anti-virus, make sure to turn on the Windows firewall when we're on a public network, and maybe even use a host-inspecting Network Access Control (NAC) or Unified Access Control (UAC) solution in the office to run a posture assessment on mobile devices and make sure they are clean before we allow them onto the corporate network.  But those systems have flaws, and one of the biggest is that they are largely signature-based.

Securing your datacenter is as much about having the right systems in the right place as it is about visibility into the communications between those systems.  Knowing what calls your web server is making to your SQL database server can help identify known and expected traffic patterns.  Ideally, every device would be segmented from every other device on your network with all of that traffic running through a datacenter firewall to determine which traffic to allow or drop.  End user devices, especially mobile ones, should be treated the same as Internet-based hosts.  They should not be allowed to access ANYTHING on your network without sending that traffic through a datacenter firewall first.
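
As a thought experiment, here is a minimal, conceptual sketch of that "verify first" posture: a default-deny allow-list evaluated for east-west traffic.  The zone names, hosts, and ports are hypothetical, and a real policy would of course be enforced on the datacenter firewall itself rather than in a script; the point is simply that anything not explicitly permitted gets dropped.

```python
# Conceptual sketch of a default-deny, allow-list segmentation policy.
# Zones, hosts, and ports are hypothetical examples, not a real rulebase.

ALLOWED_FLOWS = {
    # (source zone, destination host, destination port)
    ("finance-users", "finance-app-01", 443),   # controller's PC -> finance web app
    ("finance-app",   "sql-01",         1433),  # web tier -> SQL, and nothing else
    ("web-dmz",       "sql-01",         1433),
}

def is_permitted(src_zone: str, dst_host: str, dst_port: int) -> bool:
    """Default deny: only flows explicitly on the allow-list pass."""
    return (src_zone, dst_host, dst_port) in ALLOWED_FLOWS

# The finance controller's PC reaching the finance app is expected...
print(is_permitted("finance-users", "finance-app-01", 443))   # True
# ...but that same PC talking straight to the database is not.
print(is_permitted("finance-users", "sql-01", 1433))          # False
```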

Ronald Reagan used the phrase "Doveryai, no proveryai" frequently during his time in office. This old Russian proverb translates into English as, "Trust, but verify." In today's always-on, instant-access networking ecosystem, we need to adopt a different phrase:

"Proveryai to doveryai," or "Verify to trust."

Thursday, July 30, 2015

Have You Considered Centralized Storage?

If you're seeking a centralized storage device that provides traditional block-level storage for your physical or virtual servers while simultaneously providing a central platform for file-level data storage, then you're looking for a Unified Storage device.  These devices typically do not contain any hardware you wouldn't already find in a block-level SAN; they are SANs with a NAS software feature.  One way to loosely conceptualize the difference between a NAS and a SAN is that a NAS appears to a client operating system as a file server (the client can map network drives to shares on that server), whereas a disk presented by a SAN still appears to the client OS as a disk: it shows up in disk and volume management utilities alongside the client's local disks and is available to be formatted with a file system and mounted.  SAN connectivity protocols include Fibre Channel and iSCSI, while the popular NAS protocols are NFS and CIFS/SMB.  Unified Storage replaces file servers and consolidates data for applications and virtual servers onto a single platform.

NAS-specific storage devices are plentiful in the market, and many offer large, inexpensive, redundant disk capacities along with features usually found in a block-level SAN.  Vendor examples include HP's StoreEasy product family, which uses Microsoft Windows as a single-solution OS paired with a RAID array, and a crowded middle market of Linux-based NFS devices from the likes of Synology, Transporter, and QNAP.  However, many of these products do not scale beyond their initial capacity and can present too much of a failure risk for an enterprise network.  Unified storage can provide the cost savings and simplicity of consolidating storage over an existing network, the efficiency of tiered storage, and the flexibility required by virtual server environments.


Potential cost savings attract IT buyers to Unified Storage devices because, while not every network requires a SAN, most networks have some flavor of the NAS concept implemented.  End users need a shared, central location for storing heaps of documents and other data, and a common implementation is Microsoft-based file shares (accessed via SMB, still commonly referred to as CIFS).  As networks grow and age, these file share repositories rarely shrink, which not only causes daily management headaches for IT admins but also increases the exposure and importance of the network's most-coveted data.  File data has to live somewhere - why not give it the same efficiency and high-availability features as a SAN?

Most of the popular enterprise storage vendors have a Unified option.  EMC's current VNX series can dedicate RAM and CPU resources to Data Movers that serve CIFS traffic from dedicated LUNs.  In 2002, NetApp added block-level capabilities to their popular Filer series, taking the reverse route to arrive at a Unified option.  3PAR StoreServ, now owned by HP, offers the File Persona Software Suite to complement its Block Persona technology.

Management of Unified Storage is not complex. Competition among hardware vendors has produced single-pane-of-glass interfaces in which most NAS functionality sits right alongside day-to-day LUN provisioning for the SAN.  Security and access integration with Microsoft Active Directory file permissions can get complex, but only for the most rigorous security initiatives.


Counterarguments to Unified Storage systems usually hover around perceived compromises in performance, because file-based I/O is structurally different from block-based I/O.  If a block-based application shares a system with more dynamic, file-based access, users can experience variability in performance because of the resources allocated to the file-based side.  Consistency in disk performance is paramount for many environments, and Unified Storage can be considered a potential risk there.

Many IT departments see Unified Storage devices as unnecessary, as most already have Windows Server virtual machines providing CIFS/SMB file shares from vdisks on their existing block storage LUNs.  The vdisks can scale easily, because an NTFS volume can grow to 16TB at the default 4KB cluster size (and considerably larger with bigger clusters).  Most Windows-based networks also use Microsoft's Group Policy to centrally manage file share access, and adding another layer of abstraction with a NAS device can create complexity where it is not desired.
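
For the curious, that 16TB figure falls out of simple arithmetic: NTFS can address roughly 2^32 clusters, so the maximum volume size scales with the cluster size chosen at format time. A quick sketch:

```python
# Where the 16TB figure comes from: NTFS addresses roughly 2**32 clusters,
# so maximum volume size scales with cluster size. 4 KB is the common default;
# larger clusters raise the ceiling considerably.

MAX_CLUSTERS = 2**32  # approximate NTFS cluster-count limit

for cluster_kb in (4, 64):
    max_tib = MAX_CLUSTERS * cluster_kb * 1024 / 1024**4
    print(f"{cluster_kb:>2} KB clusters -> ~{max_tib:,.0f} TiB maximum volume size")
# 4 KB clusters -> ~16 TiB (the commonly quoted 16TB); 64 KB clusters -> ~256 TiB
```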

Even with a Unified solution, administrators still end up managing SAN and NAS storage as separate silos.  This complicates administration, forcing them to predict future storage needs for each silo and manage those requirements separately.  And since file-level storage is traditionally placed on less expensive SATA disk, the unified solution is eventually constrained by the limits of the platform as a whole, which can rapidly accelerate a storage refresh cycle.

Final Thoughts

Realistically, most Unified Storage Systems are better at one capability than the other.  This means they are a NAS that figured out a way to provide block storage (NetApp), or they are block storage with some sort of NAS function integrated (EMC).  Although most environments will use a mixture of workloads, a particular workload will often be the most important. Make sure that you test the specific conditions and configurations that will be most important to you.

Thursday, July 23, 2015

Avoiding the ‘Jack of All Trades, Master of None’ Approach

As a member of a professional services group within an IT sales organization, my team's focus is on evaluating our customers' business problems and engineering solutions, in the form of the products and services we offer, to fix them.  We are, by definition, "problem solvers."  This reminds me of the old adage, "there is more than one way to skin a cat."  Well, Mr. Customer, there is more than one way to solve your storage issue, more than one way to clear up that excessive network traffic, and more than one way to leverage virtualization to increase consolidation and availability.

Between application and operating system software, networking, compute, and storage hardware, there are hundreds of thousands of products to choose from.  How can we be experts in all of them?  Quite simply, we can't, you can't, and no single organization can without employing hundreds of people to match the product count.  It's just not feasible for most IT sales organizations, which are typically small to medium-sized businesses.  If they aren't careful, this leads to employees with about a kiddie pool's depth of knowledge across a vast number of technologies instead of specialization in a select few.

Since we want to avoid this "jack of all trades" approach, we have chosen to be diligent in our selection of the hardware and software products we recommend to business owners so that we can "master" those we do offer.  As a sales team, we must be able to show the value or ROI of the purchase, implement the solutions in professional services roles, and support the customer post-sale with knowledge and possibly more professional services as the environment grows.  As an organization, we must maintain strong relationships with the manufacturers' technical resources to overcome the problems that will most certainly arise.

Our engineers spend their hours training on, using, patching, identifying bugs (and fixes) in, and vetting the products so that we can solve problems before or as they occur.  We choose to hone our skills on a concise portfolio because we know the dangers of trying to be everything to everyone.  Even with all our focused efforts, do we know it all?  Not a chance, but we will work to find answers for you when we don't.  While many environments share similarities we can draw on, every environment our products go into will have its own priorities, flaws, designs, and problems.

Outside of buying the tangible product, we find that many organizations struggle because they set out to install new applications or hardware without sufficiently planning for the intangible side: implementation!  Your pre-planning likely includes how your own IT staff will manage the products in your environment once they are installed, but does that mean they can implement them without any experience, or manage them without training?  Just like engineers, IT staff members can't possibly know how to use every product; there are not enough hours in the day.  The actual cost to a business of fixing a botched implementation, not only in manpower but also in lost productivity or downtime, far exceeds the cost of planning ahead and including the services of expert users.  Using experienced engineers, onsite or remotely, for best practices and "I learned it the hard way" advice is valuable training that you won't get from a pre-installation checklist.

Great Lakes Computer is proud to offer our customers professional services for, but not limited to, the following products in our portfolio: VMware, Microsoft, Juniper Networks, Palo Alto Networks, Aruba Networks, Cisco, Fortinet, Nimble Storage, Pure Storage, Veeam, and Unitrends solutions.   Please consider engaging our engineering staff on the front end, in tandem with your IT administrators, or as a supplement to a short-staffed team that might not have the resources to get the project up and running on deadline.   We truly are a partner that can springboard your next implementation to success!

Thursday, July 16, 2015

Dual Controllers, Single Point of Failure?

We've all (hopefully) heard the term before: "Single Point of Failure."  The phrase strikes fear into the hearts of management - the idea that if this one important resource has issues, everything dependent on it fails.  It's the weak link, the gremlin in your environment, and according to our old friend Murphy, "Anything that can go wrong will go wrong."  And you had better believe this SPOF gremlin is going to rear its ugly head at the most inopportune and painful time - just ask any veteran IT professional.

So how do we combat these SPOF gremlins?  We build in redundancy, we limit failure domains, we vigilantly monitor our environments, and we alert on any changes or anomalies.  That way, when failures do occur, we have either an automatic failover or a near-immediate fix that keeps our users happily clicking away.

So let's apply this to storage, specifically a storage array.  Forget the network connections to the array for now; let's home in on the modern storage array chassis itself.  These arrays are often equipped with multiple network connections, power supplies, disks, processors, memory banks, and so on.  The pitch goes: "We have dual controllers, and everything is mirrored the instant it is brought into the array, so this is not a single point of failure."

So are they correct?  Will a dual-controller storage array keep the SPOF gremlins at bay?  I wish I could give you a conclusive answer, because I suspect some storage manufacturers are nearing the point where the odds of a single failure bringing down an entire dual-controller array are comically small.  But let's ponder this (and I'm speaking from a painful past experience here): what is protecting you from failures within the operating system that runs the array?  "We have the best engineers in the industry."  "We run our revisions through rigorous tests to ensure stability."  "We guarantee 99.999% uptime."

Interestingly enough, five nines of reliability still allows for roughly 5 minutes and 15 seconds of downtime a year.  Think of the damage a SPOF gremlin could do in that amount of time - and yes, it will be painful and likely take far longer than 5 minutes to fully recover.
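
The math behind that figure is straightforward; here's a quick sketch:

```python
# The five-nines math: 99.999% availability still leaves a small annual
# downtime budget. A quick check of the "about five minutes" figure.

availability = 0.99999
minutes_per_year = 365 * 24 * 60

allowed_downtime_min = (1 - availability) * minutes_per_year
print(f"Annual downtime budget: {allowed_downtime_min:.2f} minutes "
      f"({allowed_downtime_min * 60:.0f} seconds)")
# ~5.26 minutes, i.e. roughly 5 minutes 15 seconds per year
```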

So what do we do?  Well, if you have top-tier workloads that require constant uptime, it's probably a good idea to look at replication technology.  Storage arrays often include some form of replication as a built-in feature.  If that doesn't fit, there are hypervisor- and application-level replication options that can provide a similar solution.

My best advice is: continue to be vigilant with your monitoring and don’t let your guard down.  Those gremlins are out there somewhere, and when they strike, you need to be ready.  Let us help with planning your defenses and maintaining your uptime goals.  We have the expertise to identify the single points of failure (they can be very sneaky) and how to combat them.  After all, if you take on the gremlins yourself, could you be considered a SPOF?

Thursday, July 9, 2015

Troubleshooting Performance Bottlenecks with Per-VM Monitoring

Nimble Storage has offered Per-VM monitoring within InfoSight since April, included without requiring the purchase of an additional option or component. All that is required to enable Per-VM monitoring is to register your Nimble Storage array with vCenter (Administration > vCenter Plugin using the array management interface if you have not already) and enable Stream Data in InfoSight (Administration > Virtual Environment in InfoSight). Per-VM monitoring, or Virtual Environment, can be found under the Manage menu item in InfoSight.

The first thing you will notice is an inventory tree on the left with icons for Hosts and Clusters, Virtual Machines, and Storage.

Next, you will notice the content section with headers for Host Activity, Top VMs, Datastore Treemap, Inactive VMs, and Nimble Arrays.
  • Host Activity provides a list of your vSphere hosts and their recent performance metrics
  • Top VMs lists the ten busiest virtual machines over the past 24 hours by I/O and latency
  • Datastore Treemap displays heat maps to compare the performance of virtual machines
  • Inactive VMs lists all virtual machines that have not generated any I/O in the past seven days
  • Nimble Arrays provides a list of Nimble Storage arrays registered with vCenter
All of the reports are fairly self-explanatory, but Datastore Treemap may be the most distinctive and beneficial of the bunch. The heat map design sizes virtual machines by total I/O, colors each tile based on observed latency, and groups virtual machines by datastore.

Each square represents a virtual machine. This lets us see which virtual machines are producing the most I/O and easily compare them to the other virtual machines with which they share a datastore. The redder the square, the higher the average latency; hovering the cursor over a square displays a popover with the detailed figures, and clicking on the virtual machine name in the popover brings up the historical performance details for that specific virtual machine.
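
To make the layout logic concrete, here is a small conceptual sketch of how such a treemap is assembled (group by datastore, size by I/O, color by latency). The VM records and the color ramp below are made-up illustrations, not InfoSight data or code.

```python
# Conceptual sketch of a datastore treemap: group VMs by datastore, size each
# tile by total I/O, and color it by average latency. Sample data is made up.

from collections import defaultdict

vms = [
    {"name": "sql-01",  "datastore": "DS-Gold",   "iops": 4200, "latency_ms": 1.2},
    {"name": "exch-01", "datastore": "DS-Gold",   "iops": 1800, "latency_ms": 6.5},
    {"name": "file-01", "datastore": "DS-Silver", "iops": 600,  "latency_ms": 14.0},
]

def tile_color(latency_ms, threshold_ms=10.0):
    """Greener when latency is low, redder as it approaches the threshold."""
    redness = min(latency_ms / threshold_ms, 1.0)
    return f"rgb({int(255 * redness)}, {int(255 * (1 - redness))}, 0)"

treemap = defaultdict(list)
for vm in vms:
    treemap[vm["datastore"]].append(
        {"name": vm["name"], "size": vm["iops"], "color": tile_color(vm["latency_ms"])}
    )

for datastore, tiles in treemap.items():
    print(datastore, tiles)
```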

Now we can adjust the timeframe to narrow in on a period of reported slowness.  In this example, we see that the primary factor behind the spikes in latency is a network bottleneck.  Looking at the spikes, we also notice that they always occur on a Saturday - which happens to be the day we perform full backups of our environment.

Below the graph of Virtual Machine Latency, we also see graphs for: Host Performance, Datastore Analysis, and Active Neighbor Analysis.


Thursday, June 25, 2015

Break the PSK Habit!

Many of us are guilty (I know I am), or have inherited it from a predecessor: setting a WPA/WPA2 (or, even worse, WEP) personal-mode password, or pre-shared key (PSK), on the wireless network.  This is surely the easiest way to get wireless security up and running, both for yourself and for the people you are providing access to.  However, it is far from secure.  Enter WPA2-Enterprise mode; don't let the term fool you.  Just because it says enterprise doesn't mean it's only for enterprise environments, or that it comes with a hefty price tag.  The task of setting up enterprise-mode security on your wireless network is not as daunting as it once seemed.  With so many networks getting hacked and experiencing security breaches, securing the wireless side is every bit as important as securing the wired side.

Because the passwords for WEP and WPA/WPA2 PSK wireless networks are stored on the devices used to access them (often recoverable in plain text), your network is vulnerable to attack.  There are several scenarios in which this can happen to you; the most common is a device being lost or stolen.  To protect your network and all the data within it, you would then have to change the PSK not only on the AP or wireless controller itself, but on every device as well, in order to prevent a potential breach.

So, how can you make a change for the better?  Let’s dive into the benefits of enterprise mode security and explore how this can fit in not only large enterprises but in the SMB space as well.

802.1X authentication provides an extra layer of security for a wireless network and is, overall, a better choice for business networks.  One of the main requirements for this method of authentication is a Remote Authentication Dial-In User Service (RADIUS) server.  Essentially, each user presents a username and password to gain access to wireless services.  In fact, if you are already running Windows Active Directory, then you are halfway there; installing Network Policy Server (NPS) will take you the rest of the way.  Don't have Active Directory?  No worries, there are a plethora of RADIUS servers available, ranging from open source to hosted solutions.
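
If you stand up NPS or another RADIUS server and want a quick sanity check before touching your APs, something like the following sketch (using the third-party pyrad library) can confirm the server is reachable and the shared secret matches.  A few caveats: real 802.1X clients authenticate with EAP (commonly PEAP-MSCHAPv2), not the plain PAP request shown here, so your NPS network policy would have to allow PAP for this test to return Access-Accept; the server address, secret, and credentials below are placeholders, and "dictionary" refers to a standard RADIUS attribute dictionary file such as the one shipped with pyrad's examples.

```python
# Hedged RADIUS smoke test using pyrad (pip install pyrad). This sends a plain
# PAP Access-Request, which is NOT what an 802.1X supplicant does (that uses
# EAP); it only verifies reachability and the shared secret, and the RADIUS
# policy must permit PAP for an Access-Accept. All values are placeholders.

from pyrad.client import Client
from pyrad.dictionary import Dictionary
import pyrad.packet

srv = Client(server="192.0.2.10", secret=b"SharedSecret",
             dict=Dictionary("dictionary"))

req = srv.CreateAuthPacket(code=pyrad.packet.AccessRequest,
                           User_Name="testuser", NAS_Identifier="test-ap")
req["User-Password"] = req.PwCrypt("testpassword")

reply = srv.SendPacket(req)
print("Access-Accept" if reply.code == pyrad.packet.AccessAccept
      else "Access-Reject or other response")
```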

In the event of a lost or stolen device, with WPA2-Enterprise mode, disabling the user's account or changing their password is a far easier way to keep your network secure than making a wholesale security change across your entire infrastructure.  Another flaw personal mode is prone to is eavesdropping, which allows an attacker to "listen" to all the wireless traffic exchanged by the victim's device and gain access to sensitive information.  This is done by decrypting the traffic sent between devices and APs using the wireless key easily obtained from the lost or stolen device.  With enterprise mode, where each client negotiates its own unique session keys, decrypting traffic in this fashion is not possible.

Along with these advantages come additional features to assist in the overall security of your network.  By requiring users to authenticate to a RADIUS server upon connecting, you can specify unique policies that enforce a variety of limits, including time-of-day, device, and AP restrictions.  With 802.1X, the ability to control port access on supporting switches is also a great benefit for security-conscious admins.

In certain cases, implementing an 802.1X solution may not be practical, especially for devices that are not compatible, though as time goes on this is less and less the case; many smart devices now build the ability to authenticate against a WPA2-Enterprise network into their software.  For devices that cannot handle this authentication type, there are a couple of options, most of them less than ideal, ranging from MAC authentication to setting up a separate SSID with a PSK for those devices.  Neither is a wise path when implementing WPA2-Enterprise security: MAC addresses are too easily spoofed, and a separate PSK SSID can defeat the purpose of enterprise mode.  Ideally, those devices would be wired; where that isn't possible, a wireless bridge (with the device's internal Wi-Fi disabled) that authenticates using enterprise mode is the better choice.

It is simpler and much less time-consuming to implement personal mode with a PSK when deploying security on a wireless network.  Taking the time to deploy WPA2-Enterprise may seem challenging and not worth the effort, and it's true that the tradeoffs need to be weighed carefully.  Are you willing to deal with the repercussions of an attack?  Or is securing your wireless network with WPA2-Enterprise the more practical and secure long-term solution?  I know my answer.

Friday, June 19, 2015

Healthcare Security – A Horror Story

The following tale is fictitious only in that certain details have been changed to protect the… patients.  Several weeks ago, I had a conversation with an employee named Rachel who works for a small healthcare provider called Gotham Aging-Care Clinic (GACC).

This particular clinic specializes in the care and well-being of the City of Gotham's citizens of advanced age. During the course of our conversation, we discussed several key aspects of HIPAA and data security relating to patient information. Rachel explained that her clinic did not have much of an IT budget and that most of the clinicians were using personal laptops to perform their daily duties. While these concerns have been raised with management at this small practice numerous times, they have fallen on deaf ears. The clinicians are essentially responsible for their own machines and the security of those machines. Of those I've personally spoken with, few were familiar with HIPAA requirements on patient privacy, and one didn't know what HIPAA actually was.

While in the office, Rachel connects to an open, unsecured company wireless network to access their hosted EMR system. She also connects her smartphone to this same wireless network to stream music, update her Facebook status, tweet, and perform other non-work-related activities during her break or lunch hour.  Sometimes during lunch, she will visit a coffee shop in the building next door and work on updating patient records from her morning rounds, while still connected to the clinic's wireless network.

Now, there are several red flags in the previous paragraphs, all of which were a part of our discussion.  The EMR system is hosted with a known, reputable provider of secure electronic medical records. They enforce frequent password changes, have very granular logging and access controls, and encrypt traffic to and from the system. Unfortunately, that’s the ONLY portion of the patient access that could be considered secure.

The following is a list of the security-related items that I noticed during a ten-minute inspection of Rachel's personal laptop:

Rachel's password to log into her laptop is a single character… the space bar.  She says this is secure because "no one would ever guess that as my password."

Rachel uses Microsoft Terminal Services to access a remote desktop in her office over the Internet on the standard port, unencrypted - frequently over unsecured wireless connections when she is remote. Her password is saved, so she doesn't have to type it in every time she logs in. While in the office, she runs a software client to access the EMR system over the clinic's unsecured wireless network. The account she uses to access the EMR system is one of four accounts shared among 10-12 clinicians (a cost-saving decision related to licensing purchases).

Rachel doesn’t run any form of Endpoint Protection / Anti-virus because it “slows her machine down”.

Rachel has Excel spreadsheets containing patient data stored locally on her laptop so she can update information for patients in areas of the clinic where the wireless signal is absent or unreliable. This is standard practice for all clinicians in those wings of the building and was recommended by management because "wireless networks are expensive to install and maintain."

Now, following my inspection of Rachel's laptop and our subsequent discussion of all of these findings (and others that I've decided to omit, again, to protect the patients), Rachel dropped the security bomb of all bombs.  Her clinic handles the care and well-being of senior citizens in Gotham City. Their "hosted" EMR system is actually nothing more than "secure" remote access to the EMR system that resides at Gotham Memorial Hospital (GMH), one of the largest healthcare providers in Gotham City. Where GACC has several hundred patients for which they provide services in a given year, GMH serves hundreds of thousands of patients throughout the year. Now, GACC does NOT have access to all of GMH's patient data, but an attacker with the proper skill set would have the capability to gain that access. An attacker with very little skill could gain access to the records of the hundreds of patients GACC serves with virtually no effort.

There are multiple lessons to be learned here. So many, in fact, that I would surely miss one if I tried to hit all of them. My biggest concern here isn't with GACC and their lack of security policy, although that is a part of it. What concerns me most is that GMH still allows them access to their data, most likely due to the lack of checks and balances when it comes to the "covered entity" portion of HIPAA. GMH's system is inherently at risk due to the lack of security at one of their partners. It's okay though. When a breach occurs, they will have all of the necessary measures in place to pinpoint the source of the breach, tie it to GACC and their lack of security, and wash their hands of any responsibility for the theft of hundreds of thousands of patient records. I know I'll sleep better at night knowing that the healthcare provider I chose to take care of my family's healthcare needs isn't responsible for the identities of my wife and children being stolen. After all, it's not their fault that GACC checked all of the right boxes on the form. HIPAA doesn't say they have to validate anything, just that they have to ask the right questions and receive the right answers - so they can still be compliant without being secure.


Between the authoring of this post and its publishing, Rachel resigned from this clinic and moved on to a larger clinic with a specialization more aligned with her degree. Her laptop was inspected by the GACC system administrator on her last day to ensure that no sensitive data remained on Rachel's personal laptop. This procedure was performed in less than five minutes. Not only did the system administrator miss several local documents containing patient information, but remote access to the Microsoft Terminal Server, and subsequent access to the EMR system, is still available. I'm sure access to the EMR system will be terminated when the users are forced to change the passwords on their shared accounts. A report has been filed with the appropriate compliance agencies.