Thursday, December 10, 2015

Toys for Tots: Effects of the VTech Hack

Before all of you parents go running out to your closest toy store (if you haven’t already) to get your child one of the latest tech toys from VTech, there are a couple of things you may want to be aware of.  First and foremost is the fact that in November VTech was hacked and was found to be storing the personal data of roughly 5.2 million people, mostly adults but children too. The second is that the accessed information included a significant number of chat logs and pictures from VTech’s Kid Connect service, which lets parents text or chat with their children on a VTech tablet via a smartphone app.  So, many might be thinking, “Chat logs and pictures? What’s the big deal?”  Well, I can think of many mischievous ways in which our cyber-connected world can use this data. 
What immediately comes to mind is the hack with the highest rate of success: social engineering. Social engineering is “a non-technical method of intrusion hackers use that relies heavily on human interaction and often involves tricking people into breaking normal security procedures” (social engineering definition). Having a wealth of information about a potential target dramatically increases the chances of success, because you already have plenty of conversation starters to build that “trust relationship” through small talk.  The other thing that comes to mind is identity theft, not so much in the near future as further down the road.  If the hackers who gleaned that information wanted to, they could have more than enough identities to defraud for years to come.  All it would take is patience: hold onto the children’s information for a few years until they come of age, then cleverly work down the list of potential targets, who will have long since forgotten, and perhaps never even knew, that a significant amount of their personal data had been compromised years earlier.

Then again, I suppose it’s not that big of a deal when, in today’s day and age, it’s the social norm for people’s personal lives to be on display for the world to see on Facebook, Twitter, or Instagram.  So is it really any wonder that we have all of the cybercrime that we do?  I don’t think so.  If anything, I’m surprised there isn’t more of it.  I say we’ve created a Cyber-Cedar Point for hackers, where our lives are the main amusement in the park.  It’s not a matter of if the hackers will take a spin; it’s a matter of when the line dwindles down enough for them to get on board.  I honestly wonder sometimes whether it’s a lack of security awareness or whether people really just don’t care. 

Thursday, December 3, 2015

The Case Against Unified Storage

Unified storage, a “single” storage solution that handles both file-level and block-based storage, has become more common in data sheets in recent years as manufacturers compete to complete every checkbox on the speeds and feeds charts. I see the advantage behind the reduced device count and simplified management interface; however, I believe that unified storage only serves to place ink in a checkbox.

Most storage solutions that offer “unified storage” are the same block-based storage with a software component bolted on to present a volume on the network using NFS or CIFS/SMB. In this scenario, it is not uncommon to get a block-based storage array with a NAS head-unit that provides the NAS features; while this typically brings integration of the two within the management interface, they are still two separate devices, with the NAS head-unit leveraging a block-based volume on the array.

Now, the integrated management of the block-based and file-level components is pretty awesome. Who does not dream of that mystical Single Pane of Glass? The downside is the limited NAS feature set typically offered with a unified storage solution. Your corporate environment is most likely heavy with Windows devices, and what serves CIFS/SMB shares to Windows clients better than a Windows Server? Storage manufacturers are forced to play catch-up on features and fault resolution every time Microsoft releases new capabilities into Windows File Services. Alternatively, some storage manufacturers offer their NAS head-units as Windows Storage Server devices - is this still “unified”?

Windows Server also integrates much better with your backup solution than a unified storage solution does. In fact, to protect your file-level data, a unified storage solution requires the Network Data Management Protocol (NDMP). NDMP is a network protocol, and as such, errors can and do occur. Troubleshooting faults in NDMP is a nightmare: many backup vendors have built proprietary implementations of NDMP that mask the original error message, and scouring online discussions turns up frequent posts from sysadmins trying to resolve an error, only to end with, “had to reboot the server to resolve the issue.” Maybe I am a little conservative on this front, but I need to trust my backup solution and be able to easily verify the restorability of my data.
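
On that last point, here is a minimal sketch of the kind of restore verification I have in mind: hash every file under the source share and its restored copy, and flag anything missing or different. The paths are hypothetical placeholders; adjust them for your own environment and test restore target.

```python
# Minimal sketch: verify restored files match their source by comparing SHA-256
# checksums. Paths below are hypothetical placeholders.
import hashlib
from pathlib import Path


def sha256(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_restore(source_root: str, restore_root: str) -> list:
    """Return the relative paths of files that are missing or differ in the restore."""
    source, restore = Path(source_root), Path(restore_root)
    mismatches = []
    for src_file in source.rglob("*"):
        if not src_file.is_file():
            continue
        restored = restore / src_file.relative_to(source)
        if not restored.is_file() or sha256(src_file) != sha256(restored):
            mismatches.append(str(src_file.relative_to(source)))
    return mismatches


if __name__ == "__main__":
    bad = verify_restore(r"\\filer\finance", r"D:\restore-test\finance")
    print("Restore verified" if not bad else f"{len(bad)} files failed verification")
```

Nothing fancy, but after an NDMP restore it answers the only question that matters: did I get my data back intact?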

A final thought before I ramble on about this all day …

Virtualization is a given nowadays; there are no valid excuses not to be virtualized. So what if we virtualized our file servers to increase availability and reduce maintenance? Why would we not deploy file servers, be they Windows or Linux based, as virtual machines that leverage block-based storage? Now, for the crazy bit: what if we clustered these virtual machines to create always-on network shares for our users?
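
And if you want to put that “always-on” claim to the test during a planned failover, a rough sketch like the one below is enough: poll the share once a second and log any window where it stops answering. The UNC path is a made-up placeholder, and this assumes a Windows client with access to the clustered share.

```python
# Rough sketch: poll a clustered file share during a failover test and log any
# window where it stops responding. The UNC path is a hypothetical placeholder.
import os
import time
from datetime import datetime

SHARE = r"\\filecluster\users"   # hypothetical clustered share
INTERVAL = 1.0                   # seconds between probes


def share_is_up(path: str) -> bool:
    """Return True if the share currently answers a directory listing."""
    try:
        os.listdir(path)
        return True
    except OSError:
        return False


if __name__ == "__main__":
    outage_started = None
    while True:
        up = share_is_up(SHARE)
        now = datetime.now()
        if not up and outage_started is None:
            outage_started = now
            print(f"{now:%H:%M:%S} share unreachable")
        elif up and outage_started is not None:
            downtime = (now - outage_started).total_seconds()
            print(f"{now:%H:%M:%S} share back after {downtime:.0f}s")
            outage_started = None
        time.sleep(INTERVAL)
```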

Many dedicated NAS solutions include unique features and provide much sought-after capabilities, but unified storage solutions only overpromise and underdeliver with their “jack of all trades” design.

-Ryan M. 

Ryan M. has over six years of experience architecting and implementing SMB and enterprise data center solutions. Currently a Solutions Architect at Great Lakes Computer, Ryan is focused on using modern virtualization and storage technologies to reduce OpEx, increase business continuity, and improve performance for customers.  

Friday, November 20, 2015

Speeds of the Data Center: What's Out There & What's Coming

The term “data center” is something many people are familiar with.  Data centers by nature tend to be hungry for bandwidth and are demanding more throughput than ever before.  It wasn't long ago that the vast majority of network engineers couldn't imagine filling a 10 GbE link.  The tables have turned with the introduction of cloud computing and virtualization, and demand for bandwidth has increased tenfold.

Providers are starting to make the switch to 100 GbE on their backbone connections to support the workloads their customers are demanding.  In the data center itself, most are finding that 100 GbE and, in a lot of cases, even 40 GbE is overkill for the workloads they are currently serving up.

So, looking at the various speeds available today and what is coming in the very near future, you might be left wondering why there are not more options available.  Today, 10 and 40 GbE are available and widely used in the edge data center.  So where does 25 GbE come in? 

Let's take a look at how 25 GbE is derived.  Today's 100 GbE network devices utilize four lanes of 25 Gb/s each.  Using just a single lane has multiple effects on device and environment sizing: it decreases the amount of heat the device gives off and, in turn, the amount of power and cooling required.  This allows for a much more cost-effective upgrade in the data center when 10 GbE is not enough and 40 GbE is way too much.  Down the road, it can be extremely beneficial when network operators realize they need to double or even quadruple their current speeds.
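
To put numbers on it, the lane math is simple enough to work out in a few lines; 40 GbE, for comparison, is built from four 10 Gb/s lanes:

```python
# The lane arithmetic behind 25/50/100 GbE: one, two, or four 25 Gb/s lanes.
LANE_RATE_GBPS = 25

for lanes in (1, 2, 4):
    print(f"{lanes} lane(s) x {LANE_RATE_GBPS} Gb/s = {lanes * LANE_RATE_GBPS} GbE")

# Legacy 40 GbE, by contrast, is four 10 Gb/s lanes:
print(f"4 lanes x 10 Gb/s = {4 * 10} GbE")
```

Start with a single lane today, and the same upgrade path scales to 50 or 100 GbE simply by lighting up more lanes.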

The IEEE standard for 25 GbE is not expected to be ratified until sometime in 2016, and its arrival is being anxiously awaited.  Until then, manufacturers will not be quick to develop products that support those speeds, as their profitability would be low.  It’s all about the Benjamins!

The IEEE’s adoption of the 40 GbE and 100 GbE standards just a few short years back has already spawned study groups to begin development of 400 GbE and even Terabit Ethernet.  With the edge network growing rapidly, the demand for these faster speeds will only continue to gain momentum.

There is a lot to look forward to in the coming years in the world of Ethernet, especially if you are a speed junkie.  The beauty behind this push is that enterprises will want choices and will in turn push manufacturers to produce 25/50/100 GbE NICs, keeping your data pipe flowing like a well-oiled machine!

Wednesday, November 11, 2015

Fixing the Weak Link: The Human Element in Network Security

We’ve all heard the age-old adage, “you’re only as strong as your weakest link.” We use it in organized team sports, and we use it in business as well. An enterprise will experience success or failure based on the sum of its parts and, if a certain team or team member isn’t pulling his or her weight, failure is imminent. The same statement applies to network security.

We deploy network security devices in an attempt to secure our networks. We place firewalls at the Internet edge and the datacenter edge. We run intrusion detection and intrusion prevention hardware or software alongside these firewalls to inspect for malicious traffic patterns. We filter our users’ web content to try to prevent access to malicious websites or code. We run endpoint security software that does everything from scanning for viruses to sandboxing applications. We implement multi-factor authentication. Some of us are finally inspecting application traffic and identifying the malicious traffic running over allowed ports. Fewer still are taking the application whitelisting approach, defining what CAN run on a device and blocking everything else. All of this is done with the best of intentions: to create the most secure network environment we can and to protect against attacks and attempts to access the data or systems we hold sacred.

And yet, we’re all failing. We’re failing because we’re addressing areas of perceived strength and ignoring the weakest link. “Our latest vulnerability assessment shows that we’re at risk because we have several unpatched servers and one of our web servers is vulnerable to a cross-site scripting attack.” Because of this vulnerability assessment, we now have approval to spend time and money to resolve these vulnerabilities. Unfortunately, the assessment doesn’t show that Pat in our finance department has no idea what a phishing email looks like and has just clicked the link in the “reset your password” email, logging into the company’s online banking portal for a fifth time to reset the password for the account we use to process payroll each week… unsuccessfully, I might add.  Pat’s phone call to the help desk goes something like this: 

“Hey, are we having Internet problems? I can’t seem to get our online banking page to load.”

Help desk guru responds with “I can’t see any issues with our Internet. Seems to be working fine for me, so try it again in a few minutes. Maybe reboot your computer.”

By now, the fraudulent wire transfer of this week’s payroll has already been started using Pat’s credentials, which were typed into the fake password-reset form from the emailed link.  Pat is able to log into the account post-reboot because Pat uses the favorite that was created in Firefox rather than clicking the link in the email.

This story illustrates one of the many ways that an attacker can get what they want by exploiting the weakest link. At present, we view our network security systems, our firewalls, our IPS, our WAF, and our AV systems as our strongest links because they are configurable and do what we want them to do. People are the variables and are, inherently, our weakest links.  But they don’t have to be.

Some of the most secure networks, and the biggest targets for attackers, I might add, do not appear to have those perceived weak links. The people are still there, as are the weak links, but they are being educated constantly on the ever-changing threat landscape. Their employers perform routine Security Awareness Training. They perform in-house testing to reinforce that training and then do more training. Rinse and repeat.

They create policies that lock down the network and allow only those things necessary to perform core business functions. At the end of the day, your business exists to make widgets or provide a service to consumers. Unless your business IS Facebook or Twitter, what reason could you possibly have for being on those pages during the course of normal business? Obviously, there are exceptions to every rule, but it seems that we, as entitled members of society, have decided that we are all the exception and should have the right to access what we want, when we want, from wherever we want, even if it isn’t relevant to the job we are employed to perform.

If you truly want to protect your network, investing in the technology is only half of the battle. Education, policy creation and enforcement, and regular testing against emerging threat types are how the weakest link actually gets strengthened. Let’s face facts: we’re behind the curve when it comes to protecting ourselves from attackers simply because we are always in a reactive mode. If we can effectively educate our users and reinforce the fact that our business network is used to conduct BUSINESS, that’s going to shorten the curve dramatically. As a business owner, network manager, CIO, or whatever your title happens to be, you may not be able to implement all of the necessary changes in your organization yourself, but I’ll bet you can exert some influence over them. You wouldn’t be reading this if you couldn’t.

*The thoughts and opinions in this blog post are my own and do not reflect the thoughts and opinions of Great Lakes Computer or any of its vendors, clients, or partners.

-Chris C

Chris C has over 15 years of experience designing, implementing, documenting, and supporting networks and infrastructure from the SMB to the Enterprise level in a multitude of verticals. He is currently a Sr. Network Engineer at Great Lakes Computer, focused on designing and implementing secure network solutions in the datacenter and service provider space. 

Thursday, November 5, 2015

What We Can Learn from Japanese Efficiency in IT

I personally don’t use Twitter very much, but yesterday I was tempted to create a “hashtag” topic to see if it would gain traction and begin trending. What was that topic? #spoiledbyjapan.

I’ve recently returned from a short trip to Nagoya for my brother’s wedding, and I’m still aglow from the experience. This was my first trip to Japan, and I’m already hoping I will have another opportunity to return. I think the best single word I can come up with is “satisfying.” You know that feeling you get when you peel off the plastic protective layer from a new smartphone, or when a box fits exactly into another box? That’s what a lot of Japan feels like. Where there is an opportunity for something to work efficiently and effortlessly, they are the undisputed masters of implementation. Visual attentiveness to detail is of the utmost importance, and all of Japan’s citizens seemed to contribute to that same robotic mantra of proficiency and cleanliness.

You can imagine my chagrin when I returned to the United States and visited a popular chain clothing store in a shopping mall. There were plenty of clothes on the floor that had fallen off racks, unfolded jeans and shirts hastily strewn on shelves and tables, and large dust bunnies visible to the naked eye everywhere. It was an absurd and frustrating wake-up from my Japanese dreamland of all things visually appealing. Needless to say, I walked out without buying anything. 

We all know a picture is worth a thousand words. Think of the phrases that come to mind when you see this picture, regardless of whether or not you understand what’s going on here:

  • The persons responsible for this do not care about how it looks, as long as it works.
  • The persons responsible do not properly manage their time to make something right. 
  • The persons responsible for this do not know what they are doing.
Now, take a look at this picture:
What are you thinking now?
  • The persons responsible for this understand that others that see this will appreciate efficiency, even if they don’t understand how it works.
  • The persons responsible for this take pride in their work.
  • The persons responsible for this take time to make things correct.
Now, put on your C-level hat and ask yourself which you would rather have in your datacenter. Try to stop yourself from reaching for the usual excuses; we have all heard them before. I’ve been in Information Technology for a long time myself, and I can guess what you are thinking.

“It requires downtime and overtime to keep a datacenter organized. It is not cost-efficient to make things ‘look nice.’”

Not correct. Though it can be an arduous task to “clean up” a datacenter cabling rack from the state of Picture 1 to the state of Picture 2, that does not mean a return to the previous state is inevitable. It requires a consistent mantra of disciplined attention across your team. If there is something new to be added or removed, it takes far less time to make that single thing right than it does to take the shortcut and let the mess accumulate. Once one person sees that it’s OK to take a shortcut, others will likely follow suit; that is how things slowly become disorganized. Remember, laziness pays off now… but hard work pays off later. 

Unfortunately, I didn’t get to visit any datacenters on my trip to the Far East, but it’s a safe bet that they would look like Picture 2. Organization eliminates a whole class of avoidable errors, and in this industry, you simply can’t afford to have it any other way. It requires discipline and attention to detail across all members of your team. When everyone subscribes and contributes, everyone wins together. 

-Jason S.
Jason S. has been in Infrastructure Technology consulting for 17 years, and has an extensive background in various methods of business application delivery, hardware and virtualization, storage infrastructures, and enterprise communication processes.  In his spare time he reads tabletop game instruction manuals and chases his lifelong dream of finding the perfect guacamole recipe. He is married with two children and hopes to move to the Ozark Plateau someday.

Thursday, October 29, 2015

Selecting a Solution that Fits: Right-Sizing Your Storage

We are all familiar with the common myth that “One Size Fits All,” and most of us have at one time fallen prey to it, buying an item only to find the claim was a bald-faced lie. So when it comes to the numerous choices for housing your business’s data, how do you “size” and select the ideal fit for your environment?

For some, it may be as easy as scanning Gartner’s Magic Quadrant for this year’s top performers or the breakthrough up-and-comers. What about transport methods: will you favor iSCSI, Fibre Channel, Fibre Channel over Ethernet, or SAS? Not to mention you must have solid-state drives; it’s the hot technology, everyone is using it, and you don’t want to fall behind! 

If, on the other hand, you are truly interested in investing in only what you need to get the job done efficiently, then the key to selecting the right storage starts with intimately knowing the I/O patterns of the applications that will access the data. The most important metrics to consider are reads, writes, and I/O size. Data collection can be done with various performance tools such as perfmon on Windows or iostat and htop on Linux; these are just a few among a crowd, and everyone has a favorite. Be sure to measure across all workloads, since peak and off-peak periods may have different I/O characteristics. Attention should be given to the following counters: Disk Reads/sec, Disk Writes/sec, Disk Latency, the size of the I/Os being issued, and Disk Queue Length. If you are analyzing a database, also include Checkpoint pages/sec and Page Reads/sec.
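
If you would rather script the collection, here is a minimal sketch using the third-party psutil library (pip install psutil); perfmon and iostat will give you far more detail, but this shows the idea of sampling reads/sec, writes/sec, and average I/O size during a real workload window.

```python
# Minimal sketch: sample per-disk I/O counters over a short window using psutil
# and derive reads/sec, writes/sec, and average I/O size.
import time

import psutil

SAMPLE_SECONDS = 5

before = psutil.disk_io_counters(perdisk=True)
time.sleep(SAMPLE_SECONDS)
after = psutil.disk_io_counters(perdisk=True)

for disk, b in before.items():
    a = after[disk]
    reads_per_sec = (a.read_count - b.read_count) / SAMPLE_SECONDS
    writes_per_sec = (a.write_count - b.write_count) / SAMPLE_SECONDS
    total_ios = (a.read_count - b.read_count) + (a.write_count - b.write_count)
    total_bytes = (a.read_bytes - b.read_bytes) + (a.write_bytes - b.write_bytes)
    avg_io_kb = (total_bytes / total_ios / 1024) if total_ios else 0
    print(f"{disk}: {reads_per_sec:.0f} reads/s, {writes_per_sec:.0f} writes/s, "
          f"avg I/O size {avg_io_kb:.0f} KB")
```

Run it repeatedly, or extend the window, across peak and off-peak periods so the numbers reflect your real workload mix.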

Once you have a solid idea of how your applications perform, you can move on to sizing the physical disks. Typical IOPS per spindle range from 100-130 for 10K RPM drives, 150-180 for 15K RPM drives, and 5,000+ for solid-state drives. Keep in mind that disks filled to less than 50-70% of capacity will deliver better IOPS than disks written to 80% of capacity, so spread the data out! You will also want to consider the impact of the write penalty your RAID choice imposes on the I/O issued against the disks. Read and write cache on the storage controllers can improve performance and should be taken into consideration if the applications lean heavily one way or the other.
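
As a back-of-the-napkin example of how those numbers combine, the backend IOPS a RAID set must absorb is roughly reads plus writes multiplied by the RAID write penalty (2 for RAID 10, 4 for RAID 5, 6 for RAID 6). The sketch below uses midpoints of the per-spindle figures above; treat it as a rough starting point, not a substitute for your vendor’s sizing tools.

```python
# Back-of-the-napkin spindle sizing: backend IOPS = reads + writes * write penalty.
import math

RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}
IOPS_PER_SPINDLE = {"10K": 120, "15K": 170, "SSD": 5000}  # rule-of-thumb midpoints


def spindles_needed(read_iops: float, write_iops: float,
                    raid: str = "RAID5", drive: str = "10K") -> int:
    """Drives required to absorb the backend IOPS for a given RAID level and drive type."""
    backend_iops = read_iops + write_iops * RAID_WRITE_PENALTY[raid]
    return math.ceil(backend_iops / IOPS_PER_SPINDLE[drive])


# Example: a workload measured at 2,000 reads/s and 800 writes/s
print(spindles_needed(2000, 800, raid="RAID5", drive="15K"))   # 31 drives
print(spindles_needed(2000, 800, raid="RAID10", drive="15K"))  # 22 drives
```

The same workload needs nine fewer 15K drives on RAID 10 than on RAID 5, which is exactly the kind of trade-off the write penalty forces you to weigh.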

Sizing storage systems is much more than just sizing the disks. Every component in the path to the physical drives has a throughput limit and can be a potential bottleneck. As more solid-state disks are implemented, the configuration of these ancillary components becomes even more critical. To avoid pitfalls, analyze the potential throughput of each of the following (a quick sanity-check sketch follows the list):
  • Connectivity: HBAs, NICs, switch ports (and if they are shared by multiple servers), array ports, and the number of paths between servers and storage.
  • The number of service processors in the array and how the LUNs are balanced across them.
  • The capacity of the backend busses on the array and how the physical disks are balanced across them.
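
A crude sanity check of that path, with purely illustrative numbers, is just a min() over the components: the path can never be faster than its slowest link.

```python
# Crude end-to-end check: usable throughput of a storage path is bounded by its
# slowest component. All figures below are illustrative placeholders (MB/s).
PATH_COMPONENTS_MBPS = {
    "server HBA (8 Gb FC)": 800,
    "switch port shared by 4 hosts": 800 / 4,
    "array front-end port": 800,
    "service processor": 1600,
    "backend bus": 1200,
}

bottleneck = min(PATH_COMPONENTS_MBPS, key=PATH_COMPONENTS_MBPS.get)
print(f"Path limited to ~{PATH_COMPONENTS_MBPS[bottleneck]:.0f} MB/s by: {bottleneck}")
```
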
Other considerations that can affect sizing decisions are advanced features offered by today’s latest generation of storage arrays. Examples are thin provisioning, snapshots/clones, compression, deduplication, and storage based replication. In addition to these, some new generation arrays utilize technology that throws all those media IOPS estimates out the window!
The moral of the story is this: arming yourself with in-depth knowledge of your application performance gives you the ability to quantify different array features and purchase only what you really need. Arrays that tout doing everything come with a hefty price tag. If they don’t benefit your applications, they are worthless, and saving money where it makes sense has never been a bad business decision.

Thursday, October 22, 2015

Why Every Business Needs a Storage Assessment

Let’s pretend you’re in the market to buy a new house. You’re confident your current house is going to sell in the next few months, and you’d like to move as soon as possible. 

I am currently undergoing this process, and at times, the number of questions to go over feels daunting. What is on your list of attributes you’d like your new home to have? How are they prioritized? How much does price affect your decision? Here are some common filters when searching for real estate using common sites:

1. Listing type
2. Price range
3. Location
4. Number of beds
5. Home Type
6. Number of bathrooms

And the list goes on.

Maybe you’d like to view according to square feet or how many days it’s been listed. As I run through the filters and questions I start to pick candidates and save them on a list to research.The plot of land and location may be beautiful, but does that river raise the cost of insurance? How much? Is there a flood risk?

Now it’s time to start visiting and walking through places… creepy Michigan basement – no thanks.  Oil heat – Seriously? Water damage – pass. Sketchy roof – next house, please. I walked through one place where the tenants were present. They were really nice and after talking to them a bit, they mentioned that shortly after moving in they noticed that they would find the refrigerator in the middle of the kitchen. Foundation issues – can’t run fast enough.

It would be so much easier if all this information was provided honestly up front.

Approaching a storage purchase for your datacenter can feel just as daunting. Now, more than ever, there is a myriad of options in the storage market, and almost all of them have a valid play and use case. But before you can choose a new storage array, shouldn’t you know what you have first? Just like any other major purchase, it helps to take stock of what you have and how it has served you over the time you’ve used it. When you attempt to do this with your storage environment, you’re often limited to the capabilities of the arrays or the hosts connecting to them. And just as with choosing a new place to live, a number of factors come up again and again.

1. Connectivity
2. SSD Support
3. Data efficiency
4. Ease of use
5. Vendor Support
6. Performance

There are many, many more storage features out there, and each manufacturer often has its own unique implementation of them. So how does one choose which factors should be given more weight in the purchase decision? Too often we choose on a “best guess,” and this can lead to misuse of a large portion of the budget, especially when you consider that storage is often the most expensive piece of the pie compared to compute and networking. For this reason, it’s important that we take stock of what we have and the problem we’re looking to solve, and make sure that moving forward we have the tools in place to make this decision easier. Once we’re able to report on what we have and how we’re using it, we can confidently make an informed decision on how to add onto, or replace, portions of our storage infrastructure. And that’s the beauty of it: if we have this information in hand first, suddenly we’re able to quickly filter out all the vendors whose solutions don’t fit our problem. The flooded list of storage vendors suddenly becomes a manageable list of two or three.
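
One simple way to turn that “best guess” into something defensible is a weighted scorecard: weight each factor by how much it matters to your environment, score each vendor against it, and let the arithmetic surface the short list. The weights and scores below are placeholders; fill them in from your assessment data and vendor responses.

```python
# Simple weighted scorecard for comparing storage vendors. All weights and
# scores are hypothetical placeholders.
WEIGHTS = {                 # relative importance of each factor, summing to 1.0
    "connectivity": 0.15,
    "ssd_support": 0.10,
    "data_efficiency": 0.20,
    "ease_of_use": 0.15,
    "vendor_support": 0.15,
    "performance": 0.25,
}

VENDORS = {                 # 1-5 scores per factor from your evaluation
    "Vendor A": {"connectivity": 4, "ssd_support": 5, "data_efficiency": 3,
                 "ease_of_use": 4, "vendor_support": 3, "performance": 5},
    "Vendor B": {"connectivity": 5, "ssd_support": 3, "data_efficiency": 5,
                 "ease_of_use": 3, "vendor_support": 4, "performance": 4},
}

for name, scores in VENDORS.items():
    total = sum(WEIGHTS[factor] * scores[factor] for factor in WEIGHTS)
    print(f"{name}: {total:.2f} out of 5")
```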

A storage assessment produces a report that inventories what you have and how you’re using it, and helps identify areas for improvement. You can improve performance and availability and reduce operating costs – who doesn’t want to do all of that?! With this knowledge, you can start to answer other initiatives with certainty: Should we entertain cloud storage? What about a colocation facility? What kind of storage should we have at our DR site? How much can we save in power and cooling by replacing legacy storage? With clear direction, you can accomplish more than you thought possible.

- Adam P.

Adam P. is a sales engineer with experience in servers, storage, and virtualization with a focus on data management. He started working with VMware ESX 3.5 and has worked with numerous companies assisting with physical to virtual initiatives. When not talking storage or virtualization he’s binge watching shows on Netflix or enjoying some Short’s brewery beverages and playing board games with friends. His favorite movie of all time is: The Big Lebowski.