Thursday, April 30, 2015

How Do You Protect Your Data From Going Rogue?

As IT Admins we struggle to keep up with the growing list of demands generated by our users. The span of these demands is immense: you could be working on implementing new shared storage for your environment only to be interrupted by someone looking for a blank DVD, a spare flash drive, or an external HDD.

I’ve been there many times before.

When asked for these temporary storage devices, I always wanted to hand them over and get back to the more important tasks I was working on. But the Admin in me would immediately question the lack of control these devices introduce. What if the files they're putting on these devices contain proprietary information? What if a device gets lost? What if it's stolen? What risks are we exposed to when offering up these tools to enable collaboration with outside entities?

Many companies have worked to address security and data control concerns through policies and training, and for the most part I think these do a good job of bringing awareness of the security risks to end users. But there will always be a time when it's just too easy to drag and drop some files onto a flash drive (or into the cloud!) to share with someone else.

Backing up data is one thing – creating multiple copies to protect against disasters or file deletions is generally fairly easy. But once a copy of those files is released into the wild, you lose control. How do you solve this common problem?

Like many areas in IT, there are multiple ways to tackle the issue, ranging from old-school to more modern approaches. Coming up with a solution that fits your business needs can be a challenge; at some point, tighter control comes at the expense of ease of use. The engineers at GLC have a range of experience dealing with this and would love the opportunity to work with you through this process. We can offer our insight and experience as we work together to define what your needs are and stand up a solution that works for both you and your users.

Thursday, April 23, 2015

IT SOURCING: How Does Your Business Balance the Charge?

A controversial topic from SMBs through to Enterprise customers: there are as many opinions on this subject as there are people involved, all slightly different of course!

There are several parties involved in getting the equipment required to run your business. Procurement teams bring their specialized negotiation skills, mitigate risk, hash out terms and contract verbiage and, of course, discover cost savings. IT teams know the infrastructure, keep up with new innovations that improve business productivity, know what it will take to implement them, and identify cost savings. These two departments, with similar goals, often work completely separately when it comes to IT purchases. Vendors are also involved: businesses that want you to spend your company's hard-earned dollars with them. Are they capable of being a trusted advisor?

Coming from a background of both buying and selling IT products over the course of 20 years, I have seen some amazing things when it comes to the ways companies spend money. Did I have my favorite suppliers?  You bet!  I liked the companies that were highly responsive, knowledgeable about technology, flexible enough to work within my budget, dependable, and shipped me quality goods, and, in cases where issues occurred, were timely and helpful in resolving them. On the flip side, what did I look for in a good customer as an account manager? Definitely companies that included me in their business needs, that gave me the opportunity to interact with all the folks involved not only in evaluating the technologies but also in the decisions to buy. In short, an organization that gave me the chance to add value, and one that respected that my time was being invested in their organization and my own company's at the same time.

Many technology manufacturers have embraced a pricing structure that gives the best discounts to the vendor that brings them the opportunity first. This adds a level of complexity to projects, especially for businesses with a policy of getting multiple quotes before making a purchase, because suppliers B and C are already at a competitive disadvantage relative to supplier A, the one you first mentioned your needs to. The days of getting three "apples to apples" quotes and having it be legitimately competitive are over. Companies would be better served getting three quotes for different technologies, each designed to solve their problem, and letting each supplier articulate and demonstrate the value of their solution. What many companies don't consider is that vendors don't want to work with you one time; they want to work with you for a LONG time. They want your business to be successful, because that means you will continue to make money, grow, thrive, and keep adopting new technologies. What is in your company's technological best interest today is in the IT vendor's best interest tomorrow, next week, and for the next 10 years.

The emerging best practice is for procurement and IT teams to work collaboratively on any major technology purchase. Of course, "major" will represent different dollar values for different size companies. You can further enhance that effort by enlisting a core set of vendors as trusted technology advisors. Involve them early and often, together with procurement and IT. Give them the information they need about your infrastructure and budget so they can offer viable solutions, and reward them with your business when their solution solves the problem you brought to the table. In return, you will get a team of people focused on the same goal, one that will drive success in your business's use of technology.

Thursday, April 16, 2015

Nobody Likes Backups

Who actually enjoys thinking about backup plans when working with an organization's data?  There is management overhead in configuring the backup jobs, monitoring them, and investigating failures when alerts come in.  Then you need to maintain your backups as the environment grows over time; the backup plans you configured 6 months ago may not be the best way to protect your environment now.  The environment tends to take a performance hit while backups run, so after-hours schedules are the norm.  Disaster recovery, off-premise copies, archives, replication, tape... not to mention finding the resources to handle all of this.  Who wants to pay for a backup plan when everything is running just fine?  All this work, and if everything goes well you'll never need to restore!

How do you convey the risk to management when they've never felt the burn of a failed RAID controller, host failures, an extended power outage, or a hurricane?

It's not fun, we get it.  You know what else isn't fun?  Scrambling to get a server back online after data corruption, hunting for identical hardware to restore to, or sending workers home after hearing it will be 4-6 hours before services are available again.

I’m speaking from experience here – going through that is the worst.  Talk about a completely helpless feeling for all parties involved.

We can all agree we want to avoid unplanned downtime, but how do we start the conversation to design a backup strategy?

RTO and RPO are a great place to start.  These can be defined at a large scale or more granularly, according to how critical a service is to the business.

RTO, or Recovery Time Objective, is the target time you set for the recovery of IT and business activities after a disaster strikes.  Another way to describe this: “our business can survive 4 hours of system down time” or “our RTO is 2 days.”

RPO, or Recovery Point Objective, is the maximum amount of data loss, measured in time, that you're willing to accept when restoring from the most recent recovery point.  In other words, it defines how often you need to create recovery points.

RPO and RTO are two key metrics that help set a goal to work towards when deciding how much to invest in your backup and recovery strategy.  Another item is retention, or how long you'd like to hold on to recovery points.  There is no single right answer here; every company is different.  Some have compliance standards they must adhere to; others have services that need 99.999% uptime.  I've worked with companies that want to keep only the most recent recovery point for most of their environment, while retaining every recovery point for a single critical service.
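As a back-of-the-envelope illustration (the numbers here are hypothetical, not from any particular customer), RPO caps the interval between recovery points, and together with retention it tells you how many recovery points you'll be storing, while RTO is simply a bound your estimated restore time must fit inside:

```python
import math
from datetime import timedelta

def recovery_points_needed(rpo: timedelta, retention: timedelta) -> int:
    """To honor the RPO, recovery points must be created at least every
    `rpo`; holding them for `retention` means storing this many points."""
    return math.ceil(retention / rpo)

def meets_rto(estimated_restore_time: timedelta, rto: timedelta) -> bool:
    """A recovery plan is only viable if the estimated restore time
    fits within the Recovery Time Objective."""
    return estimated_restore_time <= rto

# A 4-hour RPO with 7 days of retention means storing 42 recovery points.
print(recovery_points_needed(timedelta(hours=4), timedelta(days=7)))  # 42
# A 2-hour estimated restore satisfies a 4-hour RTO.
print(meets_rto(timedelta(hours=2), timedelta(hours=4)))  # True
```

Running numbers like these per service is a quick way to see how RPO and retention choices translate into storage you have to pay for.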

Coming up with an effective backup and recovery plan can be a headache.  Allow us to apply our expertise in designing and implementing a solution that exceeds your needs and is easy to maintain.  We have engineers ready to help protect your business and ensure you’re able to focus on the important tasks throughout the day.

Thursday, April 9, 2015

VM Storage Policies with Virtual SAN

Software-defined storage is driven by policies that automate provisioning and reduce complexity. The VASA storage provider exposes the capabilities of the backend storage to vSphere, and we create VM Storage Policies with the capabilities we wish to leverage.

With VMware Virtual SAN, we currently have five different capabilities that we can configure per policy:
  • Number of failures to tolerate
  • Number of disk stripes per object
  • Force provisioning
  • Object space reservation (%)
  • Flash read cache reservation (%)

Number of Failures to Tolerate
Number of failures to tolerate defines the number of host, disk, or network failures an object can tolerate. The default value is 1, and the maximum value is 3.

For n failures to tolerate, n+1 copies of the object are created and 2n+1 hosts contributing storage are required.

Failures to Tolerate    Storage Objects Created    Minimum Hosts Required
1                       2                          3
2                       3                          5
3                       4                          7
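The n+1 copies and 2n+1 hosts relationship can be expressed directly. This is a minimal sketch of the arithmetic, not a VMware API:

```python
def vsan_ftt_requirements(failures_to_tolerate: int) -> tuple[int, int]:
    """For n failures to tolerate, Virtual SAN creates n+1 copies of the
    object and requires 2n+1 hosts contributing storage. Returns
    (copies, minimum_hosts)."""
    if not 0 <= failures_to_tolerate <= 3:
        raise ValueError("failures to tolerate must be between 0 and 3")
    n = failures_to_tolerate
    return n + 1, 2 * n + 1

for ftt in (1, 2, 3):
    copies, hosts = vsan_ftt_requirements(ftt)
    print(f"FTT={ftt}: {copies} copies, minimum {hosts} hosts")
```

This makes the sizing consequence obvious: tolerating even one more failure costs another full copy of every object and two more hosts in the cluster.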

Number of Disk Stripes per Object
Number of disk stripes per object defines the number of physical disks across which an object is striped. The default value is 1, and the maximum value is 12. A value higher than 1 may result in increased performance, but it also results in higher capacity usage.

When reads miss the flash read cache, data will be fetched from physical disks, resulting in increased latency. Distributing objects across additional physical disks can distribute the burden on the physical disks and reduce latency when many read cache misses occur.

With Virtual SAN, all writes go to the SSD layer (write buffer) and the benefit of additional stripes is not easily recognized.

When the Virtual SAN infrastructure is properly designed, the number of disk stripes per object should remain at the default value of 1, because the flash read cache should not be overcommitted and read cache misses should be minimal.

Force Provisioning
When force provisioning is set to yes, the object will be provisioned even if the policy specified is not capable of being satisfied by the datastore. A warning will be displayed on the Summary tab of the virtual machine stating that the VM is not compliant with the storage policy. When additional resources become available in the cluster, Virtual SAN will bring the object to a compliant state.

Note: if the cluster does not have enough space to satisfy a reservation requirement of at least one object, the provisioning will fail even if force provisioning is set to yes.

The default value of no for the force provisioning option is acceptable for most production environments.

Object Space Reservation
The object space reservation option reserves capacity in the cluster for the storage object, expressed as a percentage of the object's logical size. This setting only applies to thin-provisioned virtual disks; thick-provisioned virtual disks automatically reserve 100% of their capacity. The default value is 0%, and the maximum value is 100%.

Flash Read Cache Reservation
The flash read cache reservation is the amount of flash capacity reserved on the SSD as read cache for the object; it is specified as a percentage of the logical size of the object. The default value is 0%, and the maximum value is 100%.

Note: this setting should only be used to address read performance issues after thorough testing. By default, Virtual SAN dynamically allocates read cache to storage objects based on demand; this is the most flexible and optimal use of resources. Over-provisioning cache reservation on a storage object can have a negative impact on the performance of all virtual machines.
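Both reservation settings are percentages of the object's logical size, so the reserved capacity is easy to estimate up front. A quick sketch with hypothetical sizes (again, plain arithmetic rather than a VMware API):

```python
def reserved_capacity_gb(logical_size_gb: float, reservation_pct: float) -> float:
    """Capacity reserved for a storage object as a percentage of its
    logical size. The same arithmetic applies to both object space
    reservation (capacity tier) and flash read cache reservation (SSD)."""
    if not 0 <= reservation_pct <= 100:
        raise ValueError("reservation must be between 0% and 100%")
    return logical_size_gb * reservation_pct / 100

# A 100 GB thin-provisioned disk with a 25% object space reservation
# reserves 25 GB of capacity up front; a 1% flash read cache reservation
# on the same disk would pin 1 GB of SSD read cache.
print(reserved_capacity_gb(100, 25))  # 25.0
print(reserved_capacity_gb(100, 1))   # 1.0
```

Summing this across all objects in a policy is a useful sanity check that reservations won't starve the cluster, particularly the flash tier.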

After designing VM Storage Policies to meet storage requirements such as performance and availability, we can deploy virtual machines with the appropriate policy and update existing virtual machines to take advantage of newly defined policies.


Wednesday, April 1, 2015

Q4 2014 Server Market Report

Gartner recently released its fourth-quarter assessments of revenues and shipments for the worldwide server market. In the report, Gartner found that worldwide server shipments increased 4.8 percent year over year, while revenue increased 2.2 percent over Q4 of 2013. For 2014 as a whole, server shipments grew 2.2 percent and revenue rose 0.8 percent. Gartner concluded that this growth can be attributed to the steady shift to hyper-scale datacenter deployments and the expansion of cloud service provider installations.

As it has for many years, HP continued to be the front-runner of the global server market during Q4, holding nearly 28 percent of the market share and seeing 1.5 percent growth year over year. Dell followed with 17.3 percent, IBM was third with 12.8 percent, and Lenovo came in at nearly 8 percent. Cisco grew its server business by 19.3 percent, giving it a 5.5 percent share and fifth place. With the sale of its x86 server business to Lenovo, IBM experienced a decline of 50.6 percent while Lenovo saw rapid growth with an increase of 743.4 percent.

x86 servers continue to be the dominant platform used for large-scale data center builds around the world. Though relatively small, the growth of integrated systems as an overall percentage of the hardware infrastructure market also contributed to the growth of the x86 server market for the year.

The geographical regions that saw the largest increases in unit shipments for the fourth quarter were the Middle East and Africa with 10.7 percent, Asia/Pacific with 9.1 percent, and North America rounding out the top three at 7.6 percent. As the server market begins to flatten in established markets, Lenovo has acknowledged that it is focused on finding ways to boost sales in emerging markets, particularly China.

Gartner also shares that the outlook for 2015 suggests the server space as a whole will continue to see modest growth throughout the year.  They note that server consolidation through virtualization continues to be the wild card and could put a strain on sales of new physical servers.