Thursday, December 19, 2013

Keeping Your Systems Safe from Holiday Phishing Scams

It’s the most wonderful time of the year… for seasonal phishing scams and cyber campaigns. Unfortunately, the holiday season is prime time for cyber criminals. The surge in online shoppers makes it an ideal time for cyber criminals to exploit those who are unaware of cyber risks and gain access to their personal information.

Not only are people targeted at home during their personal shopping, but employees are targeted at work as well.  Unfortunately, everyone with an email account is a potential target for cyber criminals.  These attempts are deceptive because the email looks like it comes from a legitimate source, say FedEx, but it is actually a scam designed to get targets to open malicious attachments or follow links to fake sites. Once the employee has opened the attachment or followed the link, they’ve enabled a security breach through which cyber criminals can capture corporate information.

Some common examples of holiday phishing scams include:
•    Carrier service delivery notifications
•    Requests to wire transfer money
•    Credit card application forms
•    Fraud alert notifications
•    Requests for charitable contributions
•    Holiday-themed downloads (screensavers, e-cards, etc.)

The best advice for employees trying to keep systems safe during the holidays is not to click on links or open attachments from any retailer. If the email is legitimate, you should be able to get the same information by going directly to the retailer’s website and signing into your account. Also, advise employees not to give out corporate information via email unless they are 100% sure of the source. The tiniest bit of doubt should stop them from sending the email; instead, they should verify the source through another channel and only then send the information in a secure format.
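For the more technically inclined, here is a minimal sketch, in Python, of one way to spot the classic phishing tell mentioned above: a link whose visible text shows one domain while the underlying href points somewhere else entirely. This is our own illustration, not a production tool, and the domains in the example are made up.

from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkChecker(HTMLParser):
    """Collect links whose displayed text names a different domain than the actual href."""
    def __init__(self):
        super().__init__()
        self.current_href = None
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.current_href = dict(attrs).get("href", "")

    def handle_data(self, data):
        if self.current_href and "." in data:
            shown = urlparse("//" + data.strip()).hostname or ""
            actual = urlparse(self.current_href).hostname or ""
            if shown and actual and not actual.endswith(shown):
                self.suspicious.append((data.strip(), self.current_href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.current_href = None

checker = LinkChecker()
checker.feed('<a href="http://tracking.holiday-deals-example.ru/x">www.fedex.com</a>')
print(checker.suspicious)  # [('www.fedex.com', 'http://tracking.holiday-deals-example.ru/x')]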

And it’s not just the employees who need to work to keep systems safe. From an organizational standpoint, it’s absolutely critical to keep all of your corporate devices, whether they are computers, laptops, tablets, or smartphones, up-to-date with the latest patches and fixes, and to install anti-virus software and firewalls to protect the data on your corporate network.

So boys and girls, ‘tis the season to protect your company against “Grinch-like” phishing scams. And the best gift your corporation can receive is peace of mind.

Thursday, December 12, 2013

Nimble Storage Named Best Hybrid Flash Storage Solution

In an award voted on by Modern Infrastructure readers, Nimble Storage’s iSCSI CS-Series arrays have been chosen as the top hybrid flash storage product on the market. Nimble Storage was selected because readers found the arrays to be “a great addition to their data centers, providing the right levels of performance and capacity and extensive software features, all at a reasonable cost.”

The unique performance and capacity benefits offered by Nimble Storage are the result of their patented Cache Accelerated Sequential Layout (CASL) architecture, which leverages dynamic, flash-based read caching as well as a unique write-optimized data layout. CASL also incorporates innovative features such as inline variable-block compression, integrated snapshots and zero-copy clones.

When it comes to software features, Nimble Storage offers an all-inclusive licensing model that incorporates all of the features an enterprise might need to consolidate and manage their data. Their InfoSight portal was particularly noted by readers as an impressive and powerful tool that provides administrators with a single pane of glass through which they can view alerts and triggers, run reports, and proactively plan for capacity growth.

Nimble Storage won this award over these other hybrid flash storage products:

Coraid EtherDrive SRX6300
Dell Compellent flash-optimized solution
EMC VNX
HDS HUS VM
HP 3PAR StoreServ 7400 Storage
IBM Storwize
NetApp FAS
Oracle ZFS Storage Appliance, ZS3 Series
Tegile Zebi
Tintri VMStore

For those considering Nimble Storage, you can further explore its features and benefits here.

Thursday, December 5, 2013

Protecting Your Network: Incident Management

In network security, one of the questions we frequently ask our clients and colleagues is, “if you had been hacked, would you know it?”  Surprisingly, the answer is often not the one it should be.  Some of the largest enterprise companies still place too much faith in the security of legacy systems. They place trust in hardware based on a name, and in the mindset that their data isn’t worth anything to anyone but them.  However, the reality is that systems are, and always will be, insecure.  No matter what we do, there will always be someone, somewhere, looking for a way to access information, data, and systems that they shouldn’t be accessing.

As Network Security Engineers, we do our best to stop them with firewalls, intrusion prevention systems, intrusion deception systems, malware and botnet detection, traffic monitoring, application layer firewalls, and end user education.  But the attacks keep coming and, in spite of our best efforts, some are still successful and go unnoticed.  Sometimes for days, weeks, or months.  Even a security conscious company that logs EVERYTHING may not notice the intruder right away if they aren’t reviewing each and every log file every single day and looking for specific things.

That’s what makes Event Correlation so important.  Event Correlation is generally a feature built into a good SIEM (Security Information & Event Management) system.  It looks at all of the log files from all of the systems in your network that are sent to the SIEM and determines exactly what’s happening and when.  A good SIEM can examine flows, event logs from application servers, and events from switches, firewalls, IPS, and web filters.  With all of that information, the SIEM creates a complete profile of everything an attacker touches.  This is all great for remediating a security issue AFTER an attack, but how can this help during an incident?

A good SIEM with an Event Correlation engine will also be capable of generating alerts based on certain behaviors that might indicate a possible security issue.  For example, if User A logs into a database from the office at 11:30am EST and a few minutes later logs into the same database from a remote location 10,000 miles away, there’s a very good chance that one of those logins is a security breach.  When that remote session is initiated, the process of tracking the session begins and an alert can be sent to the proper admin to disable access or revoke the connection.  Some SIEMs can even do this automatically when used with certain security devices.
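To make that example concrete, here is a simplified Python sketch of the “impossible travel” rule. It is illustrative logic only, not the correlation syntax of any particular SIEM, and the coordinates, timestamps, and speed threshold are assumptions chosen for the example.

from datetime import datetime
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    # Haversine great-circle distance between two points on Earth.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

MAX_SPEED_KMH = 900  # roughly the cruising speed of a commercial jet

def impossible_travel(login_a, login_b):
    hours = abs((login_b["time"] - login_a["time"]).total_seconds()) / 3600
    km = distance_km(login_a["lat"], login_a["lon"], login_b["lat"], login_b["lon"])
    return km > MAX_SPEED_KMH * max(hours, 0.01)

office = {"user": "usera", "time": datetime(2013, 12, 5, 11, 30), "lat": 42.96, "lon": -85.67}
remote = {"user": "usera", "time": datetime(2013, 12, 5, 11, 42), "lat": 1.35, "lon": 103.82}
if impossible_travel(office, remote):
    print("ALERT: possible credential compromise for", office["user"])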

Make sure that when your network is breached, you’re able to act quickly and that you have the proper Security Information & Event Management system in place to help you both stop the attacker in their tracks and prevent them from accessing your systems again.

Thursday, November 21, 2013

Is Flash the Future of the Datacenter?

Increasingly, corporations are turning to flash-based solid state drives (SSDs) to improve the performance of their datacenters because they eliminate bottlenecks in system performance that are inescapable with traditional mechanical disk drives.

While the speed of reading and writing data on mechanical disk drives is limited by the physics of the spinning platters, flash-based SSDs instead rely on an embedded processor to perform these operations. As a result, SSDs offer lower latency and the ability to transact many times more IOPS than is physically possible on spinning drives. Not only do SSDs offer speed and performance, but they also offer energy efficiency and reliability.

The issue with flash-based SSDs has always been the price tag. Their performance advantages do not come cheap. However, flash prices have been coming down at a rate of 30% per year, making SSDs a more affordable option. Additionally, these two drive types are not mutually exclusive – you can use a mix of SSDs and HDDs in your datacenter. This delivers the best of both worlds: the performance of flash and the cost-effectiveness of traditional disk drives.

When using a hybrid SSD and HDD combination, it is important to segregate your data by its usage characteristics. Your applications with critical performance requirements and “hot data” can be stored on the flash-based SSDs, while applications with less critical performance requirements and “cold data” can be easily stored on HDDs. The benefits of this technique can be seen with Nimble Storage arrays.
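As a rough illustration of that hot/cold split, the placement decision can be thought of as routing each dataset to flash or disk based on how often it is read. The Python sketch below is our own simplified example with an arbitrary threshold, not any vendor’s actual placement algorithm.

from collections import Counter

SSD_THRESHOLD = 100  # reads per day; an arbitrary cutoff for this illustration

reads_per_day = Counter({
    "sales_db/index": 5400,     # "hot" data, latency-sensitive
    "mail/archive_2011": 3,     # "cold" data, rarely touched
    "vdi/gold_image": 850,
})

placement = {name: ("SSD" if reads >= SSD_THRESHOLD else "HDD")
             for name, reads in reads_per_day.items()}
print(placement)
# {'sales_db/index': 'SSD', 'mail/archive_2011': 'HDD', 'vdi/gold_image': 'SSD'}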

Furthermore, key software vendors have recently added SSD support, including Microsoft and VMware. Microsoft added SSD support to the new Windows Server 2012 R2 and VMware has added enterprise flash storage support to vSphere 5.5 with the new vSphere Flash Read Cache.

So, is flash the future of the datacenter? It certainly seems to be with more and more enterprises investing in the technology. The biggest players in the industry have adopted flash. And now that the expense of the technology is decreasing at a steady rate, the newfound affordability of the technology will only help to accelerate its rise.

Friday, November 15, 2013

The Importance of Storage in a Virtualized Environment

Virtualization is proving to be an unstoppable force in the IT world, the result of a more software-defined approach to the datacenter. While the majority of enterprises begin virtualization at the server level, it is critical that their storage platform be optimized to match up with the needs of a virtualized IT infrastructure.

While server virtualization brings increased performance and management to the datacenter, it is critical to also consider storage because of the additional workload that virtualization creates versus a traditional I/O model.  Inadequate storage performance for virtualized workloads negatively impacts an enterprise's entire data infrastructure and puts a significant strain on its IT staff and end users.

As a result, virtualization has created a need for storage specifically designed to handle its unique challenges, including insufficient data protection, increased complexity and management, and inadequate performance.

Virtualized environments can suffer from insufficient data protection because of the need for VM-level data protection and availability, which traditional storage architectures were not designed to provide.  Storage-based snapshots are more efficient than software-based snapshots; however, storage-based snapshots require integration with the operating system or hypervisor to properly quiesce the data.

One of the benefits of virtualization is making IT departments more agile with centralized and unified management, so storage solutions should integrate with the virtualization management utility, creating a single pane of glass from which to manage the entire infrastructure.

As previously mentioned, inadequate performance is the heavy hitter when it comes to virtualization challenges for storage.  Many consider flash to be the best solution to this challenge, but all-flash solutions are extremely expensive and do not have the proven track record of traditional spinning disks. One solution is to purchase a hybrid flash storage platform that provides the performance of flash storage with the reliability of spinning disks.

Nimble Storage is a hybrid storage solution developed from the ground up as such, unlike traditional storage vendors that have attempted to bolt flash storage onto their existing architectures.  Nimble has a strong relationship with VMware and deep integration with vSphere and vCenter for storage-based snapshots, array-based replication with VMware’s Site Recovery Manager, and a single pane of glass for management using Nimble’s vCenter plug-in.

The lesson to be learned from all of this is, don’t get caught up in the momentum of virtualization and lose sight of how important storage is to the performance of your environment.  If your storage isn’t up to snuff, it will be impossible to achieve the full benefits offered by your newly virtualized environment.

Friday, November 8, 2013

Why We Became a CommVault Authorized Reseller

In our search to provide our customers with cost-effective, fully featured tools for business continuity, we’ve landed on CommVault as our flagship data protection and management solution. And we’re not the only ones drinking the CommVault Kool-Aid. CommVault is a proven leader in the field of data management, winning numerous accolades for their products, including the Disaster Recovery and Backup Product of the Year Award at the 2013 Storage Awards.

Some of the top reasons that we’ve selected CommVault:
-    Recognized as a “Leader” in the Gartner Magic Quadrant for three years running
-    Excellent integration and synergy with Nimble Storage, our lead storage partner
-    Excellent integration and synergy with VMware, our lead virtualization partner
-    Seamlessly manages both virtual and physical environments

Another big reason that we’re such fans of CommVault is their flagship product, Simpana software. Simpana is built on a single platform and unifying code base for integrated data and information management. This provides a very efficient and consistent user interface with seamless inter-module communication and interoperability.

Some of the other CommVault Simpana software features and capabilities that we find compelling are: broad storage array and application support, converged backup and archive functions for improved efficiency, highly efficient de-duplication and replication of backups, workflow automation, multi-tenancy support, and a customized web-based reporting capability with dashboards.

Thursday, October 31, 2013

Spooky! Cisco Raises Catalyst Switch Prices By Almost 70%

In the spirit of Halloween, let’s talk about the most recent Cisco experiment. While it has nothing to do with a boiling cauldron over the fire or rejuvenating Frankenstein, it is an interesting one nonetheless. A recent article from Network World cites Cisco documentation stating that the company plans to raise the price of some Catalyst switch models by as much as 67%.

Cisco’s planned price increase will not affect newer generation Cisco Catalyst switches. Cisco switches that will be affected by the pricing increases include select Catalyst 3000, 4000, and 6000 series switches, as well as their associated accessories and other related products.

The favored theory behind Cisco’s pricing experiment is that it is a strategic move intended to push Cisco users into buying newer-generation products. This is the opposite of the traditional approach, where manufacturers lower the price of previous-generation products when releasing a new generation.

Not only does Cisco’s price increase go against the grain as a strategy for a new-generation product release, but it is also against the current market trend. Volume and value engineering have led to a combination of silicon density and chipset consolidation that has resulted in lower networking switch hardware costs across the board.

While some say this may be the start of a new trend for other manufacturers looking to increase their profits, others say this move may drive Cisco users to turn to the secondary or refurbished Cisco market, or even to a competing network manufacturer like Juniper Networks. With the increases taking effect as soon as November 2nd, time will tell if Cisco’s experiment turns out to be a trick or a treat for the networking giant.

Source:
http://www.networkworld.com/news/2013/101713-cisco-catalyst-274956.html?hpg1=bn

Thursday, October 24, 2013

5 Things to Avoid When Deploying VDI

Virtual Desktop Infrastructure (VDI) adoption is steadily increasing. If you’re considering VDI for your organization, it’s important to not only take note of what to do when deploying VDI, but also what you definitely do NOT want to do.

Here are the top 5 things that you do NOT want to do when deploying VDI:
  1. Improper resource planning - Improper resource planning is a common mistake that often results from a lack of long-term vision. Don’t just plan for what your users will need at the start of the deployment; try to plan for years down the road.
  2. Assuming your security woes are gone - Don’t assume that VDI will eliminate your security woes because you now have the option of virtually wiping someone’s device of corporate data. As cool as that is, data breaches can still occur via a number of different methods. 
  3. Having unprepared users - Just like anything else your organization might roll out to your users, advanced notice is always best. Make sure that your users are fully aware of what they can and cannot do with your VDI, and be sure to follow up with them after the deployment. Silence is not always golden. 
  4. Not understanding user profiles - Since managing and troubleshooting user profiles will be a crucial aspect of your VDI, it is important to gain a good understanding of them - something you may not have needed when everyone was using physical desktops.
  5. Not having a backup plan - While one of the benefits of VDI is that you don't necessarily have to back up every individual desktop, it is important to include a full disk image backup and to aim for redundancy in every possible aspect in case of the dreaded, but not altogether uncommon WAN outage.
So, remember to keep in mind these 5 critical VDI mistakes when planning for your VDI implementation. While you could run into an issue by being under-prepared, no one’s ever heard of having an issue because you were over-prepared for something, right?

Thursday, October 17, 2013

Protecting Your Network: Intrusion Deception

The Internet is made up of websites.  We use these websites to shop, bank, research, work and relax.  These websites contain information on people, places, and things.  This means information on me and you.  That information is valuable to someone and it’s important for that information to remain secure.  In previous “Protecting Your Network” blogs, we’ve talked about the ways we secure that information using firewalls, IPS, application inspection, policies and good ol’ fashioned common sense.  Today, we’re going to talk about a new tool in the fight against those who want access to our data and information: Intrusion Deception.

What is “Intrusion Deception”?  Simply put, Intrusion Deception is counter-warfare on a technical level.  We’re feeding false information to attackers to make them think they’ve hit a goldmine, all the while gathering information on them, fingerprinting their devices and recording their methods in an effort to quickly identify them. This means that when they try to attack again, we can quickly apply countermeasures to stop them in their tracks. 

In the past, ambitious security engineers would stand up an unpatched web server on the Internet disconnected from anything else and allow it to be compromised.  They would then take it offline and perform a forensic analysis on the machine to see what had been done, from where, how, and what could be implemented from a security standpoint to prevent it from happening again - rinse and repeat.  This method was time-consuming, expensive and completely reactive.  The attackers were always multiple steps ahead in the battle.

Enter Junos WebApp Secure (formerly Mykonos), the first-of-its-kind Web Intrusion Deception System that detects, tracks, profiles and prevents attacks in real time.  Coupled with Junos Spotlight Secure, a cloud-based hacker device intelligence service, we now have a method of identifying and tracking attacks and proactively preventing them from happening.  Rather than relying on the reactive method of signature-based IPS / IDS or anti-virus / malware detection points, WebApp Secure relies completely on the malicious actions of the attacker acting on fictitious code embedded in a given website - code a normal user will never see, but an attacker will view as an easy entry point for gaining valuable information.
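To give a flavor of the deception technique, imagine a web server that plants a fake resource in its HTML that no legitimate user will ever request, and quietly flags any client that takes the bait. The Python sketch below is our own bare-bones illustration of that idea, not how Junos WebApp Secure is actually implemented.

from http.server import BaseHTTPRequestHandler, HTTPServer

TRAP_PATHS = {"/admin-backup.zip", "/.hidden-config"}  # bait; never linked for real users
flagged_clients = set()

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in TRAP_PATHS:
            # Only someone crawling the source for hidden goodies ends up here.
            flagged_clients.add(self.client_address[0])
            self.send_response(403)
            self.end_headers()
            return
        self.send_response(200)
        self.end_headers()
        # The bait is hidden in a comment that browsers never render or follow:
        self.wfile.write(b'<html><body>Welcome!<!-- <a href="/admin-backup.zip">backup</a> --></body></html>')

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), TrapHandler).serve_forever()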

Intrusion Deception.  Turning the tables on the attackers.

Thursday, October 10, 2013

Small Business Experiences Largest Growth for Cyber Attacks

According to a survey performed by the National Small Business Association (NSBA), 94% of small business owners are worried about cyber security. And the fact is, they should be. The Symantec Internet Security Threat Report released this April identifies businesses with fewer than 250 employees as the largest area of growth for cyber attacks in 2012. In fact, nearly 1 in 3 cyber attacks occurred in this business segment, a reflection of the common pattern in which cyber attackers hone their techniques on small businesses before moving on to more sophisticated attacks targeted at larger organizations.

In response to the rate of cyber attacks, the survey respondents stated that upgrade costs, security issues, and the time investment required to fix problems were their top three information technology concerns. This can all add up in terms of hard dollars, which is particularly alarming given that the NSBA also states that the average cost for small-business victims was $8,699 per attack.

While the government was previously the top economic sector targeted for cyber attacks, the focus has shifted. According to this survey, the most targeted economic sector for cyber attacks in 2012 was manufacturing. This may seem like a surprising shift, but the manufacturing sector has become an attractive target for cyber attackers due to the large amount of data that they produce. This data includes corporate intellectual property, technology, and designs, which can result in significant financial losses if stolen.

While the numbers are alarming, help is available for small businesses. Don’t wait until your security is breached; take the first step today because, as the saying goes, the best defense is a good offense. So if you’re concerned about the risk of cyber attacks and the impact on your business, reach out to us at Great Lakes Computer for a complimentary network security consultation.

Sources:
http://news.investors.com/100913-674477-survey-cyberattack-fears-rise.htm
http://boss.blogs.nytimes.com/2013/09/30/worry-about-cyberattacks-increases-survey-says/?_r=0
http://news.thomasnet.com/IMT/2013/07/23/should-manufacturers-use-big-data-to-prevent-cyber-attacks/

Thursday, October 3, 2013

The Nimble Storage “Special Sauce”

So what exactly is the “special sauce” behind Nimble Storage that made Great Lakes Computer decide to partner up with this particular storage vendor? Well, our decision was based on our belief that Nimble Storage has built a revolutionary architecture, one that may well influence the next generation of storage.

Over the past five years, storage vendors have made some impressive improvements in I/O performance, achieved almost entirely through using lots of high-RPM spindles, lots of cache, and a generous helping of SSD drives for “Tier 1” data. Automatic migration of data to the appropriate tier has been a very hot topic, and has generally been accomplished with proprietary algorithms with varying degrees of success. The downside of all this improved performance is that it comes with a pretty hefty price tag.

All of this high-powered hardware was being held back by the simple fact that I/O was still being done the same old way: small-block random I/O, resulting in a great deal of disk activity for every transaction.  Enter the Nimble Storage Cache Accelerated Sequential Layout (CASL) system.  Nimble Storage’s unique architecture, CASL, is the enabling technology that makes converged storage possible.

CASL starts with an inline compression engine that compresses data in real time with no added latency, reducing the amount of data stored by 2-4x.  This is possible because of two key Nimble Storage advantages:
  • Nimble Storage’s software was designed upfront to leverage the powerful multi-core CPUs used across the Nimble Storage array family for instant, high performance compression.
  • Nimble Storage is the ONLY primary storage array that natively supports variable-size blocks.
Because data blocks compress at different rates, fixed-size blocks become variable-sized after compression. This is a critical issue for high-performance compression. With the fixed-size blocks found in other storage architectures, you can only compress data that is offline or rarely accessed.

Next, CASL groups random blocks of data into larger segments before writing them to flash and disk.  These fully sequential writes maximize the performance (and lifespan) of flash, which does not handle randomly written data effectively.  Sequenced writes also maximize the performance of low-RPM drives, which do not like to seek but can handle fast sequential streams.  A copy of all active or “hot” data (and even “semi-active” data) is held in the large flash layer, enabling very fast random read performance. Inactive or “cold” data resides only on compressed high-capacity hard drives, further reducing costs. CASL’s intelligent index tracks hot data blocks in real time, adapting to hot spot and application workload changes within milliseconds, while even the most advanced tiered systems require a day or more.
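For readers who like to see the mechanics, the write-coalescing idea can be sketched in a few lines of Python: buffer incoming random writes, then lay them down as one large sequential segment and remember where each logical block landed. This is our own simplified illustration with an assumed segment size, not Nimble’s actual CASL code.

SEGMENT_SIZE = 4 * 1024 * 1024  # assume 4 MB segments for this illustration

class SequentialLayout:
    def __init__(self):
        self.buffer = []          # (logical_block, data) pairs awaiting flush
        self.buffered_bytes = 0
        self.next_segment = 0     # next free on-disk segment
        self.index = {}           # logical block -> (segment, offset)

    def write(self, logical_block, data):
        # Random writes land in a memory buffer first.
        self.buffer.append((logical_block, data))
        self.buffered_bytes += len(data)
        if self.buffered_bytes >= SEGMENT_SIZE:
            self.flush()

    def flush(self):
        # One large, fully sequential write to the next free segment.
        offset = 0
        for logical_block, data in self.buffer:
            self.index[logical_block] = (self.next_segment, offset)
            offset += len(data)
        self.next_segment += 1
        self.buffer, self.buffered_bytes = [], 0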

So very simply, here are a couple of take-aways about the Nimble Storage “Special Sauce”:
  1. SSDs are used only as read cache, never as a Tier One storage layer.  This takes advantage of the best quality of an SSD, fast reads, and does not rely on SSDs for writes, which they do not perform as quickly.  And because they are used as cache, the data on these SSDs is not vulnerable to loss, as it is already written to disk.
  2. Using cache-accelerated sequential writes greatly reduces the number of disk I/Os needed to write data and allows data to be written very efficiently in large sequential blocks.  What this means is that the disk is no longer the bottleneck.
In our brave new world of virtualized servers, virtualized desktops, virtualized storage, and virtualized networks, IOPS is king.  And Nimble Storage seems well-poised to deliver those IOPS in a compact, cost-effective package.

Thursday, September 26, 2013

Introducing Intel's New Datacenter Processor Family

Have you heard? Intel recently launched the powerful new Intel Xeon Processor E5-2600 v2 product family. Also referred to as the "Ivy Bridge EP” product family, the newest Intel Xeon processors are designed for the server, storage and networking infrastructure found in the datacenter.

Featuring up to 12 cores/24 threads and Intel’s leading 22-nm 3-D transistor technology, Intel Xeon E5-2600 v2 processors are said to offer up to 45% greater efficiency and up to 50% better performance when compared to previous Intel processor generations.

The improved efficiency and performance offered by the Intel Xeon E5-2600 v2 product family enable the rapid delivery of services, making these processors a perfect fit for high performance computing, cloud and enterprise segments, compute-intensive workloads, and in-memory database applications.

As the big brother of Intel’s Ivy Bridge E product family, the new Intel Xeon E5-2600 v2 Ivy Bridge EP processors offer further advantages, including two integrated memory controllers, faster memory support, faster QPI, increased cache size, and no integrated GPU.

Thursday, September 19, 2013

Great Lakes Computer is an Authorized Nimble Storage Reseller

We’re excited to announce our recent partnership with Nimble Storage, the leading provider of flash-optimized storage solutions. As a Michigan-based partner with national capabilities, we have the unique opportunity to leverage the solution set offered by Nimble Storage to round out our extensive datacenter solutions portfolio.

Since Nimble Storage started shipping its product in 2010, they have acquired more than 1,750 customers. The key behind Nimble Storage’s success is their patented Cache Accelerated Sequential Layout (CASL) architecture, which enables their hybrid storage arrays to simultaneously deliver fast performance and cost-effective capacity.

Targeting enterprises of every size, Nimble Storage offers arrays starting with the CS200 series and scaling up to the high-end CS400 series. Solutions from our existing partners, like CommVault and VMware, can be layered on top to fully leverage virtualization and business continuity capabilities.

Through their technological and integration capabilities, Nimble Storage enables faster application performance, enhanced backup and disaster recovery, and stress-free operations—all while lowering TCO. As a result, Nimble Storage has proven to be an exceptional platform for all mainstream business applications and environments, including Microsoft applications like Exchange, SharePoint, and SQL Server, as well as Oracle, data protection, virtual desktop infrastructure (VDI), and server virtualization.

So when it comes down to it, the real reason we chose to partner with Nimble Storage is not only due to their unique technology, but because they fit our business model very well. Nimble Storage provides better technology at a greater value, which is what we’ve been about since day one.

Friday, September 13, 2013

Great Lakes Computer at GrrCON 2013

Kicking off on Thursday, September 12th and lasting through Friday, the 13th, GrrCON is an information security conference held in the Midwest – specifically, our hometown of Grand Rapids, Michigan.  We were a Silver sponsor of GrrCON this year, our first year ever attending the show. From our booth in the Solutions Arena, we got to experience GrrCON first-hand.


Offering four tracks, Wasp, Mendax, Whistler and Phineas Freak, GrrCON offered sessions ranging from “Establishing a Vulnerability and Threat Management Program” to “The Droid Exploitation Saga, All Over Again!” and more. Through our Juniper Networks partnership, security expert Keir Asher gave a presentation on our behalf entitled “Outside the Box: A Discussion around Alternative Security Approaches.”

From our booth, we got a chance to speak with security professionals who were both local and out-of-state. These professionals came from a variety of industries, such as Medical, Legal, Finance, Insurance – you name it. GrrCON attracted the American melting pot (or stir fry, to use a more current phrasing) of individuals, and we are grateful for the chance to have met them all!

So since it was our first year, the question was, will we be back next year? And the answer is a resounding yes. GrrCON, be ready to see GLC as a sponsor again in 2014. We can’t wait!

Thursday, September 5, 2013

New from VMworld - vSphere 5.5 and vCloud Suite 5.5

Announced at VMworld 2013, VMware has released updates to both their vSphere virtualization platform with Operations Management and their vCloud Suite, designed to manage vSphere-based private clouds. Both VMware releases feature new and improved capabilities as well as new product integrations.

The updated VMware vSphere 5.5 with Operations Management offers many improvements targeted at enhancing support for business-critical applications. Offering increased customization and improved high availability functionality, vSphere 5.5 also delivers double the scalability of the previous version in terms of virtual and logical CPUs per host and maximum memory per host.  vSphere 5.5 increases the maximum VMDK file size from 2 TB to 62 TB. The vSphere virtual graphics processing unit (vGPU) feature has been expanded to include AMD- and Intel-based GPUs in addition to the previously supported NVIDIA-based GPUs, and vGPU supports Linux operating systems with vSphere 5.5.

vSphere Flash Read Cache is also introduced with vSphere 5.5.  vSphere Flash Read Cache enables the pooling of multiple flash-based devices into a single consumable resource. Virtual machine performance is enhanced by accelerating read-intensive workloads. vSphere Flash Read Cache is transparent to virtual machines, so no modifications are required to applications or operating systems.

As an integrated cloud infrastructure solution, the VMware vCloud Suite 5.5 update features new releases of its component products. For instance, vCloud Suite 5.5 includes new releases of VMware vCloud Networking and Security, VMware vCenter Site Recovery Manager, VMware vCloud Director and VMware vCloud Connector.

The updated VMware vCloud Suite 5.5 also features updates to the VMware vSphere 5.5 virtualization platform. vSphere 5.5 is included within the vCloud Suite, which takes advantage of the new vSphere App HA feature, Big Data Extensions, and Flash Read Cache capabilities.

Both the vSphere 5.5 and vCloud Suite 5.5 upgrades will be available at no cost to customers with an existing support contract.

Monday, August 26, 2013

Protecting Your Network: The Policy

In order to successfully defend any network from potential attackers, it’s important to take a layered approach to security. There are many different layers to network security ranging from physical security to network device security to user device security to logging and analysis.

The most important part of any network security implementation should be the written Network Security Policies. Network Security Policies are written with specific items in mind. Some examples of Network Security Policies would be an “Acceptable Use” policy or “Equipment Disposal” policy.  These policies contain rules and regulations and can provide instruction for performing certain tasks within an organization. They are meant to ensure that people follow a designed procedure to prevent any type of breach that could come from not following a specific policy. Additionally, policies must be created and / or updated as technology changes, and those policies must be reviewed and understood by pertinent users in the organization. Having policies is pointless if no one knows you have them or if they don’t understand them.

Network Security as a whole can be a daunting task for even the most security-conscious engineers, managers, and executives. There are a great many pieces and parts that mesh together to form the overall security infrastructure for any business. Think of network security as being much like a medieval knight’s armor. If there are “chinks” in the armor, you may or may not be able to see them - but they are still there. It’s really not a matter of “if” someone will find those “chinks”, but more a matter of “when”. The only major difference between armor and network security is that you know instantly if an attacker penetrates your armor. You may not know an attacker has penetrated your network until it’s too late.

Thursday, August 22, 2013

Great Lakes Computer Achieves Healthcare Certifications from Juniper Networks and VMware

Great Lakes Computer has recently achieved Healthcare certifications from Juniper Networks and VMware. As one of only eleven partners throughout the U.S. to achieve the Healthcare Accreditation from Juniper, Great Lakes Computer has demonstrated in-depth knowledge of the unique challenges facing the healthcare industry. The accreditation also reflects a comprehensive understanding of how Juniper Networks Healthcare solutions can be leveraged to cost-effectively deliver the extra capacity, security, and capabilities healthcare organizations require across both wired and wireless infrastructures.

Great Lakes Computer has also attained a Healthcare specialization from its virtualization partner, VMware, in recognition of their expertise in the Healthcare marketplace. As an expert in leveraging VMware virtualization and cloud solutions for the benefit of Healthcare customers, Great Lakes Computer is one of ten partners in Michigan and 142 nationwide to achieve the Healthcare specialization from VMware.

Through these accreditations, Great Lakes Computer offers the specific expertise to address issues such as:
  • Unreliable or unsustainable legacy networks resulting in loss of connection, lack of coverage, and lack of support
  • The influx of tablets and smartphones placing additional burden on the network and increasing the risk of security breaches
  • Significant increases in data traffic resulting in slow or failed applications, decreasing the speed of response times for clinicians
The healthcare certifications through Juniper Networks and VMware are about more than simply understanding which products allow healthcare organizations to meet compliance and budget requirements; they are about how to enable better patient care. And for many healthcare organizations, the benefit of having a technology partner that truly understands their specific issues is unparalleled.

For more information, check out the official press release here.

Thursday, August 15, 2013

Increase Efficiency with Veeam Backup and Replication v7

Announced in June, the new and improved Veeam Backup and Replication version 7 was engineered to make backups of virtual machines more efficient. Designed for both VMware and Microsoft virtualized environments, Veeam Backup and Replication v7 features seven major modifications and approximately 50 minor tweaks.

Of all of these changes, there are three that seem to be the most promising in terms of expanding Veeam’s existing customer base: built-in Wide Area Network (WAN) acceleration, long-term retention to tape, and backup from storage snapshots.

The built-in WAN acceleration offered by Veeam Backup and Replication v7 boasts an improvement of up to 50 times the speed of a traditional file copy across the WAN by decreasing the amount of data sent across the connection. By pre-determining what data blocks are already there and sending less data, the process is much quicker and more efficient. 
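Conceptually, that block-level reduction works a lot like the Python sketch below: hash each block of the backup and only ship blocks the target repository has not already seen. This is our own simplified illustration of the general idea, not Veeam’s implementation.

import hashlib

def blocks(data, size=4096):
    for i in range(0, len(data), size):
        yield data[i:i + size]

def send_over_wan(backup_data, remote_block_hashes):
    """remote_block_hashes: digests of blocks already present at the target site."""
    to_send = []
    for block in blocks(backup_data):
        digest = hashlib.sha256(block).hexdigest()
        if digest not in remote_block_hashes:
            to_send.append(block)            # new data, must cross the WAN
            remote_block_hashes.add(digest)
        # otherwise only the tiny digest needs to be referenced, not the block
    return to_send

payload = b"A" * 8192 + b"B" * 4096          # two identical blocks plus one unique block
sent = send_over_wan(payload, set())
print(len(sent), "of 3 blocks actually transmitted")  # 2 of 3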

The primary advantage offered by Veeam Backup and Replication v7’s support for long-term retention to tape is the ability to tier levels of backup by archiving long-term data and moving it to cheaper storage. This feature was largely brought about by customer demand, and allows Veeam to perform this function in house, where in the past they would have had to work with a partner.

Last but not least, Veeam Backup and Replication v7 dramatically improves recovery point objectives (RPOs) with backup from storage snapshots. Backup from storage snapshots also significantly reduces the impact that backup activities have on the production environment by enabling a backup process to keep a virtual machine snapshot for only a brief moment, resulting in instant VM snapshot commit. Unlike existing snapshot backup solutions, Veeam leverages VMware changed block tracking (CBT) to greatly reduce backup time for incremental backups. Currently, Veeam’s backup from storage snapshot feature is only available with HP StoreVirtual Storage and HP 3PAR StoreServ Storage products; however, Veeam’s engineers are hard at work with other storage vendors to expand compatibility.

Thursday, August 8, 2013

What's New in VMware Horizon Workspace Version 1.5

VMware has recently released Horizon Workspace version 1.5, a move designed to simplify the experience for both the end user and the IT guy in an increasingly mobile world. This release extends the simplicity offered by the pre-existing single, aggregated workspace combining data, applications and desktops. It also enables VMware Horizon Workspace to aid in controlling the complexity that has arisen from the BYOD explosion.

VMware Horizon Workspace 1.5 makes it a much simpler task to support the mobile workforce and introduces the advantage of a highly integrated mobile management platform. This integrated management interface is designed to support Android devices, with the goal of supporting Apple iOS devices in iOS 7 in the future.

Since the explosion of BYOD has resulted in a variety of mobile device models and platforms, from smartphones to tablets and Android to Apple, IT has been burdened with more components to manage, adding further layers of complexity and stress. Additionally, Android is notoriously difficult to manage. However, Horizon Workspace easily allows IT administrators to standardize the management of Android devices and alleviates this particular IT burden.

VMware Horizon Workspace is available as a virtual appliance that can easily be deployed on site and integrated with existing enterprise services. Other key advantages offered by the latest version of VMware Horizon Workspace include integration with VMware Horizon Mobile, a policy management engine designed to consolidate, model and rationalize policies across all components, and support for mobile applications that more easily allow IT administrators to entitle and manage applications.

Thursday, August 1, 2013

Protecting Your Network: The “New” Firewall

There are different types of firewalls out there, but they mainly perform some variation of the following tasks: filtering packets based on port, performing “stateful” packet filtering, and performing application-level filtering.  Depending upon manufacturer and age, your firewall may be capable of performing one or all of these functions.

The best firewalls perform additional functions such as IDS (intrusion detection) / IPS (intrusion prevention), gateway anti-virus, application-level inspection (also called application firewalling), as well as standard features like NAT (Network Address Translation) / PAT (Port Address Translation), routing and VPN.  In small businesses, it’s possible to have a single device that performs all of these functions.  For larger enterprises, these functions may be separated onto different hardware devices to increase performance.

The problem with basic firewalls, ones that only perform packet filtering and stateful filtering, is that attackers frequently ride in over the very ports and rules that permit legitimate traffic, injecting malicious traffic into the allowed stream.  One example is a simple web server.  A traditional firewall will allow traffic from the Internet to TCP port 80 for web access to this site.  Depending upon what the site is, what hardware it actually runs on, how it’s written / coded, and whether it has a connection to another resource (such as an e-commerce site connecting to a SQL server database for customer records), there may be known or unknown vulnerabilities affecting that server.  A common vulnerability in this example would be SQL injection.  A traditional firewall would allow this type of traffic because it is unable to differentiate between a normal user of the website and a malicious user injecting their own code into the data stream.

Next-Generation Firewalls help to detect this type of attack by actively inspecting and reading the contents of every packet that is allowed into a network.  These firewalls can determine, based on signature or some other definition, which traffic is legitimate and which is not.  A lot of attacks can be thwarted by using application-level inspection.  These firewalls can also look for anomalies in traffic patterns; that is, traffic that does not match a known good or bad signature.  It then performs an action based on the traffic not matching any known pattern.  This feature offers protection against unknown and “zero-day” exploits.  The bottom line is that the old method of installing a firewall, setting up NAT, and allowing certain port traffic in while blocking other ports, is no longer adequate to protect a network.
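As a concrete, and deliberately simplified, illustration of the difference, consider the two HTTP requests in the Python sketch below. To a port-based firewall they are indistinguishable: same destination, same port 80, same established session. An application-aware engine that decodes and inspects the payload can flag the second one. The requests and signatures are made-up examples for this post, not actual product rules.

import re
from urllib.parse import unquote

normal_request    = "GET /products?id=42 HTTP/1.1"
malicious_request = "GET /products?id=42%20OR%201=1;--%20 HTTP/1.1"

# Toy signature set in the spirit of application-level inspection:
SQLI_SIGNATURES = [r"(?i)\bOR\s+1=1\b", r"(?i)\bUNION\s+SELECT\b"]

def looks_like_sqli(http_request):
    decoded = unquote(http_request)  # undo URL encoding before matching
    return any(re.search(sig, decoded) for sig in SQLI_SIGNATURES)

print(looks_like_sqli(normal_request))     # False
print(looks_like_sqli(malicious_request))  # True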

Thursday, July 25, 2013

HP Insight Control Amplifies ProLiant Server Management

HP Insight Control is server management software that gives system administrators the tools to fully utilize the management capabilities built into HP ProLiant servers.  With Insight Control, you can deploy new ProLiant servers quickly and reliably, catalog your environment, remotely manage ProLiant servers, and proactively monitor server health.  You can reduce unplanned downtime and deliver stable IT services to your users.  Insight Control enables you to quickly respond to pressing business needs by facilitating the rapid rollout of new IT services. HP ProLiant servers with Insight Control enable you to simplify day-to-day operations and get the most out of your investment in server assets and staff.

Some of the tools that are available within the HP Insight Control management software suite include HP Lights Out, HP Systems Insight Manager, and the extension for VMware vCenter Server.

With HP Lights-Out (iLO), you can take control of your ProLiant servers at any time and from any location. iLO Advanced and Insight Control offer the Remote Console and Virtual Media features; no more connecting a keyboard, mouse, and monitor to troubleshoot or configure a server.  You can also share the Remote Console session with up to six administrators to collaborate in real time.

Monitor system health and performance through HP Systems Insight Manager. Not only does this allow you to proactively manage your ProLiant servers with a simple, integrated interface, but you can also forward alerts to other enterprise management solutions. Insight Control allows you to maintain consistency with firmware and software baselines using Systems Insight Manager and HP Software Update Manager.  Insight Control also enables you to proactively manage virtual machine workloads based on pre-failure conditions from underlying host and storage systems.

The HP Insight Control extension for VMware vCenter Server enables converged infrastructure to be the best infrastructure solution for VMware administrators.  From a single pane of glass, you can provision, monitor, and manage HP servers, storage, and networking infrastructure.  Invoke HP management tools, such as Systems Insight Manager, iLO, Virtual Connect Manager, and Onboard Administrator directly from the VMware vCenter interface.  Insight Control with VMware vCenter allows you to visually trace and monitor your Virtual Connect network end-to-end without leaving VMware vCenter.

Through these tools and more, HP Insight Control assists in managing your entire server infrastructure.  Reduce staff workload by offloading deployment and compliance checks using Insight Control.  Integration with VMware vCenter provides a powerful, centralized management solution for your virtual environment and the physical infrastructure supporting it. All in all, HP Insight Control management software enables users to respond to critical business needs by rapidly provisioning ProLiant servers while meeting compliance requirements.

Tuesday, July 9, 2013

Sell, Recycle, Destroy - Pick Your Asset Recovery Option

What do you do when you’re ready to get rid of your old IT equipment? You basically have three options, right? You can trash it, you can try to sell it and recover some of your investment, or you can recycle it.  You can start with Google and try typing in something like “sell my HP”, or you can come to Great Lakes Computer, and we can give you a hand with any or all of those options.

The first and third options are the same from our perspective. We have no problem disposing of your unwanted, no-value IT equipment. We accomplish this through our partnership with an R2-Certified Recycler - Valley City. Any IT equipment that is deemed to carry little to no market value will be recycled so that it is disposed of in an environmentally responsible manner.

As for the second option, Great Lakes Computer purchases the following equipment to stock our 30,000 sq ft integration facility: HP (ProLiant/Integrity), IBM (X/P Series), Dell (PowerEdge), Cisco, HP ProCurve, and HP, IBM and Dell storage.  Through our 27 years of market knowledge specific to the used IT hardware industry, we are able to offer aggressive pricing.

In addition to these options, we also offer hard drive erasure and/or destruction to ensure data security. Great Lakes Computer is experienced in providing these services to our customers and will provide documented confirmation in the form of a certificate of completed service.   So whether you decide to toss it or sell it, Great Lakes Computer can help with your old IT equipment.

What Exactly is a Software Defined Datacenter?

As one of the hottest topics in the industry, the Software Defined Datacenter gets a lot of buzz. But what exactly is a Software Defined Datacenter? And just what are the true benefits that this technology can provide for your datacenter? We’ll walk you through why this industry revolution is truly worth the hype.

Our best, boiled down, definition of the Software Defined Datacenter is, “A datacenter in which the entire infrastructure is virtualized, allowing for control of the datacenter to be automated by software and the infrastructure to be easily delivered as a service.” As a result, the services that used to require hardware can now be performed using software alone.

There are four main components of the Software Defined Datacenter: network virtualization, server virtualization, storage virtualization, and a business logic layer. Basically, the virtualization components are what we mean when we say your entire infrastructure needs to be virtualized. The business logic layer is also required in order to translate application requirements, service level agreements (SLAs), policies and any cost considerations.

In addition to the easy delivery of infrastructure as a service (IaaS), another key benefit offered by the Software Defined Datacenter is that it provides support for both legacy enterprise applications and emerging cloud technologies. This allows organizations to continue running their existing applications, but also take advantage of cutting-edge technologies if and when they so desire.

If a Software Defined Datacenter sounds a lot like the cloud, that’s because much of what the cloud promises to do, the Software Defined Datacenter promises to do as well. In fact, the Software Defined Datacenter solution offered by VMware is referred to as the vCloud Suite. Coincidence? Not even close.

Thursday, June 27, 2013

What is Dynamic Disk Pooling?

When IBM introduced firmware release 7.8x in the fall of 2012, there were a number of new technical advances, but by far the most exciting was Dynamic Disk Pooling.  With the general release of firmware level 7.84, many of these capabilities became available not only to new IBM System Storage DS3500 purchases, but to installed systems as well.  DDP does not require a license, nor does it require a purchase.  As soon as a DS3500 system is upgraded to 7.84 or later (7.86 is the latest), Dynamic Disk Pooling is available.

DDP dynamically distributes all data, spare capacity, and protection information across a pool of drives.  That pool may be all drives in the system, or a subset, such as all of the SAS 15K drives.   Effectively, DDP is a new type of RAID level, built on RAID 6.  It uses an intelligent algorithm to define where each chunk of data should reside.  In traditional RAID, drives are organized into arrays, and logical drives are written across stripes on the physical drives in the array.  Hot spares contain no data until a drive fails, leaving that spare capacity stranded and without a purpose.   In the event of a drive failure, the data is recreated on the hot spare, significantly impacting the performance of all drives in the array during the rebuild process. 

With DDP, each logical drive’s data and spare capacity is distributed across all drives in the pool, so all drives contribute to the aggregate IO of the logical drive, and the spare capacity is available to all logical drives.  In the event of a physical drive failure, data is reconstructed throughout the disk pool.  Basically, the data that had previously resided on the failed drive is redistributed across all drives in the pool.  Recovery from a failed drive may be up to ten times faster than a rebuild in a traditional RAID set, and the performance degradation is much less during the rebuild.
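A highly simplified way to picture that distribution is the Python sketch below: each logical drive’s chunks are scattered pseudo-randomly across every drive in the pool, so when one drive fails, only a fraction of the chunks are affected and they can be rebuilt in parallel onto the spare capacity of all surviving drives. This is our own illustration of the concept, not IBM’s actual placement algorithm, and the pool size and stripe width are assumptions.

import random

POOL_DRIVES = [f"drive{n}" for n in range(24)]   # assume a 24-drive pool
STRIPE_WIDTH = 10                                # assumed drives per chunk stripe

def place_chunks(logical_drive, chunk_count, seed=0):
    """Return {(logical_drive, chunk): [drives holding pieces of that chunk]}."""
    rng = random.Random(seed)
    return {(logical_drive, chunk): rng.sample(POOL_DRIVES, STRIPE_WIDTH)
            for chunk in range(chunk_count)}

layout = place_chunks("LUN1", chunk_count=1000)

# When one drive fails, its chunks are rebuilt from the surviving pieces onto
# the spare capacity of every other drive, rather than onto a single hot spare.
failed = "drive7"
affected = [c for c, drives in layout.items() if failed in drives]
print(len(affected), "of 1000 chunks to redistribute across", len(POOL_DRIVES) - 1, "drives")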

Apart from improved data protection, performance, and failure recovery, there are other benefits from DDP as well.   Administration is significantly easier.  When adding capacity, there is no more agonizing over whether to create a new RAID set or expand a current RAID set, and no more question about where the optimum placement is. You just put in the drive and add it to a pool.  The total capacity of the pool has now been increased, and you can then proceed to expand your logical drive, or create a new logical drive as needed.  Using disk pools instead of RAID sets makes disk utilization much more efficient and avoids issues found with traditional RAID, such as islands of stranded capacity that can’t be recovered.

With the proliferation of large-format drives, traditional RAID is becoming progressively more difficult to manage, and the long rebuild times associated with large SATA drives create unacceptably long windows of vulnerability (as much as 4.5 days with a 3TB drive in an operational array).  Over the coming months and years, technology like DDP will continue to replace traditional RAID because of these issues.
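As a quick back-of-envelope check on that 4.5-day figure, using only the numbers quoted above, a 3 TB drive rebuilt over 4.5 days works out to an effective rebuild rate of under 8 MB/s, which illustrates how little bandwidth a busy production array can spare for a traditional RAID rebuild.

# Back-of-envelope arithmetic only, using the figures quoted in the post.
drive_bytes = 3 * 10**12           # 3 TB drive
rebuild_seconds = 4.5 * 24 * 3600  # 4.5-day rebuild window
rate_mb_s = drive_bytes / rebuild_seconds / 1e6
print(f"effective rebuild rate: {rate_mb_s:.1f} MB/s")  # ~7.7 MB/s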

Thursday, June 20, 2013

Protecting Your Network: A Firewall Isn’t Enough

If I had a nickel for every time I heard a networking professional say “I have a firewall, so my network is secure”, suffice it to say, I’d have a LOT of nickels.  10 years ago, firewalls were one of the primary ways you protected your network.  But even then, it was difficult for the engineers to convince their managers, and then for their managers to convince the executives, that a firewall was necessary.  They were costly and the mentality was “we don’t have an issue, why do we need it?”

Today, engineers, specifically security-conscious engineers, tend to stay current on networking security, vulnerability assessment, and the “how data might be compromised” - which is a daunting and never-ending task.  A Security Engineer anticipates how someone might gain unauthorized access to the network, and therefore puts methods in place to prevent, or at least log and report, said access.  These things take money and/or time to implement.

Managers will generally focus on both the benefits and cost of the engineer-proposed solution.  They generally have a good understanding of what the engineer is proposing and the benefits of such a solution.  They also understand what the executive level will and won’t do in relation to authorizing these types of projects, whether it’s funding or time-related constraints. 

For those security projects that make it to the next level, the Executives tend to focus on the primary business objectives of the company rather than focusing on individual business units, such as Networking or Security.  As a result, most security-related projects are done “post-mortem”, i.e. after an incident has caused an issue within the business.

The fact is that almost all businesses spend time and money on network security only after they realize there’s an issue, or if they are forced to by a regulatory body for compliance, i.e. HIPAA, SOX, PCI DSS, etc.  According to the 2013 Data Breach Investigations Report (1), 92% of all breaches were performed by external sources originating OUTSIDE the firewall, while 14% originated INSIDE (and 1% originated from trusted partners).  What this tells us is that a firewall is simply no longer enough to adequately protect a network.


(1) http://www.verizonenterprise.com/DBIR/2013/ - 2013 Data Breach Investigations Report

Thursday, June 13, 2013

VMware vSOM: What's the Big Deal?

The release of VMware’s vSphere with Operations Management, or vSOM, licensing option has created a bit of an industry buzz. Whether it’s concern over how vSOM will impact vCenter Operations Management Suite (vCOPS) licensing or excitement over the advantages of the new licensing option, vSOM has caught the attention of VMware-virtualized organizations.

First of all, the new vSOM licensing option consists of VMware vSphere Standard, Enterprise or Enterprise Plus edition combined with VMware vCOPS Standard Edition for every host in your virtual infrastructure. This makes it an attractive licensing option compared to previous vCOPS licensing, because it allows you to pay per CPU for both vSphere and vCOPS. Additionally, the price point for vSOM is below the price point of any vSphere edition purchased separately and combined with Operations Management.

Secondly, in addition to cost, vSOM enhances operational efficiency. vSOM enables datacenters to improve their VMware capacity utilization by up to 40% because stranded capacity can be reclaimed and utilized efficiently. Reducing the amount of time spent on diagnostics and problem resolution, vSOM also provides, on average, a 36% reduction in application downtime.

There are three ways users can obtain vSOM. They can purchase the new vSOM licensing option outright, or they can obtain vSOM by upgrading their current vSphere licenses; as a tip, upgrading existing vSphere licenses will be significantly less costly than purchasing vSOM outright. Last but not least, vSphere with Operations Management Acceleration Kits are also available for purchase.  The new Acceleration Kits include a vCenter Server Standard license, six vSphere licenses in the edition of the user’s choice, and vCenter Operations Manager Standard licensing with each processor license.

Thursday, June 6, 2013

Driving Innovation with HP ProLiant Gen8 Servers

Now that HP ProLiant G7 servers have officially gone end-of-life, it’s time to take a good, hard look at their successor, the HP ProLiant Gen8 server line. What are the key differences between the two generations, and what advantages does the new generation of HP ProLiant blade and rackmount servers provide? Well, with over 150 design innovations over the G7 architecture, there are definitely plenty of improvements to talk about.

Beginning with the improved technologies and features, HP ProLiant Gen8 servers come equipped with cutting-edge Intel Xeon E5 processors for superior energy efficiency, reduced I/O latency, and improved overall performance. Gen8 servers also feature LRDIMM memory, which reduces memory-bus loading for enhanced speed and supports capacities of up to 768GB.

Along with enhanced performance, Gen8 servers also feature improved server management through the new HP Integrated Lights-Out 4 (iLO 4). HP iLO 4 features the new HP Agentless Management, HP Active Health System, HP iLO Mobile App, HP iLO multi-language support, and HP Sea of Sensors 3D. These features deliver enhanced thermal and power control as well as secure server management and simplified server deployment.
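For administrators who would rather script against iLO 4 than click through the web interface, one common approach is to pull readings over IPMI, which iLO 4 supports. The snippet below is only a minimal sketch, assuming IPMI-over-LAN has been enabled on the iLO and the ipmitool utility is installed; the address and credentials are placeholders, not real values.

    # Minimal sketch: read HP iLO 4 sensor data over IPMI using ipmitool.
    # Assumes IPMI-over-LAN is enabled on the iLO and ipmitool is installed;
    # the address and credentials below are placeholders.
    import subprocess

    ILO_HOST = "ilo.example.com"   # hypothetical iLO address
    ILO_USER = "Administrator"     # hypothetical account
    ILO_PASS = "changeme"

    def read_sensors():
        """Return the raw sensor listing reported by the iLO's IPMI interface."""
        result = subprocess.run(
            ["ipmitool", "-I", "lanplus", "-H", ILO_HOST,
             "-U", ILO_USER, "-P", ILO_PASS, "sensor", "list"],
            capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        for line in read_sensors().splitlines():
            if "Temp" in line:     # show only the thermal readings
                print(line)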

Designed to decrease downtime, failures and data loss, HP ProLiant Gen8 servers also feature Integrated Lifecycle Automation with HP Intelligent Provisioning, Dynamic Workload Acceleration, Automated Energy Optimization, and Proactive Insight Experience and Insight Architecture Alliance. These features combine to offer incredible server intelligence and automation for an intuitive, optimized server experience.

The most popular new models within the HP ProLiant Gen8 server family are: HP BL460c Gen8 blade servers, 1U HP DL360p Gen8 rack servers, and 2U HP DL380p Gen8 rack servers.

Thursday, May 30, 2013

What is KVM?

If you’ve been around the industry for a while, the first thing that comes to mind when someone says “KVM” is the “Keyboard-Video-Mouse” switch that allows you to manage multiple systems through a single console. While that is still valid, the more current definition of KVM is “Kernel-based Virtual Machine”. So let’s talk about what that is, and what it means to today’s IT professional.

The kernel component of KVM was first included in the mainline Linux kernel, version 2.6.20, released in February 2007. KVM is a Linux virtualization solution that allows multiple Windows or Linux virtual machines to run on a single physical server. It has enjoyed continued improvement and development since its original introduction, and is now generally considered a mainstream virtualization hypervisor. In fact, KVM is able to stand on its own against such competitors as VMware vSphere, Microsoft Hyper-V, and Citrix XenServer.
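To give a feel for what managing a KVM host looks like in practice, here is a minimal sketch that uses the libvirt Python bindings to connect to a local KVM hypervisor and list its running virtual machines. It assumes the libvirt daemon is running and the libvirt-python package is installed; it is illustrative only and not tied to any particular distribution.

    # Illustrative sketch: list running virtual machines on a local KVM host
    # via the libvirt Python bindings. Assumes libvirt-python is installed
    # and the libvirt daemon is managing a qemu/KVM hypervisor.
    import libvirt

    def list_running_vms(uri="qemu:///system"):
        conn = libvirt.open(uri)              # connect to the local KVM hypervisor
        if conn is None:
            raise RuntimeError("Failed to open connection to %s" % uri)
        try:
            for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
                info = dom.info()             # [state, maxMem, memory, nrVirtCpu, cpuTime]
                print("%-20s vCPUs=%d mem=%d MiB" %
                      (dom.name(), info[3], info[2] // 1024))
        finally:
            conn.close()

    if __name__ == "__main__":
        list_running_vms()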

Red Hat, one of the predominant Linux vendors, offers KVM-based virtualization through subscriptions to several of its distributions, including single-guest, four-guest, and unlimited-guest versions of RHEL (Red Hat Enterprise Linux). It also offers a subscription for a lighter-weight, bare-metal hypervisor distribution known as RHEV (Red Hat Enterprise Virtualization) Hypervisor.

KVM recently got a rather large boost from IBM’s shift to an open source cloud architecture. IBM opened its KVM Center of Excellence lab in Beijing around the end of 2012 and has announced a second location in New York. The aim is to encourage enterprise adoption of KVM because of its affordability and exceptional performance.

Thursday, May 23, 2013

A Review of QLogic's FabricCache QLE10000 Adapter

The QLogic FabricCache QLE10000 adapter delivers shared server-based SSD caching to the Storage Area Network (SAN). The QLogic QLE10000 is the first product built on Mt. Rainier technology; it combines a Fibre Channel host bus adapter (HBA) with a flash-based cache.  QLogic FabricCache adapters utilize a single HBA driver and standard management software, so no additional skill set or software is required to install and manage the QLogic QLE10000. The QLogic QLE10000 is deployed as a traditional SAN HBA, and the FabricCache technology is transparent to the SAN and existing management software.

As a combined HBA and cache solution, the QLogic QLE10000 provides a number of benefits for servers running an extremely broad range of enterprise applications.  QLogic FabricCache adapters are in constant communication with one another, providing clustered caching for shared storage that is typically not available with conventional server-based SSD solutions.  Common storage traffic is offloaded from the SAN onto the FabricCache adapter, reducing the number of I/O requests sent to the shared storage, and cache processing is moved from the CPU to the HBA, freeing processing resources.

To ensure cache consistency, each LUN is owned by a single FabricCache HBA; when a server requests data from a LUN owned by a different adapter, its FabricCache HBA checks with the LUN owner’s cache.  If the data is cached, the response time is reduced to microseconds.  If the requested data has not been cached, it is retrieved from the source LUN.  This 1:1 relationship between a LUN and its owning cache keeps the cache and the LUN consistent while minimizing I/O latency.
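The LUN-ownership model is easier to picture in code. The following Python sketch is purely a conceptual illustration of the lookup path described above, not QLogic’s implementation: every LUN has exactly one owning cache, a request for data on another adapter’s LUN is checked against the owner’s cache, and a miss falls through to the source LUN.

    # Conceptual illustration (not QLogic's implementation) of the
    # LUN-ownership caching model: each LUN is owned by exactly one
    # adapter's cache, so reads are either served from that cache in
    # microseconds or fall through to the source LUN on the SAN.

    class FabricCacheAdapter:
        def __init__(self, name):
            self.name = name
            self.cache = {}                   # (lun, block) -> data

        def read(self, lun, block, owners, san):
            owner = owners[lun]               # every LUN has a single owning adapter
            hit = owner.cache.get((lun, block))
            if hit is not None:
                return hit                    # cache hit: microsecond-class response
            data = san.read(lun, block)       # cache miss: go to the source LUN
            owner.cache[(lun, block)] = data  # only the owner caches the block
            return data

    class SAN:
        def read(self, lun, block):
            return f"data({lun}:{block})"     # stand-in for a real array read

    adapters = [FabricCacheAdapter("hba-A"), FabricCacheAdapter("hba-B")]
    owners = {"LUN-1": adapters[0], "LUN-2": adapters[1]}
    san = SAN()

    print(adapters[1].read("LUN-1", 42, owners, san))  # miss: fetched from the SAN
    print(adapters[1].read("LUN-1", 42, owners, san))  # hit: served from hba-A's cache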

A demonstration video has been published on YouTube to show a multi-server cluster with shared storage.

Friday, May 17, 2013

How Juniper Networks Helps Improve Patient Care

One of the most pressing questions on the minds of hospitals and other healthcare providers is: how do we achieve better patient care? Due to the impact of Meaningful Use Stage 2 on HIPAA privacy and security regulations, many hospitals and healthcare providers are being incentivized, and ultimately required, to make changes to their current network infrastructures in order to better protect patient privacy when exchanging health information.  Fortunately, since network technology is evolving alongside the healthcare industry, these changes also lead to improvements in patient care and satisfaction.

Hospitals and healthcare providers require cost-effective solutions that deliver the additional capacity, security, and capabilities their network infrastructures require. One key challenge these infrastructures face is the growing number of mobile devices caregivers and patients are using, both on and off premises.

One solution that takes both patient care and HIPAA regulations into account is Juniper’s Simply Connected for Healthcare solution.  As an integrated portfolio of resilient switching, security, and wireless products, Simply Connected for Healthcare enables simple, secure access and collaboration, regardless of the type of device, its user, or its location. This makes it simple for both caregivers and patients to securely access medical information via a single security policy per user that applies across every device and location.

In addition to secure, device-agnostic connections from any location, the primary benefits offered by Juniper’s Simply Connected for Healthcare solution are: a general-purpose, application-agnostic network delivering unrivaled performance and protection; improved information security and availability; support for service-demanding healthcare mobility applications and seamless roaming; and a simplified network architecture and software stack.

Juniper’s Simply Connected for Healthcare portfolio simplifies the network infrastructure, provides secure access to medical information, and offers a more reliable and scalable network.  Thus, it addresses both key issues facing the healthcare industry: it provides the infrastructure changes required by HIPAA, and it supports the continuous improvement of patient care.

Tuesday, May 7, 2013

Secure Data-at-Rest with the IBM DS3500 SAN

The new Controller Firmware (CFW) 7.84 release for the IBM System Storage DS3500 introduced several powerful premium features. One premium feature of particular interest to many industries is the Full Disk Encryption (FDE) capabilities. Through these capabilities, the IBM DS3500 is now able to provide data-at-rest encryption, which meets a variety of regulatory requirements.

The IBM DS3500 SAN meets these regulatory requirements by offering continuous data security through 300GB and 600GB Self Encrypting Drives (SEDs). The SEDs provide the IBM DS3500 with full drive-level encryption that is easily managed through the IBM Disk Encryption Storage Manager for relentless data security.

Full disk encryption prevents unauthorized access to data in the event that an SED is physically removed from the IBM DS3500 SAN. This is accomplished via “Instant Secure Erase”, whereby an operator performs a secure erase prior to removing a drive, or via “Auto-Locking”, which locks a drive whenever it is powered down.  Access to the data remains transparent while the drives are unlocked and operating, and when drive security is enabled on the array, data access is restricted to a controller holding the correct security key.

Because the disk drives are self-encrypting, they also protect the data by generating an encryption key that never leaves the drive. Since data is encrypted and decrypted symmetrically at full disk speed, storing it in encrypted form has no impact on disk performance.
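Conceptually, the security model of a self-encrypting drive is straightforward: the drive encrypts everything it writes with a key that never leaves it, and “securely erasing” the drive simply means destroying that key. The following Python sketch (using the cryptography package) is purely an illustration of that idea, not the actual firmware behavior of the drives.

    # Conceptual illustration (not the drive's firmware) of how a
    # self-encrypting drive protects data at rest: all writes are encrypted
    # with a data encryption key that never leaves the "drive", and an
    # Instant Secure Erase simply destroys that key, making the stored
    # ciphertext unrecoverable.
    from cryptography.fernet import Fernet

    class SelfEncryptingDrive:
        def __init__(self):
            self._dek = Fernet(Fernet.generate_key())  # data encryption key, internal only
            self._blocks = {}

        def write(self, lba, data: bytes):
            self._blocks[lba] = self._dek.encrypt(data)   # only ciphertext is stored

        def read(self, lba) -> bytes:
            return self._dek.decrypt(self._blocks[lba])   # transparent while unlocked

        def instant_secure_erase(self):
            self._dek = Fernet(Fernet.generate_key())     # old key gone: old data unreadable

    drive = SelfEncryptingDrive()
    drive.write(0, b"patient record 12345")
    print(drive.read(0))            # b'patient record 12345'
    drive.instant_secure_erase()
    # drive.read(0) would now fail: the old ciphertext can no longer be decrypted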

Through these new, powerful full disk encryption capabilities, the IBM DS3500 SAN can now meet the regulatory requirements for a wide range of industries, including HIPAA regulations resulting from Meaningful Use Phase 2. To learn more about the impact of Meaningful Use Phase 2 on Healthcare IT, click here.

Thursday, May 2, 2013

From Rented to Owned - Datacenter Edition

Sometimes it’s difficult to know where to start when you run into a business challenge, especially one that concerns IT. Recently, we dealt with a customer that had previously opted to rent infrastructure space in lieu of operating their own datacenter, but ran into an issue when they decided to transition to Apple for their desktop and end-user mobile devices.

Unfortunately, the rented datacenter was not compatible with the Apple platform. Also, because the customer was in the healthcare industry, they had an entirely separate issue with securing data at rest due to recent changes to HIPAA regulations. 

This is where we came in. To help the customer achieve an environment that was both Apple-compatible and HIPAA-compliant, we developed a customized primary datacenter solution as well as a Disaster Recovery (DR) site solution with automated failover. This customized solution included new IBM System Storage DS3500 SANs with full disk encryption technology at both sites, an HP C7000 blade enclosure with HP Renew BL460c G7 blade servers, GLC certified Cisco 3750 switches, VMware Site Recovery Manager, and a VMware Essentials Plus acceleration kit.

In addition to a unique combination of best-of-breed hardware and software, the customer received professional services and knowledge transfer through our full installation, along with documentation validating the failover process.

The customer was able to complete the transition to an entirely Apple-based end user environment in a time-efficient manner and become fully compliant with the new HIPAA regulations, thanks to the full disk encryption capabilities offered by the IBM System Storage DS3500 SAN. Ultimately, we were able to kill two birds with one stone, simplifying the project and saving the customer money.