The End of the Rotational Disk? The Next Chapter

Posted in: General, SCSI, SSD, Author: yobitech (October 8, 2015)

If you have been keeping up with the storage market lately, you will have noticed a considerable drop in prices for SSDs. It has been frustrating to watch: over the past 4 to 5 years there has not been much change in SSD capacities or prices, until now. With TLC (Triple-Level Cell) SSDs now available, we have the game-changer we have been waiting for. With TLC capacities at almost 3TB per drive and projected to approach 10TB per drive within another year, what does that mean for the rotational disk?

That’s a good question, but there is no hard answer yet. As you know, the technology industry can change on a dime. The TLC drive is the next step in the evolution of the hard drive market as a whole. It is economical because of its high capacity, and it is energy efficient as there are no moving parts. Finally, the MTBF (Mean Time Between Failures) is quite good, which matters because reliability was a major factor in the enterprise adoption of SSDs.


The MTBF is always a scary thing, as it reflects the life expectancy of a hard drive. If you recall, I blogged some time ago about the “Perfect Storm” effect, where hard drives in a SAN are deployed together and manufactured in the same batches. So it is not uncommon to see multiple drive failures in a SAN that can result in data loss. With rotational disks at 8TB per 7.2k drive, it can conceivably take days, even weeks, to rebuild a single drive. For rotational disk, I think that is a big risk to take. With TLC SSDs around 10TB, there is not only a cost and power efficiency advantage, but also lower risk when it comes to RAID rebuild time. Rebuilding a 10TB SSD can take a day or two (sometimes hours, depending on how much data is on the drive).

The effective reliability is also higher because SSDs fail predictively: worn-out cells are logically marked dead (barring other failures). This is the normal wear-and-tear process of the drive. Cells have a limited number of writes and re-writes before they are marked dead. On smaller capacities, the rate of writes per cell is much higher because there is only a limited number of cells. With the large number of cells now offered in TLC SSDs, each cell is written to less often than it would be on a much smaller drive. So moving to a larger capacity increases the durability of the drive. The reverse is true for rotational drives, which become less reliable as capacity increases.
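
To put rough numbers on that argument, here is a minimal back-of-the-envelope sketch in Python. The P/E cycle count, the daily write volume and the write amplification factor are illustrative assumptions, not vendor specifications; the point is only that, for a fixed write load, total endurance scales with capacity.

# Rough illustration of why a larger SSD wears out more slowly under the
# same workload. Cycle counts and write rates are assumptions for
# illustration only, not vendor specifications.

def years_of_cell_life(capacity_tb, pe_cycles, daily_writes_tb, write_amplification=1.0):
    """Total write endurance (capacity x P/E cycles) divided by daily writes."""
    total_endurance_tb = capacity_tb * pe_cycles / write_amplification
    return total_endurance_tb / daily_writes_tb / 365

# Same workload (1 TB written per day) and the same assumed 1,000 P/E cycles per cell.
for capacity_tb in (1, 3, 10):
    years = years_of_cell_life(capacity_tb, pe_cycles=1000, daily_writes_tb=1.0)
    print(f"{capacity_tb} TB TLC drive: roughly {years:.1f} years of cell life")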

So what does it mean for the rotational disk?

Here are some trends that are happening and where I think things will go.

1. 15k drives will still be available, but in limited capacities. This is because of legacy support. Most arrays out there are still running 15k drives. There are even 146GB drives out there running strong that will need additional capacity due to growth and/or replacement of failed drives. This will be a staple in the rotational market for a while.

2. SSDs are not the answer for everything. Although we all may think so, SSDs are actually not made for all workloads and applications. SSDs offer little advantage for the cost when it comes to streaming video and large-block, sequential data flows. This is why 7.2k, high-capacity drives will still thrive and be around for a while.

3. The death of the 10k drive. Yes, I am calling it. I believe the 10k drive will finally rest in peace. There is no need for it, nor will there be demand for it. So long, 10k drives…

Like anything in technology, this is speculation from an educated and experienced point of view. This can all change at any time, but I hope this was insightful.


Big Data

Posted in: Cloud, General, SSD, Author: yobitech (July 13, 2015)

The Internet made mass data accessibility possible. While computers were storing MBs of data locally on internal hard drives, GB hard drives were available but priced only for the enterprise. I remember seeing a 1GB hard drive for over $1,000. Now we throw 32GB USB drives around like paper clips, and we are moving past 8TB, 7200 RPM drives and mass systems storing multiple PBs of data. With all this data, it is easy to be overwhelmed. Too much information is known as information overload: the sheer amount of available information makes the relevant information unusable. We can’t tell usable data from unusable data.

In recent years, multi-core processing combined with multi-socket servers has made HPC (High Performance Computing) possible. HPC, or grid computing, is the linking of these highly dense compute servers (local or geographically dispersed) into a giant super-computing system. With this type of system, computations that would traditionally take days or weeks are done in minutes. These gigantic systems laid the foundation for companies to build smaller-scale HPC systems to use in-house for R&D (Research and Development).

This concept of collecting data in a giant repository was first called data-mining. Data-mining is the same concept used by the Googles and the Yahoos of the world. They pioneered it as a way to navigate the ever-growing world of information available on the Internet. Google came out with an ingenious, lightweight piece of software called “Google Desktop”. It was a mini version of data-mining for the home computer. I personally think it was one of the best tools I have ever used on my computer. It was later discontinued for reasons I am not aware of.

Advancements in processing and compute made data-mining possible, but for many companies it was an expensive proposition. Data-mining was limited by the transfer speeds of the data on the storage. This is where the story changes. Today, with SSD pricing and density shifting thanks to better error correction, fault predictability and manufacturing advancements, storage has finally caught up.

The ability of servers to quickly access data on SSD storage to feed HPC systems opens up many opportunities that were not possible before. This is called “Big Data”. Companies can now use Big Data to take advantage of mined data. They can look for trends, correlate and analyze data quickly to make strategic business decisions and take advantage of market opportunities. For example, a telecommunications company can mine its data to look for dialing patterns that may be abnormal for its subscribers; the faster fraud can be identified, the smaller the financial loss. Another example is a retail company looking to maximize profits by stocking its shelves with “hot” ticket items. This can be achieved by analyzing sold items and trending crowd-sourced data from different information outlets.
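
To make the telecom example concrete, here is a toy sketch of that kind of check: flag subscribers whose call volume today deviates sharply from their own history. The numbers, subscriber IDs and the 3-sigma threshold are illustrative assumptions, not a production fraud model.

# Toy version of the fraud example above: compare each subscriber's calls
# today against their own recent baseline and flag large deviations.
# All data and thresholds here are made up for illustration.
from statistics import mean, stdev

call_history = {                                # calls per day over the past week (made up)
    "555-0101": [12, 9, 14, 11, 10, 13, 12],
    "555-0199": [8, 7, 9, 6, 8, 7, 240],        # sudden spike on the last day
}

for subscriber, calls in call_history.items():
    baseline, spread = mean(calls[:-1]), stdev(calls[:-1])
    today = calls[-1]
    if spread and (today - baseline) / spread > 3:    # simple z-score test
        print(f"{subscriber}: {today} calls today vs ~{baseline:.0f}/day baseline -> review for fraud")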

SSD drives are enabling the data-mining/Big Data world for companies that are becoming leaner and more laser-focused on strategic decisions. In turn, these HPC systems pay for themselves through the overall savings and profitability that Big Data delivers. The opportunities are endless as Big Data extends into the cloud. With collaboration combined with open-source software, the end results are astounding. We are producing cures for diseases, securing financial institutions and driving inventions through innovation and trend analysis. We are living in very exciting times.


It’s finally here!

Posted in: General, SSD, Author: yobitech (February 16, 2015)

I have been talking about SSDs and rotating hard drives for many years now. SSDs have been unable to overtake the mainstream hard drive space because they could not be produced at an affordable price point (compared with rotating disk). SSDs have gone through many different iterations, from SLC (single-level cell) to MLC (multi-level cell), eMLC (enterprise multi-level cell) and now TLC (triple-level cell).

If you haven’t noticed, the consumer market is unloading 128GB drives at sub-$50 and 256GB SSDs at under $100. This is a steep drop in price and an indication of the next wave of SSD drives. SSDs are poised to go mainstream because of TLC. SSDs in general are still, and will remain, expensive and incredibly complex to manufacture, but due to market demand, the TLC drive is now positioned to take the market by storm. So what has changed? Has the manufacturing process changed? Yes, but not by much. The biggest change is the market strategy of the TLC SSD. The drive is manufactured to sacrifice durability in exchange for density, to the point where we will very soon see TLC drives with capacities of 2TB, 3TB, even 10TB+. Drive technologies will lean on better failure-prediction algorithms and other “fault tolerating” technologies to compensate for the lower endurance.
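
For a rough idea of what the failure-prediction side looks like in practice, here is a minimal sketch that reads an SSD’s wear-related SMART attributes with the smartctl utility. The device path and the attribute names are assumptions for illustration; the names vary by drive vendor, and real monitoring tools do considerably more than this.

# Minimal sketch of a predictive wear check: ask smartctl for the drive's
# SMART attributes and print the wear-related ones. The device path and
# attribute names are assumptions; names differ between SSD vendors.
import subprocess

WEAR_ATTRIBUTES = ("Wear_Leveling_Count", "Media_Wearout_Indicator", "Percent_Lifetime_Remain")

def report_wear(device="/dev/sda"):
    output = subprocess.run(["smartctl", "-A", device],
                            capture_output=True, text=True).stdout
    for line in output.splitlines():
        if any(attr in line for attr in WEAR_ATTRIBUTES):
            # Normalized SMART values count down toward a threshold as cells wear out.
            print(line.strip())

if __name__ == "__main__":
    report_wear()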

So what does this mean for the rotating disk? Is it possible that the rotating drive will disappear altogether? I don’t think so. I predict the availability of TLC drives will virtually eliminate 10k and 15k drives, and then, over a much longer period, the 7.2k drive will go. This is because the cost per GB is still a great value on 7.2k drives, and their densities will grow in tandem with TLC SSDs. There is also a comfort level in having magnetic media around holding data (for old-schoolers like me).

It has been a long time coming, but it is exciting to finally see SSDs making their way into the mainstream.


Brand Loyalty

Posted in: Backup, Cloud, General, Author: yobitech (October 13, 2014)

Americans are fascinated by brands. Brand loyalty is big, especially when “status” is tied to a brand. When I was in high school back in the 80s, my friends (and I) would work diligently and save our paychecks to buy the “Guess” jeans, “Zodiac” shoes and “Ton Sur Ton” shirts, because that was the “cool” look. I put in many hours working the stockroom at the supermarket and delivering legal documents as a messenger. In 1989, Toyota and Nissan entered luxury branding as well with Lexus and Infiniti, respectively, following the success of Honda’s upscale luxury performance brand, Acura, which started in 1986. Beyond the marketing, how much value (aside from status) does a premium brand bring? Would I buy a $60,000 Korean Hyundai Genesis over the comparable BMW 5 Series?

For most buyers in the enterprise computing space, brand loyalty was a big thing. IBM and EMC led the way in the datacenter for many years. The motto “You’ll never get fired for buying IBM” was the perception, and as the saying goes, “perception is reality” rang true for many CTOs and CIOs. But with the economy ever tightening and IT treated as an “expense” line item for businesses, brand loyalty had to take a back seat. Technology startups with innovative and disruptive products paved the way to looking beyond the brand.

I recently read an article about hard drive reliability published by a cloud storage company called BackBlaze. The company is a major player in safeguarding user data and touts over 100 petabytes of data stored across more than 34,880 disk drives. That’s a lot of drives. With that many drives in production, it is quite easy to track drive reliability by brand, and that’s exactly what they did. The article can be found in the link below.

BackBlaze had done an earlier study back in January of 2014, and this article contained updated information on the brand reliability trends. Not surprisingly, the reliability data remained relatively the same. What the article did point out was that the Seagate 3TB drives’ failure rate rose from 9% to 15%, and the Western Digital 3TB drives jumped from 4% to 7%.

[Chart: Hard Drive Failure Rates by Model]
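
Those percentages are annualized failure rates. A rate of this kind is computed from failures per accumulated drive-day, scaled to a year, roughly as sketched below; the drive counts here are made-up examples, not BackBlaze’s actual data.

# How an annualized failure rate can be computed: failures divided by
# accumulated drive-days, scaled to a year. The counts below are made up.

def annualized_failure_rate(failures, drive_days):
    return failures / drive_days * 365 * 100        # percent per drive-year

fleet = {
    "Model A 3TB": (120, 365_000),   # (failures observed, drive-days observed)
    "Model B 3TB": (18, 150_000),
}

for model, (failures, drive_days) in fleet.items():
    print(f"{model}: {annualized_failure_rate(failures, drive_days):.1f}% annualized failure rate")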

Company or “branding” plays a role as well (at least with hard drives). Popular brands like Seagate and Western Digital pave the way: they own the low-end hard drive space and sell lots of drives. Hitachi is more expensive and sells relatively fewer drives than Seagate. While Seagate and Western Digital may be more popular, the manufacturing, assembly and parts sourcing are an important part of the process. Some hard drive manufacturers market their products to the masses, while others target a niche. Manufacturing costs and processes vary from vendor to vendor. Some vendors may cut costs by assembling drives where labor is cheapest, or may manufacture drives in unfavorable climate conditions. These are just some of the factors that can reduce the MTBF (Mean Time Between Failures) rating of a drive. While brand loyalty with hard drives may lean towards Seagate and Western Digital, popularity does not always translate into reliability. I personally like Hitachi drives more, as I have had better longevity with them than with Seagate, Western Digital, Maxtor, IBM or Micropolis.

I remember using Seagate RLL hard drives in the 90s, and yes, I had failed drives then too; but to be fair, Seagate has been around for many years and I have had many success stories as well. Kudos to Seagate for weathering all these years of economic hardship and manufacturing challenges, from typhoons to parts shortages, while providing affordable storage. Even with higher failure rates, failures today are easily mitigated by RAID technology and solid backups. So it really depends on what you are looking for in a drive.

Brand loyalty is a personal thing but make sure you know what you are buying besides just a name.

Thanks to BackBlaze for the interesting and insightful study.


The Software Defined World

Posted in: Cloud, General, Author: yobitech (September 23, 2014)

Before computers were around, things were typically done with pencil and paper. The introduction of the computer revolutionized the world, from the way we do business to how we entertain ourselves. It is one of the greatest inventions ever, ranked up there in my book along with the automobile, the microwave and the air conditioner.

From a business standpoint, computers gave companies an edge. The company that leverages technology best has the greatest competitive edge in its industry. In a similar fashion, on a personal level, the people with the newest and coolest toys are the ones who can take advantage of the latest software and apps, giving them the best efficiency in getting their jobs done while attaining bragging rights in the process.

The Computer Era has seen some milestones. I have listed some highlights below.

The PC (Personal Computers)
The mainframe was the dominant computing platform, used mainly by companies with huge air-conditioned rooms, as they were huge machines. Mainframes were not personal; no one had the room or the money to utilize them at home. Aside from businesses, access to a mainframe was mainly found in specialized schools, colleges, libraries and government agencies.

The industry was disrupted by a few entries into home computing.

TANDY TRS-80 Model 1
Tandy Corporation made the TRS-80 Model 1, powered by the BASIC language. It had marginal success, with most of its consumers being schools. There wasn’t really much software for it, but it was a great educational tool for learning a programming language.

The Timex Sinclair was another attempt, but the underpowered device with its flat, membrane-style keyboard was very limited. It had a black-and-white screen and an audio tape player as the storage device. There was more software available for it, but it never took off.

The Commodore VIC-20 and Commodore 64 were a different kind of computer. They had software titles available at release, displayed in color, and offered “sprite” graphics that allowed for detailed, smooth animations. They were a hit, helped by an affordable 5.25” floppy disk drive as the storage device.

Apple and IBM paved the way into homes not just because they had better hardware, but because there was access to software such as word processors, spreadsheets and databases (and not just games). This was the entry of the Personal Computer. There were differences between the two: IBM was not user friendly and largely text based, while Apple took a graphical route, offering a mouse and a menu-driven operating system that made the computer “friendly”.

Commoditization for Progress
Now that the home computer age had begun, the commoditization of that industry started shortly thereafter. With vendors like Gateway, Lenovo, HP and Dell, these computers became cheap and plentiful. With computers so affordable and plentiful, HPC (High-Performance Computing) or “grid” computing became possible. HPC/grid computing is basically the use of two or more computers in a logical group, sharing resources to act as one large computing platform. Trading firms, hedge funds, geological survey and genetic/bio research companies are just some of the places that use HPC/grid computing. The WLCG (Worldwide LHC Computing Grid) is a global project that links more than 170 computing centers in 40 countries to provide resources to store, distribute and analyze over 30 petabytes of data generated by the Large Hadron Collider (LHC) at CERN on the Franco-Swiss border. As you can see, commoditization enables new services and sometimes “disruptive” technologies (i.e. HPC/grid). Let’s take a look at other disruptive developments…

The Virtual Reality Age
With PCs and home computers entering the home, the world of “Virtual Reality” was the next wave. Multimedia-capable computers made it possible to dream. Fictional movies like Tron and The Matrix gave us a glimpse into the world of virtual reality. Although virtual reality has had limited success over the years, it wasn’t a disruptive technology until recently. With 3D movies making a comeback, 3D TVs widely available and wearable cameras like Google Glass, virtual reality is higher definition and is still being defined and redefined today.

The Internet
No need to go into extensive detail about the Internet. We all know it is a major disruption in technology, because we don’t know what to do with ourselves when it is down. I’ll just recap its inception. It started as a government/military network (mostly text based) for many years; adoption of the Internet for public and consumer use took hold in the early 90s. The demand for computers with better processing and better graphics capabilities was pushed further as the World Wide Web and gaming applications became more popular. With better processing came better gaming and better web technologies. Java, Flash and 3D rendering made online gaming such as Call of Duty and Battlefield possible.

BYOD (Bring Your Own Device)
The latest disruptive trend is BYOD, as the lines between work and personal computing devices are being blurred. Most people (not me) prefer to carry one device. Since the mobile phone revolutionized three industries (Internet, phone, music player), it was only natural that the smartphone would become our device of choice. With the integration of high-definition cameras into phones, we are finding less and less reason to carry a separate device just to take pictures, or a separate device for anything; as the saying goes today, “there’s an app for that”. With cameras being commoditized as well, Nokia has a phone with a 41-megapixel camera built in. With all that power in the camera, other challenges arise, like the bandwidth and storage needed to keep and share such huge pictures and videos.

The Software Defined Generation
There is a new trend that is finally going to take off, although it has been around for a while. This disruptive technology is called Software Defined “X”, the “X” being whatever the industry is. One example of Software Defined “X” is 3D printing. It was once science fiction to be able to just think up something and bring it into reality, but now you can simply define it in software (CAD/CAM) and the printer will make it. What makes 3D printing possible now is that the cost of the printer and the printing materials has dropped due to commoditization. It wasn’t that we lacked the technology to do this earlier; it was just cost prohibitive. Affordability has brought 3D printing into our homes.

Software Defined Storage
Let’s take a look at Software Defined Storage. Storage first started out as a floppy disk or a hard drive. Then it evolved into a logical storage device consisting of multiple drives bound together in a RAID set for data protection. This RAID concept was then scaled up into the SANs that today store most of our business-critical information, and it has been commoditized into a building block for Software Defined Storage. Software Defined Storage is not a new concept; it just was not cost effective. Now that high-performance networking and processing have become affordable, Software Defined Storage is a reality.

Software Defined Storage takes the RAID concept and virtualizes small storage nodes (appliances, or mini SANs), grouping them together as one logical SAN. Because this Software Defined Storage is made up of many small components, those components can live anywhere in the architecture (including the cloud). With networking moving to 10Gb and 40Gb speeds, Fibre Channel at 16Gb, WANs (Wide Area Networks) running at LAN speeds, processors reaching 12+ cores per physical socket and memory operating at sub-millisecond latencies, Software Defined Storage can be virtually anywhere. It can also be stretched over a campus, or even between cities or countries.
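
As a concrete (if greatly simplified) picture of that idea, here is a toy sketch that presents a handful of independent storage “nodes” as one logical volume by striping fixed-size chunks across them. In-memory dictionaries stand in for real appliances; a real product would add replication, metadata services and failure handling.

# Toy illustration of Software Defined Storage: stripe fixed-size chunks
# across several independent "nodes" and present them as one logical volume.
# Dicts stand in for real storage appliances; redundancy is ignored here.

CHUNK = 4  # bytes per chunk, deliberately tiny for demonstration

class LogicalVolume:
    def __init__(self, node_count):
        self.nodes = [dict() for _ in range(node_count)]   # one dict per node

    def write(self, start_chunk, data):
        for i in range(0, len(data), CHUNK):
            chunk_no = start_chunk + i // CHUNK
            node = self.nodes[chunk_no % len(self.nodes)]  # round-robin placement
            node[chunk_no] = data[i:i + CHUNK]

    def read(self, start_chunk, length):
        chunks_needed = (length + CHUNK - 1) // CHUNK
        data = b"".join(self.nodes[c % len(self.nodes)].get(c, b"")
                        for c in range(start_chunk, start_chunk + chunks_needed))
        return data[:length]

volume = LogicalVolume(node_count=3)
volume.write(0, b"software defined storage")
print(volume.read(0, 24))   # comes back whole even though no single node holds it all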

In a world where everything is being commoditized, the “Software Defined” era is here.


Protecting Your Personal Data

Posted in: Backup, Cloud, General, Author: yobitech (July 17, 2014)

Having been in the IT industry for over 20 years, I have worn many hats. It isn’t very often that people actually know what I do; they just know I do something with computers. So by default, I have become my family’s (extended family included) support person for anything that runs on batteries or plugs into an outlet. In case you don’t know, I am a data protection expert, and these days I am rarely troubleshooting or setting up servers anymore. In fact, I spend most of my days visiting people and making blueprints with Microsoft Visio. I have consulted on, validated and designed data protection strategies and disaster recovery plans for international companies, major banks, government, military and private sector entities.

Those who ARE familiar with my occupation often ask me, “So what does a data protection expert do to protect his personal data?” Since I help companies protect petabytes of data, I should have my own data protected as well. I am probably one of the few professionals who actually protect their own data to the extreme. It is sometimes a challenge, because I have to find a balance between cost and realistic goals; it is always easier to spend other people’s money to protect their data. There’s an old saying that “the shoemaker’s son has no shoes,” and there is some truth in that. I know people in my field who have lost their own data while being paid to protect others’.

Now welcome to my world. Here is what I do to protect my data.

1. Backup, Backup and Backup – Make sure you back up, and often. Daily backups are too tedious, even for a paranoid guy like me, and unrealistic. Weekly or bi-weekly is perfectly sufficient. But there are other things that need to be done as well.

2. External Drives – External drive backups are not only essential, they are the only way to survive, as keeping pictures and home videos only on your laptop or desktop is not realistic. Backing up to a single external drive is NOT recommended; that is a single point of failure, since that drive can fail with no other backups around. I use a dual (RAID1) external drive: an enclosure that writes to 2 separate drives at the same time, so there are always 2 copies. I also keep 2 other copies on 2 separate USB drives. Be cautious with all-in-one NAS drive appliances, as they add an additional layer of complexity; when they fail, they fail miserably. Often the NAS piece is not recoverable and the data is stranded on the drives. At that point a data recovery specialist may have to be leveraged to recover the data, which can cost thousands of dollars. (A minimal mirroring script along these lines is sketched after this list.)

3. Cloud Backup – There are many different cloud services out there and most of them are great. I use one that has no limit on how much can be backed up to the cloud, so all of my files are backed up whenever my external drives are loaded with new data.

4. Cloud Storage – Cloud storage is different from cloud backup, as this service runs on the computers that I use. Whenever I add a file on my hard drive, it is instantly replicated to the cloud service. I use Dropbox at home and Microsoft SkyDrive for work, so my files are saved in the cloud as well as on all my computers. I also have access to my files via my smartphone or tablet; in a pinch, I can get to my files from any Internet browser. This feature has saved me on a few occasions.

5. Physical Off-Site Backup – Once a year I copy my files onto one more external hard drive, and that drive goes to my brother-in-law’s house. You can also utilize a safety deposit box for this. That way, if there is a flood or my house burns down, I still have a physical copy off-site.
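
Here is the minimal mirroring script mentioned in step 2: copy one source folder to two external drives so there are always two independent copies. The folder and mount-point paths are placeholders for illustration (and dirs_exist_ok requires Python 3.8+); adjust them to your own setup.

# Minimal sketch of step 2: copy one source folder to two external drives.
# The paths below are placeholders; point them at your own mount points.
import shutil
from pathlib import Path

SOURCE = Path.home() / "Pictures"                       # what to protect
DESTINATIONS = [Path("/Volumes/BackupA/Pictures"),      # external drive 1
                Path("/Volumes/BackupB/Pictures")]      # external drive 2

for dest in DESTINATIONS:
    if not dest.parent.exists():                        # drive not plugged in / not mounted
        print(f"Skipping {dest}: drive not mounted")
        continue
    shutil.copytree(SOURCE, dest, dirs_exist_ok=True)   # recopy the tree, overwriting older copies
    print(f"Copied {SOURCE} -> {dest}")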

Data is irreplaceable and should be treated as such. My personal backup plan may sound a bit extreme, but I can sleep well at night. You don’t have to follow my plan, but a variation of it will indeed enhance what you are already doing.


Guys with the Fastest Cars Don’t Always Win

Posted in: General, SAN, Author: yobitech (July 10, 2014)

We (us men) are wired internally to achieve greatness. Whether it is having the fastest car or the “bad-est” laptop, it is in us to want it.

Owning these high-tech gadgets and fast toys doesn’t necessarily make us better or faster. Most of the time it just makes us “feel” better.

In the storage business, SLC SSD drives, or enterprise flash drives, are the “crème de la crème” of all drives. Customers pay a premium for them, sometimes more per drive than for a well-equipped BMW 3 Series. SAN vendors sometimes use them for cache augmentation or cache extension, while others use them as ultra-fast Tier 0 disk for the “killer app” that needs ultra-low latency. Regardless, SSDs have captured the hearts of the power- and speed-hungry. What isn’t always discussed is that the fastest drives in the SAN don’t always mean the fastest performance.

There are a few factors that can slow down your SAN. Here are a few tips to make sure you are optimized:

1. Plumbing – Just like the plumbing in your house, water flow will always be at the mercy of the smallest pipe. If you have a 5” pipe coming into the house and a thin tube going into your bathtub, it will take a long time to fill that tub. Be sure to optimize throughput by using the highest-rated speed available along the entire path (see the quick calculation sketched after this list).

2. Firmware – Hardware devices have software too, not just your computers. This thin layer of software, written specifically for a hardware device, is called firmware. Make sure you are on the latest code and read the “README” file(s) included for release notes.

3. Drivers – Devices also have software inside the operating system, called drivers. Even though devices have firmware, there is additional software that enables the operating system to use them. To put firmware vs. drivers in perspective: firmware is like the BIOS of the computer, the black screen you see when you turn on your computer that loads its basic information. Drivers are like the operating system: just as Windows 8 or OS X loads on top of the hardware, drivers load on top of the firmware of the device.

4. Drive Contention – Contention is when you over-utilize a device or drive. A common mistake is to put everything (applications and hosts) on the SAN and then run backups back onto the same SAN. Although it may seem logical and economical, it does a disservice to the users’ data. First, all the data is in one place: a SAN failure means loss of both data and backups. Second, data first has to be read from the drives, then written back onto the same SAN (usually the same set of drives). This can cause a massive slowdown of the SAN, regardless of what drives you have in the system.

5. User Error – The most common and least talked about is user error, probably because nobody ever wants to admit mistakes. Misconfiguration of the SAN or application is a common fault. Most people, men in general, will not read the manual and will install by trial and error; if it works, it is assumed to be good. This gives a false sense of security, especially as systems become more and more complex. A misconfigured setting may never show up as a problem until much later, and sometimes catastrophic failures are the result of overlooked mistakes.

If you follow these simple steps to tighten up the SAN, you will achieve greatness through your existing investment.
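
To illustrate the plumbing point in tip 1, here is a tiny calculation showing that an I/O path runs only as fast as its slowest segment. The link speeds are example values, not a recommendation.

# The effective speed of an I/O path is set by its slowest segment.
# These link speeds are example values only.

path_gbps = {
    "host HBA": 16,              # Fibre Channel HBA, Gb/s
    "switch uplink": 8,          # older inter-switch link
    "array front-end port": 16,
}

bottleneck = min(path_gbps, key=path_gbps.get)
print(f"Effective path speed: {path_gbps[bottleneck]} Gb/s, limited by the {bottleneck}")
print(f"That is roughly {path_gbps[bottleneck] * 1000 / 8:.0f} MB/s before protocol overhead")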


The Balancing Act of Managing Data Today

Posted in: General, Security, Author: yobitech (June 25, 2014)

Did you hear? eBay was among the latest companies to be compromised by hackers. Who exactly are these hackers? Hackers are essentially people with malicious intent to cause disruption or harm to a system, an application or its data. I think the word “hacker” has received a bad rap over the years; hacking can actually be a good thing. By definition, the “act” of hacking is merely reverse engineering an application or system with the intent to improve it. There are college courses dedicated to ethical hacking, as well as certification levels. Being certified to “hack” for the greater good sounds almost paradoxical; I think if you asked whether ethical hacking was possible, most people would say no. With data being compromised almost daily, companies have taken serious measures to safeguard their data through encryption.

Encryption purposely scrambles data so that it can only be de-scrambled with the unique digital encryption key. With data growing exponentially over the years, companies have also bought into storage-saving technologies such as deduplication and compression to better manage and protect (back up and restore) it. To summarize, deduplication is the process in which duplicated data blocks within a system are not written over and over again; a single instance takes the place of many instances. For example, if a company stores a 5MB food menu that has been distributed to 1,000 employees, it would consume 5GB of disk storage in total. In a deduplicated system, it would consume only 5MB of space, regardless of how many employees, because the system sees one instance of the menu and points the 1,000 copies to that single instance. With compression added, this single 5MB instance can potentially be reduced by up to 20x more. Imagine this process over terabytes of data: a tremendous space saving across the enterprise.
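
Here is a minimal sketch of that idea: block-level deduplication keyed on a content hash, where only the first copy of each unique block is physically stored and every later copy becomes a reference. The block size and the sample “menu” data are arbitrary choices for illustration.

# Minimal block-level deduplication sketch: store each unique block once,
# keyed by its content hash; repeated blocks become references only.
# Block size and the sample data are arbitrary illustrative choices.
import hashlib

BLOCK = 4096
block_store = {}                                 # content hash -> block, written once

def write_file(data):
    refs = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)    # only the first copy is stored
        refs.append(digest)
    return refs                                  # the "file" is just a list of references

menu = b"Today's specials..." * 1000             # small stand-in for the 5MB menu
for _ in range(1000):                            # 1,000 employees each save a copy
    write_file(menu)

print(f"Logical data written: {len(menu) * 1000 / 1e6:.1f} MB")
print(f"Physically stored:    {sum(len(b) for b in block_store.values()) / 1e6:.3f} MB")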

With security becoming a top priority for companies that already employ deduplication and compression, what implications will encrypting data have on these datasets? The answer: major ones.

Encryption randomizes data, so the duplication that deduplication relies on is purposely eliminated, and compression is limited if applicable at all. It is almost counterproductive. So what are companies doing? Welcome to the information data management balancing act. This balancing act is, by nature, an enabler: it pushes us to build better tools, innovate new technologies and do more with less. As budgets shrink and systems become more complex, it is increasingly important to have proper training to maintain these systems. Many do train properly and do it well, but some cut corners. Those companies do themselves an injustice and put their data at risk, and they usually fail, often in catastrophic fashion.
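
A small demonstration of why encryption defeats these space savings: repetitive plaintext compresses (and deduplicates) extremely well, while random bytes, used here as a stand-in for ciphertext since good ciphertext is statistically random, barely compress at all.

# Repetitive plaintext compresses well; random bytes (a stand-in for
# ciphertext, which looks statistically random) barely compress at all.
import os
import zlib

plaintext = b"lunch menu: soup, salad, sandwich\n" * 10_000
ciphertext_like = os.urandom(len(plaintext))     # proxy for encrypted data

for label, data in (("plaintext", plaintext), ("encrypted-looking", ciphertext_like)):
    ratio = len(data) / len(zlib.compress(data))
    print(f"{label}: {ratio:.1f}x compression")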

Target is still trying to quantify the damages from its 2013 data breach, uncovering more and more damage as the investigation deepens. It is important not to fortify the front door while leaving the back door wide open.


The top 5 things you should never do when buying a SAN

Posted in: General, SAN, Author: yobitech (March 14, 2014)

As a technical sales veteran in the storage field, I see people all the time: people who make wise decisions based on real-world expectations, and people who buy on impulse. You might say that I am biased because I work for a vendor, and although that might be true, I was also a consultant before I was in sales.

I operate under a different set of “rules” where my customer’s best interest comes before my quota. Here is a collection of the top 5 things you should never do when buying a SAN.

5. Buy out of a “good feeling”

Sales people are in the business of selling. That’s what they do; that’s their “prime directive”. It is their job to make you “feel good”. Make sure you do your homework and check every feature and quote item so that you know what you are buying. A common practice is for sales people to put something in the quote thinking you may need it when, in reality, it may never be used. Make sure you talk to the technical sales person without the sales guy. Ask for an honest opinion, but be objective, and ask about his background as well so you know his perspective. A technical sales person is still a sales person, but he is more likely to give you the honest, technical facts.

4. Buy a SAN at the beginning of the vendor’s quarter

SAN vendors are in business to make money, and they operate on the same sales cycles. If the company is publicly traded, you can look up when their quarters begin and end; some align to a calendar year and some are fiscal. Here is the fact: you will always get the best deal at the end of a quarter, and if possible, the absolute best deal at the end of their year (4th quarter). Since buying a SAN is usually a long process, you should align your research, as well as your buying approval, with those quarters. This will get you the best deal for your money.

3. Believe what a vendor tells you

I write this with caution, because at some point you need to believe someone. Just keep in mind that the sales people courting you during the process have a quota to retire. The ones willing to back up their claims with objective facts and real-world customer references are the ones most likely to live up to expectations.

2. Buy a SAN without checking out their support

As a prospective SAN customer, once you are down to your final contenders, take some time to call their support, perhaps at 2am on a Sunday or 6am on a Tuesday, and see what kind of experience you get. A common mistake is to buy a SAN and, while things are running well, never test support; an outage, when you are scrambling to get someone on the phone, is not the time to find out how they respond. Check what the industry says about their support as well. Other questions to ask: where is the support center located? Is it US based? Does it follow the sun?

1. Buy a SAN from a startup

I am a big fan of new and disruptive technologies; this is what makes us a great nation. But just as startups can pop up overnight, they can also disappear overnight. Startups are great, but the SAN that will hold my company’s “bread and butter” is not the place for such a gamble. I say this from experience, as I have seen startups come and go. The ones that stay are usually the ones bought by bigger companies; the others are just hoping to be bought. Five years is a good milestone for a SAN company to pass, because by that time the customers who invested in their products will be back in the market to refresh. If they make it past 5 years, there is a good chance they will be around.


Read the “Fine Print”

Posted in: Backup, General, SAN, Author: yobitech (March 4, 2014)

Far too many times I have bought something with much anticipation only to be disappointed. If it wasn’t the way it looked or what it promised to do, it was something else that fell short of my expectations. The few companies that go beyond my expectations are the ones I keep going back to. The one I like to talk about most is Apple: their products often surprise me (in a good way), with intangible touches that bring a satisfaction way beyond what is advertised. The “new drug” for me is Samsung, and Hyundai (cars).

American marketing plays the leading role in setting this expectation. It is marketing that has become the “American” culture: the “must have” of the newest, coolest and flashiest toys that define who we are. Unfortunately, the marketing of these products almost always falls short of the actual product itself, yet we all seem to hang on to the hope that these products will exceed our expectations. This is why “unboxing” videos are so popular on YouTube. Product reviews and blogs are also a good way to keep companies honest and to help us with our “addictions” to our toys. This marketing culture is not limited to personal electronics; it is also true for products in the business enterprise.

Marketing in the Business Enterprise

The Backup Tape

I remember having to buy tapes for my backups. I often wondered why and how manufacturers could advertise 2x the native capacity of a tape. For example, an SDLT320 tape is really a 160GB tape (native capacity); the 320GB figure assumes a 2:1 compression ratio. How do they know that customers can fit 320GB on a 160GB tape? After doing some research, the conclusion I came to was that they really don’t know; whether you get 2:1 depends entirely on how compressible your data happens to be. It was a surprising fact to me that they can make such a claim based on speculation. How can they do this and get away with it? It is easy. It is what I call the “Chaos Factor”: when someone or something takes advantage of a situation to further their cause.
In the case of the backup tapes, they capitalize on a few things that facilitate the Chaos Factor:

1. The Backup Software and

2. The Business Requirements.

The Backup Tape “Chaos Factor”

1. The Backup Software

Tape manufacturers know this all too well. Backup software is very complex, and virtually all backup administrators are far too busy worrying about one thing: completing the backups successfully. Checking whether tapes are being utilized to their advertised capacity is not something that is even thought about in day-to-day operations. In fact, the only time tape utilization ever comes up is when management asks for it, and then it is usually a time-consuming exercise, as backup software does not have good reporting facilities to compile this information readily. Tape utilization is simply not a concern.

2. The Business Requirements

Another reason is how backup software uses tapes. Tape backups are scheduled as jobs, and most jobs complete before the tapes are filled up. Depending on the company’s policy, most tapes are then ejected and stored off-site, so tapes are rarely ever filled up because of this policy! This is normal for backup jobs; leaving tapes in the drive(s) just to fill them up goes against the reason companies do backups in the first place. Backup tapes are meant to be taken off-site to protect against disaster. Really, the ONLY time a tape can actually be fully utilized (other than when a backup is larger than a single tape) is when that policy is ignored.

This Chaos Factor is also at work in the data storage business. The SAN market is another one where the protection of data trumps our ability to efficiently manage the storage. The SAN market is full of dirty secrets, as I will outline below.

The SAN “Chaos Factor”

A dirty secret of the storage industry is the use of marketing benchmark papers. Benchmark papers are designed to give the impression that a product can perform as advertised, and for the specific test in the paper that may be true, but sometimes these tests are “rigged” to give the product favorable results. In fact, sometimes these performance numbers are impossible in the real world. Let me illustrate: I can type about 65 words per minute. Many people would view that as average, but if I wanted to “bend the truth”, I could say I type 300 words per minute. I can technically type the word “at” 300+ times per minute, but in the real world, I don’t type like that. What good is a book with the one word “at” printed on 300 pages? That kind of claim holds no water, but it is the same technique and concept used in some of these technical papers. However loudly the results are touted, keep vendors honest by asking what their customers are seeing in day-to-day performance.

Here is another technique commonly used by vendors, what I call “smoke and mirror” marketing. It is a tactic used to mimic a new technology, feature or product that is hot. The main goal is to create the feature at the best possible price and downplay the side effects: deliberately engineering around providing the feature set at the expense of existing features. Here is an example. I bought a new Hyundai Sonata last year. I love the car, but I am not crazy about the ECO feature that comes with it. I was told I would save gas with this mode, and although I think I do get a few more miles per tank, the cost in lost power, torque and responsiveness is not worth using the feature at all. I believe this feature, along with a smaller gas tank, eventually led to a class-action lawsuit over Hyundai’s gas mileage claims.

In the same way, to incorporate new features, vendors sometimes have to leverage their existing infrastructures and architectures, because that is what they already have. In doing so, they end up with an inferior product that emulates new features while masking or downplaying the side effects. Prospective customers are not going to know the product well enough to understand the impact of these nuances; they often just see the feature set in a side-by-side comparison with other vendors and make decisions based on that. The details are in the fine print, which is almost never looked at before the sale. As a seasoned professional, I routinely do my due diligence to research such claims, and I am writing this to help you avoid these mistakes by asking questions and doing research before making a major investment for your company.

Here are some questions you should ask:

• What trade magazines have you been featured in within the last year?
• What benchmarking papers are available for review?
• How do those benchmarks compare to real-world workloads?
• What reference architectures are available?
• What customers can I talk to about specific feature set(s)?

Here are some things to do for research:

• Look through the Administrator’s Guide for “Notes” and fine print details. This will usually tell you what is impacted and/or restricted as a result of implementing the features
• Invite the vendors for a face-to-face meeting and talk about their features
• Have the vendor present their technologies and how they differ from the competition
• Have the vendor white-board how their technology will fit into your environment
• Ask the vendor to present the value of their technology in relation to your company’s business and existing infrastructure
• If something sounds too good to be true, ask them to provide proof in the form of a customer testimonial

I hope this is good information for you, because I have seen, time after time, companies make a purchase that isn’t the right fit and then be stuck with it for 3 to 5 years. Remember, the best price isn’t always the best choice.
