Posted in: General, Author: yobitech (September 19, 2017)
What amazes me about technology is that it is truly unpredictable. Who would have thought that we could make a device that processes 600 billion operations per second and still fits in my pocket? Or, just as amazing, devices that can intelligently learn through experience and then make choices on our behalf? It is precisely these new innovations that spur new businesses and industries that take full advantage of the advances, while leaving existing, legacy businesses perplexed about how to make good business decisions to stay competitive and relevant. All of these “unknown” variables force businesses to go back to the very roots of technology. That is, to lean on “logic” to make good, solid decisions.
Here is a classic example of “logic” in the context of compute…
The “Logic” of the OSI model
Those of us who grew up on “Depeche Mode” and “The Police” will remember the OSI model. This table was drilled into my head and became second nature to me. Because I knew this logical concept, it made me a good troubleshooter. It was, and still is, the basis of everything I needed to know about IT. It was also what I needed to know to pass the A+ and numerous other certifications.
But from a logic perspective, we have to examine this model a little closer. Within each layer, different technologies are classified accordingly. Take the “Physical” layer, for example: old hubs and repeaters from the ’80s have given way to modern core switches and structured cabling. In the “Network” layer, old Novell IPX has been replaced with IPv6. My point here is that the OSI model does not go out of style, no matter how computing changes or what hardware we use. What has changed is who takes ownership and management of the technologies in each layer. The Amazons and Googles of the world are taking over management and ownership of these layers so we don’t have to.
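For readers who want the layers at their fingertips, the model can be sketched as a simple lookup table. The example technologies paired with each layer below are illustrative, not an exhaustive mapping:

```python
# The seven OSI layers, bottom (1) to top (7). The example
# technologies are illustrative pairings, not an exhaustive mapping.
OSI_LAYERS = {
    1: ("Physical", "cabling, switch ports"),
    2: ("Data Link", "Ethernet MAC"),
    3: ("Network", "Novell IPX, now IPv6"),
    4: ("Transport", "TCP/UDP"),
    5: ("Session", "session setup/teardown"),
    6: ("Presentation", "encoding, encryption"),
    7: ("Application", "HTTP, hosted apps"),
}

def layer_name(number):
    """Return the OSI layer name for a layer number (1-7)."""
    return OSI_LAYERS[number][0]

print(layer_name(3))  # Network
```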
With this trend, today’s computing is now focused on the “Presentation” and “Application” layers. Apps and hosted apps on new hardware and mobile platforms are driving this shift. It is truly revolutionary because, from a business perspective, compute can become a subscription model for an elastic data center without the capital expenditure. It’s a win-win, though not so much for tech folks and hardware vendors.
The abstraction of the layers below reduces the need for technical expertise. This does not make those layers any less important; they are simply managed for us. It takes all the fun out of having a datacenter to run… essentially, it is no longer cool or interesting. Bluntly speaking, nobody cares anymore. Cloud computing, SaaS, XaaS and SDx (software-defined “x”) services let companies off-load the entire stack and present information without the headaches.
What does this shift mean for technical folks like us? It means that we too have to move up the stack and live in the “Application” and “Presentation” layers. We can either reinvent ourselves as application experts with experience in infrastructure, or go the way of the ’69 Mustang as a distant memory. With the future becoming more and more “Cloudy”, being “Logical” is the best solution.
Comments Off on The “Logical” Solution
Posted in: Backup, Author: yobitech (July 24, 2017)
It is no surprise that SSD or flash drives are now mainstream. The rotating hard drive is still around, but for specific use cases… mostly “cheap and deep” storage, video streaming and archiving. Even with 10TB densities, these drives are destined for the junkyard at some point.
Flash storage is approaching 32TB+ later this year and the cost is coming down fast. Do you remember when a 200GB flash drive was about $30k? It wasn’t that long ago. But with flash storage growing so quickly, what does that mean for performance? Durability? Manageability?
These are real-world challenges. While flash storage vendors are mostly concerned with making drives bigger, we as consumers cannot assume they are all the same. They are not… As the drives get bigger, the performance of flash drives starts to level out. The good thing for us is that when we design storage solutions with flash, the bottleneck is no longer in the SAN, so we too can take the emphasis off of performance. The performance conversation has become the “uncool” thing; nobody wants to have it anymore. The bottleneck has shifted to the application, the people, the process. That’s right! The bottleneck now is the business. With networking at 10Gb/40Gb and servers so dense and powerful, the business can finally focus on the things that matter to it. This is why we see such a big shift into the cloud, application development and IoT. Flash is the enabler for businesses to FINALLY start to focus on business and not infrastructure.
So, back to the technical discussion here…
Durability is less of an issue with large flash drives because of the abundant number of cells available for writes and re-writes. The predictability of flash failures also reduces the management burden that unpredictable legacy storage used to impose.
Manageability is easier with SDS (software-defined storage) and hyper-converged systems. These systems handle faults much better through their distributed design and the software’s elasticity, achieving uptime that exceeds five nines.
So as flash storage grows, it becomes less exciting. Flash is paving the way to a new kind of storage: NVMe.
Comments Off on All Flash is Not the Same
Posted in: General, Author: yobitech (January 9, 2017)
Ever throw a rock into a still pond and watch the ripples from the impact? It is fascinating to see the effects of an object entering a placid body of water. It is interesting how the pond will always gravitate towards the restful and peaceful state it once was. This is called the “Equilibrium” or a “state of balance”. Equilibrium is often used in the context of economics, chemistry and physics, but it can also be applied to the technology industry.
As I have written in past blogs about disruptive technologies, these disruptions are the things we live for as techno-junkies. The disruptive part of solid state drives is the affordability factor. They are growing in capacity, and the cost per GB is rivaling the traditional spinning disk industry. Adoption by the masses will determine the disruption: the faster the adoption, the bigger the disruption. If you look at all of the storage vendors out there, all-flash (SSD) arrays are overtaking sales of traditional spinning and hybrid systems. New industries and use cases have been enabled by this disruption, and its rippling effect will elevate and innovate new industries as the market gravitates towards the new equilibrium.
Take, for example, the invention of the car. The car was first invented as a basic mode of transportation. As time progressed, it was transformed into other vehicles with different applications. Trucks and goods transportation emerged; then convertible and racing versions spawned an entirely new lifestyle. The applications are exciting and innovative. The SSD industry is now in its prime, creating new applications and enabling an entirely new era. Here are some examples:
Big Data is data mining on steroids. It is a term used for the ability to ingest large amounts of data to index and analyze, with the flexibility to manipulate it at will. The key here is the speed at which that manipulation can happen. New applications and services that were not available before are now possible. Some examples of these services are identity theft detection, fraud analysis, bio research and national security.
SSDs have enabled a whole new IoT industry: IoT v2. Things like smart thermostats, robotics, automated vacuum cleaners, smart buildings and 4K cameras are possible thanks to the small footprint of the solid state storage these devices utilize. In the wrong hands, this new breed of technology can also do much damage. Thieves have found ways to use IoT devices to skim and survey areas, collecting information to better plan hacking attacks and to disrupt lives.
Having SSD storage allows us to go mobile, not just with our smartphones but in many different ways. The military application of an entire mobile datacenter in a Hummer or Jeep is a reality. Real-time battlefield data collection and servers that live in backpacks give an advantage in warfare. Disaster-recovery tractor-trailer datacenters are enabled and optimized, especially in a world growing more and more volatile. Drones, robotics and vehicles are adding features and abilities. Fewer dependencies on a central office enable a fleet of devices that are independent yet coordinate in a “swarm-like” approach to achieve objectives faster and more effectively.
The last chapter for spinning disk
I have written in past posts that SSDs will someday render the traditional spinning disk industry obsolete. That day is fast approaching, as SSDs have been eroding spinning disk sales for a while now. 15k and 10k drives are already phased out, and the 7.2k drives still have some life left. The burning question is: when will the 7.2k drives finally go away?
With SSD capacities over 16TB in a 2.5” form factor available today and the 32TB drive on the horizon, the extinction of the 7.2k drive is soon to come. The 7.2k drive is a great drive for capacity, but the problem is that RAID rebuild time is horrendous. A typical 2TB drive takes significant time to rebuild, and the window of exposure to data loss is greatly increased, even with RAID 6 (see my “Perfect Storm” blog for more information). So even as capacity increases, the rebuild factor alone makes moving to high-capacity SSD drives attractive.
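A back-of-the-envelope sketch shows why rebuild windows are so painful for large 7.2k drives. The sustained rebuild rate below is an assumption for illustration, since real rebuilds compete with production I/O:

```python
def rebuild_hours(capacity_tb, rate_mib_s):
    """Estimate hours to rebuild a drive of `capacity_tb` terabytes
    at a sustained effective rebuild rate of `rate_mib_s` MiB/s
    (an assumed figure: real rebuilds compete with production I/O)."""
    total_bytes = capacity_tb * 10**12
    seconds = total_bytes / (rate_mib_s * 2**20)
    return seconds / 3600

# A 2TB 7.2k drive at an assumed 30 MiB/s effective rate:
print(round(rebuild_hours(2, 30), 1))   # about 17.7 hours
# An 8TB drive at the same rate takes nearly 3 days:
print(round(rebuild_hours(8, 30), 1))   # about 70.6 hours
```

The whole time a rebuild runs, the array is one (or, with RAID 6, two) failures away from data loss, which is the exposure window the paragraph above describes.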
Comments Off on Finding Equilibrium
Posted in: Backup, Author: yobitech (February 26, 2016)
“Penny Wise and Pound Foolish” is one of my favorite lines. My top personal motto in life is, “It’s the cheap man that pays the most”. Time and time again, I have seen people opt to save a few mere “pennies” while paying a monumental price in the long term. Whatever the reason, doing due diligence and going the extra mile for good, objective research can help in making wise choices, particularly when it comes to technology. Technology changes so fast, sometimes nullifying current technologies, that it is important to understand the “how” and the “why”, not just the “here” and “now”. Today, I am going to revisit a topic that has been talked about over and over again, “Data Management”, in light of the current state of technology.
The New Data Management Landscape
Although I have been writing about this topic for many years, this blog addresses some of the new challenges most IT managers face. With the proliferation of data from IoT, Big Data, Data Warehousing, etc., together with data security and the dynamic implications of governance and compliance, there are far too many variables in play to effectively and efficiently “manage” data. Implementing a business mandate can have far-reaching implications for data management. Questions arise like: What is the balance between storing data and securing data? What is the cost of going too far one way versus the other? How can I communicate these implications and cost factors to management?
False Security: The Bits and Bytes vs. ROIs and TCOs
One of the biggest challenges in IT is communicating to the people who “sign the checks” the need to spend money (in most cases, more money) on technology; money needed to effectively and successfully implement the mandates put forth by management. Unfortunately, this is not an easy task, mastered by only a few in the industry, often highly regarded professionals working in the consulting field. People who are good at understanding the “bits and bytes” are usually illiterate on the business side of things. The business side understands the language of “Return on Investment” (ROI) and “Total Cost of Ownership” (TCO) and couldn’t care less what a bit or byte is. The end result: many systems out there are poorly managed, with management having no idea about it. The disconnect is real, and companies do business as usual every day until a crisis arises.
IT managers, directors and CIOs/CTOs need to be acutely aware of their current systems and technologies while remaining on the cutting edge of new ones, both to run day-to-day operations and to support new business initiatives. The companies that mitigate this gap well are the ones with good IT management and good communication with the business side of the company. This is also directly related to how much is spent on IT. It is a costly infrastructure, but these are the systems that can meet the demands of management and compliance.
The IT Tightrope
Understanding current and new technologies is key to an effective data management strategy. Money spent on technology may be rendered useless or, worse, may hinder the adoption of new technology needed to meet the demands of the business. It is a constant balancing act, because a solution today can be tomorrow’s problem.
Data deduplication is a mature technology and has been an effective way to tame the data beast. In a nutshell, it is an algorithm that scans data for duplication. When it sees duplicate data, it does not re-write that data but puts metadata in its place. In other words, the metadata is basically saying, “I have this data over there, so don’t rewrite it”. This happens across the entire volume(s) and is a great way to save storage capacity. But with data security at the top of most companies’ minds, data encryption is the weapon of choice today. Even where it is not, compliance mandates from governing agencies are forcing the hand to implement encryption. But how does encryption impact data management? Encryption basically takes data and randomizes it with an encryption key, and that randomization makes deduplication much less effective. This is a high-level generalization and there are solutions out there, but these considerations must be weighed when making encryption decisions. Additionally, encryption adds complexity to data management: without proper management of encryption keys, data can be rendered unusable.
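The dedup idea described above can be sketched with content hashes standing in for the metadata pointers. This is a toy model under simplifying assumptions; real systems chunk data into blocks, manage reference counts and handle hash collisions:

```python
import hashlib

def dedup_write(chunks):
    """Write a stream of chunks with deduplication: each unique
    chunk is stored once, keyed by its content hash; repeats are
    recorded only as metadata references to the stored copy."""
    store = {}   # content hash -> chunk bytes (written once)
    refs = []    # per-logical-chunk metadata: "my data lives there"
    for chunk in chunks:
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:
            store[digest] = chunk   # new data: actually write it
        refs.append(digest)         # duplicate: just point at it
    return store, refs

chunks = [b"hello", b"world", b"hello", b"hello"]
store, refs = dedup_write(chunks)
print(len(store), "unique chunks stored for", len(refs), "logical chunks")
# 2 unique chunks stored for 4 logical chunks
```

Encrypting each copy with a different key before this step would make every digest unique, which is exactly why encryption and deduplication are at odds.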
Back in the days of DOS, Norton Utilities had a great toolbox of utilities. One of them compressed data. I personally did not use it, as it was risky at best; it wasn’t a chance I wanted to take. Besides, my data was copied to either 5.25” or 3.5” floppies. Later, Windows came along with compression on volumes. I was not one to venture into that territory; I had enough challenges just running Windows normally. I have heard and seen horror stories with compressed volumes, from unrecoverable data to sluggish and unpredictable performance. The word on the street was that it just wasn’t a fully baked feature and was a “use at your own risk” kind of tool. Backup software also offered compression for backups, but backups were hard enough to do without compression; adding compression to backups just wasn’t done… period.
Aside from the PKZip application, compression had a bad rap until hardware compression came along. Hardware compression is basically an offload of the compression process to a dedicated embedded chipset. This was magic because there was no resource cost to the host CPU, similar to high-end gaming video cards, which use Graphics Processing Units (GPUs) to offload high-definition, extreme texture rendering at high refresh rates. Hardware compression became mainstream, and then compression technology went mostly unnoticed until recently. Compression is cool again, made popular as a feature for data on SSDs. Some IT directors I talked to drank the “Kool-Aid” on compression for SSDs. It only made sense when SSDs were small in capacity and expensive. Now that SSDs are breaking the 10TB-per-drive mark and closing in on spinning disk on cost per GB, compression on SSDs is not so cool anymore. It’s going the way of the “mullet” hairstyle, and we all know where that went… nowhere. Compression on SSDs is another layer of complexity that can be removed. Better yet, don’t buy into the gimmick of compression on SSDs; rather, look at the overall merits of the system and the support of the company offering the storage. What good is a storage system if the company is not going to be around?
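Whether compression pays off also depends entirely on the data: redundant data shrinks dramatically, while random-looking data (such as encrypted or already-compressed data) does not shrink at all. A quick sketch with Python’s zlib illustrates the difference:

```python
import os
import zlib

repetitive = b"ABCD" * 4096        # 16 KiB of highly redundant data
random_like = os.urandom(16384)    # 16 KiB resembling encrypted data

for label, data in (("repetitive", repetitive), ("random-like", random_like)):
    compressed = zlib.compress(data, level=6)
    print(f"{label}: {len(data)} -> {len(compressed)} bytes")

# The repetitive data shrinks to a tiny fraction of its size;
# the random-like data does not shrink (it even grows slightly
# from the compression-format overhead).
```

Hardware compression performs the same kind of transform, just in a dedicated chipset instead of the host CPU.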
What is your data worth?
With security breaches happening seemingly every other week, it is alarming to me that we still want to use computers for anything. I am a customer of some of the businesses affected by hackers, and I have about three different complimentary subscriptions to fraud prevention services because of these breaches. I just read an article about a medical facility in California that was hit with ransomware. With payment demands in the millions via Bitcoin, the business went back to pen and paper to operate. What is your data worth to you? With all the advances in data storage management and the ever-changing requirements from legal and governing agencies, an intimate knowledge of the business and the data infrastructure is required to properly manage data; not just to keep the data, but to protect it from hackers and natural disasters alike.
Comments Off on Penny Wise and Pound Foolish
Posted in: General, Author: yobitech (February 15, 2016)
Having been in the technology industry for a long time, I have heard many acronyms, but some are really strange. Here are some of my favorites:
– POTS: Plain Old Telephone Service
– JBOD: Just a Bunch of Disks
– WYSIWYG: What You See Is What You Get
– VFAT: Virtual File Allocation Table
It wasn’t until recently, within the past year, that I came across another one: IoT (Internet of Things). “What the heck is that supposed to be?”, I said to myself, not knowing how relevant IoT was and is becoming. In fact, IoT is one of the fastest growing sectors in technology today, fueling the exponential growth of data.
So what exactly is IoT? IoT is defined as “a group or a network of physical objects, devices, vehicles and buildings embedded with electronics, software, sensors and connectivity in which data is exchanged and collected”. IoT is a pretty broad term, but it is a necessary category because we have moved beyond the typical desktop and laptop for computing. We have unhooked ourselves from our desks with WiFi/Bluetooth and gone mobile with cellular broadband, giving birth to the mobile workforce.
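At its core, the data being “exchanged and collected” is usually just a stream of small, structured messages. A hypothetical telemetry payload, as a smart thermostat might publish it (the field names here are invented for illustration):

```python
import json
import time

def sensor_reading(device_id, temperature_c):
    """Build a minimal telemetry message as a smart thermostat
    might publish it (field names are invented for illustration)."""
    return json.dumps({
        "device": device_id,
        "ts": int(time.time()),    # when the reading was taken
        "temp_c": temperature_c,   # the sensed value itself
    })

print(sensor_reading("thermostat-42", 21.5))
```

Multiply a message like this by millions of devices reporting every few seconds, and the data-growth numbers above start to make sense.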
With the ability to make devices smaller and have them communicate wirelessly, new devices can be installed and embedded in places where computers could never go before. GoPro cameras helped pioneer the IoT category. Mounted on the heads of extreme athletes, they give us an HD, first-person POV of jet-propelled wingsuit pilots flying through holes in rock formations, or of an Olympic skier in a high-speed downhill run. We are able to see, analyze and document the natural, the candid, the “real-time” events as they happen.
The iPhone or Android device in your hand is a massive producer of IoT data. These devices have location transceivers, gyroscopes and sensors embedded in them. Apps like Waze and Swarm collect data from us in different ways and for different purposes. Waze uses location services and “crowdsourcing” to bring us valuable information, like real-time traffic jams, to best route us to our destination and to locate and validate reports of police and road hazards. The Swarm app lets us “check in” to restaurants and other establishments to interactively improve the consumer experience; we can offer advice, post reviews or read reviews instantaneously. For example, I once walked down a busy street and stopped in front of a restaurant to read the menu; Swarm detected my stop and sent me reviews and special offers for the restaurant! I know IoT brings up privacy concerns, and I am concerned as well, but we cannot stop progress. I admit that I enjoy all the benefits, but I was creeped out at first because my smartphone felt a little too smart for me.
The Raspberry Pi
IoT goes beyond our smartphones and GoPros. I just picked up a bunch of Raspberry Pis. If you are not familiar with them, they are actually quite amazing. What started out as a DIY science project kit, sort of like an updated version of the Radio Shack ham radio transistor kits of the 1960s, the Raspberry Pi is a palm-sized, fully functioning computer. It includes a 1/8” headphone jack, an HDMI port, 4 USB ports, an SD card slot and a power connector, putting IoT into the hands of enthusiasts and techies like myself. With the Open Source community and YouTube, new projects are posted virtually every day. Things like robotic toys, home alarm systems, MAME arcade machines (old-school arcade games through emulation), video streaming boxes and many more bring new meaning to DIY projects.
How much is too much?
Finally, IoT from an embedded standpoint is the most exciting, but also the most frightening. Gone are the days of being able to buy a rear-wheel-drive, stick-shift car without traction control, push-button start and solenoid controls. Cars operated by pulleys and cables are becoming a thing of the past. “Drive-by-wire” basically means a modern car that has virtually removed the driver from the road, abstracting the real world with computerized controls sensing the driving experience in real time. Braking and acceleration pedals have been replaced with switches and solenoids. Levers and pulleys for A/C and heat are replaced with computerized “climate controls”. Anti-lock brakes, traction control, active steering and economy mode pacify the auto enthusiast. And although these things are luxurious and “cool”, they also increase the failure rate by adding complexity. I own a 2013 import, and it started shifting erratically a few months ago. I had the car towed to the dealer and found out that the computer sensor controlling the transmission had malfunctioned. While I was in the tow truck, I asked the driver if he had seen an increase in these types of problems. He said, “Yes, most of my tows are BMWs, Mercedes, Maseratis and other highly teched-out vehicles.”
Squeezing Water from a Rock
IoT is everywhere and is going to be even more prevalent in the next few years, from improving the way we perform through the collection and analysis of data to enhancing our entertainment with virtual reality. The possibilities are endless, but only if we can analyze all of this data, which brings me to storing it. We definitely have the compute power and the RAM, but how about storage? We have the capacity in rotating disk, but with 3D NAND and/or TLC SSD developments, capacities upwards of 10TB per drive will be within consumers’ reach in late 2016. High-capacity SSDs will enable IoT to produce amazing advancements across industries, and even new industries will come from IoT.
Comments Off on The “Internet of Things” explosion
Posted in: General, Author: yobitech (October 8, 2015)
If you have been keeping up with the storage market lately, you will notice a considerable drop in prices for SSDs. It has been frustrating that over the past 4 to 5 years there has not been much change in SSD capacities and prices, until now. The TLC (Triple-Level Cell) SSDs now available are the game-changer we have been waiting for. With TLC capacities at almost 3TB per drive and projected to approach 10TB per drive in another year, what does that mean for the rotational disk?
That’s a good question, but there is no hard answer yet. As you know, the technology industry can change on a dime. The TLC drive market is the answer to the evolution of hard drives as a whole. It is economical because of the high capacity, and it is energy efficient as there are no moving parts. Finally, the MTBF (Mean Time Between Failures) is pretty good; reliability concerns were a factor in the slow adoption of SSDs in the enterprise.
MTBF is always a scary thing, as it relates to the life expectancy of a hard drive. If you recall, I blogged some time ago about the “Perfect Storm” effect, where hard drives in a SAN were deployed in series and manufactured in batches at the same time, so it is not uncommon to see multiple drive failures in a SAN that can result in data loss. With rotational disks at 8TB per 7.2k drive, it can conceivably take days, even weeks, to rebuild a single drive. For rotational disk, I think that is a big risk to take. With TLC SSDs around 10TB, there is not only a cost and power-efficiency advantage but also lower risk when it comes to RAID rebuild time: rebuilding a 10TB SSD can take a day or two (sometimes hours, depending on how much data is on the drive). The effective reliability is higher because SSDs fail predictively, logically marking worn cells dead (barring other failures); this is the normal wear-and-tear process of the drive. Cells have a limited number of writes and re-writes before they are marked dead. In smaller drives, the rate of writes per cell is much higher because there are fewer cells; with the large number of cells in TLC SSDs, each cell is written to less often than it would be in a smaller drive. So moving to a larger capacity increases the durability of the drive. The reverse is true for rotational drives, which become less reliable as capacity increases.
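The durability argument can be put into rough numbers. Assuming ideal wear-leveling (a simplification), the same lifetime write volume spread across a bigger drive means fewer overwrites per cell:

```python
def avg_overwrites_per_cell(lifetime_writes_tb, capacity_tb):
    """Average number of full overwrites each cell absorbs over the
    drive's life, assuming ideal wear-leveling spreads writes evenly
    across all cells (a simplifying assumption)."""
    return lifetime_writes_tb / capacity_tb

# The same 400TB of lifetime writes wears a small drive far faster:
print(avg_overwrites_per_cell(400, 1))    # 400.0 overwrites on a 1TB drive
print(avg_overwrites_per_cell(400, 10))   # 40.0 overwrites on a 10TB drive
```

Since each cell tolerates only a fixed number of program/erase cycles, a tenfold capacity increase stretches the same write budget roughly ten times further.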
So what does it mean for the rotational disk?
Here are some trends that are happening and where I think it will go.
1. 15k drives will still be available, but in limited capacities, because of legacy support. Many arrays out there are still running 15k drives. There are even 146GB drives running strong that will need additional capacity due to growth and/or replacement of failed drives. This will be a staple in the rotational market for a while.
2. SSDs are not the answer for everything. Although we all may think so, SSDs are not the best fit for all workloads and applications. They are a poor value for streaming video and large-block, sequential data flows. This is why the 7.2k high-capacity drives will still thrive and be around for a while.
3. The death of the 10k drive. Yes, I am calling it. I believe the 10k drive will finally rest in peace. There is no need for it, nor will there be demand for it. So long, 10k drives…
Like anything in technology, this is speculation from an educated and experienced point of view. It can all change at any time, but I hope this was insightful.
Comments Off on The End of the Rotational Disk? The Next Chapter
Posted in: Cloud, Author: yobitech (July 13, 2015)
The Internet made mass data accessibility possible. While computers were storing MBs of data locally on internal hard drives, GB hard drives were available but priced only for the enterprise; I remember seeing a 1GB hard drive for over $1,000. Now we throw 32GB USB drives around like paper clips, and we are moving past 8TB 7200 RPM drives and mass systems storing multiple PBs of data. With all this data, it is easy to be overwhelmed. This is known as information overload: when the sheer amount of available information makes the relevant information unusable, and we can’t tell usable data from unusable data.
In recent years, multi-core processing combined with multi-socket servers has made HPC (High Performance Computing) possible. HPC, or grid computing, is the linking of these highly dense compute servers (local or geographically dispersed) into a giant super-computing system. With this type of system, computations that would traditionally take days or weeks are done in minutes. These gigantic systems laid the foundation for companies to build smaller-scale HPC systems for in-house R&D (Research and Development).
This concept of collecting data in a giant repository was first called data mining. Data mining is the same concept used by the Googles and Yahoos of the world, who pioneered it as a way to navigate the ever-growing world of information available on the Internet. Google came out with an ingenious lightweight program called “Google Desktop”, a mini version of data mining for the home computer. I personally think it was one of the best tools I have ever used on my computer. It was discontinued some time later for reasons I am not aware of.
The advancements in processing and compute made data mining possible, but for many companies it was an expensive proposition, limited by the transfer speeds of the data on the storage. This is where the story changes. Today, with SSD pricing and density shifting thanks to better error correction, fault predictability and manufacturing advancements, storage has finally caught up.
The ability of servers to quickly access data on SSD storage to feed HPC systems opens up many opportunities that were not possible before. This is called “Big Data”. Companies can now use Big Data to take advantage of mined data: to look for trends, to correlate, and to analyze data quickly in order to make strategic business decisions and seize market opportunities. For example, a telecommunications company can mine its data for dialing patterns that are abnormal for its subscribers; the faster fraud is identified, the smaller the financial loss. Another example is a retail company looking to maximize profits by stocking its shelves with “hot” ticket items, achieved by analyzing sold items and trending crowdsourced data from different information outlets.
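The fraud example can be sketched with a toy baseline-deviation check. Real systems use far richer models, but the principle of flagging usage that strays from a subscriber’s baseline is the same:

```python
from statistics import mean, stdev

def flag_anomalies(daily_call_minutes, threshold=2.0):
    """Return indexes of days whose usage deviates from the
    subscriber's baseline by more than `threshold` standard
    deviations: a toy stand-in for real fraud-detection models."""
    mu = mean(daily_call_minutes)
    sigma = stdev(daily_call_minutes)
    return [day for day, minutes in enumerate(daily_call_minutes)
            if sigma and abs(minutes - mu) / sigma > threshold]

usage = [30, 32, 28, 31, 29, 30, 400]   # the last day looks suspicious
print(flag_anomalies(usage))            # [6]
```

The speed argument above is exactly this computation run continuously over millions of subscribers, which is where fast storage feeding HPC earns its keep.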
SSDs are enabling the data-mining/Big Data world for companies that are becoming leaner and more laser-focused on strategic decisions. In turn, these HPC systems pay for themselves through the overall savings and profitability that Big Data benefits bring. The opportunities are endless, as Big Data has extended into the cloud. With collaboration combined with Open Source software, the end results are astounding: we are producing cures for diseases, securing financial institutions and finding inventions through innovation and trends. We are living in very exciting times.
Comments Off on Big Data
Posted in: General, Author: yobitech (February 16, 2015)
I have been talking about SSDs and rotating hard drives for many years now. SSDs’ takeover of the mainstream hard drive space has been inhibited by the difficulty of producing them at an affordable price point (compared with rotating disk). SSDs have gone through many iterations, from SLC (single-level cell) to MLC (multi-level cell), eMLC (enterprise multi-level cell) and now TLC (triple-level cell).
If you haven’t noticed, the consumer market is unloading 128GB drives at sub-$50 and 256GB SSDs at under $100. This steep drop in price is an indication of the release of the next wave of SSD drives. SSDs are poised to go mainstream because of TLC. SSDs in general are still, and will remain, expensive and incredibly complex to manufacture, but due to market demands the TLC drive is now positioned to take the market by storm. So what has changed? Has the manufacturing process changed? Yes, but not much. The biggest change is the market strategy of the TLC SSD: the drive is manufactured to sacrifice durability in exchange for density, to the point where we will very soon see TLC drives with densities of 2TB, 3TB, even 10TB+. Drive technologies will leverage better failure-prediction algorithms and other “fault-tolerating” technologies to compensate for the lower endurance.
So what does this mean for the rotating disk? Is it possible that the rotating drive will disappear altogether? I don’t think so. I predict the availability of TLC drives will virtually eliminate 10k and 15k drives, and then, over a much longer period, the 7.2k drive will go. This is because the cost per GB is still a great value on 7.2k drives, and their densities will grow in tandem with the TLC SSDs. There is also a comfort level in having magnetic media around holding data (for old-schoolers like me).
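The cost-per-GB argument is easy to sanity-check. A minimal sketch, using made-up street prices rather than real quotes:

```python
# Dollar-per-GB comparison. Prices below are illustrative, not real quotes.
def cost_per_gb(price_usd, capacity_gb):
    return price_usd / capacity_gb

hdd_7k2 = cost_per_gb(120.0, 4000)   # assumed 4TB 7.2k SATA drive
tlc_ssd = cost_per_gb(100.0, 256)    # assumed 256GB TLC SSD

print(f"7.2k HDD: ${hdd_7k2:.3f}/GB, TLC SSD: ${tlc_ssd:.3f}/GB")
```

With numbers anywhere in this ballpark, the spinning disk is still an order of magnitude cheaper per gigabyte, which is exactly why it survives at the capacity tier.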
It’s been a long wait, but it is exciting to finally see SSDs making their way into the mainstream.
Comments Off on It’s finally here!
Posted in: Backup, Author: yobitech (October 13, 2014)
Americans are fascinated by brands. Brand loyalty is big, especially when “status” is tied to a brand. When I was in high school back in the 80s, my friends and I would work diligently to save our paychecks to buy the “Guess” jeans, “Zodiac” shoes and “Ton Sur Ton” shirts because that was the “cool” look. I put in many hours working the stockroom at the supermarket and delivering legal documents as a messenger. In 1989, Toyota and Nissan entered the luxury market as well, with Lexus and Infiniti respectively, after the success of Honda’s upscale luxury performance brand, Acura, which started in 1986. Aside from the marketing of brands, how much value (beyond the status) does a premium brand bring? Would I buy a $60,000 Korean Hyundai Genesis over the comparable BMW 5 Series?
For most consumers in the Enterprise Computing space, brand loyalty was a big thing. IBM and EMC led the way in the datacenter for many years. The motto “You’ll never get fired for buying IBM” captured the perception, and as the saying goes, “Perception is Reality” rang true for many CTOs and CIOs. But with the economy ever tightening and IT treated as an “expense” line item, brand loyalty had to take a back seat. Technology startups with innovative and disruptive products paved the way to looking beyond the brand.
I recently read an article about hard drive reliability published by a cloud storage company called BackBlaze. The company is a major player in safeguarding user data and touts over 100 petabytes of data stored on over 34,880 disk drives. That’s a lot of drives. With that many drives in production, it is quite easy to track reliability by brand, and that’s exactly what they did. The article can be found in the link below.
BackBlaze had done an earlier study back in January of 2014, and this article contained updated information on the brand reliability trends. Not surprisingly, the reliability data remained relatively the same. What the article did point out was that the Seagate 3TB drives’ failure rate climbed from 9% to 15%, and the Western Digital 3TB drives jumped from 4% to 7%.
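For reference, BackBlaze reports these figures as an annualized failure rate. A simple version of that calculation looks like this; the counts below are invented for illustration:

```python
# Annualized failure rate (AFR), the metric behind percentages like "9%":
# AFR = failures / (drive_days / 365). Numbers here are made up.

def annualized_failure_rate(failures, drive_days):
    drive_years = drive_days / 365.0
    return failures / drive_years

# e.g. 120 failures across 1,000 drives each running a full year:
afr = annualized_failure_rate(120, 1000 * 365)
print(f"AFR: {afr:.1%}")   # AFR: 12.0%
```

Counting drive-days rather than drives matters because fleets grow mid-year; a drive installed in November shouldn't count the same as one that ran all twelve months.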
Company or “branding” plays a role as well (at least with hard drives). Popular brands like Seagate and Western Digital pave the way: they own the low-end hard drive space and sell lots of drives. Hitachi is more expensive and sells relatively fewer drives than Seagate. While Seagate and Western Digital may be more popular, the manufacturing/assembly process and the sourcing of parts are an important part of the picture. While some hard drive manufacturers market their products to the masses, others target a niche. Manufacturing costs and processes vary from vendor to vendor: some may cut costs by assembling drives where labor is cheapest, and some may manufacture drives in unfavorable climate conditions. These are just some of the factors that can reduce the MTBF (Mean Time Between Failures) rating of a drive. While brand loyalty with hard drives may lean towards Seagate and Western Digital, popularity here does not always translate into reliability. I personally like Hitachi drives more, as I have had better longevity with them than with Seagate, Western Digital, Maxtor, IBM and Micropolis.
I remember using Seagate RLL hard drives in the 90s, and yes, I had failed drives too, but to be fair, Seagate has been around for many years and I have had many success stories as well. Kudos to Seagate for weathering all these years of economic hardship and manufacturing challenges, from typhoons to parts shortages, while providing affordable storage. Even with higher failure rates, failures today are easily mitigated by RAID technology and solid backups. So it really depends on what you are looking for in a drive.
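To see why RAID takes the sting out of these failure rates, consider a quick probability sketch. It assumes drives fail independently and ignores rebuild windows, so it is optimistic, but it shows the idea:

```python
# Why RAID mitigates drive failures: a back-of-the-envelope sketch.
# Assumes independent failures (optimistic: real drives share batches
# and environment) and ignores the rebuild window after a failure.

def mirror_loss_probability(p):
    """Chance a 2-drive RAID 1 mirror loses both copies in one year."""
    return p * p

single = 0.09   # a 9% annual failure rate, like the Seagate figure above
print(single, mirror_loss_probability(single))   # 0.09 vs roughly 0.008
```

A 9% yearly risk on one drive becomes well under 1% on a mirrored pair, which is why a "less reliable but cheaper" drive can still be a rational buy behind RAID and good backups.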
Brand loyalty is a personal thing but make sure you know what you are buying besides just a name.
Thanks to BackBlaze for the interesting and insightful study.
Comments Off on Brand Loyalty
Posted in: Cloud, Author: yobitech (September 23, 2014)
The Software Defined World
Before computers were around, things were typically done with pencil and paper. Since its introduction, the computer has revolutionized the world, from the way we do business to how we entertain ourselves. It ranks up there in my book among the greatest inventions ever, alongside the automobile, the microwave and the air conditioner.
From a business standpoint, computers gave companies an edge. The company that leverages technology best has the greatest competitive edge in its industry. In a similar fashion, on a personal level, the people with the newest and coolest toys are the ones who can take advantage of the latest software and apps, giving them the best efficiency in getting their jobs done while attaining bragging rights in the process.
The Computer Era has seen some milestones. I have listed some highlights below.
The PC (Personal Computers)
The mainframe was the dominant computing platform, used mainly by companies and housed in huge air-conditioned rooms because the machines themselves were huge. Mainframes were not personal; no one at home had the room or the money for them. Aside from businesses, access to the mainframe was mainly in specialized schools, colleges, libraries and/or government agencies.
The industry was disrupted by a few entries into home computing.
TANDY TRS-80 Model 1
Tandy Corporation made the TRS-80 Model 1, a computer powered by the BASIC language. It had marginal success, with most of its consumers being schools. There wasn’t really much software for it, but it was a great educational tool for learning a programming language.
TIMEX SINCLAIR
The Timex Sinclair was another attempt, but the underpowered device with its flat membrane keyboard was very limited. It had a black-and-white screen and an audio tape player as the storage device. There was more software available for it, but it never took off.
COMMODORE VIC 20/ COMMODORE 64
The Commodore VIC 20 and Commodore 64 were a different kind of computer. They had software titles available at launch, displayed in color and offered “sprite” graphics that allowed for detailed, smooth color animations. These computers were a hit, as they also offered an affordable 5.25” floppy disk drive as the storage device.
APPLE AND IBM
Apple and IBM paved the way into the home not just because they had better hardware, but because there was access to software such as word processing, spreadsheets and databases (and not just games). This was the entry of the Personal Computer. There were differences: IBM was not user-friendly and largely text-based, while Apple took a graphical route, offering a mouse and a menu-driven operating system that made the computer “friendly”.
Commoditization for Progress
Now that the home computer age had begun, the commoditizing of the industry started shortly thereafter. With vendors like Gateway, Lenovo, HP and Dell, these computers became cheap and plentiful. With computers so affordable and plentiful, HPC (High-Performance Computing), or “grid” computing, became possible. HPC/grid computing is basically the use of two or more computers in a logical group that share resources and act as one large computing platform. Trading firms, hedge funds, geological survey and genetic/bio research companies are just some of the places that use HPC/grid computing. The WLCG (Worldwide LHC Computing Grid) is a global project that joins more than 170 computing centers in 40 countries to provide resources to store, distribute and analyze the 30+ petabytes of data generated by the Large Hadron Collider (LHC) at CERN on the Franco-Swiss border. As you can see, commoditization enables new services and sometimes “disruptive” technologies (e.g. HPC/grid). Let’s take a look at other disruptive developments…
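Grid computing can be sketched in miniature on a single machine: split one job into chunks and hand each chunk to a separate worker standing in for a node. A toy Python example (the worker pool here is a stand-in for real networked nodes):

```python
# Grid/HPC computing in miniature: one job split across a pool of workers,
# each worker playing the role of a "node" contributing its share.
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    """The work a single 'node' performs on its slice of the range."""
    lo, hi = chunk
    return sum(i * i for i in range(lo, hi))

# Divide the range [0, 1_000_000) among 4 "nodes".
n, nodes = 1_000_000, 4
step = n // nodes
chunks = [(i * step, (i + 1) * step) for i in range(nodes)]

with ThreadPoolExecutor(max_workers=nodes) as pool:
    total = sum(pool.map(partial_sum, chunks))

print(total)
```

Real grids like the WLCG do the same thing across data centers instead of threads: partition the work, ship each piece to whatever node has spare capacity, and combine the results.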
The Virtual Reality Age
With PCs and home computers entering the home, the world of “virtual reality” was the next wave. Multimedia-capable computers made it possible to dream. Fictional movies like Tron and The Matrix gave us a glimpse into the world of virtual reality. Although virtual reality has had limited success over the years, it wasn’t a disruptive technology until recently. With 3D movies making a comeback, 3D TVs widely available and glass cameras, virtual reality is more HD and is still being defined and redefined today.
The Internet
No need to go into extensive detail about the Internet. We all know it is a major disruption in technology because we don’t know what to do with ourselves when it is down. I’ll just recap its inception: it started as a government/military application (mostly text based) for many years, and its adoption for public and consumer use was established in the early 90s. The demand for computers with better processing and graphics capabilities was pushed further as the World Wide Web and gaming applications became more popular. With better processing came better gaming and better web technologies; Java, Flash and 3D rendering made online games like Call of Duty and Battlefield possible.
BYOD (Bring Your Own Device)
The latest disruptive trend is BYOD, as the lines between work and personal computing devices are blurred. Most people (not me) prefer to carry one device. Since the smartphone revolutionized three industries at once (Internet, phone, music player), it was only natural that it would become the device of choice. With high-definition cameras integrated into phones, we are finding less and less reason to carry a separate device just to take pictures with, or a separate device for anything; as the saying goes, “There’s an app for that.” With the commoditizing of cameras as well, Nokia has a phone with a 41-megapixel camera built in. With all that power in the camera, other challenges arise, like the bandwidth and storage needed to keep and share such huge pictures and videos.
The Software Defined Generation
There is a new trend that has been around for a while but is finally going to take off. This disruptive technology is called Software Defined “X”, the “X” being whatever the industry is. One example is 3D printing. It was science fiction at one time to think something up and bring it into reality, but now you can simply define it in software (CAD/CAM) and the printer will make it. What makes 3D printing possible is the cost of the printer and of the printing materials, thanks to commoditization. It wasn’t that we lacked the technology to do this earlier; it was just cost-prohibitive. Affordability has brought 3D printing into our homes.
Software Defined Storage
Let’s take a look at Software Defined Storage. Storage first started out as a floppy disk or a hard drive. It then evolved into a logical storage device consisting of multiple drives bound together in a RAID set for data protection. That concept of RAID was then scaled into the SANs that today store most of our business-critical information. RAID has been commoditized and is now a building block for Software Defined Storage. Software Defined Storage is not a new concept; it just was not cost-effective. Now that high-performance networking and processing have become affordable, Software Defined Storage is a reality.
Software Defined Storage technology takes the RAID concept and virtualizes small blocks of storage nodes (appliances, or mini SANs), grouping them together as a logical SAN. Because this Software Defined Storage is made up of many small components, those components can be anywhere in the architecture (including the cloud). With networking moving to 10Gb and 40Gb speeds, Fibre Channel at 16Gb, WANs (Wide Area Networks) running at LAN speeds, processors reaching 12+ cores per physical processor and ever faster memory, Software Defined Storage can be virtually anywhere. It can even be stretched over a campus, or between cities or countries.
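The RAID-style protection that lets these distributed nodes act as one reliable SAN can be illustrated with simple XOR parity: lose any one block and the surviving blocks plus parity rebuild it. A toy sketch (node names and data are made up):

```python
# The RAID building block behind software-defined storage: XOR parity.
# XOR all data blocks together to get parity; XOR the survivors with
# parity to reconstruct any single lost block.

def xor_blocks(blocks):
    """XOR equal-length byte strings together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Stripe data across three "nodes" plus one parity node.
data_nodes = [b"node", b"data", b"here"]
parity = xor_blocks(data_nodes)

# Simulate losing node 1, then rebuilding it from survivors + parity.
rebuilt = xor_blocks([data_nodes[0], data_nodes[2], parity])
print(rebuilt)   # b'data'
```

Production systems layer erasure codes, replication and placement policies on top, but the core trick is the same: enough redundancy that any one component, wherever it sits, is expendable.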
In a world where everything is being commoditized, the “Software Defined” era is here.
Comments Off on The Software Defined World