Archive for the ‘SSD’ Category

Just when we thought spinning hard drives were dead, here comes another one, and this one actually has teeth: the 12TB helium-sealed hard drive. This is interesting because the helium drive concept is not new. I blogged about it years ago, but technical hurdles prevented the actual mass production of such a drive. It has finally arrived! With the SSD technologies and high capacities available today, where does this drive fit in the overall storage market?

Well, the current state of the market is interesting. Technology is spinning off in so many different directions it is hard to tell where things will land. There are also SSD shortages that are driving additional demand for large-capacity HDDs. Another driving force is the shift from a CapEx to an OpEx cost model, which is increasingly gaining popularity. Ultimately, this drive has a place.

People still want options

As much as we all love SSDs, people still love their HDDs. Familiarity and cost are the main reasons, but there is also something to be said about writing to magnetic media. Old-school folks like me like to know that the data is actually written on the platter. It still makes economic sense to go HDD as long as the capacity can keep up with SSDs. At the end of the day, people also want options. With SANs still supporting HDD and SSD cohabitation (hybrid arrays), HDD use will continue, and this new helium-sealed drive technology has breathed new life into the industry.

Pressure from the other side (SCM)
SSDs are now mainstream, but as we all know from history, there is always a newer technology coming to take a bite out of the storage market. In the compute space, DRAM is the fastest storage. With SSDs living between DRAM and HDDs, there is still a big gap to fill, and SCM (Storage Class Memory) is the next big thing intended to fill it. The technology is still taking shape, and you will be hearing more about it soon. New players are emerging in the storage market, so keep an eye out for them. It will be here soon.
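To put that gap in perspective, here is a rough sketch using order-of-magnitude access latencies. The figures are ballpark numbers commonly cited publicly, not any vendor's spec:

```python
# Illustrative (order-of-magnitude) access latencies for the storage
# hierarchy discussed above. Ballpark figures, not vendor datasheets.
LATENCY_NS = {
    "DRAM":                  100,         # ~100 ns
    "SCM (e.g. 3D XPoint)":  1_000,       # ~1 us, between DRAM and NAND
    "NAND SSD":              100_000,     # ~100 us
    "7.2k HDD":              10_000_000,  # ~10 ms seek + rotate
}

for tier, ns in LATENCY_NS.items():
    print(f"{tier:<22} ~{ns:>12,} ns  ({ns / LATENCY_NS['DRAM']:,.0f}x DRAM)")
```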

New industries and new market opportunities bring with them different kinds of requirements. Things like cryptocurrency mining and HPC on one end, and IoT and social media on the other, demand different classes of computing and storage. There will be a place for these nifty new drives. Long live the helium HDD!!!

It is no surprise that SSDs, or flash drives, are now mainstream. The rotating hard drive is still around, but only for specific use cases… mostly “cheap and deep” storage, video streaming and archiving. Even with 10TB densities, these drives are destined for the junkyard at some point.

Flash storage is approaching 32TB+ later this year, and the cost is coming down fast. Do you remember when a 200GB flash drive was about $30k? It wasn’t that long ago. But with flash storage growing so quickly, what does that mean for performance? Durability? Manageability?

These are real-world challenges. Flash storage vendors are mostly concerned with making drives bigger, but we as consumers cannot just assume they are all the same. They are not… As the drives get bigger, their performance starts to level out. The good thing for us is that when we design storage solutions, the bottleneck is no longer in the SAN with flash, so we too can take the emphasis off of performance. The performance conversation has become the “uncool” thing; nobody wants to have that conversation anymore. The bottleneck has shifted to the application, the people, the process. That’s right! The bottleneck now is with the business. With networking at 10Gb/40Gb and servers so dense and powerful, the business can finally focus on the things that matter to it. This is the reason we see such a big shift into the cloud, application development and IoT. Flash is the enabler for businesses to FINALLY start to focus on business and not infrastructure.

So, back to the technical discussion here…

Durability is less of an issue with large flash drives because of the abundance of cells available for writes and re-writes. And because flash failures are predictable, much of the reactive management that unstable legacy storage demanded goes away.

Manageability is easier with SDS (software-defined storage) and hyper-converged systems. These systems handle faults much better through their distributed design and the software’s elasticity, achieving uptime that exceeds five nines.
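As a back-of-the-envelope illustration of how distributed designs reach those numbers, here is a minimal sketch assuming independent node failures (a simplification, not any vendor's availability model):

```python
# Minimal sketch: availability of a distributed system where data is
# replicated across n nodes and survives as long as one replica is up.
# Assumes independent failures -- a simplification, not a vendor model.
def system_availability(node_availability: float, replicas: int) -> float:
    return 1 - (1 - node_availability) ** replicas

for replicas in (1, 2, 3):
    a = system_availability(0.99, replicas)  # each node up 99% of the time
    print(f"{replicas} replica(s): {a:.6%} available")
# 3 replicas of 99%-available nodes already exceed "five nines" (99.999%)
```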

So as flash storage grows, it becomes less exciting. Flash is paving the way to a new kind of storage: NVMe.

Ever throw a rock into a still pond and watch the ripples from the impact? It is fascinating to see the effects of an object entering a placid body of water, and interesting how the pond always gravitates back towards the restful, peaceful state it was in before. This is called “equilibrium”, or a “state of balance”. Equilibrium is often used in the context of economics, chemistry and physics, but it can also be applied to the technology industry.

As I have written in past blogs about disruptive technologies, these disruptions are the things we live for as techno-junkies. The disruptive part of solid state drives is the affordability factor. They are growing in capacity, and the cost-per-GB is rivaling the traditional spinning disk industry. Adoption by the masses is going to determine the disruption: the faster the adoption, the bigger the disruption. If you look at all of the storage vendors out there, all-flash (SSD) arrays are overtaking sales of traditional spinning and hybrid systems. New industries and use cases have been enabled by this disruption, and its rippling effect will elevate and innovate new industries as we gravitate towards the new equilibrium.

Take, for example, the invention of the car. The car was first invented as a basic mode of transportation. As time progressed, it was transformed into other vehicles with different applications: trucks for transporting goods emerged; then the convertible and racing versions spawned entirely new lifestyles. The applications are exciting and innovative. The SSD industry is now in its prime, creating new applications and enabling an entirely new era. Here are some examples:

Big Data
Big Data is data mining on steroids. It is a term used for the ability to ingest large amounts of data and to index, analyze and manipulate it at will. The key here is the speed at which that manipulation can happen. New applications and services that were not possible before are now within reach; some examples are identity theft detection, fraud analysis, bio research and national security.
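To make the fraud-analysis example concrete, here is a hypothetical sketch: flag a transaction when it deviates sharply from a customer's baseline. The amounts and the 3-sigma threshold are illustrative assumptions, not any vendor's method:

```python
# Hypothetical sketch of the fraud-analysis idea above: flag an amount
# that deviates sharply from a customer's historical baseline.
from statistics import mean, stdev

def is_anomalous(baseline, amount, threshold=3.0):
    mu, sigma = mean(baseline), stdev(baseline)
    return sigma > 0 and abs(amount - mu) / sigma > threshold

baseline = [42.0, 38.5, 51.0, 47.2, 40.1, 39.9, 44.3]  # typical spend
print(is_anomalous(baseline, 45.50))    # False -- normal transaction
print(is_anomalous(baseline, 4900.00))  # True  -- possible fraud
```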

IoT
SSDs have enabled a whole new IoT industry: IoT v2. Things like smart thermostats, robotics, automated vacuum cleaners, smart buildings and 4K cameras are possible because of the small footprint of devices built on compact solid state storage. In the wrong hands, this new breed of technology can also do much damage. Thieves have found ways to attach IoT devices to skim and survey areas, collecting information to better strategize hacking attacks and disrupt lives.

Mobility
Having SSD storage allows us to go mobile, not just in our smartphones, but in many different ways. The military application of having an entire mobile datacenter in a Hummer or Jeep is a reality. Real-time battlefield data collection and servers that live in backpacks give an advantage in warfare. Disaster-recovery tractor-trailer datacenters are now practical, especially in a world growing more and more volatile. Drones, robotics and vehicles are adding features and abilities. Fewer dependencies on a central office enable a fleet of devices that are independent yet coordinate in a “swarm-like” approach to achieve objectives faster and more effectively.

The last chapter for spinning disk
I have written in past posts that SSDs will someday render the traditional spinning disk industry obsolete. That day is fast approaching, as SSDs have been eroding spinning disk sales for a while now. 15k and 10k drives are already phased out, and the 7.2k drives still have some life left. The burning question is, when will the 7.2k drives finally go away…
With SSD capacities of over 16TB in a 2.5” form factor available today and 32TB drives on the horizon, the extinction of the 7.2k drive is coming soon. The 7.2k drive is a great drive for capacity, but the problem is that RAID rebuild time is horrendous. A typical 2TB drive takes significant time to rebuild, and the window of exposure to data loss is greatly increased, even at RAID 6 (see my “Perfect Storm” blog for more information). So even as capacity increases, the rebuild factor alone makes high-capacity SSD drives attractive.
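To see why rebuild time is the deal-breaker, here is a quick back-of-the-envelope calculation. The sustained rebuild rates are assumptions, and real arrays throttle rebuilds to protect foreground I/O, so actual times are often worse:

```python
# Back-of-the-envelope rebuild-time estimate for the exposure window
# discussed above. Rates are assumptions; real arrays throttle rebuilds.
def rebuild_hours(capacity_tb: float, rebuild_mb_per_s: float) -> float:
    return capacity_tb * 1_000_000 / rebuild_mb_per_s / 3600

print(f"2TB 7.2k HDD @  50 MB/s: {rebuild_hours(2, 50):6.1f} h")
print(f"8TB 7.2k HDD @  50 MB/s: {rebuild_hours(8, 50):6.1f} h")
print(f"8TB SSD      @ 400 MB/s: {rebuild_hours(8, 400):6.1f} h")
```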

Having been in the technology industry for a long time, I have heard many acronyms, but some are really strange. Here are some of my favorites:

– POTS: Plain Old Telephone Service
– JBOD: Just a Bunch of Disks
– WYSIWYG: What You See Is What You Get
– VFAT: Virtual File Allocation Table

IoT
It wasn’t until recently, within the past year, that I came across another one: IoT (Internet of Things). “What the heck is that supposed to be?” I said to myself, not knowing how relevant IoT was and is becoming. In fact, IoT is one of the fastest-growing sectors in technology today, fueling the exponential growth of data.

So what exactly is IoT? IoT is defined as “a group or a network of physical objects, devices, vehicles and buildings embedded with electronics, software, sensors and connectivity in which data is exchanged and collected”. IoT is a pretty broad term, but it is a necessary category because we have moved beyond the typical desktop and laptop for computing. We have unhooked ourselves from our desks with WiFi/Bluetooth and have gone mobile with cellular broadband, giving birth to the mobile workforce.

The GoPro
With the ability to make devices smaller and the flexibility of wireless communication, new devices were produced that could be installed and embedded in places where computers could never go before. GoPro cameras helped pioneer the IoT category. Mounted on the heads of extreme athletes, they give us an HD, first-person POV of jet-propelled wingsuit pilots flying through holes in rock formations, or of an Olympic skier on a high-speed downhill slalom run. We are able to see, analyze and document the natural, the candid, the “real-time” events as they happen.

The SmartPhone
The iPhone or Android device in your hand is a massive producer of IoT data. These devices have location transceivers, gyroscopes and sensors embedded in them. Apps like Waze and Swarm collect data from us in different ways and for different purposes. Waze uses location services and “crowdsourcing” to bring us valuable information, like real-time traffic jams, to best route us to our destination and to locate and validate reports of police and road hazards. The Swarm app lets us “check in” to different restaurants and establishments to interactively enhance the consumer experience. We can offer advice, write reviews or read reviews instantaneously. For example, I once walked down a busy street and stopped in front of a restaurant to read the menu; Swarm detected my stop and sent me reviews and special offers for that restaurant! I know IoT brings up some privacy concerns, and I am concerned as well, but we cannot stop progress. I admit that I enjoy all the benefits, but I was creeped out at first because I felt like my smartphone was a little too smart for me.
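For illustration, a toy version of that “stopped in front of a venue” detection might look like the sketch below. The radius and dwell thresholds are made-up assumptions, and real apps use far more signals:

```python
# Toy dwell detection: if consecutive location fixes stay within a
# small radius long enough, treat it as a stop worth reacting to.
from math import hypot

def detect_dwell(fixes, radius_m=25, min_seconds=60):
    """fixes: list of (t_seconds, x_m, y_m) in a local flat projection."""
    t0, x0, y0 = fixes[0]
    for t, x, y in fixes[1:]:
        if hypot(x - x0, y - y0) > radius_m:
            t0, x0, y0 = t, x, y           # moved: restart the window
        elif t - t0 >= min_seconds:
            return True                    # lingered: trigger offers
    return False

fixes = [(0, 0, 0), (30, 3, 2), (65, 5, 4)]  # ~5 m of drift over 65 s
print(detect_dwell(fixes))                    # True: that is a dwell
```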

The Raspberry Pi
IoT goes beyond our smartphones and GoPros. I just picked up a bunch of Raspberry Pis. If you are not familiar with them, they are actually quite amazing. What started out as a DIY science project kit, sort of like an updated version of the Radio Shack ham radio transistor kits of the 1960s, the Raspberry Pi is a palm-sized, fully functioning computer. With a 1/8” headphone jack, HDMI port, 4x USB ports, SD card slot and power, it puts IoT into the hands of enthusiasts and techies like myself. Thanks to the open source community and YouTube, new projects are posted virtually every day. Things like robotic toys, home alarm systems, MAME arcade machines (old-school arcade games through emulation), video streaming boxes and many more bring new meaning to DIY projects.
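As a taste of how approachable these projects are, here is a minimal motion-sensor loop in the spirit of a DIY home alarm, using the RPi.GPIO library with a PIR sensor. The pin number is an assumption, so wire yours accordingly (and note it only runs on an actual Pi):

```python
# Minimal home-alarm sketch for a Raspberry Pi with a PIR motion sensor.
# The BCM pin number is a hypothetical choice -- adjust to your wiring.
import time
import RPi.GPIO as GPIO

PIR_PIN = 17                      # hypothetical BCM pin for the sensor

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)

try:
    while True:
        if GPIO.input(PIR_PIN):   # PIR output goes high on motion
            print("Motion detected!")
        time.sleep(0.5)
finally:
    GPIO.cleanup()                # release the pins on exit
```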

How much is too much?
Finally, the IoT from an embedded standpoint is the most exciting, but also the most frightening. Gone are the days of being able to buy a rear-wheel drive, stick-shift car without traction control, push-button start and solenoid controls. Cars operated by pulleys and cables are becoming a thing of the past. The term “drive-by-wire” describes a modern car that has virtually removed the driver from the road, abstracting the real world with computerized controls that sense the driving experience in “real-time”. Braking and acceleration pedals have been replaced with switches and solenoids. Levers and pulleys for A/C and heat have been replaced with computerized “climate controls”. Anti-lock brakes, traction control, active steering and economy mode pacify the auto-enthusiast. And although these things are luxurious and “cool”, they also increase the failure rate by adding complexity. I own a 2013 import, and it started shifting erratically a few months ago. I had the car towed to the dealer and found out that the computer sensor that controls the transmission had malfunctioned. While I was in the tow truck, I asked the driver if he had seen an increase in these types of problems. He said yes; most of his tows are BMWs, Mercedes, Maseratis and other highly teched-out vehicles.

Squeezing Water from a Rock
IoT is everywhere and is going to be even more prevalent in the next few years, from improving the way we perform through the collection and analysis of data to enhancing our entertainment with virtual reality. The possibilities are endless, but only if we can analyze all of this data, which brings me to storing it all. We definitely have the compute power and the RAM, but what about storage? We have the capacity in rotating disk, but with 3D NAND and TLC SSD developments, capacities upwards of 10TB per drive will be within consumers’ reach in late 2016. High-capacity SSDs will enable IoT to produce amazing advancements across industries, and even new industries will come from IoT.

If you have been keeping up with the storage market lately, you will notice that there has been a considerable drop in SSD prices. It has been frustrating to watch the past 4 to 5 years, in which there were not many changes in SSD capacities and prices, until now. The TLC (Triple-Level Cell) SSD is the game-changer we have been waiting for. With TLC capacities at almost 3TB per drive and projected to approach 10TB per drive in another year, what does that mean for the rotational disk?

That’s a good question, but there is no hard answer yet. As you know, the technology industry can change on a dime. The TLC drive market is the answer to the evolution of hard drives as a whole. It is economical because of the high capacity, and it is energy efficient as there are no moving parts. Finally, the MTBF (Mean Time Between Failures) is pretty good, which matters because SSD reliability was a factor in the enterprise adoption of SSDs.

MTBF

The MTBF is always a scary thing, as it reflects the life expectancy of a hard drive. If you recall, I blogged some time ago about the “Perfect Storm” effect, where hard drives in a SAN were deployed in series and manufactured in batches at the same time, so it is not uncommon to see multiple drive failures in a SAN that can result in data loss. With rotational disk at 8TB per 7.2k drive, it can conceivably take days, even weeks, to rebuild a single drive. For rotational disk, that is a big risk to take. With TLC SSDs around 10TB, there is not only a cost and power-efficiency advantage but also a lower risk when it comes to RAID rebuild time: rebuilding a 10TB SSD can take a day or two (sometimes hours, depending on how much data is on the drive). Reliability is also more manageable because SSDs fail predictably, logically marking worn-out cells as dead (barring other failures). This is the normal wear-and-tear process of the drive: cells have a limited number of writes and re-writes before they are marked dead. In smaller capacities, the rate of writes per cell is much higher because there are only a limited number of cells. With the large number of cells now offered in TLC SSDs, each cell is written to less often than it would be on a much smaller drive, so moving to a larger capacity increases the durability of the drive. The reverse is true for rotational drives, which become less reliable as capacity increases.
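Here is the rough endurance math behind that point, as a sketch. The P/E-cycle count and write-amplification factor are illustrative assumptions, not datasheet values:

```python
# Rough endurance math: with a fixed daily write volume, a bigger drive
# spreads writes across more cells, so each cell wears out more slowly.
# P/E cycles and write amplification are illustrative assumptions.
def lifetime_years(capacity_tb, pe_cycles, host_writes_tb_per_day,
                   write_amplification=2.0):
    total_writes_tb = capacity_tb * pe_cycles / write_amplification
    return total_writes_tb / host_writes_tb_per_day / 365

for cap in (1, 10):
    yrs = lifetime_years(cap, pe_cycles=1000, host_writes_tb_per_day=1)
    print(f"{cap:>2}TB TLC drive, 1TB/day host writes: ~{yrs:.1f} years")
```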

So what does it mean for the rotational disk?

Here are some trends that are happening and where I think things will go.

1. 15k drives will still be available, but in limited capacities, because of legacy support. Many systems out there are still running 15k drives. There are even 146GB drives out there running strong that will need additional capacity for growth and/or replacements for failed drives. This will be a staple in the rotational market for a while.

2. SSDs are not the answer for everything. Although we all may think so, SSDs are actually not made for all workloads and applications. SSDs perform poorly, for the price, when it comes to streaming video and large-block, sequential data flows. This is why the 7.2k high-capacity drives will still thrive and be around for a while.

3. The death of the 10k drive. Yes, I am calling it. I believe the 10k drive will finally rest in peace. There is no need for it, nor will there be demand for it. So long, 10k drives…

Like anything in technology, this is speculation from an educated and experienced point of view. It can all change at any time, but I hope this was insightful.

The Internet made mass data accessibility possible. While computers were storing MBs of data locally on internal hard drives, GB hard drives were available but priced only for the enterprise. I remember seeing a 1GB hard drive for over $1,000. Now we throw 32GB USB drives around like paper clips. We are moving past 8TB, 7200 RPM drives and mass systems storing multiple PBs of data. With all this data, it is easy to be overwhelmed. This is known as information overload: when the sheer amount of available information makes the relevant information unusable, and we can’t tell usable data from unusable data.

In recent years, multi-core processing combined with multi-socket servers has made HPC (High Performance Computing) possible. HPC, or grid computing, is the linking of these highly dense compute servers (local or geographically dispersed) into a giant supercomputing system. With this type of system, algorithms that would traditionally take days or weeks to compute are done in minutes. These gigantic systems laid the foundation for companies to build smaller-scale HPC systems to use in-house for R&D (Research and Development).
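The idea scales down nicely: the same divide-and-conquer principle a grid applies across nodes can be sketched on a single machine across CPU cores, as in this toy example:

```python
# Toy illustration of the grid-computing idea: spread an embarrassingly
# parallel workload across cores the way a grid spreads it across nodes.
from multiprocessing import Pool

def heavy_computation(n: int) -> int:
    return sum(i * i for i in range(n))    # stand-in for a real kernel

if __name__ == "__main__":
    jobs = [2_000_000] * 8                 # eight independent work units
    with Pool() as pool:                   # one worker per CPU core
        results = pool.map(heavy_computation, jobs)
    print(len(results), "jobs done in parallel")
```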

This concept of collecting data in a giant repository was first called data mining. Data mining is the same concept used by the Googles and Yahoos of the world; they pioneered it as a way to navigate the ever-growing world of information available on the Internet. Google came out with an ingenious lightweight application called “Google Desktop”, a mini version of data mining for the home computer. I personally think it was one of the best tools I have ever used on my computer. It was later discontinued for reasons I am not aware of.

The advancements in processing and compute made data mining possible, but for many companies it was an expensive proposition, limited by the transfer speeds of the data on the storage. This is where the story changes. Today, with SSD pricing and density shifting thanks to better error correction, fault predictability and manufacturing advancements, storage has finally caught up.

The ability of servers to quickly access data on SSD storage to feed HPC systems opens up many opportunities that were not possible before. This is called “Big Data”. Companies can now use Big Data to take advantage of mined data: to look for trends, to correlate and to analyze data quickly in order to make strategic business decisions and seize market opportunities. For example, a telecommunications company can mine its data for dialing patterns that may be abnormal for its subscribers; the faster fraud is identified, the smaller the financial loss. Another example is a retail company looking to maximize profits by stocking its shelves with “hot” ticket items, which it can do by analyzing sold items and trending crowdsourced data from different information outlets.
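As a toy illustration of the retail example, ranking “hot” items from sales records takes only a few lines; the data here is fabricated:

```python
# Sketch of the retail example above: rank "hot" items from sales
# records so shelves can be restocked by demand. Data is fabricated.
from collections import Counter

sales = ["tv", "phone", "tv", "charger", "tv", "phone", "cable"]
hot_items = Counter(sales).most_common(2)
print(hot_items)  # [('tv', 3), ('phone', 2)] -> stock these first
```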

SSD drives are enabling the data-mining/Big Data world for companies that are becoming leaner and more laser-focused on strategic decisions. In turn, these HPC systems pay for themselves through the overall savings and profitability that Big Data brings. The opportunities are endless as Big Data extends into the cloud. With collaboration combined with open source software, the end results are astounding. We are producing cures for diseases, securing financial institutions and finding inventions through innovation and trends. We are living in very exciting times.

I have been talking about SSDs and rotating hard drives for many years now. SSDs have been unable to overtake the mainstream hard drive space because they could not be produced at an affordable price point (compared with rotating disk). SSDs have gone through many iterations: SLC (single-level cell), MLC (multi-level cell), eMLC (enterprise multi-level cell) and now TLC (triple-level cell).

If you haven’t noticed, the consumer market is unloading 128GB drives at sub-$50, and 256GB SSDs are under $100. This is a steep drop in price and an indication that the next wave of SSD drives is coming. SSDs are poised to go mainstream because of TLC. SSDs in general are still, and will remain, expensive and incredibly complex to manufacture, but due to market demand, the TLC drive is now positioned to take the market by storm. So what has changed? Has the manufacturing process changed? Yes, but not by much. The biggest change was the market strategy of the TLC SSD: the drive is manufactured to sacrifice durability in exchange for density, to the point where we will soon see TLC drives with capacities of 2TB, 3TB, even 10TB+. Drive technologies will leverage better failure-prediction logic and other “fault-tolerating” technologies to compensate for the lower endurance.
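For context, here is the density-versus-durability trade-off in rough numbers. The P/E (program/erase) cycle figures are commonly cited ballparks, not any vendor’s datasheet:

```python
# The density-vs-durability trade-off above, in rough numbers. P/E
# cycle figures are commonly cited ballparks, not vendor datasheets.
CELL_TYPES = {          # bits per cell, typical P/E cycle rating
    "SLC": (1, 100_000),
    "MLC": (2, 3_000),
    "TLC": (3, 1_000),
}

for name, (bits, cycles) in CELL_TYPES.items():
    print(f"{name}: {bits} bit(s)/cell, ~{cycles:,} P/E cycles "
          f"-> {2**bits} voltage states to distinguish per cell")
```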

So what does this mean for the rotating disk? Is it possible that the rotating drive will disappear altogether? I don’t think so. I predict the availability of TLC drives will virtually eliminate 10k and 15k drives, and then, over a much longer period, the 7.2k drive will go. This is because the cost per GB is still a great value on 7.2k drives, and their densities will grow in tandem with TLC SSDs. There is also a comfort level in having magnetic media around holding data (for old-schoolers like me).

It has been a long wait, but it is exciting to finally see SSDs making their way into the mainstream.

It is human nature to assume that if it looks like a duck, quacks like a duck and sounds like a duck, then it must be a duck. The same could be said about hard drives. They only come in 2.5” and 3.5” form factors, but when we dig deeper, there are distinct differences and developments in the storage industry that will define and shape the future of storage.

The Rotating Disk or Spinning Disk

There were many claims in the ’90s that “the mainframe is dead”, but the reality is, the mainframe is alive and well. In fact, many corporations still run on mainframes and have no plans to move off of them. There are many other factors that may not be apparent on the surface, but they are reason enough to continue with a technology that provides a “means to an end”.

Another claim, in the mid-2000s, was that “tape is dead”, but again, the reality is that tape is very much alive and kicking. Although there have been many advances in disk and tape alternatives, tape IS the final line of defense in data recovery. Although it is slow, cumbersome and expensive, it is also a “means to an end” for companies that can’t afford to lose ANY data.

When it comes to rotating or spinning disk, many are rooting for its disappearance. Some will even say it is going the way of the floppy disk, but just when you think there isn’t anything more that can be developed for the spinning disk, there are some amazing new developments. The latest is…

The 6TB helium-filled hard drive from HGST (a Western Digital company).

Yes, this is no joke. It is a hermetically sealed, waterproof hard drive packed with more platters (seven) so it runs faster and more efficiently than a conventional spinning hard drive, once again injecting new life into the spinning disk industry.

What is fueling this kind of innovation in a supposedly “dying” technology? For one, solid state drives or SSDs are STILL relatively expensive. The cost has not dropped (as much as I would have hoped) like most traditional electronic components, thus keeping the spinning disk breed alive. The million-dollar question is, “How long will it be around?” It is hard to say, because when we look deeper into the drives, there are differences, and they are fulfilling that “means to an end” purpose for most. Here are some differences…

1. Capacity
As long as there are ways to keep increasing capacity and to keep the delta between SSDs and spinning disk wide enough, the appetite for SSDs will be diluted. This trumps the affordability factor because it is about value, or “cost per gigabyte”. We are now up to 6TB in a 3.5” form factor, while SSDs are around 500GB. This is the single most hindering factor for SSD adoption.
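The “cost per gigabyte” argument is easy to put into numbers. The prices below are hypothetical placeholders, so plug in current street prices:

```python
# The "cost per gigabyte" value argument above, as a quick calculation.
# Prices are hypothetical placeholders -- use current street prices.
def cost_per_gb(price_usd: float, capacity_gb: float) -> float:
    return price_usd / capacity_gb

print(f"6TB HDD   at $300: ${cost_per_gb(300, 6000):.3f}/GB")
print(f"500GB SSD at $400: ${cost_per_gb(400, 500):.3f}/GB")
```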

2. Applications
Most applications do not need high-performance storage. Most home users’ storage holds digital pictures, home movies and static PDF files and documents, and these files are perfectly fine on large 7.2k multi-terabyte drives. In the business world, it is actually quite similar. Most companies’ data is somewhat static; in fact, on average, about 70% of all data is hardly ever touched again once it is written. I have personally seen customers with 90% of their data static after being written for the first time. Storage vendors have been offering storage tiering (Dell EqualLogic, Compellent, HP 3PAR) that automates the movement of data based on its usage characteristics, without any user intervention. This type of virtualized storage management maximizes the ROI (Return on Investment) and the TCO (Total Cost of Ownership) of spinning disk in the enterprise, and it has extended the life of spinning disk by maximizing the performance characteristics of both spinning disk and SSDs.
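A stripped-down sketch of that automated tiering idea might look like this; the thresholds and tier names are illustrative, not how any particular vendor implements it:

```python
# Stripped-down tiering policy: demote data that hasn't been touched
# recently to the cheap 7.2k tier, keep hot data on SSD. Thresholds
# and tier names are illustrative assumptions.
import time

def pick_tier(last_access_epoch, now=None):
    now = now or time.time()
    idle_days = (now - last_access_epoch) / 86_400
    if idle_days < 1:
        return "SSD tier"
    elif idle_days < 30:
        return "10k/15k SAS tier"
    return "7.2k NL-SAS tier"     # the ~70% of data that goes cold

print(pick_tier(time.time() - 90 * 86_400))  # -> '7.2k NL-SAS tier'
```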

3. Mean Time Before Failure (MTBF)
All drives have an MTBF rating. I don’t know how vendors come up with these numbers, but they do. It is a rating of how long the device is expected to be in service before it fails. I wrote in a past blog called “The Perfect Storm” about how SATA drives would fail in bunches: many of these drives are put into service in massive quantities at the same time, doing virtually the same thing all of the time. MTBF is a theoretical number, and depending on how the drives are used, “mileage will vary”. MTBF for these drives is so highly rated that most of them that run for a few years will continue to run for many more. In general, if a drive is defective, it will fail fairly early in its operational life; that is why there is a “burn-in” time for drives. I personally run them for a week before I put them into production. Drives that last for years eventually make it back onto the resale market, only to run reliably for many more. MTBF for an SSD, on the other hand, is different. Although SSDs are rated for a specific lifetime like spinning disk, their failure characteristics differ. The cells in an SSD actually degrade with each program/erase cycle (a wear process often discussed alongside “write amplification”), and they are eventually rendered unusable, though software compensates for this. So compared to a spinning disk, which has no such cell wear, SSDs are measurably predictable as to when they will fail. This is a good and a bad thing: good for predicting failure, bad for reusability. If you can measure the remaining life of a drive, that directly affects its resale value.
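For the curious, an MTBF rating can be translated into an expected annualized failure rate (AFR), assuming a constant failure rate, which is the usual simplification:

```python
# How an MTBF rating translates to an expected annualized failure rate
# (AFR), assuming a constant failure rate -- the usual simplification.
import math

def annualized_failure_rate(mtbf_hours: float) -> float:
    hours_per_year = 8760
    return 1 - math.exp(-hours_per_year / mtbf_hours)

print(f"1.2M-hour MTBF -> AFR ~{annualized_failure_rate(1_200_000):.2%}")
# ~0.73% a year -- yet drives installed in batches still fail in bunches,
# which is the "Perfect Storm" effect described above.
```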

In the near future, it is safe to say that the spinning disk is going to be around. Even if the cost of SSDs comes down, there are other factors that meet the needs of storage users. The same way other factors have kept the mainframe and tape technologies around, the spinning disk has earned its place.

Long live the spinning hard drive!

I am constantly inspired by the technologies that come into the marketplace. Although most new products are mediocre, there are some that are really deserving of praise.

Occasionally a disruptive product shakes the world. Oh, how I look forward to those times… There are also products that make use of multiple technologies, leveraging their strengths and weaknesses to complement each other for a common goal.

Each technology on its own may not be great, but combined they can be amazing. Take, for example, the hybrid car. When the gasoline shortage of the ’70s happened, it was a shock to most people that gas could actually run out. That prompted the development of alternative fuels for cars. While most people thought of a one-to-one fuel replacement, combining fuels in a single vehicle was radical. Gas prices stayed low after that, and the gas shortage scare became a distant memory.

The concept of an alternative-fuel car was put on low priority. I have seen some attempts: the natural gas car, the electric car and even the vegetable oil car (usually running on oil collected from restaurants after cooking).

All were valiant efforts worthy of merit, but the hybrid car was the one that made it into production. The hybrid car uses electric power for low-speed, stop-and-go local driving and the gasoline engine for high-speed driving; each technology is used where it is most efficient. The end product: a car capable of 90+ miles per gallon.

The same thing has now been done with SSD drives. There are two kinds of SSDs, MLC and SLC. The MLC is the cheaper, consumer-grade SSD with a lower MTBF (mean time between failures) than the SLC drive; it is not write-optimized, but it does a good job at reads and is affordable. The SLC is a more expensive, enterprise-grade, write-optimized drive with a higher MTBF. With these attributes and limited IT budgets, it was only a matter of time before an SSD hybrid SAN would be available.
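A toy sketch of the routing logic behind such a hybrid pairing might look like this; the 30% write threshold is an illustrative assumption:

```python
# Toy illustration of the MLC/SLC pairing described above: route
# write-heavy volumes to the write-optimized SLC tier and read-mostly
# volumes to the cheaper MLC tier. The threshold is illustrative.
def choose_tier(read_ops: int, write_ops: int) -> str:
    total = read_ops + write_ops
    if total and write_ops / total > 0.3:   # write-heavy workload
        return "SLC (write-optimized, higher endurance)"
    return "MLC (read-optimized, cheaper per GB)"

print(choose_tier(read_ops=9_000, write_ops=1_000))  # -> MLC tier
print(choose_tier(read_ops=2_000, write_ops=3_000))  # -> SLC tier
```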

Dell Compellent was one of the first to take the hybrid concept and use multiple RAID levels and multiple drive speeds to offer uniquely efficient SAN storage. The SSD hybrid SAN is the next generation of storage virtualization and tiering, bringing ultra-fast performance at a relatively reasonable cost.

We have to give credit where credit is due: we owe shrinking IT budgets and stormy economic conditions for the birth of innovations such as these.

It is the rainy days that make the sunny days so much more enjoyable.

I LOVE my MacBook Air. Hands down the best computer I have ever used. Elegant, lightweight, but in every way a real computer.

What impressed me most is not so much the size and sleekness of the MacBook Air, but Apple’s uncanny timing and ingenuity in “trail-blazing” onboard flash memory chips as hard drive storage. No other manufacturer would make such a bold and gutsy move. It’s just way too risky and costly (from a support perspective).

Apple saw something different. They saw an opportunity to rewrite a page in solid state storage while creating the thinnest laptop in the world. It came at the perfect time, too, as Apple took advantage of the fact that the SSD market was still evolving: consumers were not picky about what kind or type of SSD was used; ANY kind of RAM- or flash-based storage would be awesome.

This is like the early days of flat screen televisions. About 7-8 years ago, it didn’t matter if Whirlpool made a flat screen TV; people would buy it just because it was flat and cheap. As SSDs continue to take shape, it is unquestionable that Apple has set a new standard. A concept that was once thought not economically feasible became their very advantage over every competitor.

Apple’s keen ability to operate and think “outside the box” keeps us, the consumers, always wanting more! Just when we think Apple has run out of ideas, they surprise us again. This is the stuff that gets us out of bed early to wait on long lines just to BUY an Apple product. Hats off to a company that almost went bankrupt a few times and is now the biggest company that has ever existed.

Flash Evolution