
Innovation Forward: The Power of Keeping It Simple – Spiceworks News and Insights

Simplicity sells. Find out why simple is better in technology and innovation.

In a world where tech and innovation are continually caught up in the dance of complex evolution, the art of simplicity never gets old. Yan Ness, the CEO of VergeIO, sheds light on the power of keeping it simple as we continue on our collective path of technological maturity.
Tech giants like Apple, Microsoft, and Amazon are all great at one thing: simplicity. 
Apple made the smartphone so simple that my 22-year-old niece and 93-year-old father both love and use it. It’s incredibly intuitive and easy to use, and neither of them has ever read a manual or taken a lesson. 
Microsoft (and Apple, too) made the personal computer much simpler to use than MS-DOS. The mouse and the GUI commoditized the use of computers, dramatically increasing adoption. My mother, who had never used a computer at the time, was playing solitaire for hours a week. She never asked for a simpler computer, and they never asked her what she wanted. They just made it simpler to use, and it took off.
Amazon made it super easy to find, buy, pay for, and receive just about anything. That “one-click” simplification allowed Amazon to take a huge chunk of retail from legacy retailers. Today Amazon trucks, planes, and automobiles are everywhere. Amazon Prime made it even easier and built a massive new market.
Simplicity sells. With complexity, users perceive increased risk, costs, and annoyance in the learning curve.
Apple taught us that simple is better, but it’s also a whole lot more work behind the scenes.
I’ve been in the IT industry since the 1980s. Since then, speeds and feeds have gotten a whole lot better, but ultimately, it’s still just hardware and software organized into silos of industries. There are huge companies for servers, networking, security, management, operating systems, backup, and more. Some focus on certain industries or use cases, hoping to make things simpler to adopt and understand in our everyday lives. But they still all require very skilled staff with multiple certifications and years of experience. 
This caused staffing challenges. Combined with growing cyber risk, regulatory challenges, and CapEx demands, it pushed enterprises to seek providers, and outsourcing and managed service providers filled the gap. The aim was to shift the risk, the CapEx, and the complexity so that the enterprise could concentrate on more strategic things than server uptime.
The MSP or provider hid all the complexity of silos, staffing, and software in exchange for a (supposedly) simple monthly fee. MSPs use cross-customer automation and operations management to deliver value that is ideally unique to them. Some have internal software development teams (we did) to build the portals, automation, and orchestration of the 10+ vendors required to deliver a seamless service, such as disaster recovery or private clouds.
The rapid proliferation of compliance regulations such as HIPAA, PCI, FISMA, FedRAMP, and GDPR added to the complexity and further burdened the enterprise IT department. So additional vendors and additional tools were brought to bear in the name of simplification, but in reality they only increased the complexity.
The cloud came along to further simplify IT infrastructure. Got a credit card and about 30 minutes? Go ahead and spin up a bunch of infrastructure on demand. Now you can build with “microservices,” choosing from hundreds of different managed services to write your application for essentially infinite scale, if you can afford it.
But if you were to look at your AWS or Azure bill, you’d quickly realize this isn’t simpler. It still requires lots of experts (AWS experts are rare and expensive), and bills can be complex and full of surprises. It’s not simple.
Next, we software-defined everything, starting with server virtualization, or software-defined compute. Then we virtualized storage with software-defined storage and networking with software-defined networking. This enabled us to “converge” and “hyper-converge” the three software-defined tools into what looks like a single piece of infrastructure. But this didn’t replace the three with a single piece of software. Rather, it mushed them together and added a coordination and management layer. Under the hood, they are still separate pieces. In fact, some users of hyper-converged products (e.g., Nutanix) still run a separate hypervisor (VMware) on them just because it’s what they are used to.
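The point about hyper-convergence being a coordination layer rather than a true merger can be sketched in a few lines. This is a minimal illustration, not any vendor’s real API; every class and method name here is hypothetical:

```python
# Hypothetical sketch: hyper-convergence as a facade over three subsystems
# that remain separate under the hood. All names are illustrative.

class SoftwareDefinedCompute:
    def provision_vm(self, name, vcpus):
        return {"vm": name, "vcpus": vcpus}

class SoftwareDefinedStorage:
    def provision_volume(self, name, gb):
        return {"volume": name, "gb": gb}

class SoftwareDefinedNetworking:
    def provision_port(self, name, vlan):
        return {"port": name, "vlan": vlan}

class HyperConvergedManager:
    """Looks like one piece of infrastructure, but only coordinates
    three still-separate software-defined subsystems."""

    def __init__(self):
        self.compute = SoftwareDefinedCompute()
        self.storage = SoftwareDefinedStorage()
        self.network = SoftwareDefinedNetworking()

    def provision_workload(self, name, vcpus=2, gb=100, vlan=10):
        # One call for the operator; three separate provisioning
        # steps underneath, plus this coordination layer on top.
        return {
            "compute": self.compute.provision_vm(name, vcpus),
            "storage": self.storage.provision_volume(name + "-vol", gb),
            "network": self.network.provision_port(name + "-nic", vlan),
        }

hci = HyperConvergedManager()
workload = hci.provision_workload("app01")
```

The facade simplifies the operator’s view, but nothing underneath was actually unified, which is exactly why a separate hypervisor can still be swapped in.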
But we wanted the ability to move workloads around without regard to the infrastructure underlying them. We wanted portability, expandability and agility. The quest for even more portability resulted in containers. This added another layer of complexity, though, as these workloads had dependencies that needed to be managed. So now we have a bunch of tools to manage containers. 
See More: How to Automate Repetitive IT Tasks and Drive More Business Value
Future workloads will run outside the data center, away from the confines of a highly centralized IT staff and orchestration. They will run “in the field” and all the way out to the edge. This cries out for more simplicity: we will need a simpler, more autonomous way to deliver infrastructure, in the form of virtual data centers.
Virtualizing the entire data center is the next step in the simplicity evolution. We need to do to the data center what VMware did to the original server.
Now that compute, storage and networking have each been turned into buckets of bits, we need a single, simple bucket of bits that represents everything: the storage layers; the hypervisor; the networking, including public IPs and DNS; and the management, orchestration, automation and self-healing tools. Most importantly, it needs to be fast and to create fully independent virtual data centers with no external dependencies, so they can be moved at will from one location to another, just as a VM or a container can.
From that bucket of bits, we can dynamically create infrastructure on commodity x86 hardware and coordinate the activity among the three software-defined resources (compute, storage and networking). We can simplify and automate enough of the tasks that it takes less than 20 minutes to set up and use, and that it’s friendly and intuitive.
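One way to picture the “single bucket of bits” is as one declarative spec that captures every layer, which software then realizes on commodity hosts. This is a hypothetical sketch; the field names, tiers, and `build_vdc` helper are illustrative assumptions, not any product’s actual schema:

```python
# Hypothetical sketch of an all-in-one virtual data center spec.
# Field names and defaults are illustrative, not a real vendor schema.
from dataclasses import dataclass, field

@dataclass
class VirtualDataCenterSpec:
    name: str
    storage_tiers: list = field(default_factory=lambda: ["nvme", "ssd", "hdd"])
    hypervisor: str = "kvm"
    networking: dict = field(default_factory=lambda: {
        "public_ips": [], "dns": [], "layer2": [], "layer3": []})
    management: dict = field(default_factory=lambda: {
        "orchestration": True, "automation": True, "self_healing": True})

def build_vdc(spec: VirtualDataCenterSpec, hosts: list) -> dict:
    """Carve a virtual data center out of commodity x86 hosts by giving
    every node all three software-defined roles."""
    return {
        "spec": spec,
        "nodes": [{"host": h, "roles": ["compute", "storage", "networking"]}
                  for h in hosts],
    }

vdc = build_vdc(VirtualDataCenterSpec(name="edge-01"),
                ["x86-node-1", "x86-node-2"])
```

Because everything the data center needs lives in one spec, creating another one is just instantiating the spec again against a different set of hosts.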
An encapsulated data center would start with the storage architecture, with different tiers of storage. The hypervisor would then work directly with that storage. Software-defined networking would handle all Layer 2 and Layer 3 networking, including DNS, load balancing, and access control.
Built-in disaster recovery and backup, both on-site and off-site, would be part of the encapsulated data center.
Compliance and security settings, along with automation, orchestration, logging and monitoring, would also be fully encapsulated. 
We should be able to pick up this single, encapsulated virtual data center and move it, with no separate connections, just as you can a Kubernetes container. You should be able to clone, start, modify, and delete it, and snapshot it as easily as you do a virtual machine. It’s a fully encapsulated virtual data center.
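The lifecycle operations above fall out naturally once the data center is one self-contained object: snapshotting and cloning reduce to copying that object. A minimal sketch, with all names and the toy structure assumed for illustration:

```python
# Hypothetical sketch: lifecycle operations on a fully encapsulated
# virtual data center. The structure and names are illustrative only.
import copy
import uuid

def make_vdc(name):
    # Everything the data center needs travels inside one structure,
    # with no external connections.
    return {
        "id": str(uuid.uuid4()),
        "name": name,
        "state": "stopped",
        "layers": {"storage": ["tier0", "tier1"],
                   "compute": ["vm-a", "vm-b"],
                   "network": {"dns": "internal", "lb": True}},
    }

def snapshot(vdc):
    # A snapshot is a point-in-time copy of the whole bucket of bits.
    return copy.deepcopy(vdc)

def clone(vdc, new_name):
    # A clone is a snapshot with a new identity, ready to start elsewhere.
    c = snapshot(vdc)
    c["id"] = str(uuid.uuid4())
    c["name"] = new_name
    return c

prod = make_vdc("prod")
snap = snapshot(prod)          # same identity, frozen in time
dev = clone(prod, "dev")       # new identity, identical contents
```

Nothing here reaches outside the object, which is the whole point: with no external dependencies, move, clone, and snapshot are cheap, local operations.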
With this level of isolation, we can “nest” virtual data centers to meet compliance requirements. A service provider can sell infrastructure through channel partners, each with its own virtual data center space.
See More: Top Tips on Enhancing IT Efficiency & Cutting Your Energy Bills
A fully virtualized data center isn’t complexity disguised as simplicity. It requires less expertise, scales easily, radically reduces risk and drives down costs. But the real value of simplification—not only for virtual data centers but for all advanced tech—is the potential for geometric changes in adoption. Beyond the tactical savings, agility, and risk reduction, simplification enables new uses that often weren’t even considered before. We can take photos with our phones, have meetings with our computers, and watch Mrs. Maisel on Amazon.
The virtual data center will change the simplicity/effectiveness paradigm similarly. We’re imagining commodity-level adoption and use of cheap, low-risk “micro clouds”: a cloud in every house, on every desktop and laptop, in every pocket, and someday on every wrist.
Simplification and commoditization also change the skills gap. In the movie palaces of old, projectionist was a highly valued, highly paid job. Now it’s a “hands-and-eyes” job of merely swapping cartridges and pressing play. Running servers, storage, and networking can become roughly that simple, too. What exciting things will that new simplicity enable us to do with the fully virtualized data center?
While the power of simplicity is evident, there are times when simplicity isn’t a high priority. Microsoft Windows made personal computing simpler and commoditized it for the masses, but it was and is still not appropriate for every application. Real-time operations in fighter jets, manufacturing robots and other use cases have demands that eclipse the power of simplicity. Even in those cases, simple is better but maybe not the most essential trait.
How are you keeping it simple while enabling innovation? Share with us on Facebook, Twitter, and LinkedIn.

CEO, VergeIO

