Tiger Technology Brings the Cloud to You

Disclaimer: I recently attended Storage Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

The first presentation of Storage Field Day 19 was Tiger Technology, a data management company that has been around since 2004, primarily providing solutions for the media and entertainment industry.

This industry is interesting to modern storage because of its application requirements, particularly around video. These applications are usually mission critical, and they require high bandwidth and low latency. Because the applications are so diverse, there really isn’t a standard. One requirement they all share is that they are intolerant of data loss. Think of a video game suffering lag, or a live sporting event dropping frames or even pixels – that kind of performance is simply not acceptable in this industry.

The Tiger Technology team took us on the journey of how they built their new Tiger Bridge offering. Tiger Bridge is a cloud tiering solution for Windows (they are working on Linux) that brings cloud storage to current (and legacy) workflows in a way that is invisible to your workers.

Tiger Technology’s Journey to the Tiger Bridge

The customer problem that took them on their journey to create Tiger Bridge was surveillance for an airport. The airport wanted to upgrade their surveillance systems. They had 300 HD cameras with a retention time of 2 weeks and wanted to scale within 3 years to 10,000 4K cameras with a retention of 6 months. Tiger Technology computed that the project’s capacity requirement would be an ongoing 15 petabytes of data.

Tackling this problem using standard file systems would be prohibitively expensive – and it wasn’t even possible to get Windows to that capacity at the time they started. They knew object storage would work better. Because of the security implications, the other requirements were: no latency or bandwidth impact, no tamper point, software only, and scalable.

If you think about surveillance cameras, you need a way to keep the data on-site for a while, then you need to send the data someplace that doesn’t cost as much to store it. But you need to be able to bring that data back with fidelity if you need to check the videos for something. These customer challenges are how they came up with the idea for Tiger Bridge.

What is Tiger Bridge?

Tiger Bridge is a hierarchical storage management (HSM) system. It installs in less than five minutes on a server. The agent installed on the servers is a Microsoft file system filter driver that sits between application reads and writes and the target storage. Since it is integrated with the file system as a filter driver, it also falls under Active Directory control, which is great for existing workloads and policies.

With Tiger Bridge, files are replicated and tiered automatically based on policies set on last access time and/or volume capacity. The agent does the tiering work in the background, so sending or retrieving the file from the cloud, even cold cloud storage, is transparent to the user.
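Tiger Technology didn’t show their policy engine in code, but the idea of tiering on last access time is easy to sketch. Here is a minimal Python illustration (the function and threshold names are mine, not Tiger Technology’s – a real HSM would also handle recall, stubs, and volume-capacity triggers):

```python
import os
import time

def files_to_tier(root, max_idle_days=14):
    """Return paths whose last access time is older than the idle threshold,
    i.e. candidates to move to a colder (cloud) tier."""
    cutoff = time.time() - max_idle_days * 86400
    stale = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_atime < cutoff:
                stale.append(path)
    return stale
```

An administrator’s policy would then just be the threshold passed in; the agent re-scans in the background and moves whatever the function returns.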

Via the TigerBridge website

The team focused on providing this seamless experience to applications that are hosted on the Windows platform. Since they wanted this to also work for legacy apps, one thing they had to figure out is how to use all the commands that are common in a file system that aren’t replicated in the cloud, things like lock, move, rename, etc. They also wanted to support all the cloud storage features like versioning, soft delete, and global replication, since applications written for the cloud require these features.

The example they gave of bridging cloud and file system features was rename. You can rename any Windows file, no problem. But rename isn’t available on public cloud object stores – you have to do a copy and then a delete. For a couple of files, that’s probably no big deal. But if you rename a folder with lots of files in it, that becomes a huge copy job. It may take time, and it will probably get expensive.
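To see why a folder rename balloons, here is a toy sketch of my own – a plain dict stands in for the object store, and each copy/delete pair stands in for the API calls a real cloud (e.g. S3-style CopyObject + DeleteObject) would charge for:

```python
def rename_prefix(bucket, old_prefix, new_prefix):
    """'Rename' every object under a prefix by copy + delete.
    Object stores have no native rename, so the cost scales with
    the number of objects under the prefix."""
    ops = 0
    for key in list(bucket):          # snapshot keys before mutating
        if key.startswith(old_prefix):
            new_key = new_prefix + key[len(old_prefix):]
            bucket[new_key] = bucket[key]   # copy to the new key
            del bucket[key]                 # delete the original
            ops += 2                        # two billable operations
    return ops

store = {"videos/a.mp4": b"...", "videos/b.mp4": b"...", "logs/x.txt": b"..."}
rename_prefix(store, "videos/", "footage/")  # two objects -> 4 API calls
```

A one-keystroke rename of a million-object folder is two million operations – which is exactly the kind of gap Tiger Bridge papers over by tracking the mapping itself.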

Their solution keeps track of where the files are, and any changes that have been made. This solves the problem of data being rendered useless because it’s no longer associated with its original application, a common issue that brings on lock-in anxiety. Files under the Tiger Bridge control maintain a link with the file system on premises and the public cloud. Users never know if they are hitting the data on premises or in the cloud.

Check out the demo from a user perspective:

What does Tiger Technology do for users?

What this means is that a user on their laptop can use the Windows file system they are familiar with, and the agent handles where the file actually is in the background.  Administrators can make rules that tier the data that make sense to the business. It allows organizations to use the cloud as an extension of their data storage.

Another use case is disaster recovery. Using the cloud to keep a backup of your data in a different location, without having to manage another site or tapes, is very attractive. Since it is so easy to bring files back from the cloud, Tiger Bridge handles this use case as well.

Real Talk about Tiger Technology

I think this is the year we’re going to see a lot more solutions bubble up that truly bridge on-premises and the cloud, and I think we’ll see them from established companies like Tiger Technology. These companies understand application requirements and the technical debt that companies are battling, and they are finding ways to make the cloud model fit their customers’ current realities.

The Tiger Technology presentation reminded me of something we used to say at EMC: a disk, is a disk, is a disk. Users, and applications, don’t really care where the disk they are writing to is located, who manages it, or what it costs. They care about ease of use, low latency, and security. Tiger Technology has figured out how to make that old storage saying work for public cloud and legacy applications.

What do you think? Let us know in the comments!


Is storage still relevant?


Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley, CA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer. I was not obligated to blog about or promote the vendors’ technologies. The content of this blog represents my own opinions and views.

Is storage still relevant in today’s cloud and serverless environments? At Storage Field Day 19 we spent several hours with Western Digital, and heard from ten different presenters. Did they show us that storage is still relevant?

Hardware Must Innovate for Software to Innovate

I think the industry often forgets that software innovation is impossible without hardware innovation. We’ve seen some pretty amazing hardware innovations over the last decade or so, and hardware companies are still at it.

You may be asking: how is an old hardware company able to keep up, let alone still be innovating? Well, Western Digital has 50 years of storage experience, and they are still innovating. Their heritage is highlighted in this slide.

Western Digital’s 50 year heritage via https://www.youtube.com/watch?v=Lqw3_HgiA9o

Western Digital is looking at how to solve the data storage challenges for emerging workloads. They already have tons of experience, so they know that the data must be stored, and that more data is being created now than ever before.

All of that data needs to be stored so it is available to have compute applied to it. Compute is what turns data into actionable information. But there is so much data now – how should it get stored? How will it be accessed? It’s becoming pretty obvious that the old ways of doing this will not be performant enough, or maybe not even scalable enough.

One workload they talked about throughout many of the presentations was video. Just think about the kinds of devices that now create streams of video: IoT devices, surveillance cameras, cars, the general public, and so on. Much of this new streaming video is being created at the edge. The edge cases are so diverse that even our understanding of “edge” may be antiquated.

So is storage still relevant? Maybe not the type I came up on – SANs and NASs. But the next evolution of storage has never been more relevant than now.

Composable Infrastructure

Western Digital also discussed composable infrastructure, and how technologies such as NVMe over Fabric make composable infrastructure possible. Don’t worry if you have no idea what I’m talking about – work on the NVMe over Fabric standard wasn’t pulled together until 2014, and the standard became real in 2016. Also, the standards bodies are peculiar – they don’t use the NVMe acronym, they use “NVM Express”. This makes it hard to find primary source information, so keep that in mind when you’re googling.

What can NVMe over Fabric do for composable infrastructure? First, let’s answer why you would need composable infrastructure at all.

Western Digital’s Scott Hamilton walked us through this. First of all, new types of applications like machine learning and deep learning need the data to be close to where the compute is happening. Even after considering the tradeoffs that data gravity forces, traditional architecture slows things down because resources are locked into that traditional stack.

Composable infrastructure takes the resources trapped in traditional infrastructure, breaks them up and disaggregates them. After that’s done, the resources can be recreated into the leanest combination possible for a workload, virtually composed, creating a new type of logical server. The beauty is this can then be modified based on the dynamics of a workload.

According to Hamilton, Western Digital believes NVMe will be the foundation of next-gen infrastructures, and that eventually Ethernet will be the universal backplane. It was an interesting session – check it out for yourself below.

Western Digital at Tech Field Day via https://www.youtube.com/watch?v=LuRI1TlBJgA

Zoned Storage

Western Digital is also championing the Zoned Storage initiative, which will be part of the NVMe standard. Zoned Storage creates an address space on disk (HDD or SSD) that is divided into zones. Data must be written sequentially to a zone, and it can’t be overwritten in place – a zone has to be reset before it can be rewritten. Here’s Western Digital’s explanation:

[Zoned Storage] involves the ability to store and retrieve information using shingled magnetic recording (SMR) in hard disk drives (HDDs) to increase the storage density and its companion technology called Zoned Name Spaces in solid state drives (SSDs).

via https://www.westerndigital.com/company/innovations/zoned-storage
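The write rule is easier to see in code. This toy Python model is my own sketch, not Western Digital’s API; it enforces the two constraints a zoned device imposes – writes must land sequentially at the write pointer, and rewriting requires resetting the whole zone first:

```python
class Zone:
    """Toy model of one storage zone on a zoned (SMR/ZNS) device."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.write_pointer = 0   # next block that may be written
        self.blocks = []

    def write(self, lba, data):
        # Only the block at the write pointer may be written: no random
        # writes, no in-place overwrites of earlier blocks.
        if lba != self.write_pointer:
            raise IOError("zone writes must be sequential at the write pointer")
        if self.write_pointer >= self.capacity:
            raise IOError("zone is full")
        self.blocks.append(data)
        self.write_pointer += 1

    def reset(self):
        """Erase the zone and move the write pointer back to the start."""
        self.blocks.clear()
        self.write_pointer = 0
```

Host software (or a zone-aware file system) has to group data into these append-only streams, which is why sequential workloads like surveillance and telemetry are such a natural fit.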

Why does the industry need this? According to Swapna Yasarapu, Sr. Director of Product Marketing for Western Digital’s Data Center Business Unit, we’re moving into an era where large portions of unstructured data are being created. All of this data can’t be stored via traditional methods. Additionally, unstructured streams come from IoT edge devices, video, smart video, telemetry, and various other end devices. Many of these streams must be written sequentially to unlock the information the data contains.

Finally, this is an open initiative that will help write these kinds of data streams to HDDs and SSDs in a way that is more practical for how the data actually arrives.

Watch the entire presentation here:

Acronyms as an innovation indicator

One way I can tell when there is innovation is when I come across acronyms I don’t know. After 3 years focusing on virtualization hardware, I found myself having a hard time keeping up with the acronyms thrown at us during the presentations.

The good news is that some of these technologies are brand new. So much for storage being old school! Plus, can you imagine what apps are waiting to be written on these new architectures that have yet to be built?

Here are the acronyms I didn’t know. How many can you define?

  • TMR: tunneling magnetoresistance
  • TPI: Track Per Inch (disk density)
  • PZT: Piezoelectric actuator (see this earlier Storage Field Day post)
  • VCM: Voice Coil Motor (see this video )
  • SMR: Shingled Magnetic Recording
  • SSA: Solid State Array
  • ZBC: SCSI Zoned Block Commands
  • ZAC: Zoned ATA Commands
  • ZNS: Zoned Namespaces

Is Storage Still Relevant? Final thoughts

I think you know my answer to the question “is storage still relevant?”: of course! We are just beginning to create the standards that will usher in the real digital transformation, so there is plenty of time to catch up.


Storage Field Day 19: Getting Back to My Roots


I’m excited that I have been invited to be a delegate at Storage Field Day 19. This is a little different than the Tech Field Day I attended in 2019, because the focus of all the presentations at this event is data storage.

I am looking forward to this because I am a storage person. My career started as a Technical Trainer at EMC; I was a storage admin for a pharma company; I went back to EMC to develop technical training; then I worked for Dell Storage, and then Inktank (a startup that provided services and support for Ceph). I guess you could say storage is in my blood, so Storage Field Day should be lots of fun.

What to expect at Storage Field Day

Here are the companies we’ll be visiting (in the order they will be presenting), and what I’m looking forward to hearing about from them. Remember, you can join in on this event too by watching the livestream and participating in the Twitter conversation using the hashtag #SFD19. You can @ me during the livestream and I can ask a question for you.

Disclosure: I am invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley. My expenses, travel, accommodation and conference fees will be covered by GestaltIT, the organizer, and I am not obligated to blog about or promote the vendors’ technologies to be presented at this event. The content of this blog represents my own opinions and views.

Tiger Technology

The first presentation we hear will be from Tiger Technology. Just looking at the website, they claim to do lots of stuff. When I look at their About page, they’ve been around since 2004 “developing software and designing high-performance, secure, data management solutions for companies in Enterprise IT, Surveillance, Media and Entertainment, and SMB/SME markets”. They are headquartered in Bulgaria and Alpharetta, Georgia – and since my mom was born and raised in Alpharetta, they get extra points.

Skipping to their News page, it looks like they have a new solution that tiers data in the cloud. I’m looking forward to hearing how they do that!

NetApp

NetApp talked with us at TFD20 (my blog review of that presentation). They talked to us then a bit about their flavor of Kubernetes, and the work they are doing to make it easy for their customers to have data where they want it to be. I’m hoping they do a deeper dive on CVS and ANF, their PaaS offerings for the major public clouds.

Western Digital

Western Digital has presented at previous Tech Field Day events, and has acquired many companies that are Tech Field Day presenting alums. The last time they presented, back in February 2019, they talked about NVMe, and I love that topic.

One thing I think that doesn’t get enough attention is the incredible innovation that has happened over the last several years in storage hardware. The software is now catching up, and apps will follow. So there is cool tech stuff happening on prem too, not just in the public cloud domain.

I peeped their Twitter account, and they have interesting things they are showing this week at CES, like this 8TB prototype that looks like a cell phone battery bank. That would be a pretty sweet piece of swag! 😊

Infrascale

This will be Infrascale’s first appearance at Storage Field Day. Their website says what they do right up front: they have a DRaaS (Disaster Recovery as a Service) solution that fails over to a second site, booting from an appliance or from the cloud.

After storage, the biggest part of my career has been spent on data protection and disaster recovery, so I’ll be looking forward to this presentation as well. I’m really looking forward to hearing how this solution can be included in an architecture.

Dell EMC

Since I’ve worked in storage at Dell and EMC, and I’m just coming off a tour at VMware, of course I’m excited to sit in on presentations from my Dell Technologies federation homies! There will be presentations on Isilon and PowerOne, but the one I’m most curious about is one on DevOps.

Komprise

Komprise has presented at Storage Field Day before (in 2018). They are a data management and tiering solution. At AWS re:Invent they unveiled a cloud data growth analytics solution. I hope we hear about that.

WekaIO

WekaIO has presented at Tech Field Day a couple of times before. They have a distributed storage system for ML/AI, and it looks like they directly access NVMe flash drives. It looks like they also have a solution on AWS, so this should be an interesting conversation. I’m just hoping we don’t have to listen to a “what is AI” story before they get to the good stuff.

Minio

This will be Minio’s first presentation at Tech Field Day. Minio sells high performance object storage. One of the other Tech Field Day delegates, Chin-Fah Heoh, has already written a blog post about how Minio is in a different class than other object storage providers. I’m really looking forward to this presentation.


NetApp Goes to the Cloud #TFD20


This post – NetApp Goes to the Cloud – is my review of materials presented at #TFD20.

NetApp’s 1st presentation at #TFD20 was about NetApp’s cloud strategy. I was very excited to see Nick Howell (aka @DatacenterDude), NetApp’s Global Field CTO for Cloud Data Services, there to greet us and kick things off.  I’ve always known him to be knowledgeable, visionary, and a bit controversial. All of my favorite things! And I was psyched to see how he was going to frame his conversation.

Infrastructure Admin’s Journey to the Cloud

Nick’s presentation was titled “OK, Now What?” – An Infrastructure Admin’s Journey to the Cloud.

He set up the history of things for datacenter admins, and how quickly they need to use their existing skills to pivot if they’re going to support cloud. I liked this slide highlighting historical design patterns for datacenters.

Cloud Native Strategy, via NetApp

He gave a great overview of the struggles IT Ops folks will need to go through in order to support their organization’s move to the cloud: new training, new certs, etc. It will take effort to get up to speed from a technical perspective.

NetApp Goes to the Cloud

Of course, the message was how easy NetApp makes it for their customers to get to “the cloud” using NetApp Cloud Data Services. He brought in the Google Cloud Partner of the Year award that NetApp received this year at Google Next. To me, that makes it obvious they are doing the hard integration work to enable hybrid cloud with NetApp storage.

They’ve been at this for a few years after hiring an exec to run a cloud business in 2017, and acquiring cloud startups (Greenqloud 2017, StackPointCloud 2018). Two years later, NetApp has built a suite of cloud products that are delivered in the cloud, as-a-Service, by NetApp.

They have an IaaS offering called CVO (Cloud Volumes ONTAP), a virtual version of ONTAP in the cloud that allows customers to do everything they would do with ONTAP on prem – plus more – in the three major public clouds. They have a free trial if you’re interested in kicking the tires. There are also two PaaS offerings: CVS (AWS Cloud Volumes, Google Cloud Volumes) and ANF (Azure NetApp Files).

NetApp goes to the cloud

They are building a control plane called Fabric Orchestrator, which Nick compared to vCenter. It will give a global view of all data, no matter where the data resides, and you’ll have oversight and management control from this control plane. It is set to launch in 2020.

NetApp Kubernetes Service

While this is great work to provide the services to make NetApp hybrid architectures possible, what can you *do* with it? Data capacity exists to host applications, and the way to orchestrate modern applications is Kubernetes.

NetApp has their own Kubernetes service that they call NKS. It is a pure upstream Kubernetes play, and they support the latest release within a week. It has been built to provision, maintain, and do lifecycle management no matter the cloud on which it runs.

Real talk

From everything we were shown, if you’re a NetApp customer you have lots of options for which cloud to use as you build a hybrid and/or multi-cloud strategy. You have a cloud organization that understands your fears and pains, and they are working to make cloud as easy as possible for you.

NetApp seems to have the right team and attitude to make multi-cloud a reality for their customers. They’ve built a cloud team from cloud native veterans to drive this strategy. They seem to be very intent on shepherding traditional operations teams into the new cloud native era. Will this be enough to span the digital transformation gap? Only time will tell.


Tech Field Day from the Other Side

Last month I accomplished my dream of becoming a Tech Field Day delegate for #TFD20. Because I left my job at VMware in order to launch Digital Sunshine Solutions, I finally no longer work for a vendor and I qualify to be a delegate! This post is a reflection on the differences between being at a vendor and hosting Tech Field day and being a delegate.

Tech Field Day history

For those of you who don’t know, Tech Field Day is a roving event run by Stephen Foskett. Ten years ago, when we were all figuring out what blogging and podcasting meant to big tech companies, he had the vision to take influencers who were talking technically and strategically about products on their personal blogs and podcasts right to the vendors. This gave vendors the opportunity to explain their products and processes, as well as meet this new type of advocate/influencer. Stephen paved the way for enthusiastic, independent influencers to receive the same recognition that analysts and press have always received. Smart vendors welcomed his traveling crew into their inner circle.

My first time as a delegate

The reason I’ve never participated before: I’ve always worked at a vendor! You can see from my Tech Field Day Delegate page that I’ve participated as a vendor and blogger since the beginning of Tech Field Day. I’ve been responsible for organizing and hosting as a vendor and let me tell you that was no small accomplishment at the time!

Experiencing Tech Field Day as a delegate was exponentially more challenging than following along, or even hosting, as a vendor. Most days we needed to be downstairs before 7. So I was up early to go to the gym and put on makeup. I hate wearing makeup, but my good friend Polly has been playing with a YouTube channel and let me know that if you’re on camera, you need makeup. She is probably right.

We traveled to several vendors a day, hearing their current pitches. Some were amazing, some could have been better. Everyone was very nice though, and treated us like VIPs. After a full day of presentations (we went from 7 – past 5 every day), there were dinner and socialization activities.

My view from the Tech Field Day delegate table

I have known most of the other delegates for a long time (a decade, even). Talking about the technical and business challenges brought up by the vendors really did bring us together as a community for the week.

What’s in it for vendors

Since I’ve worked for a vendor, I know how hard it can be to secure the funding to bring Tech Field Day to your company. In case you had any reservations, let me put your mind at ease: every single delegate is very keen to hear, understand, and discuss what you’re presenting. There was so much experience in our set of delegates that we had some very vigorous discussions about what you presented. I’m just now getting around to writing blog posts, because I needed the time to reflect and research a bit before I put pen to paper.

The food and swag were all nice, but we were honestly most interested in what your speakers had to say. A couple of the presentations were a little rough, and we found out later that the presenters had been tapped at the last minute. No disrespect to those presenters, but vendors: you really want to ensure that you have your guru in the room. Even if they are a little rough, just coach them on what not to say, and let them get up there and geek out. Presenters who play it super safe because they aren’t as comfortable with the material as they would like – or worse, stick to a script – are very frustrating. I know this can happen when someone is asked to cover at the last minute, but it leaves you with the feeling that the really good stuff is missing.

The Tech Field Day event has always been such a good blend – mixing curious, experienced techies with the product people who want feedback and input on their product strategy. If you have a new message or launch you would like to test, Tech Field Day is a great vehicle for that.

Participate in the Tech Field Day Community

There are so many ways to participate in the Tech Field Day community! To start with, you can watch all of the events live online from the Tech Field Day website. If you’re a vendor, you can become a sponsor and have the delegates live at your location. If you’re an independent techie, maybe one day you can also live the dream.
