Design Principles for Interactive Online Events

Do you need to host interactive online events? Now is a good time to revisit the concept. As the world faces a viral epidemic, meeting face-to-face in large groups may not be feasible. The World Health Organization even provides guidance on planning large events. It may be in the interest of public safety to move your event online.

But how do you capture the energy and excitement of an annual conference in an online platform? How can you be sure your employees are actively participating if you don’t have them all corralled in a conference room? This post will discuss where you can start the process of planning interactive online events.

Step 1: Review Original End Goal for the Face-to-Face Event

You may have been forced to transform your annual face-to-face event into a virtual event. You may be tempted to look at the tools first, but slow down!

The first thing to consider is your end goal. What was the end goal of the event when it was going to be in person? Some common answers may be:

  • Product launch announcement. Launching a product at an in-person event is great because all of your internal experts will have face-to-face interactions with customers, partners, analysts, and press.
  • Helping customers understand technical details.
  • Training. You don’t think your teams will pay attention to online training, so you take them out of their job environments to ensure their full attention is on the training content.

Step 2: Map End Goals to Virtual Execution Methods

Once you’ve revisited the original goals, revisit how you planned to execute them during an in-person event.

Write down every aspect of your face-to-face event. Next to every element, explain how you were going to accomplish your goals in that face-to-face environment.

You may not have thought about this explicitly, but taking the time to re-evaluate your expectations will help you design interactive online events using tools that encourage participation.

Here’s an example of what this evaluation could look like for a conference and training event:

| Event Type | Event Element | Face-to-Face Expectations | Face-to-Face Interactive Expectations |
| --- | --- | --- | --- |
| Conference | Product Launch Keynote | Big splash to convey vision and how the new product will drive that vision. Executives present topics on a big stage, usually with technical presentations to prove it actually works. | Media reach and excitement. Social media pictures and commentary during the keynote to increase buzz and excitement. |
| Conference | Media Briefings | Meet in person with press, analysts, and bloggers to be sure they understand your vision and the new product. Gain insight into how they think this will impact the market, influence press stories. | Real-time feedback on messaging and market fit. Relationship building. |
| Conference | Sessions | Help customers understand the new vision, from business reasons to how it works technically. | High-level information transfer, answer customer questions in person, receive real-time feedback. |
| Face-to-Face Event | Training | Ensure learners are paying attention by having them in the same room as an expert. | Knowledge transfer, uninterrupted by distractions. |

Step 3: Interactive Online Events Must Be Deconstructed Face-to-Face Events

You can't "lift and shift" a face-to-face event to virtual platforms and expect the same results. But you can thoughtfully design online versions of the interactive elements you expect in a face-to-face meeting. You have to create a deconstructed version of the events you normally plan.

To design interactive online events, continue your analysis by thinking of online ways to encourage the interactions you know how to drive in a face-to-face environment. The interactive expectations will most likely be the same, so think about ways to meet those expectations if everyone is connecting via laptops instead of handshakes.

| Event Type | Event Element | Interactive Expectations | Online Tools for Interaction |
| --- | --- | --- | --- |
| Conference | Product Launch Keynote | Media reach and excitement. Social media pictures and commentary during the keynote to increase buzz and excitement. | Live webinar, with chat. Concurrent interactions on social media platforms. |
| Conference | Media Briefings | Real-time feedback on messaging and market fit. Relationship building. | Webinar and live call. |
| Conference | Sessions | High-level information transfer, answer customer questions in person, receive real-time feedback. | Live webinar, with chat. Concurrent interactions on social media platforms. |
| Face-to-Face Event | Training | Knowledge transfer, uninterrupted by distractions. | Live webinar, with chat. Concurrent interactions on social media platforms. |

Real Talk

It is possible to create interactive online events, but you must design them. You can't just lift and shift the content to a webinar and expect your audience to interact, let alone pay attention.

This post discussed ways to evaluate how you want your event to be interactive, and suggestions for how to create a deconstructed online event. In the next posts, I’ll discuss tools to facilitate interactivity as well as a real world example of an online event that was designed for interactivity.

Evaluating Digital Sources During an Epidemic

Evaluating digital sources is an important component of being digitally literate. During an epidemic, this skill can literally mean life or death.

But how do you sift through sources to evaluate them when you are fatigued and stressed out from the information itself?

NSE: Never Stop Evaluating Digital Sources

You must look at the source of the information that you consume. Can you trust everything that the nightly news reports? Can you trust things your friends post on social media?

This exercise is a great template for evaluating digital sources:

  • Who wrote the article/post? Just because they have a title such as doctor, are they really a doctor? Use Google to find out!
  • Who do they quote? Sometimes articles will have great quotes from credible people, but they don’t link to the original source. Google the person’s name and the context of the quote to be sure it hasn’t been taken out of context.
    Extra Credit: Be careful about clicking on highlighted words. Many times these are ads, and the website is making money on every click you make.
  • What is the site's purpose? Are they sharing information to reinforce their point of view? This is important when you are looking at claims. Do they back their claims up with links to original sources like congressional hearings, interviews, and research?
    If you google the claim word-for-word, do other sites come to the same conclusion?

Disinformation is Warfare

Here's how Merriam-Webster defines disinformation:

false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth.

Merriam-Webster.com

Lea Gabrielle is the Special Envoy & Coordinator for the U.S. Department of State's Global Engagement Center. She testified before the Senate Foreign Relations Subcommittee on State Department and USAID Management, International Operations, and Bilateral International Development on March 5, 2020 (10 days before this blog post was written, testimony transcript here).

She explained her mission, which is "leading and coordinating the interagency to decisively expose and counter foreign state and non-state disinformation and malign propaganda". In her testimony, she discussed how China spreads misinformation by censoring information:

Beijing attempts to censor the sheer extent of this global public health crisis – from downplaying the number of casualties, limiting criticism of the CCP’s response, and silencing Dr. Li Wenliang’s initial red flags about the deadly outbreak.

From testimony transcript, available here.

She also discussed the disinformation techniques that Russia uses:

These include cyber-enabled disinformation operations; propaganda campaigns that seek to rewrite history; coordinated social media swarms that inflame existing fault lines of societies, and an array of other actions that fall within the scope of their malign information operations.

From testimony transcript, available here.

The Guardian ran a story about several thousand Russian bot accounts – ones "previously identified for airing Russian-backed messages on major events such as the war in Syria, the Yellow Vest protests in France and Chile's mass demonstrations" – that are posting "near identical" messages about the coronavirus. The goal seems to be to sow seeds of distrust between the US and China.

Fight Information Fatigue

Lexico.com is a mashup of dictionary.com and Oxford University Press. This is how they define Information Fatigue:

Apathy, indifference, or mental exhaustion arising from exposure to too much information, especially (in later use) stress induced by the attempt to assimilate excessive amounts of information from the media, the Internet, or at work.

Lexico.com

I don’t know about you, but this is exactly how I feel currently about the information available for the COVID-19 pandemic. I think the term information overload applies here as well. The danger with information fatigue is the same as any fatigue – it keeps you from taking action.

Right now there is so much information – and disinformation – about COVID-19 that getting fatigued can mean you don’t pay attention to the information you need to survive. After all, it takes time and effort to evaluate sources. Can you afford to get so fatigued you don’t evaluate your sources, and get sucked into reacting to disinformation?

Here’s how I’m trying to fight this fatigue:

  • I’m limiting my information intake to a couple of times a day, and timeboxing the time I spend on it.
  • If I get a direct message from a friend, I look critically at what they send me. It usually sets off a spirited text conversation or phone call (we love each other, so that’s ok).
  • I'm taking care of my mental health in my normal ways: yoga (most studios in Austin are teaching classes via Zoom), eating properly, getting enough sleep, walking my dog, working, and creating. I am always creating for work, which is incredible. But there's a bright side to social distancing: I'm home in the spring for the first time in a decade, so I have a REAL GARDEN going and I'm so happy about that.
    Bottom Line: take care of yourself so you have the strength to determine what information is important for you and your loved ones.

Real Talk

During a global epidemic you have to make sure you’re getting the information you need to survive, even if you’re overwhelmed by the sheer amount of information.

Be sure to take care of your mental and physical health so that you can avoid information fatigue. And NSE – never stop evaluating digital sources.

Tools for Interactive Online Events

What are the best tools for designing interactive online events? As I explained in my last post (Design Principles for Interactive Online Events), you can't just lift and shift a face-to-face event to the digital world and expect interactive experiences to happen. You must deconstruct the face-to-face event and design an interactive online event.

This post will dive into various tools for designing interactive online events. I won’t be recommending specific tools, but providing advice for evaluating these tools.

You Probably Have the Tools You Need

How do you currently host live webinars? Think about the tools you already have, and leverage your investment in them. Your investment includes things like licensing costs, IT integration with your company’s identity systems, and integration with your marketing automation systems.

You will want to ensure that the platform can handle the size of the audience you'll have if you convert a face-to-face event into an online event, and that you're able to accommodate the increased bandwidth.

It's also a good idea to use this as an opportunity to investigate all the options the platform provides, especially if you've been using it for a while. Does the platform allow for things like file sharing, built-in polls, closed captioning, or even broadcasting to other platforms like YouTube?

Go Where Your Audience Congregates

The most overlooked tools for interactive events are the ones where your audience can communicate with you and other audience members. The way to encourage people to participate is to meet them where they are.

Where does your target audience interact online? Is it Twitter? Slack? LinkedIn? Reddit? Find a way to go to them. But be careful of making it weird — don’t show up in the tools your audience is comfortable using with your perfect messaging and expect people to want to interact. That’s just weird.

This is a great time to interview the experts who work in your company, who are probably already interacting in these spaces (yes, with your customers!). Work with them to figure out the best way to make these spaces a part of your architecture for interactive online events. Your internal experts are key to making this part not weird.

Our world is networked; your customers don't interact with you in the hierarchical ways of the past. Some even say that now co-learning trumps marketing. Take advantage of this opportunity, and meet your customers where they are.

Make Your Content Interactive

Do you want the audience to be interactive? Make sure the content you are creating for these events is interactive! Stop making sage-on-the-stage presentations; they are boring.

Also, no one wants to sit through 15 minutes of your marketing message before they can get to the real content they came to hear. If your downstream marketing teams are doing their jobs, everyone has heard this message before. In presentations. On product pages. In marketing emails. All over social media.

The quickest way to disconnect folks is to drone on about what YOU want them to hear. Create presentations that focus on problems your audience has, and tell the story in a way that lets your customers recognize their own environments. Help them see how using your product helps them solve their problems.

If you’re being honest, you know when your face-to-face audiences tune out. Whether it’s a keynote being delivered after a live band performs at 8 AM or a training class, people go to their phones the minute the content is irrelevant to their needs.

Get back to basics. Focus on your audience's needs, not your need for them to hear your polished messaging. Focus on what they came to hear, make sure that is bounded by your messaging, and provide a mechanism for live feedback if they do reach for their phones.

Staff Appropriately for Interactive Online Events

It is tempting to attempt to save money by cutting back on the staff assigned to support interactive online events. Give in to that temptation at your own peril! You need staff to monitor the online event and fan the flames of interest to get that roaring fire of interactivity going.

The bare minimum for staffing during the event is a speaker and perhaps a moderator. But you also need subject matter experts (SMEs) manning the places where you've planned to have interaction. That may mean having an SME in the webinar chat, but don't forget to have SMEs manning social media.

These folks shouldn't just answer questions; they should be personable – just like you would be in a face-to-face conference! For most of us, this isn't unusual to do on social media. If you approach these areas openly, you'll probably encounter some snark. But let's be honest, if customers trust you they are going to be snarky to your face as well.

Having SMEs monitoring interactive areas can also help find problem areas. It may become apparent that the audience doesn't understand or agree with the presentation. That could derail the presentation in the interactive space. Having a monitor who understands the language of your audience and who can act as a mediator is critical.

This SME can also pass any problem areas to the host or speaker, so that the speaker can address the audience's concern. This real-time interactive acknowledgement of the audience is something that cannot be done in a live face-to-face keynote. Imagine the impact that really being heard could have on your customer audience.

Don’t underestimate the value of having your marketing teams also monitoring audience interactions. They can document questions to build an online FAQ, take measurements on which social platforms seemed to be most lively, and monitor discussions afterwards with social media tools. If your marketing team works with your SMEs to find keywords and create hashtags, this will help you keep the interactive fires warm until your next event, either online or face-to-face.

Real Talk

There’s no doubt about it – you will need to rely on tools for interactive online events. The good news is that you probably already have the tools you need. You’re going to need to evaluate these tools, create interactive content, and resist the urge to run these events with a minimal crew.

In our next post in this series, I'll review an online event that was designed for interaction. I'd love to hear your experiences. What is the most interactive online event you've attended? What made it awesome? Let me know below in the comments.

What Can On-Premises Ops Teams Learn from Dev Processes?

Disclaimer: I consult for RackN, but I was not asked to write this post (or paid to do so), and it did not go through an editorial cycle with them. The following represents my own words and opinions.

Automation is a foundational pillar of digital transformation, but is it possible for on-premises ops teams to automate bare metal? Can ops teams adopt public cloud processes to advance on-premises processes? This post will define a few cloud infrastructure terms and discuss RackN Digital Rebar’s latest announcement.

Here are the RackN official announcement materials, if you'd like to skip straight to that.

On-Premises Ops Teams Are Still Important!

One of the good things that came from developers operating in public cloud environments is that they developed a plethora of tools and methodologies. Many dev teams are developing applications that need to take advantage of on-premises data.

These teams are finding that data gravity is a real thing, and they want to have the application close to where the data is (or is being created) to minimize data latency or to conform to security and other compliance regulations. On-premises operations teams are being asked to build environments for these apps that look more like public cloud environments than traditional 3-tier environments.

As on-premises ops folks, we need to understand the terms devs use to describe their processes, and see how we can learn from them. Once we understand what the desired end state for their environments is, we can apply all of our on-premises discipline to architect, deploy, manage, and secure a cloud-like environment on-premises that meets their end goals.

Instead of fighting with developers, we have the chance to blend concepts hardened in the public cloud with hardened data center concepts to create that cloud-like environment our developers would like to experience in physical datacenters.

Definitions

Before we get started, let’s define some terms. One thing I find curious is that people will posture on Twitter as experts, but when you dig into what they are talking about it isn’t clear if everyone is using the terms in the same way. So let’s set the stage with a shared understanding of Infrastructure as Code, Continuous Integration, and the Continually Integrated Data Center.

Infrastructure as Code (IAC)

This is the Wikipedia definition of IaC:

The process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The IT infrastructure managed by this comprises both physical equipment such as bare-metal servers as well as virtual machines and associated configuration resources.

Via Wikipedia

Automating data center components is nothing new; I wrote kickstart and jumpstart scripts almost 20 years ago. Even back then, this wasn't a simple thing, it was a process. In addition to maintaining scripts written in an arcane language, change control was ridiculous. If any element changed – an OS update or patch, a hardware change (memory, storage, etc.), or a change in the network – you'd have to tweak the kickstart scripts, then test and test until you got them working properly again. And bless your heart if you had something like an OS update across different types of servers, with different firmware or anything else.

Cloud providers were able to take the idea of automating deployments to a new level because they control their infrastructure and normalize it (something most on-premises environments don't have the luxury of doing). And of course, the development team or SREs never see down to the bare metal; they look for a configuration template that will fit their end-state goals and start writing code.

This AWS diagram from a 2017 document describes the process of IaC. Please note the 5 elements of the process of IaC:

via AWS Infrastructure as Code

There is an entire O’Reilly book written about IaC. The author (Kief Morris) defined IaC this way:

If our infrastructure is now software and data, manageable through an API, then this means we can bring tools and ways of working from software engineering and use them to manage our infrastructure. This is the essence of Infrastructure as Code.

via Infrastructure as Code website
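
To make that concrete, here's a minimal sketch of the IaC pattern in Python: the desired state of a small environment lives in a machine-readable definition (a plain dictionary standing in for the YAML or HCL files real tools use), and an idempotent reconcile step converges actual state toward it. The server names and the current_state/provision_server helpers are placeholders I made up for illustration, not any particular product's API.

```python
# Minimal illustration of the Infrastructure as Code pattern:
# desired state is data; a reconciler converges actual state toward it.

desired_state = {
    "web-01": {"os": "ubuntu-22.04", "cpus": 4, "memory_gb": 16},
    "web-02": {"os": "ubuntu-22.04", "cpus": 4, "memory_gb": 16},
    "db-01":  {"os": "rhel-9",       "cpus": 8, "memory_gb": 64},
}

def current_state() -> dict:
    """Placeholder: a real tool would query an inventory or provisioning API."""
    return {"web-01": {"os": "ubuntu-22.04", "cpus": 4, "memory_gb": 16}}

def provision_server(name: str, spec: dict) -> None:
    """Placeholder: a real tool would call out to a provisioning API here."""
    print(f"provisioning {name} with {spec}")

def reconcile() -> None:
    """Idempotent: running it twice in a row changes nothing the second time."""
    actual = current_state()
    for name, spec in desired_state.items():
        if actual.get(name) != spec:
            provision_server(name, spec)

if __name__ == "__main__":
    reconcile()
```

Because the definition is just data under version control, every change can be reviewed, diffed, and rolled back like any other code change.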

Continuous Integration

Another important term to understand is Continuous Integration (CI). CI is a software development technique. Here is how Martin Fowler defines it:

Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly. This article is a quick overview of Continuous Integration summarizing the technique and its current usage.

via martinfowler.com

If our infrastructures are now software and data, and we manage them via APIs, why shouldn't on-premises ops teams adopt the lessons learned by software teams that use CI? Is there a way to automatically and continually integrate the changes our infrastructure will absolutely require, something that kickstart never really handled well? Is there a way to normalize any type of hardware or OS? What about day 1 and day 2 operations, things like changing passwords when admins leave, or rolling security certs?

Most importantly, is there a way to give developers the cloud-like environment they desire on-premises? Can developers work with on-premises ops teams to explain the desired end state so that the ops team can build this automation?
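
Building on the sketch above, a CI step for infrastructure can be as simple as automated checks that run on every change to the definition files, before anything touches real hardware. This is a hypothetical example written as pytest-style tests; the approved OS list and sizing limits are invented policies, and a real pipeline would add linting and integration tests against a staging environment.

```python
# Hypothetical CI checks for an infrastructure definition: run them on every
# commit so a broken definition never reaches the data center.
import pytest  # assumes pytest is installed; run with `pytest test_infra.py`

desired_state = {
    "web-01": {"os": "ubuntu-22.04", "cpus": 4, "memory_gb": 16},
    "db-01":  {"os": "rhel-9",       "cpus": 8, "memory_gb": 64},
}

APPROVED_OS = {"ubuntu-22.04", "rhel-9"}   # invented policy for illustration

@pytest.mark.parametrize("name,spec", list(desired_state.items()))
def test_os_is_approved(name, spec):
    assert spec["os"] in APPROVED_OS, f"{name} uses an unapproved OS"

@pytest.mark.parametrize("name,spec", list(desired_state.items()))
def test_sizing_is_sane(name, spec):
    assert 1 <= spec["cpus"] <= 128
    assert 1 <= spec["memory_gb"] <= 1024
```

The point is the loop Fowler describes – integrate frequently, verify automatically – applied to infrastructure definitions instead of application code.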

Continually Integrated Data Center – a New Methodology for On-Premises Ops Teams

RackN is a proponent of the Continually Integrated Data Center (CI DC). The idea behind CI DC is approaching data center management in a software CI approach, but down to the physical layer. RackN’s CEO Rob Hirschfeld explains it this way:

What if we look at our entire data center down to the silicon as a continuously integrated environment, where we can build the whole stack that we want, in a pipeline way, and then move it in a safe, reliable deployment pattern? We're taking the concept of CI/CD but then moving it into the physical deployment of your infrastructure.

To sum up, CI DC takes the principles from CI and IaC but pushes them into the bare metal infrastructure layer.

RackN Digital Rebar – a CI DC Tool for On-Premises Ops Teams

RackN's goal is to change how data centers are built, starting at the physical infrastructure layer, and automating things like RAID/firmware/BIOS/OOB management and OS provisioning, no matter the vendor of any of these elements or the vendor of the hardware on which they are hosted.

Digital Rebar is deployed and managed by traditional on-premises ops teams. It is deployed on-premises, behind the firewall.

Digital Rebar is a lightweight service that runs on-premises behind the firewall and integrates deeply into a service infrastructure (DHCP, PXE, etc.). It is able to manage *any* type of infrastructure, from a sophisticated enterprise server to a switch that can only be managed via APIs to a Raspberry Pi. It is a 100% API-driven system and has the ability to provide multi-domain driven workflows.

Digital Rebar becomes the integration hub for all the infrastructure elements in your environment, from the bare metal layer up. Is the requested end state to stand up and manage VMware VCF? RackN has workflows that help you build the physical infrastructure to VMware's HCL, including hardening. Workflows are built of modular components that let you drive things to a final state. Since it is deployed on-premises, behind the firewall, it is air-gappable for high-security environments.
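
Because the system is API-driven, day 1 and day 2 operations can be scripted and folded into the same pipelines. Here's a generic sketch of what driving a workflow over a REST API looks like with Python's requests library; the URL, endpoint path, payload, and token below are invented placeholders for illustration, not the actual Digital Rebar API (see RackN's documentation for the real thing).

```python
# Generic sketch of driving an automation workflow over a REST API.
# Endpoint paths, fields, and the token are invented placeholders,
# NOT the actual Digital Rebar API.
import requests

API = "https://rebar.example.internal:8092/api/v0"   # placeholder URL
HEADERS = {"Authorization": "Bearer <token>"}        # placeholder auth

def start_workflow(machine_id: str, workflow: str) -> dict:
    """Ask the endpoint to drive one machine through a named workflow."""
    resp = requests.post(
        f"{API}/machines/{machine_id}/workflow",
        json={"workflow": workflow},
        headers=HEADERS,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    result = start_workflow("machine-0042", "harden-and-install-esxi")
    print(result)
```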

What’s new in Digital Rebar v4.3

Here are the new features available in the 4.3 launch:

  • Distributed Infrastructure as Code – delivering a modular catalog that manages infrastructure from firmware, operating systems and cluster configuration.
  • Single API for distributed automation – providing both single pane-of-glass and regional views without compromising disconnected site autonomy.
  • Continuously Integrated Data Center (CIDC) workflow – enabling consistent and repeatable processes that promote from dev to test and production

Real Talk

Not all compute will be in the cloud, but developers have new expectations of what their experience with the data center should be. Most devs build with languages and frameworks designed for the public cloud. Traditional data center platforms like VMware vSphere are even embracing cloud native tools like Kubernetes. All of this is proof we're in the midst of the digital transformation everyone has been telling us about.

Sysadmins, IT admins, even vAdmins, this is not a bad thing! On-premises ops teams can learn from the dev disciplines such as IaC and CI, and we can apply all the lessons we know about data protection, sovereignty, etc. to use new ops processes such as CI DC. It’s long past time to adopt a new methodology for managing data centers. Get your learn on, and get ahead of the curve. Our skills are needed, we just need to keep evolving them.

Project Nautilus emerged as Dell’s Streaming Data Platform

Disclaimer: I recently attended Storage Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Yesterday, Dell EMC’s Project Nautilus emerged as Dell EMC’s Streaming Data Platform. I wrote this post based on the presentation we were given at #SFD19, and decided to keep the Project Nautilus name throughout my report.

I love it when presenters tell us what world they are coming from, and tie our shared past to new products. Ted Schachter started his career at Tandem doing real-time processing with ATM machines. But as he pointed out, these days there is the capacity to store much more info than he had to work with back in his Tandem days. I loved how he drew a line from past to the present. We really need more of that legacy, generational information shared in our presentations to help us ground new technologies as they emerge.

From the Project Nautilus #SFD19 presentation

Data Structures are Evolving

Developers are using the same data structures they've used for decades. There is an emerging data type called a stream. Log files, sensor data, and image data are elements you will find in a stream. Traditional storage people think in batches, but the goal with streams is to move to transacting and interacting with all available data in real time, along a single path. By combining all these data types into a stream you can start to observe trends and do things like the ones shown on the slide above.

Since the concept of streams is pretty new, the implementations you'll see now are DIY. There are "accidental architectures" based on Kafka, an open source Apache platform for building real-time data pipelines and streaming apps.
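
To show how little code it takes to start one of these DIY pipelines, here's a minimal, hypothetical producer using the kafka-python client: it appends sensor readings to a topic as JSON. It assumes a broker is reachable at localhost:9092 and that the kafka-python package is installed; the topic and field names are made up.

```python
# Minimal sketch of pushing sensor readings onto a Kafka topic as a stream.
# Assumes `pip install kafka-python` and a broker at localhost:9092.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for reading in range(5):
    event = {"sensor": "camera-07", "ts": time.time(), "frame_drops": reading}
    producer.send("sensor-events", value=event)  # append to the stream

producer.flush()  # make sure everything is actually on the wire
```

Consumers such as Flink or Spark jobs then read the topic in order, which is what makes the single sequential path of a stream different from the batch mindset.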

Project Nautilus Emerged to Work with Streams

Project Nautilus from Dell EMC Storage is a platform that uses open source tools. They want to build on tools like Spark and Kafka to do real-time and historical analytics and storage. Ingest and storage are handled by Pravega: streams come in and are automatically tiered to long-term storage. Pravega is then connected to analytic tools like Spark and Flink (which was written specifically for streams). Finally, everything is glued together with Nautilus software to achieve scale (this is coming from Dell EMC Storage after all), and it is built on VMware and PKS. More details were to be announced at MWC, so hopefully we'll have some new info soon.

Real Talk

Project Nautilus emerged as a streaming data platform. This is another example of Dell EMC Storage trying to help their customers tame unstructured data. In this case, they are tying older technology that customers already use to newer technology – data streams. They see so much value in the new technology that they created a way for customers to get out of DIY mode, while at the same time taking advantage of existing technical debt.

This is also a reminder that we're moving away from the era of 3-tier architecture. There have been hardware innovations, which have led to software innovations. We are going to see more and more architectural innovations. Those who are open to learning how tech is evolving will be best positioned to apply the lessons learned over the past couple of decades.

How are you learning about the new innovations?

Taming Unstructured Data with Dell EMC Isilon

Disclaimer: I recently attended Storage Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Taming Unstructured Data

A common thread discussed by almost every vendor we visited was the issue of taming unstructured data. Vendors are building products that their customers can use to turn massive amounts of unstructured data into information. They all told us that their customers are demanding intelligent insights that are available anytime accessible anywhere. The groups from Dell EMC Storage were no different, they are also tackling this problem.

Four storage product teams came to chat with us during SFD19: Isilon, the Project Nautilus team, a team building devops tools, and PowerOne. What's interesting is that in addition to tackling the challenge of taming unstructured data, each of these product groups is working on innovations to traditional storage products that enable them to integrate with products and services we usually think of as cloud native, for example Kubernetes.

I’ll tackle each of the areas that I mentioned above, and this post will concentrate on Isilon.

Taming Unstructured Data with Isilon

Isilon Systems was founded in 2001 and acquired by EMC in 2010. Dell EMC Isilon is a scale-out NAS that runs on a file system called OneFS. The team has even won an Emmy for its early development of HSM (hierarchical storage management).

Isilon's definition of scale-out is policy-based management. Every node is independent and able to access data coherently. The files aren't being split, but you can keep snapshots in a different tier. Users write the policies and the system takes care of it from there.

via this slidedeck on SlideShare

CloudIQ (Dell EMC's SaaS infrastructure management tool) now supports Isilon. They also acquired a tool called ClarityNow, which is included with an Isilon license (as is CloudIQ), although you are charged for non-Dell EMC storage.

OneFS Gets Data Closer to Cloud Compute

Isilon OneFS is also available to run with compute in the public cloud. Dell EMC partners with service providers to offer Isilon OneFS on Dell EMC metal at their co-los that are located close to public cloud providers. It's offered as a SaaS service and is great for current on-premises Isilon customers who want to extend their Isilon implementation to the cloud for DR, replication, or even to perform new types of compute like machine or deep learning.

But *why* would customers want to do this? If you’ve stored your unstructured data in an Isilon for even 10 years, that is a tremendous amount of data gravity. It’s going to be hard to move this data to the cloud, even if the services and tools you’d like to use are there. Isilon’s OneFS structure allows you to extend this data to other locations, and if the locations are connected via a fast pipe in a co-lo center to a cloud, you can design to take advantage of the best of both worlds.

Real Talk

This is a great example of how traditional storage product teams are working with cloud product teams to create offerings to support the customers who are writing apps and taming unstructured data. Customers realize to do that, they have to go beyond polarizing architectural attitudes like “everything cloud” or “cloud is evil”.

These customers understand that when it comes to taming unstructured data, the devil is in the details. It is still the responsibility of the architect to understand what you'll be signing up for with any of these types of solutions. Ask lots of questions, and weigh the risks and benefits to be sure this type of solution will work for your organization.

Tiger Technology Brings the Cloud to You

Disclaimer: I recently attended Storage Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

The first presentation of Storage Field Day 19 was Tiger Technology. They are a data management company that has been around since 2004, providing solutions primarily for the media and entertainment industry.

This industry is interesting to modern storage because of their application requirements, in particular video. These applications are usually mission critical, and require high bandwidth and low latency. Because these applications are so diverse, there really isn't a standard. One requirement they do all have in common is that they are intolerant of data loss. Think of video games suffering lag, or a live sporting event dropping frames or even pixels – that is just not acceptable performance in this industry.

The Tiger Technology team took us on the journey of how they built their new Tiger Bridge offering. Tiger Bridge is a cloud tiering solution for Windows (they are working on Linux) that brings cloud storage to current (and legacy) workflows in a way that is invisible to your workers.

Tiger Technology’s Journey to the Tiger Bridge

The customer problem that took them on their journey to create Tiger Bridge was surveillance for an airport. The airport wanted to upgrade their surveillance systems. They had 300 HD cameras with a retention time of 2 weeks and wanted to scale within 3 years to 10,000 4K cameras that would have a retention of 6 months. Tiger Technology computed that the capacity for this project would be ongoing at 15 petabytes of data.

Tackling this problem using standard file systems would be prohibitively expensive, not to mention that it wasn't even possible to get Windows to that capacity at the time they started. They knew object storage would work better. Because of the security implications, the other requirements were no latency or bandwidth impact, no tamper point, software only, and scalable.

If you think about surveillance cameras, you need a way to keep the data on-site for a while, then you need to send the data someplace that doesn’t cost as much to store it. But you need to be able to bring that data back with fidelity if you need to check the videos for something. These customer challenges are how they came up with the idea for Tiger Bridge.

What is Tiger Bridge?

Tiger Bridge is a hierarchical storage management (HSM) system. It installs in less than five minutes on a server. The agent installed on the servers is a Microsoft Filter Driver and sits between the application reads and writes and target storage.  Since it is integrated with the file system as a filter driver it also falls under Active Directory control, which is great for existing workloads and policies.

With Tiger Bridge, files are replicated and tiered automatically based on policies set on last access time and/or volume capacity. The agent does the tiering work in the background, so sending or retrieving the file from the cloud, even cold cloud storage, is transparent to the user.

Via the TigerBridge website
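
To picture the policy side, here's a small hypothetical sketch of the kind of decision a tiering agent might make based on last access time and volume capacity. The thresholds, tier names, and logic are assumptions for illustration, not Tiger Bridge's actual implementation.

```python
# Hypothetical tiering decision based on last access time and volume capacity.
# Thresholds and tier names are illustrative, not Tiger Bridge's actual logic.
import time

DAY = 86400

def choose_tier(last_access_ts: float, volume_used_pct: float) -> str:
    age_days = (time.time() - last_access_ts) / DAY
    if volume_used_pct > 90 or age_days > 180:
        return "cloud-archive"   # cold object storage
    if age_days > 30:
        return "cloud-hot"       # online object storage
    return "local"               # keep on the on-premises file system

print(choose_tier(last_access_ts=time.time() - 200 * DAY, volume_used_pct=70))
# -> cloud-archive
```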

The team focused on providing this seamless experience to applications that are hosted on the Windows platform. Since they wanted this to also work for legacy apps, one thing they had to figure out is how to use all the commands that are common in a file system that aren’t replicated in the cloud, things like lock, move, rename, etc. They also wanted to support all the cloud storage features like versioning, soft delete, and global replication, since applications written for the cloud require these features.

The example they gave of bridging cloud and file system features was rename. You can rename any Windows file, no problem. But rename isn't available on public cloud systems; you have to do a copy. For a couple of files, that's probably no big deal. But if you rename a folder with lots of files in it, that could be a huge rename job. It may take time, and it will probably get expensive.
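
To see why a folder rename gets expensive, here's a hedged sketch of what "rename" turns into against an S3-style object store using boto3: every object under the old prefix is copied to a new key and then deleted. The bucket and prefixes are placeholders, and this is the generic pattern rather than Tiger Bridge's code.

```python
# What a folder "rename" becomes on an S3-style object store: copy + delete
# per object. Bucket and prefixes are placeholders; assumes boto3 credentials.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-surveillance-archive"

def rename_prefix(old_prefix: str, new_prefix: str) -> None:
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=BUCKET, Prefix=old_prefix):
        for obj in page.get("Contents", []):
            old_key = obj["Key"]
            new_key = new_prefix + old_key[len(old_prefix):]
            s3.copy_object(
                Bucket=BUCKET,
                Key=new_key,
                CopySource={"Bucket": BUCKET, "Key": old_key},
            )
            s3.delete_object(Bucket=BUCKET, Key=old_key)

rename_prefix("cameras/terminal-a/", "cameras/terminal-a-renamed/")
```

One cheap metadata operation on a local file system becomes two API calls per object, which is exactly the kind of cost Tiger Bridge hides from the user.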

Their solution keeps track of where the files are, and any changes that have been made. This solves the problem of data being rendered useless because it’s no longer associated with its original application, a common issue that brings on lock-in anxiety. Files under the Tiger Bridge control maintain a link with the file system on premises and the public cloud. Users never know if they are hitting the data on premises or in the cloud.

Check out the demo from a user perspective:

What does Tiger Technology do for users?

What this means is that a user on their laptop can use the Windows file system they are familiar with, and the agent handles where the file actually is in the background. Administrators can make rules that tier the data in ways that make sense to the business. It allows organizations to use the cloud as an extension of their data storage.

Another use case is disaster recovery. Using the cloud to keep a backup of your data in a different location, without having to manage another site or tapes, is very attractive. Since it is so easy to bring files back from the cloud, Tiger Bridge is able to handle this use case as well.

Real Talk about Tiger Technology

I think this is the year we're going to see a lot more solutions bubble up that truly bridge on-premises and the cloud, and I think we'll see them from older companies like Tiger Technology. These companies understand application requirements and the technical debt that companies are battling with, and they are finding ways to make the cloud model fit their customers' current realities.

The Tiger Technology presentation reminded me of something we used to say at EMC: a disk, is a disk, is a disk. Users, and applications, don't really care where the disk they are writing to is located, who manages it, and what it costs. They care about their applications being easy to use, low latency, and secure. Tiger Technology has figured out how to make that old storage saying work for public cloud and legacy applications.

What do you think? Let us know in the comments!

Should You Use Influencer Lists?

You’ve seen them before, influencer lists promising to deliver the names of the 100 Top Influencers for <insert trending new tech term here>. As a marketer, what are the best ways to use influencer lists? As an influencer, what does it really mean to be included on these lists?

Before I start, let me clarify that this post will focus on a B2B marketing perspective, and in particular B2B marketing for enterprise tech. Other marketing forms may not apply here.

This post got pretty long, but it’s important. TL;DR: What’s the history of influencer lists, who is making these lists, how are they compiled, a warning for influencers on these lists, and strategies for marketers when using these lists.

Why Do Influencer Lists Exist?

About thirteen years ago, social media really started to take off. People who really understood different technologies started blogging, creating videos online, and tweeting. Eventually, this labor started to be acknowledged as valuable by PR and traditional marketing (around 2010 or so).

The question for these traditional keepers of the corporate message and reputation became: with all of these people creating content, who should we pay attention to? Should we brief these people? Can we ignore the annoying ones? Who is worthy of our time and attention? This last part is important because time and attention always come at a price.

In the very beginning people shared their RSS feeds on their blogs. If you really liked someone’s blog, you checked out their RSS feed. If that person was awesome, obviously who they liked to read was awesome as well. Sometimes it worked, sometimes you just ended up reading what the awesome person’s friends wrote.

By the time PR and traditional marketing decided to trust social media as a real information source, no one was using RSS feeds anymore. So you had the perfect storm of internal organizations needing help to understand who was an influencer they should trust, and having budget and initiatives to use social media to amplify their brands.

Who Publishes These Lists?

In the beginning, lists were driven by the influencers. This gave the lists an obvious credibility issue.

To get the answer on who publishes influencer lists these days, let’s go back to the history of social media in big companies. As PR and traditional marketing organizations started to get their arms around protecting their brands on social media, it quickly became apparent that they were going to need a platform to keep up with their brands across all types of social media. There was just too much data being created for one or two people to keep up with! An industry was born, and social media monitoring platforms were created to help firms keep an eye on what people were saying about their brands.

Since all the tweets, Facebook posts, Reddit tirades, and blog posts were being collected by these platforms, it was pretty easy to create methodologies to determine who was talking the most about any given subject. These tools assign different weights to things like affinity and sentiment, and when combined with frequency and a search term, lists of influencers can be created. This isn't AI, it is pattern matching and sorting with human-created weights. It's math.

These days, the tools have evolved beyond monitoring tools. There are influencer marketing platforms to help PR and marketing organizations with their influencer marketing initiatives. If you see a “top 100 influencers in ….” list, there is a good chance that the company sharing the list is trying to sell a marketing team their influencer marketing program.

How Are These Lists Compiled?

Let's take Onalytica, a company that sells an influencer marketing platform (and training). I'm using them as an example because they are the most recent company with a big campaign to announce a Top 100 Cloud Influencers list. Those who made the 2020 list were more than happy to share Onalytica's announcement tweet, which had a tracking code to the announcement page. To see the entire list you had to give up your information to Onalytica. Fair disclosure: folks who made this list are definitely cloud influencers.

There were obvious problems with the list. Many well-known influencers were missing. There were 9 women, and very few people of color. How was the list compiled?

According to the Onalytica announcement, their priority influence metric is what they call Topical Authority (reference). They come up with this by taking the amount and quality of an influencer's social engagement on Twitter. The quality portion of this weight is subjective and I didn't see a definition for it. Next, they add in whether the person has been referenced, along with the cloud terms used in the search, on other social platforms: Instagram, Facebook, YouTube, Forums, Blogs, News and Tumblr content. More commentary on this below.

Search for Definitions

Here is Onalytica’s formula for determining the top influencers (as stated in the announcement blog post). Notice that critical definitions for qualitative parameters were not given.

  1. Resonance: Topical engagement
    It is not stated explicitly, but I believe this is related to reference. If so, this is how much an influencer posts and engages about "cloud" on Twitter.
  2. Relevance: Number of posts on topic, and % relevance – the proportion of their social content on the topic.
    The number of posts on the topic is quantitative. I have to wonder – does this include paid posts? The % relevance is problematic as well. If an influencer talks 75% about security, or devops, or programming, and 25% cloud, then they would rank lower than other influencers, even if they are core to the community discussion.
  3. Reach: Number of followers
    This is a quantitative weight. It is problematic as well; it narrows the field and eliminates many real influencers. (A sketch of how weights like these might combine into a single score follows this list.)
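
To make the math less mysterious, here is a toy sketch of how weights like these might combine into a single ranking score. The weight values and numbers are invented for illustration; Onalytica does not publish its exact formula.

```python
# Toy weighted-score model of an influencer ranking. Weights and fields are
# invented for illustration; no vendor publishes its exact formula.
WEIGHTS = {"resonance": 0.4, "relevance": 0.35, "reach": 0.25}

influencers = [
    {"name": "A", "resonance": 0.9, "relevance": 0.30, "reach": 0.95},
    {"name": "B", "resonance": 0.7, "relevance": 0.85, "reach": 0.20},
]

def score(person: dict) -> float:
    return sum(WEIGHTS[k] * person[k] for k in WEIGHTS)

for person in sorted(influencers, key=score, reverse=True):
    print(person["name"], round(score(person), 3))
# "A" outranks "B" largely on reach, even though "B" is far more on-topic,
# which is exactly how real community voices can fall off these lists.
```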

Influencer Lists Are Made From Statistics

Y’all, this is plain ole math. These weights are determined by what the company has deemed influential and important to the definition of an influencer. This isn’t a bad thing, it’s a place to start. But you need to figure out the math behind the process.

In the case of the Onalytica top 100 cloud influencers list, if someone isn’t active on Twitter they won’t make the top 100. Likewise if they aren’t being referenced from other social platforms, although it is not clear how this referencing is defined. Is it mentions? Is it links to their content out from others’ posts? Is it likes on their posts on these platforms? If you’re a marketer relying on tools like this, these are good questions to ask.

There is more info in this report, which is behind another Onalytica lead gen form, and there are two calls to action (stuff they want you to do to be convinced to buy their tool) in the report itself. Here is how they describe the strategy they used for the top 100 cloud influencers (emphasis mine):

Onalytica sourced and analyzed over 200 Billion Posts from Twitter, Blogs, Instagram, YouTube, Facebook, LinkedIn in order to identify key influencers such as journalists, subject matter experts, thought leaders, authors, academics and politicians that are influential on specific topics.
Influencers were then ranked by REACH, RESONANCE, RELEVANCE, REFERENCE to analyze which influencers are most relevant for brands to engage with. Using this methodology we identified 1,000 influencers relevant to the global market and segmented influencers into categories relevant to the cloud sector.
A network map was created identifying the top 100 engaged influencers across the topic of cloud. Through the network map we were able to analyze the scale of a brand’s reach into the influencer community through the number of interactions it had with influencers over the past year.

If you’re an influencer, you should understand this report is a marketing tool. If you’re a marketing professional, you should understand that these influencer lists are marketing tools that may or may not have relevance for your mission.

Strategies for Using Influencer Lists

So are these lists bad? No, they’re not, as long as you recognize them for what they are, and try to understand the math behind the results. Should you use an influencer list created by one of these influencer marketing platforms? It depends. If you are a small team, and you need to get your arms around a market for the first time, or you are prepping for a big launch into a new market, these lists can give you a head start. They aren’t bad, but they require evaluation.

You should know your market enough to ask some hard questions, especially what search terms are being used, and the math used to come up with results. Once you know that, it is also important to pay particular attention to the influencers that land on the list.

Separate the Influencers into Different Categories

Are there employees on the list? They can help you vet the rest of the list. When I was doing this circa 2011, the lists always contained our biggest competitors, or influencers of those competitors. That wasn’t obvious to marketers who weren’t active in our community, but we knew immediately.

You also should be giving your internal influencers as much love as you give your external influencers. Community building starts in-house, you cannot build a strong external community if you don’t have a strong internal community.

Are competitors on the list? Don’t cater to them, obviously. But be sure to keep tabs on what they are saying, and to whom they are connected. Remember, competitors’ influencers are your influencers too.

Are partners on the list? Show them love! That is a sure way to strengthen your ties: promote the work that is important to them.

Who is missing from the list? It is unacceptable to use one of these tools and accept a list that is not diverse. There are so many documented reasons that people will not be picked up on the basis of an algorithm's definition of reach, resonance, relevance, or reference. These tools reinforce stereotypical echo chambers.

Question who is missing if everyone on the list looks the same. We all have an obligation to build a future that represents everyone.

This is Ultimately About Community

Finding your influencers is a community building exercise. These lists are a great way to take the temperature of who is talking about the topics your organization is working on, but you still need to be protective of how you choose to engage in these conversations.

You will miss your best influencers if you rely only on these algorithms. A solid feedback loop from your biggest influencers really will make a better product, but you have to put in the work to find the right list for your product.

Finally, you must tend that list: sometimes water it, sometimes weed it, sometimes cut it back. Don't just accept an influencer list, do the work to build real community.

Is storage still relevant?

storage field day

Disclosure: I was invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley, CA. My expenses, travel, accommodation and conference fees were covered by GestaltIT, the organizer. I was not obligated to blog or promote the vendors' technologies. The content of this blog is of my own opinions and views.

Is storage still relevant in today’s cloud and serverless environments? At Storage Field Day 19 we spent several hours with Western Digital, and heard from ten different presenters. Did they show us that storage is still relevant?

Hardware Must Innovate for Software to Innovate

I think the industry often forgets that software innovation is impossible without hardware innovation. We’ve seen some pretty amazing hardware innovations over the last decade or so, and hardware companies are still at it.

You may be asking: how is an old hardware company able to keep up, let alone still be innovating? Well, Western Digital has 50 years of storage experience, and they are still innovating. Their heritage is highlighted in this slide.

Western Digital’s 50 year heritage via https://www.youtube.com/watch?v=Lqw3_HgiA9o

Western Digital is looking at how to solve the data storage challenges for emerging workloads. They already have tons of experience, so they know that the data must be stored, and that more data is being created now than ever before.

All of that data needs to be stored so it is available to have compute applied to it, because compute is what turns the data into actionable information. But there is so much data now – how should it get stored? How will it be accessed? It's becoming pretty obvious that the old ways of doing this will not be performant, or maybe not even scalable enough.

One workload they talked about throughout many of the presentations was video. Just think about the kinds of devices that now create streams of video: IoT devices, surveillance cameras, cars, the general public, etc. Much of the new streaming video is being created at the edge. The edge cases are so diverse that even our understanding of "edge" may be antiquated.

So is storage still relevant? Maybe not the type I came up on – SANs and NASs. But the next evolution of storage has never been more relevant than now.

Composable Infrastructure

Western Digital also discussed composable infrastructure, and how technologies such as NVMe over Fabric make composable infrastructure possible. Don't worry if you have no idea what I'm talking about – the standards for NVMe over Fabric weren't pulled together until 2014, and the standard became real in 2016. Also, hardware standards bodies are so peculiar – they don't use the NVMe acronym, they use "NVM Express". This makes it hard to find primary source information, so keep that in mind when you're googling.

What can NVMe over Fabric do for composable infrastructure? First, let's answer why you would need composable infrastructure at all.

Western Digital's Scott Hamilton walked us through this. First of all, new types of applications like machine learning and deep learning need the data to be close to where the compute is happening. Even after considering the tradeoffs that must be made because of data gravity, traditional architecture slows things down because resources are locked in that traditional stack.

Composable infrastructure takes the resources trapped in traditional infrastructure, breaks them up and disaggregates them. After that’s done, the resources can be recreated into the leanest combination possible for a workload, virtually composed, creating a new type of logical server. The beauty is this can then be modified based on the dynamics of a workload.
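
Here's a deliberately simplified model of that composition idea: resources sit in disaggregated pools, and a "logical server" is just the leanest combination drawn from them for the life of a workload. The pool sizes and this little API are assumptions for illustration only.

```python
# Toy model of composable infrastructure: draw the leanest combination of
# resources from disaggregated pools, release them when the workload ends.
# Pool sizes and this API shape are illustrative assumptions.
pools = {"cpus": 512, "memory_gb": 4096, "nvme_tb": 200}

def compose(name: str, cpus: int, memory_gb: int, nvme_tb: int) -> dict:
    request = {"cpus": cpus, "memory_gb": memory_gb, "nvme_tb": nvme_tb}
    if any(pools[k] < v for k, v in request.items()):
        raise RuntimeError("not enough free resources in the pools")
    for k, v in request.items():
        pools[k] -= v
    return {"name": name, **request}

def decompose(server: dict) -> None:
    for k in ("cpus", "memory_gb", "nvme_tb"):
        pools[k] += server[k]

ml_node = compose("training-01", cpus=32, memory_gb=256, nvme_tb=20)
print(ml_node, pools)
decompose(ml_node)   # resources return to the pools for the next workload
```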

According to Hamilton, Western Digital believes NVMe will be the foundation of next-gen infrastructures, and that eventually Ethernet will be the universal backplane. It was an interesting session, check it out for yourself below.

Western Digital at Tech Field Day via https://www.youtube.com/watch?v=LuRI1TlBJgA

Zoned Storage

Western Digital is also championing the Zoned Storage initiative. This will be part of the NVMe standard. Zoned Storage creates an address space on disk (HDD or SSD) that is divided into zones. Data must be written sequentially to a zone, and can't be overwritten in place. Here's Western Digital's explanation:

[Zoned Storage] involves the ability to store and retrieve information using shingled magnetic recording (SMR) in hard disk drives (HDDs) to increase the storage density and its companion technology called Zoned Name Spaces in solid state drives (SSDs).

via https://www.westerndigital.com/company/innovations/zoned-storage

Why does the industry need this? According to Swapna Yasarapu, Sr. Director of Product Marketing for Western Digital’s Data Center Business Unit, we’re moving into an era where large portions of unstructured data are being created. All of this data can’t be stored via traditional methods. Additionally, unstructured streams come from IoT edge devices, video, smart video, telemetry, and various other end devices. Many of these streams must be written sequentially to unlock the information the data contains.

Finally, this is an open source initiative that will help write this data in a more practical way for these types of data streams to HDDs and SSDs.
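
Here's a small conceptual model of the zone rules described above: writes land sequentially at a per-zone write pointer, overwrites in place are rejected, and reclaiming space means resetting the whole zone. The zone size and class shape are illustrative assumptions, not the actual ZBC/ZAC/ZNS command sets.

```python
# Conceptual model of a storage zone: sequential writes at a write pointer,
# no overwrite in place, reset reclaims the whole zone. Sizes are illustrative.
class Zone:
    def __init__(self, capacity_blocks=65536):
        self.capacity = capacity_blocks
        self.write_pointer = 0          # next block that may be written
        self.blocks = []

    def append(self, data: bytes) -> int:
        """Write sequentially at the write pointer; returns the block address."""
        if self.write_pointer >= self.capacity:
            raise IOError("zone full: reset it or open a new zone")
        self.blocks.append(data)
        addr = self.write_pointer
        self.write_pointer += 1
        return addr

    def overwrite(self, addr: int, data: bytes) -> None:
        raise IOError("random overwrite not allowed in a zone")

    def reset(self) -> None:
        """Reclaim the zone; all data in it is discarded at once."""
        self.blocks.clear()
        self.write_pointer = 0

zone = Zone()
zone.append(b"frame-0001")
zone.append(b"frame-0002")
zone.reset()   # the only way to make the space writable again
```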

Watch the entire presentation here:

Acronyms as an innovation indicator

One way I can tell when there is innovation is when I come across acronyms I don’t know. After 3 years focusing on virtualization hardware, I found myself having a hard time keeping up with the acronyms thrown at us during the presentations.

The good news is that some of these technologies are brand new. So much for storage being old school! Plus, can you imagine what apps are waiting to be written on these new architectures that have yet to be built?

Here are the acronyms I didn’t know. How many can you define?

  • TMR: tunneling magnetoresistance
  • TPI: Track Per Inch (disk density)
  • PZT: Piezoelectric actuator (see this earlier Storage Field Day post)
  • VCM: Voice Coil Motor (see this video )
  • SMR: Shingled Magnetic Recording
  • SSA: Solid State Array
  • ZBC: SCSI Zoned Block Commands
  • ZAC: Zoned ATA Commands
  • ZNS: Zoned Namespaces

Is Storage Still Relevant? Final thoughts

I think you know my answer to the question "is storage still relevant?": of course! We are just beginning to create the standards that will usher in the real digital transformation, so there is plenty of time to catch up.

Storage Field Day 19: Getting Back to My Roots

storage field day

I’m excited that I have been invited to be a delegate at Storage Field Day 19. This is a little different than the Tech Field Day I attended in 2019, because the focus of all the presentations at this event is data storage.

I am looking forward to this because I am a storage person. My career started as a Technical Trainer at EMC, then I was a storage admin for a pharma company. I went back to EMC to develop technical training, then went to work for Dell Storage, and then Inktank (a startup that provided services and support for Ceph). I guess you could say storage is in my blood, so Storage Field Day should be lots of fun.

What to expect at Storage Field Day

Here are the companies we'll be visiting (in the order they will be presenting), and what I'm looking forward to hearing about from them. Remember, you can join in on this event too by watching the livestream and participating in the Twitter conversation using the hashtag #SFD19. You can @ me during the livestream and I can ask a question for you.

Disclosure: I am invited by GestaltIT as a delegate to their Storage Field Day 19 event from Jan 22-24, 2020 in Silicon Valley. My expenses, travel, accommodation and conference fees will be covered by GestaltIT, the organizer and I am not obligated to blog or promote the vendors’ technologies to be presented at this event. The content of this blog represents my own opinions and views.

Tiger Technology

The first presentation we'll hear will be from Tiger Technology. Just looking at the website, they claim to do lots of stuff. When I look at their About page, they've been around since 2004 "developing software and designing high-performance, secure, data management solutions for companies in Enterprise IT, Surveillance, Media and Entertainment, and SMB/SME markets". They are headquartered in Bulgaria and Alpharetta, and since my mom was born and raised in Alpharetta, they get extra points.

Skipping to their News page, it looks like they have a new solution that tiers data in the cloud. I’m looking forward to hearing how they do that!

NetApp

NetApp talked with us at TFD20 (my blog review of that presentation). They talked to us then a bit about their flavor of Kubernetes, and the work they are doing to make it easy for their customers to have data where they want it to be. I'm hoping they do a deeper dive on CVS and ANF, their PaaS offerings for the public clouds.

Western Digital

Western Digital has presented at previous Tech Field Day events, and have acquired many companies who are Tech Field Day presenting alums. The last time they presented back in February 2019 they talked about NVMe, and I love that topic.

One thing I think that doesn’t get enough attention is the incredible innovation that has happened over the last several years in storage hardware. The software is now catching up, and apps will follow. So there is cool tech stuff happening on prem too, not just in the public cloud domain.

I peeped their twitter account, and they have interesting things they are showing this week at CES. Like this 8TB prototype that looks like a cell phone battery bank.  That would be a pretty sweet piece of swag! 😊

Infrascale

This will be Infrascale’s first appearance at Storage Field Day. Their website says what they do right up front: they have a DRaaS (Disaster Recovery as a Service) solution that fails to a second site, booting from an appliance or the cloud.

After storage, the biggest chunk of my career has been spent on data protection and disaster recovery, so I'll be looking forward to this presentation as well. I'm really looking forward to hearing about how this solution can be included in an architecture.

Dell EMC

Since I’ve worked in storage at Dell and EMC, and I’m just coming off a tour at VMware, of course I’m excited to sit in on presentations from my Dell Technologies federation homies! There will be presentations on Isilon and PowerOne, but the one I’m most curious about is one on DevOps.

Komprise

Komprise has presented at Storage Field Day before (in 2018). They are a data management and tiering solution. At AWS re:invent they unveiled a cloud data growth analytics solution. I hope we hear about that.

WekaIO

WekaIO has presented at Tech Field Day a couple of times before. They have a distributed storage system for ML/AI, and it looks like they directly access NVMe flash drives. It looks like they also have a solution on AWS. So this should be an interesting conversation. I'm just hoping we don't have to listen to a "what is AI" story before they get to the good stuff.

Minio

This will be Minio's first presentation at Tech Field Day. Minio sells high performance object storage. One of the other Tech Field Day delegates, Chin-Fah Heoh, has already written a blog post about how Minio is in a different class than other object storage providers. I'm really looking forward to this presentation.