Digital Sunshine Solutions

We're here to help with your product and community strategic initiatives, from complete strategy development to tactical execution.

Product Strategy

Need help creating compelling product positioning and messaging? Just need extra hands to get a project across the line? We can help with all of your product needs, from strategy to launch to sales enablement.

Community Strategy

Would you like to use the power of social media to go beyond counting clicks? We can help you build a community of customers, analysts, and employees that helps you build better products.

Digital Literacy

A modern democracy requires a digitally literate citizenry. How is digital literacy different from traditional literacy? What tools are there to get everyone up to speed?

Coming soon.

Recent Blog Posts

Get the latest from our blog.

Evaluating Digital Sources During an Epidemic

Evaluating digital sources is an important component of being digitally literate. During an epidemic, this skill can literally mean life or death.

But how do you sift through sources to evaluate them when you are fatigued and stressed out from the information itself?

NSE: Never Stop Evaluating Digital Sources

You must look at the source of the information that you consume. Can you trust everything that the nightly news reports? Can you trust things your friends post on social media?

This exercise is a great template for evaluating digital sources:

  • Who wrote the article/post? They may have a title such as doctor, but are they really a doctor? Use Google to find out!
  • Who do they quote? Sometimes articles have great quotes from credible people, but they don’t link to the original source. Google the person’s name and the quote to be sure it hasn’t been taken out of context.
    Extra Credit: Be careful about clicking on highlighted words. Many times these are ads, and the website is making money on every click you make.
  • What is the site’s purpose? Are they sharing information to reinforce their point of view? This is important when you are looking at claims. Do they back their claims up with links to original sources like congressional hearings, interviews, and research?
    If you google the claim word-for-word, do other sites come to the same conclusion?

Disinformation is Warfare

Here’s how Merriam-Webster defines disinformation:

false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth.

Merriam-Webster.com

Lea Gabrielle is the Special Envoy & Coordinator for the U.S. Department of State’s Global Engagement Center. She testified before the Senate Foreign Relations Subcommittee on State Department and USAID Management, International Operations, and Bilateral International Development on March 5, 2020 (10 days before this blog post was written; testimony transcript here).

She explained her mission, which is “leading and coordinating the interagency to decisively expose and counter foreign state and non-state disinformation and malign propaganda”. In her testimony, she discussed how China spreads disinformation by censoring information:

Beijing attempts to censor the sheer extent of this global public health crisis – from downplaying the number of casualties, limiting criticism of the CCP’s response, and silencing Dr. Li Wenliang’s initial red flags about the deadly outbreak.

From testimony transcript, available here.

She also discussed the disinformation techniques that Russia uses:

These include cyber-enabled disinformation operations; propaganda campaigns that seek to rewrite history; coordinated social media swarms that inflame existing fault lines of societies, and an array of other actions that fall within the scope of their malign information operations.

From testimony transcript, available here.

The Guardian ran a story about several thousand Russian-linked accounts – previously identified for airing Russian-backed messages on major events such as the war in Syria, the Yellow Vest protests in France, and Chile’s mass demonstrations – that are posting “near identical” messages about the coronavirus. The goal seems to be to sow seeds of distrust between the US and China.

Fight Information Fatigue

Lexico.com is a collaboration between Dictionary.com and Oxford University Press. This is how they define Information Fatigue:

Apathy, indifference, or mental exhaustion arising from exposure to too much information, especially (in later use) stress induced by the attempt to assimilate excessive amounts of information from the media, the Internet, or at work.

Lexico.com

I don’t know about you, but this is exactly how I currently feel about the information available on the COVID-19 pandemic. I think the term information overload applies here as well. The danger with information fatigue is the same as with any fatigue – it keeps you from taking action.

Right now there is so much information – and disinformation – about COVID-19 that getting fatigued can mean you don’t pay attention to the information you need to survive. After all, it takes time and effort to evaluate sources. Can you afford to get so fatigued you don’t evaluate your sources, and get sucked into reacting to disinformation?

Here’s how I’m trying to fight this fatigue:

  • I’m limiting my information intake to a couple of sessions a day, and timeboxing each one.
  • If I get a direct message from a friend, I look critically at what they send me. It usually sets off a spirited text conversation or phone call (we love each other, so that’s ok).
  • I’m taking care of my mental health in my normal ways: yoga (most studios in Austin are teaching classes via Zoom), eating properly, getting enough sleep, walking my dog, working, and creating. I am always creating for work, which is incredible. But there’s a bright side to social distancing: I’m home in the spring for the first time in a decade, so I have a REAL GARDEN going and I’m so happy about that.
    Bottom Line: take care of yourself so you have the strength to determine what information is important for you and your loved ones.

Real Talk

During a global epidemic you have to make sure you’re getting the information you need to survive, even if you’re overwhelmed by the sheer amount of information.

Be sure to take care of your mental and physical health so that you can avoid information fatigue. And NSE – never stop evaluating digital sources.

Tools for Interactive Online Events

What are the best tools for designing interactive online events? As I explained in my last post (Design Considerations in Designing Online Events), you can’t just lift and shift a face-to-face event to the digital world and expect interactive experiences to happen. You must deconstruct the face-to-face event and design an interactive online event.

This post will dive into various tools for designing interactive online events. I won’t be recommending specific tools, but rather providing advice for evaluating them.

You Probably Have the Tools You Need

How do you currently host live webinars? Think about the tools you already have, and leverage your investment in them. Your investment includes things like licensing costs, IT integration with your company’s identity systems, and integration with your marketing automation systems.

You will want to ensure that the platform can handle the size of the audience you’ll have if you convert a face-to-face event into an online event, and that you’re able to accommodate the increased bandwidth.

It’s also a good idea to use this as an opportunity to investigate all the options the platform provides, especially if you’ve been using it for a while. Does the platform allow for things like file sharing, built-in polls, closed captioning, or even broadcasting to other platforms like YouTube?

Go Where Your Audience Congregates

The most overlooked tools for interactive events are the ones where your audience can communicate with you and other audience members. The way to encourage people to participate is to meet them where they are.

Where does your target audience interact online? Is it Twitter? Slack? LinkedIn? Reddit? Find a way to go to them. But be careful of making it weird — don’t show up with your perfect messaging in the tools your audience is comfortable using and expect people to want to interact. That’s just weird.

This is a great time to interview the experts who work in your company, who are probably already interacting in these spaces (yes, with your customers!). Work with them to figure out the best way to make these spaces a part of your architecture for interactive online events. Your internal experts are key to making this part not weird.

Our world is networked; your customers don’t interact with you in the hierarchical ways of the past. Some even say that co-learning now trumps marketing. Take advantage of this opportunity, and meet your customers where they are.

Make Your Content Interactive

Do you want the audience to be interactive? Make sure the content you are creating for these events is interactive! Stop making sage-on-the-stage presentations; they are boring.

Also, no one wants to sit through 15 minutes of your marketing message before they can get to the real content they came to hear. If your downstream marketing teams are doing their jobs, everyone has heard this message before. In presentations. On product pages. In marketing emails. All over social media.

The quickest way to disconnect folks is to drone on about what YOU want them to hear. Create presentations that focus on problems your audience has, and be sure you’re telling the story in a way that lets your customers recognize their own environments. Help them see how using your product helps them solve their problems.

If you’re being honest, you know when your face-to-face audiences tune out. Whether it’s a keynote being delivered after a live band performs at 8 AM or a training class, people go to their phones the minute the content is irrelevant to their needs.

Get back to basics. Focus on your audience’s needs, not your need for them to hear your polished messaging. Focus on what they came to hear, make sure it is framed by your messaging, and provide a mechanism for live feedback if they do reach for their phones.

Staff Appropriately for Interactive Online Events

It is tempting to attempt to save money by cutting back on the staff assigned to support interactive online events. Give in to that temptation at your own peril! You need staff to monitor the online event and fan the flames of interest to get that roaring fire of interactivity going.

The bare minimum for staffing during the event is a speaker and perhaps a moderator. But you also need subject matter experts (SMEs) covering wherever you’ve planned to have interaction. That may mean having an SME in the webinar chat, but don’t forget to have SMEs covering social media as well.

These folks shouldn’t just answer questions – they should be personable, just as they would be at a face-to-face conference! For most of us, this isn’t unusual to do on social media. If you approach these areas openly, you’ll probably encounter some snark. But let’s be honest: if customers trust you, they are going to be snarky to your face as well.

Having SMEs monitoring interactive areas can also help find problem areas. It may become apparent that the audience doesn’t understand or agree with the presentation, and that could derail the presentation in the interactive space. Having a monitor who understands the language of your audience and can act as a mediator is critical.

This SME can also pass any problem areas to the host or speaker, so that the speaker can address the audience’s concerns. This real-time, interactive acknowledgement of the audience is something that cannot be done in a live face-to-face keynote. Imagine the impact that really being heard could have on your customer audience.

Don’t underestimate the value of having your marketing teams also monitoring audience interactions. They can document questions to build an online FAQ, measure which social platforms were the most lively, and monitor discussions afterwards with social media tools. If your marketing team works with your SMEs to find keywords and create hashtags, this will help you keep the interactive fires warm until your next event, whether online or face-to-face.

Real Talk

There’s no doubt about it – you will need to rely on tools for interactive online events. The good news is that you probably already have the tools you need. You’re going to need to evaluate these tools, create interactive content, and resist the urge to run these events with a minimal crew.

In the next post in this series, I’ll review an online event that was designed for interaction. I’d love to hear your experiences. What is the most interactive online event you’ve attended? What made it awesome? Let me know below in the comments.

What Can On-Premises Ops Teams Learn from Dev Processes?

Disclaimer: I consult for RackN, but I was not asked to write this post (or paid to do so), and it did not go through an editorial cycle with them. The following represents my own words and opinions.

Automation is a foundational pillar of digital transformation, but is it possible for on-premises ops teams to automate bare metal? Can ops teams adopt public cloud processes to advance on-premises processes? This post will define a few cloud infrastructure terms and discuss RackN Digital Rebar’s latest announcement.

Here are the RackN official announcement materials, if you’d like to skip straight to that.

On-Premises Ops Teams Are Still Important!

One of the good things that came from developers operating in public cloud environments is the plethora of tools and methodologies they developed. Now many dev teams are developing applications that need to take advantage of on-premises data.

These teams are finding that data gravity is a real thing, and they want the application close to where the data is (or is being created) to minimize latency or to conform to security and other compliance regulations. On-premises operations teams are being asked to build environments for these apps that look more like public cloud environments than traditional 3-tier environments.

As on-premises ops folks, we need to understand the terms devs use to describe their processes, and see what we can learn from them. Once we understand what the desired end state for their environments is, we can apply all of our on-premises discipline to architect, deploy, manage, and secure a cloud-like environment on-premises that meets their end goals.

Instead of fighting with developers, we have the chance to blend concepts hardened in the public cloud with hardened data center concepts to create that cloud-like environment our developers would like to experience in physical datacenters.

Definitions

Before we get started, let’s define some terms. One thing I find curious is that people will posture on Twitter as experts, but when you dig into what they are talking about it isn’t clear if everyone is using the terms in the same way. So let’s set the stage with a shared understanding of Infrastructure as Code, Continuous Integration, and the Continually Integrated Data Center.

Infrastructure as Code (IaC)

This is the Wikipedia definition of IaC:

The process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. The IT infrastructure managed by this comprises both physical equipment such as bare-metal servers as well as virtual machines and associated configuration resources.

Via Wikipedia

Automating data center components is nothing new; I wrote kickstart and jumpstart scripts almost 20 years ago. Even back then, this wasn’t a simple thing – it was a process. In addition to maintaining scripts written in an arcane language, change control was ridiculous. If any element changed – an OS update or patch, new hardware (memory, storage, etc.), or a change in the network – you’d have to tweak the kickstart scripts, then test and test until you got them working properly again. And bless your heart if you had to roll something like an OS update across different types of servers, with different firmware or anything else.

Cloud providers were able to take the idea of automating deployments to a new level because they control their infrastructure, and normalize it (something most on-premises environments don’t have the luxury of doing). And of course, the development team or SREs never see down to the bare metal, they look for a configuration template that will fit end state goals and start writing code.

This AWS diagram from a 2017 document describes the process of IaC. Please note the five elements of the IaC process:

via AWS Infrastructure as Code

There is an entire O’Reilly book written about IaC. The author (Kief Morris) defined IaC this way:

If our infrastructure is now software and data, manageable through an API, then this means we can bring tools and ways of working from software engineering and use them to manage our infrastructure. This is the essence of Infrastructure as Code.

via Infrastructure as Code website
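
To make that definition concrete, here is a minimal sketch of the IaC pattern in Python: the desired state lives in a version-controlled, machine-readable definition, and an idempotent routine converges the environment to match it. Everything here – host names, fields, and the fake inventory – is hypothetical and not tied to any real IaC tool.

```python
# Minimal sketch of the IaC pattern: desired state is data, and an
# idempotent routine converges reality to match it. All names and
# fields are hypothetical, for illustration only.

# The "definition file": what we want each machine to look like.
DESIRED_STATE = {
    "web-01": {"os": "ubuntu-20.04", "memory_gb": 16, "role": "webserver"},
    "db-01":  {"os": "ubuntu-20.04", "memory_gb": 64, "role": "database"},
}

# A fake inventory standing in for what is actually deployed.
INVENTORY = {
    "web-01": {"os": "ubuntu-18.04", "memory_gb": 16, "role": "webserver"},
}

def current_state(host):
    """Stand-in for an inventory/API query of what is actually deployed."""
    return INVENTORY.get(host)

def converge(host, spec):
    """Stand-in for the provisioning call; here it just updates the fake inventory."""
    INVENTORY[host] = dict(spec)

def apply(desired):
    """Idempotent apply: only act where reality differs from the definition."""
    for host, spec in desired.items():
        if current_state(host) != spec:
            print(f"{host}: drift detected, converging to {spec}")
            converge(host, spec)
        else:
            print(f"{host}: already at desired state")

apply(DESIRED_STATE)  # first run converges web-01 (old OS) and creates db-01
apply(DESIRED_STATE)  # second run is a no-op - that's the idempotence
```

The point isn’t the toy code; it’s that the desired state becomes reviewable, diffable, and testable, just like application source.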

Continuous Integration

Another important term to understand is Continuous Integration (CI). CI is a software development technique. Here is how Martin Fowler defines it:

Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Many teams find that this approach leads to significantly reduced integration problems and allows a team to develop cohesive software more rapidly. This article is a quick overview of Continuous Integration summarizing the technique and its current usage.

via martinfowler.com

If our infrastructures are now software and data, and we manage them via APIs, why shouldn’t on-premises ops teams adopt the lessons learned by software teams that use CI? Is there a way to automatically and continually integrate the changes our infrastructure will inevitably require – something that kickstart never really handled well? Is there a way to normalize any type of hardware or OS? What about day 1 and day 2 operations, things like changing passwords when admins leave, or rolling security certs?

Most importantly, is there a way to give developers the cloud-like environment they desire on-premises? Can developers work with on-premises ops teams to explain the desired end state so that the ops team can build this automation?
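
As a thought experiment, here is a minimal sketch of what gating infrastructure changes behind a CI loop could look like. The check commands are placeholders I made up; the point is the pattern – every change to the definitions is validated automatically before it is promoted, exactly as a software team gates a merge.

```python
import subprocess
import sys

# Hypothetical CI gate for infrastructure definitions. Each command is a
# placeholder for a real validation step in your environment.
CHECKS = [
    ["python", "-m", "pytest", "tests/"],            # unit-test the automation code
    ["python", "validate_definitions.py"],           # lint/schema-check the IaC files
    ["python", "deploy.py", "--target", "staging"],  # trial run against a staging pool
]

def integrate_change():
    """Run every check; refuse to promote the change if any check fails."""
    for cmd in CHECKS:
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {' '.join(cmd)} - change not promoted")
            return False
    print("All checks passed - promote change to the production pool")
    return True

if __name__ == "__main__":
    sys.exit(0 if integrate_change() else 1)
```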

Continually Integrated Data Center – a New Methodology for On-Premises Ops Teams

RackN is a proponent of the Continually Integrated Data Center (CI DC). The idea behind CI DC is to approach data center management the way software CI approaches code, but down to the physical layer. RackN’s CEO Rob Hirschfeld explains it this way:

What if we look at our entire data center down to the silicon as a continuously integrated environment, where we can build the whole stack that we want, in a pipeline way, and then move it in a safe, reliable deployment pattern? We’re taking the concept of CI/CD but then moving it into the physical deployment of your infrastructure.

To sum up, CI DC takes the principles from CI and IaC but pushes them into the bare metal infrastructure layer.

RackN Digital Rebar – a CI DC Tool for On-Premises Ops Teams

RackN’s goal is to change how data centers are built, starting at the physical infrastructure layer and automating things like RAID, firmware, BIOS, and out-of-band (OOB) management, as well as OS provisioning – no matter the vendor of any of these elements or the vendor of the hardware on which they are hosted.

Digital Rebar is deployed and managed by traditional on-premises ops teams. It is a lightweight service that runs on-premises, behind the firewall, and integrates deeply with core infrastructure services (DHCP, PXE, etc.). It can manage *any* type of infrastructure, from a sophisticated enterprise server to a switch that can only be managed via APIs to a Raspberry Pi. It is a 100% API-driven system and has the ability to provide multi-domain workflows.

Digital Rebar becomes the integration hub for all the infrastructure elements in your environment, from the bare metal layer up. Is the requested end state to stand up and manage VMware VCF? RackN has workflows that help you build the physical infrastructure to VMware’s HCL, including hardening. Workflows are built of modular components that let you drive things to a final state. And since Digital Rebar is deployed on-premises, behind the firewall, it can be air-gapped for high-security environments.
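
Because the system is API-driven, the whole loop can be scripted. Here is a rough sketch of what that could look like from Python – note that the endpoint, paths, field names, and auth scheme below are my assumptions for illustration, not verified Digital Rebar API details; consult the RackN documentation for the real interface.

```python
import requests

# Hypothetical sketch of driving an API-first provisioning system.
# Endpoint, paths, payload fields, and auth are assumptions, not the
# verified Digital Rebar API - consult the RackN docs for the real one.
ENDPOINT = "https://rebar.example.internal:8092"   # hypothetical server
HEADERS = {"Authorization": "Bearer <token>"}      # hypothetical auth token

# Ask the endpoint what machines it knows about.
machines = requests.get(f"{ENDPOINT}/api/v3/machines", headers=HEADERS).json()

# Point each machine at a workflow that drives it to a desired end state.
for machine in machines:
    requests.patch(
        f"{ENDPOINT}/api/v3/machines/{machine['Uuid']}",
        headers=HEADERS,
        json=[{"op": "replace", "path": "/Workflow", "value": "discover-and-harden"}],
    )
```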

What’s new in Digital Rebar v4.3

Here are the new features available in the 4.3 launch:

  • Distributed Infrastructure as Code – delivering a modular catalog that manages infrastructure from firmware to operating systems to cluster configuration.
  • Single API for distributed automation – providing both single-pane-of-glass and regional views without compromising disconnected site autonomy.
  • Continuously Integrated Data Center (CIDC) workflow – enabling consistent and repeatable processes that promote changes from dev to test to production.

Real Talk

Not all compute will be in the cloud, but developers have new expectations of what their experience with the data center should be. Most devs build with languages and tools designed for the public cloud. Traditional data center platforms like VMware vSphere are even embracing cloud-native tools like Kubernetes. All of this is proof we’re in the midst of the digital transformation everyone has been telling us about.

Sysadmins, IT admins, even vAdmins: this is not a bad thing! On-premises ops teams can learn from dev disciplines such as IaC and CI, and we can apply all the lessons we know about data protection, sovereignty, etc. as we adopt new ops processes such as CI DC. It’s long past time to adopt a new methodology for managing data centers. Get your learn on, and get ahead of the curve. Our skills are needed; we just need to keep evolving them.

Project Nautilus Emerged as Dell’s Streaming Data Platform

Disclaimer: I recently attended Storage Field Day 19. My flights, accommodation and other expenses were paid for by Tech Field Day. There is no requirement for me to blog about any of the content presented and I am not compensated in any way for my time at the event. Some materials presented were discussed under NDA and don’t form part of my blog posts, but could influence future discussions.

Yesterday, Dell EMC’s Project Nautilus emerged as the Dell EMC Streaming Data Platform. I wrote this post based on the presentation we were given at #SFD19, and decided to keep the Project Nautilus name throughout my report.

I love it when presenters tell us what world they are coming from, and tie our shared past to new products. Ted Schachter started his career at Tandem doing real-time processing for ATMs. But as he pointed out, these days there is the capacity to store much more information than he had to work with back in his Tandem days. I loved how he drew a line from the past to the present. We really need more of that legacy, generational information shared in our presentations to help us ground new technologies as they emerge.

From the Project Nautilus #SFD19 presentation

Data Structures are Evolving

Developers are using the same data structures they’ve used for decades, but there is an emerging data type called a stream. Log files, sensor data, and image data are elements you will find in a stream. Traditional storage people think in batches, but the goal with streams is to move to transacting and interacting with all available data in real time, along a single path. By combining all these data types into a stream, you can start to observe trends and do things like the ones shown on the slide above.

Since the concept of streams is pretty new, the implementations you’ll see now are DIY. There are “accidental architectures” based on Kafka, an open source Apache platform for building real-time data pipelines and streaming apps.
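
To ground the term: at its core, Kafka treats a stream as an append-only log of events that producers write to and consumers read from. A minimal producer using the kafka-python client might look like this – the broker address, topic name, and sensor payload are placeholders:

```python
import json
import time
from kafka import KafkaProducer  # pip install kafka-python

# Append sensor readings to a Kafka stream. Broker address and topic
# name are placeholders for illustration.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for celsius in (21.5, 21.7, 22.1):
    reading = {"sensor": "temp-01", "celsius": celsius, "ts": time.time()}
    producer.send("sensor-readings", reading)  # append the event to the stream

producer.flush()  # make sure everything is on the wire before exiting
```

Downstream, a consumer (or an engine like Flink or Spark) reads the same topic in order, which is what makes it possible to interact with all the data along a single path instead of in batches.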

Project Nautilus Emerged to Work with Streams

Project Nautilus from Dell EMC Storage is a platform that uses open source tools. They want to build on tools like Spark and Kafka to do real-time and historical analytics and storage. Ingest and storage are handled by Pravega: streams come in and are automatically tiered to long-term storage. Pravega is then connected to analytic tools like Spark and Flink (which was written specifically for streams). Finally, everything is glued together with Nautilus software to achieve scale (this is coming from Dell EMC Storage, after all), and it is built on VMware and PKS. More details were to be announced at MWC, so hopefully we’ll have some new info soon.

Real Talk

Project Nautilus emerged as a streaming data platform. This is another example of Dell EMC Storage trying to help their customers tame unstructured data. In this case, they are tying older technology that customers already use to newer technology – data streams. They see so much value in the new technology that they created a way for customers to get out of DIY mode while still taking advantage of their existing investments.

This is also a reminder that we’re moving away from the era of 3-tier architecture. Hardware innovations have led to software innovations, and we are going to see more and more architectural innovations. Those who are open to learning how tech is evolving will be best positioned to apply the lessons learned over the past couple of decades.

How are you learning about the new innovations?