I was ecstatic when I was invited to be a delegate at the very first Edge Field Day. Edge is a weird term. Is it COLO 2.0, or about IoT? How do you get your arms around requirements for edge computing architecture? We found out at Edge Field Day 1.
Can the edge be defined?
It has always been hard to agree on a definition of the edge. Honestly, have you ever heard a single story of the edge? What if all the definitions about the edge are missing the point? Maybe we should look at “the edge” another way.
Remember all that talk about Digital Transformation over the last several years? In my opinion, the transformation has happened, and now it’s time to make it real. We have new types of compute, so servers now have the capability of driving workloads with insane compute requirements. Of course I’m talking about machine learning and deep learning algorithms. Those algorithms are fed by massive data lakes, and modern storage capacities are now able to support this requirement for massive amounts of data.
Additionally, the field of data science is over 50 years old. Data scientists know what it takes to make AI applications real, and the infrastructure has finally caught up.
Maybe the edge is hard to define because not every edge implementation will be the same, and you can’t expect to just replicate what you’ve built in a central datacenter like we used to do with COLO implementations. Carl Fugate put this nicely in the delegate wrap-up video.
Perhaps the best way to start thinking about how to use edge computing is to consider what it will take to successfully run an application in the place where it can perform best. After all, isn’t our job as the operations side of the house to ensure the best user experience possible? Of course, we have to do that reliably, repeatably, securely, and in compliance with any regulations. The companies at Edge Field Day are building products and solutions to help organizations deploy applications to “the edge” to do just that.
Think different to get edge computing infrastructure right
We had several hardware companies at the first Edge Field Day.
Opengear presented their Smart Out of Band solutions, including a ThousandEyes integration and a demo of their IP Access solution. If you’ve ever been an admin, you’ve probably used Opengear at some point in your career. They are definitely doing the work to create resiliency for infrastructure deployed at the edge.
Mako Networks has been around for twenty years, and they have put that experience into their solutions for the edge. One of Mako Networks’ claims to fame is that their entire system is PCI-DSS certified. They deliver networking as a service with a range of networking devices that are completely cloud-managed. This video was a great technical overview of what the system looks like for distributed environments.
I was really interested in the presentation that was geared towards Managed Service Providers (MSPs). Because of the way the Mako Networks networking as a service is built, MSPs can bring the technology to their customers without needing to invest in hardware, expert personnel, etc.
It is undeniable that reliable, secure networks that can be managed out of band are critical to the success of edge deployments. Mako Networks also has a great partnership with Scale Computing, which gives Scale Computing customers a reliable, secure network baseline.
Scale Computing shipped their first HCI product in 2012. They’ve certified thousands of devices, including the Intel NUCs designed for edge computing. Their SC HyperCore software runs on the edge compute devices, combining storage, compute, the hypervisor, and some disaster recovery capabilities. This is paired with their SC Fleet Manager, a SaaS visibility, monitoring, and orchestration offering that manages an entire edge fleet.
They showed some cool demos of zero-touch provisioning and their container management and orchestration with the SC Platform. Scott Loughmiller, co-founder and Chief Product Officer, also walked us through AIME – the Autonomous Infrastructure Management Engine. It is pretty interesting to understand the secret sauce behind the Scale product. AIME is a state machine that can see the complete state of the system. Since this is a near-real-time, self-healing system, it’s almost like Scale already has AIOps built into their system.
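To make the state-machine idea concrete, here is a minimal sketch of a reconciliation-style self-healing loop: compare the desired state of the system with what you actually observe, and emit repair actions for any drift. All of the names and the schema here are my own illustration, not Scale’s actual AIME implementation.

```python
# Hypothetical sketch of a self-healing state machine: a loop that
# compares desired state to observed state and heals the difference.
# Names and structure are illustrative only, not Scale's actual API.

desired = {"vm-web": "running", "vm-db": "running"}

def observe():
    # A real system would query the cluster here; we simulate
    # a failed workload to show the healing path.
    return {"vm-web": "running", "vm-db": "stopped"}

def reconcile(desired, actual):
    """Return the repair actions needed to converge actual -> desired."""
    actions = []
    for name, want in desired.items():
        have = actual.get(name)
        if have != want:
            actions.append((name, have, want))
    return actions

for name, have, want in reconcile(desired, observe()):
    print(f"heal {name}: {have} -> {want}")
```

Run continuously against near-real-time observations, a loop like this converges the system back to its desired state without an operator in the path, which is essentially what makes it feel like built-in AIOps.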
So far, we’ve covered the vendors that provide networking, out-of-band access, compute/storage, provisioning, and container management and orchestration for edge sites. So the infrastructure is covered, but what about the applications that need to run on the edge?
It’s (still) all about the application
We talked about the edge infrastructure, but the only reason for building infrastructure is to host applications. In a modern environment, those applications are most likely going to be container-based.
Avassa is a Scale Computing partner. They provide an application management and operations platform for on-site edges. They realized that there weren’t really tools for the teams that needed to provide operations and application management for their edge locations. In most cases, existing tools just didn’t work.
Even worse, they realized there was a tooling void. App teams were just lifting and shifting apps to the edge, the way they had in the cloud. They weren’t reusing the CI/CD platform, and they definitely were not using the tools for monitoring and observability that IT teams use. So Avassa set out to build the tools that these teams needed.
They had great personas – Applifer Developez and Platrick McEngine. They definitely understand who will use their platform. They demo’d managing on-site edge container infrastructure, deploying and updating container apps, and monitoring and observing container apps. Plot twist: they use Docker!
ZEDEDA provides an edge management and orchestration solution. Traditionally, they focused on non-standard IT environments (oil fields, retail, etc.), but their platform can be used for any industry or use case. ZEDEDA is a control plane solution that delivers the infrastructure software required to run edge workloads. Their edge virtualization engine is installed on commodity hardware and managed via a cloud API.
An administrator defines the desired state for applications including app infrastructure (containers, Kubernetes, VMs, etc.), application services needed, and finally the desired applications. Once installed, the ZEDEDA Cloud does configurations and Day 2 updates.
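The layered desired state described above can be pictured as a simple declarative spec: infrastructure first, then application services, then the applications themselves. The schema and names below are invented for illustration; ZEDEDA’s actual API and object model will differ.

```python
# Illustrative desired-state spec for one edge site, mirroring the
# three layers described above. This schema is hypothetical.

edge_site_spec = {
    "infrastructure": {"runtime": "kubernetes", "nodes": 3},
    "services": ["logging", "metrics"],
    "applications": [
        {"name": "pos-terminal", "image": "registry.example.com/pos:1.4"},
    ],
}

def apply_order(spec):
    """Order in which a control plane would apply the layers."""
    layers = ("infrastructure", "services", "applications")
    return [layer for layer in layers if layer in spec]

print(apply_order(edge_site_spec))
```

The appeal of this model is that Day 2 work becomes an edit to the spec: change the desired state, and the control plane handles the rollout.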
I really enjoyed their security presentation.
Listen to your heart
You may be wondering why it took me several months to write this post. I felt off the entire trip, but I chalked it up to the time difference. Then, after lunch on the last day, I had to be rushed to the ER.
The diagnosis was SVT (supraventricular tachycardia), which is an arrhythmia. Since then, I’ve been diagnosed with AFIB (atrial fibrillation) which is when your heart’s electrical signals are a little crazy and make your heart beat very fast. I’m actually in SVT in this delegate picture.
The Tech Field Day team took very good care of me. I’ve been working with awesome specialists and working on a plan to stay healthy. It is fascinating to hear how your heart handles signals. Maybe we could have a medical field day?
AFIB never goes away, and I suspect mine is hereditary. Let me be your cautionary tale NOT to work through feeling weird, and definitely don’t self-medicate with alcohol.