Untangling Fog, Cloud, and Edge Computing

Fog Computing Provides More Flexibility for Storing & Processing Data

One of the most exciting aspects of the IoT is the innovation it fosters. Cloud computing existed before the IoT took hold, but two newer models are becoming increasingly relevant in the IoT ecosystem: Edge Computing and Fog Computing.

Let’s take a look at the similarities and differences between cloud computing, Edge Computing, and Fog Computing.

Cloud Computing

Cloud technologies themselves have existed for years, enabling trends such as Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS). When data is stored, managed, and processed in the cloud via remote servers hosted on the Internet, it’s called Cloud Computing — and it can save enterprises a lot of time, money, and resources.

The cloud has democratized computing, giving organizations an alternative to large data centers. Cloud computing represents a step away from legacy networking, but transferring data to and from the cloud is still expensive, even if it’s less expensive than the traditional architecture.

Edge Computing

Edge Computing pushes data processing out to the edge of the network, where the data is generated. Any edge device, from routers and sensors to smart devices and much more, can perform Edge Computing. Depending on the situation, Edge Computing may or may not be affiliated with a cloud or server; it can run on a standalone machine, for example.

Edge Computing helps address the challenge of data build-up, mostly in closed M2M/IoT systems. Companies use it for data aggregation, denaturing, filtering, data scrubbing, and more, with the ultimate goal of minimizing costs and latency and controlling network bandwidth.
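As a rough illustration, here is a minimal Python sketch of that kind of edge-side filtering and aggregation; the readings and the noise threshold are hypothetical stand-ins, not tied to any particular product. Instead of streaming every raw sample upstream, the device forwards only a compact summary:

from statistics import mean

def aggregate_readings(readings, threshold=1.0):
    """Drop readings below a noise threshold and return a compact summary.

    The edge device sends this small summary dict upstream instead of the
    full stream of raw samples, saving bandwidth and reducing latency.
    """
    significant = [r for r in readings if abs(r) >= threshold]
    if not significant:
        return None  # nothing worth sending upstream
    return {
        "count": len(significant),
        "min": min(significant),
        "max": max(significant),
        "mean": round(mean(significant), 3),
    }

# Roughly a thousand raw samples collapse into one small summary message.
raw = [0.2, 0.4, 3.1, 0.1, 5.6, 2.2] * 167
print(aggregate_readings(raw))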

Fog Computing

With a shared access point and communication standard, Fog Computing aggregates data at its original source, before it ever reaches the cloud or any other service. It puts intelligence and processing closer to where the data is created, rather than shipping raw data off to be processed somewhere far away.

Fog Computing and Edge Computing devices perform the same kinds of tasks, but fog adds the ability to distribute computational work (data filtering, removal of data for privacy, data packaging, real-time analytics, and so on) between a cloud provider and the edge. It does this simply enough that a programmer has a seamless experience whether designing in the cloud or pushing code to the fog, using the same framework and APIs. In essence, the fog lets you move compute to where the data is, rather than moving the data to where the compute is.
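A minimal sketch of that "write once, choose where it runs" idea follows; the deploy() helper and the target names are hypothetical placeholders, not the API of any real fog platform:

def scrub_private_fields(record):
    """Remove fields that should never leave the local site."""
    return {k: v for k, v in record.items() if k not in {"name", "address"}}

def process(record):
    """The processing logic itself is the same wherever it runs."""
    clean = scrub_private_fields(record)
    clean["flagged"] = clean.get("temperature", 0) > 90
    return clean

def deploy(func, target):
    """Stand-in deployer: a real fog framework would package the same code
    and push it either to a cloud service or to an edge/fog node."""
    print(f"deploying {func.__name__} to {target}")
    return func

# The developer chooses placement; the code and APIs stay the same.
handler = deploy(process, target="fog-node-01")   # or target="cloud"
print(handler({"name": "Jane", "address": "10 Main St", "temperature": 97}))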

Applying Fog Computing

Fog Computing fosters a more flexible environment for storing and processing data, which helps enterprises address cost, bandwidth, and latency issues. Latency is a good example, especially in mission-critical situations. If a city needed to issue an intelligent Amber Alert, it could draw on a citywide deployment of cameras, sensors, and other remote tools. Suppose, hypothetically, that the alert is for a missing 4-foot-tall child last seen wearing a red hat.

The traditional approach would be to have all of the cameras and sensors stream their data into the cloud. However, this carries high costs and introduces significant latency. Running that much data through various algorithms and engines is burdensome, especially when most of it contains neither the image of a child nor the image of a red hat. Streaming all of that redundant data to the cloud needlessly loads the WAN; it is better to process the data close to the source and filter out what is irrelevant.

With a programmatic interface available via Fog Computing, the city can instead tell the remote cameras to run a first-pass inference for 4-foot-tall children wearing red hats. When a camera finds a possible match through this filter, it passes that data up to the cloud for a second-level integrity check, where further analysis determines whether there is a true match. That analysis is faster and more cost-effective than streaming the entire batch of raw data to the cloud.
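A simplified Python sketch of this two-stage pipeline is below; the detection functions are placeholder heuristics standing in for real computer-vision models, and the frame fields are illustrative assumptions:

def camera_first_pass(frame):
    """Runs on the camera or a nearby fog node: a cheap filter for
    'roughly 4 feet tall and wearing a red hat'."""
    return 44 <= frame.get("subject_height_in", 0) <= 52 and frame.get("hat_color") == "red"

def cloud_second_pass(frame):
    """Runs in the cloud: a more expensive integrity check on the few candidates."""
    return frame.get("confidence", 0.0) >= 0.9

frames = [
    {"id": 1, "subject_height_in": 70, "hat_color": "blue", "confidence": 0.99},
    {"id": 2, "subject_height_in": 48, "hat_color": "red", "confidence": 0.95},
    {"id": 3, "subject_height_in": 47, "hat_color": "red", "confidence": 0.40},
]

# Only frames that survive the edge filter ever cross the WAN.
candidates = [f for f in frames if camera_first_pass(f)]
matches = [f for f in candidates if cloud_second_pass(f)]
print(f"{len(frames)} frames captured, {len(candidates)} sent to cloud, {len(matches)} confirmed")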

The broader need for Fog Computing is becoming increasingly apparent. Industry trends and research show that exponentially more data is being generated globally, while the ability to store and move that data is becoming a problem. In 2017, for every 150 bytes of data produced, 149 bytes had to be filtered out or thrown away. It is simply impossible to move that much data around in real time.

As the IoT continues to develop, some estimates predict that up to 50 billion connected devices will be in use. Individual users and enterprises will not be able to move the massive amount of data the IoT generates across their network infrastructure and into the cloud, much less absorb the cost of doing so. Companies must find ways to push intelligence and computing farther out toward the source of the data itself.

Fog Computing With Router SDK

With the right solution in place, Fog Computing allows IT teams to design and deploy software to edge devices through the cloud, giving cloud developers more flexibility than ever.
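As a rough, hypothetical sketch of what that workflow could look like (the manifest fields and the deploy_from_cloud() helper are illustrative assumptions, not the actual Router SDK interface), an IT team might describe an edge application and push it from a cloud console to a group of routers:

# Illustrative only: these field names are assumptions, not a real SDK manifest.
edge_app = {
    "name": "store-sensor-filter",
    "version": "1.0.0",
    "entry_point": "app.py",            # the script the router runs locally
    "restart_on_failure": True,
    "target_group": "retail-routers",   # which fleet of edge devices gets the app
}

def deploy_from_cloud(app, devices):
    """Stand-in push: a real platform would stage the package in the cloud
    and notify each router to download and start it."""
    for device in devices:
        print(f"pushing {app['name']} v{app['version']} to {device}")

deploy_from_cloud(edge_app, ["router-001", "router-002"])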
