
Edge computing

Edge computing is a distributed information technology (IT) architecture in which client data is processed at the periphery of the network, as close to the originating source as possible. The move toward edge computing is driven by mobile computing, the decreasing cost of computer components and the sheer number of networked devices in the internet of things (IoT). Depending on the implementation, time-sensitive data in an edge computing architecture may be processed at the point of origin by an intelligent device or sent to an intermediary server located in close geographical proximity to the client. Data that is less time-sensitive is sent to the cloud for historical analysis, big data analytics and long-term storage.

Transmitting massive amounts of raw data over a network puts tremendous load on network resources. In some cases, it is much more efficient to process data near its source and send only the data that has value over the network to a remote data centre. Instead of continually broadcasting data about the oil level in a car's engine, for example, an automotive sensor might simply send summary data to a remote server on a periodic basis. Or a smart thermostat might only transmit data if the temperature rises or falls outside acceptable limits. Or an intelligent Wi-Fi security camera aimed at an elevator door might use edge analytics and only transmit data when a certain percentage of pixels significantly change between two consecutive images, indicating motion.
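To make the camera example concrete, here is a minimal sketch of that kind of edge-side filtering in Python, assuming frames arrive as NumPy arrays from a local capture loop; the threshold values and the send_to_cloud() helper are hypothetical placeholders rather than any particular product's API.

    # Minimal sketch of edge-side filtering for a smart camera.
    import numpy as np

    PIXEL_DIFF_THRESHOLD = 25          # per-pixel intensity change that counts as "changed"
    CHANGED_FRACTION_THRESHOLD = 0.05  # transmit only if more than 5% of pixels changed

    def should_transmit(previous_frame: np.ndarray, current_frame: np.ndarray) -> bool:
        """Return True when enough pixels changed between consecutive frames."""
        diff = np.abs(current_frame.astype(int) - previous_frame.astype(int))
        changed_fraction = np.mean(diff > PIXEL_DIFF_THRESHOLD)
        return changed_fraction > CHANGED_FRACTION_THRESHOLD

    def send_to_cloud(frame: np.ndarray) -> None:
        # Placeholder: in practice this would POST the frame (or a summary) upstream.
        print("motion detected, transmitting frame")

    # Example with two synthetic greyscale frames:
    prev = np.zeros((480, 640), dtype=np.uint8)
    curr = prev.copy()
    curr[100:200, 100:200] = 255       # simulate an elevator door opening
    if should_transmit(prev, curr):
        send_to_cloud(curr)

Only the comparison runs continuously on the device; the network is used solely when the changed-pixel fraction crosses the threshold.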

Edge computing can also benefit remote office/branch office (ROBO) environments and organizations that have a geographically dispersed user base. In such a scenario, intermediary micro data centres or high-performance servers can be installed at remote locations to replicate cloud services locally, improving performance and enabling devices to act on perishable data in fractions of a second. Depending upon the vendor and technical implementation, the intermediary may be referred to by one of several names including edge gateway, base station, hub, cloudlet or aggregator.

A major benefit of edge computing is that it improves time to action and reduces response time to milliseconds, while also conserving network resources. The concept of edge computing is not expected to replace cloud computing, however. Despite its ability to reduce latency and network bottlenecks, edge computing can pose significant security, licensing and configuration challenges.


Security challenges: Edge computing's distributed architecture increases the number of attack vectors. The more intelligence an edge client has, the more vulnerable it becomes to malware infections and security exploits.

Licensing challenges: Smart clients can have hidden licensing costs. While the base version of an edge client might initially have a low-ticket price, additional functionalities may be licensed separately and drive the price up.

Configuration challenges: Unless device management is centralized and robust, administrators may inadvertently create security holes by failing to change the default password on each edge device or neglecting to update firmware in a consistent manner, causing configuration drift.
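Much of that configuration risk can be reduced by checking each device against a known-good baseline from a central point. The following is a minimal Python sketch of that idea, assuming hypothetical configuration dictionaries; a real deployment would pull the reported values from a device-management API rather than the placeholder fetch_device_config() used here.

    # Minimal sketch of centralized drift detection across a fleet of edge devices.
    BASELINE = {
        "admin_password_changed": True,
        "firmware_version": "2.4.1",
        "telnet_enabled": False,
    }

    def fetch_device_config(device_id: str) -> dict:
        # Placeholder: return the configuration reported by one edge device.
        return {
            "admin_password_changed": False,   # still using the factory default
            "firmware_version": "2.3.0",       # missed the last update
            "telnet_enabled": False,
        }

    def find_drift(device_id: str) -> dict:
        """Compare a device's reported config against the baseline."""
        reported = fetch_device_config(device_id)
        return {
            key: (expected, reported.get(key))
            for key, expected in BASELINE.items()
            if reported.get(key) != expected
        }

    for device in ["camera-01", "thermostat-07"]:
        drift = find_drift(device)
        if drift:
            print(f"{device} has drifted: {drift}")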

The name "edge" in edge computing is derived from network diagrams; typically, the edge in a network diagram signifies the point at which traffic enters or exits the network. The edge is also the point at which the underlying protocol for transporting data may change. For example, a smart sensor might use a low-latency protocol like MQTT to transmit data to a message broker located on the network edge, and the broker would use the hypertext transfer protocol (HTTP) to transmit valuable data from the sensor to a remote server over the Internet.

The OpenFog Consortium uses the term fog computing to describe edge computing. The word "fog" is meant to convey the idea that the advantages of cloud computing should be brought closer to the data source. (In meteorology, fog is simply a cloud that is close to the ground.) Consortium members include Cisco, ARM, Microsoft, Dell, Intel and Princeton University.

