GPU (graphics processing unit)
A graphics processing unit (GPU) is a computer chip that performs rapid mathematical calculations, primarily for the purpose of rendering images. In the early days of computing, the central processing unit (CPU) performed these calculations. As more graphics-intensive applications such as AutoCAD were developed, however, their demands put strain on the CPU and degraded performance. GPUs came about as a way to offload those tasks from CPUs and free up processing power.
Today, graphics chips are being adapted to share the work of CPUs and to train deep neural networks for AI applications. A GPU may be found integrated with a CPU on the same chip, on a discrete graphics card or on the motherboard of a personal computer or server. NVIDIA, AMD, Intel and ARM are some of the major players in the GPU market.
GPU vs. CPU
A GPU is able to render images more quickly than a CPU because of its parallel processing architecture, which allows it to perform multiple calculations at the same time. A single CPU core does not have this capability, although multicore processors can perform calculations in parallel by combining more than one core on the same chip.
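As a rough sketch of what this looks like in practice, the hypothetical CUDA example below (it assumes a CUDA-capable NVIDIA GPU and the CUDA toolkit; the kernel name, buffer size and scale factor are invented for illustration) brightens an image buffer by launching one GPU thread per pixel, where a CPU would step through the same pixels one at a time in a loop.

```
#include <cuda_runtime.h>
#include <cstdio>

// Scale every pixel of a grayscale image by a constant factor.
// On the GPU, each thread handles exactly one pixel, so the "loop"
// is spread across thousands of threads running at the same time.
__global__ void scalePixels(float *pixels, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // this thread's pixel index
    if (i < n) {
        pixels[i] *= factor;                         // one calculation per thread
    }
}

int main() {
    const int n = 1 << 20;                           // ~1 million pixels (assumed size)
    float *d_pixels;
    cudaMalloc(&d_pixels, n * sizeof(float));
    cudaMemset(d_pixels, 0, n * sizeof(float));      // real code would copy image data here

    // A CPU would do the same work serially:
    //   for (int i = 0; i < n; ++i) pixels[i] *= 1.5f;
    // The GPU instead launches enough threads to cover every pixel.
    int threadsPerBlock = 256;
    int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    scalePixels<<<blocks, threadsPerBlock>>>(d_pixels, n, 1.5f);
    cudaDeviceSynchronize();                         // wait for all threads to finish

    cudaFree(d_pixels);
    return 0;
}
```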
A CPU, on the other hand, has a higher clock speed, meaning it can perform an individual calculation faster than a GPU, so it is often better equipped to handle basic computing tasks.
In general, a GPU is designed for data parallelism: applying the same operation to many data items at once (single instruction, multiple data, or SIMD). A CPU is designed for task parallelism: running different operations concurrently.
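The contrast can be illustrated with another short, hypothetical CUDA sketch: the GPU applies one operation (squaring) to every element of an array (data parallelism), while the host CPU thread runs a different, unrelated job until the kernel finishes (task parallelism). The kernel, array size and placeholder host task below are invented for illustration and assume the CUDA toolkit.

```
#include <cuda_runtime.h>
#include <cstdio>

// Data parallelism on the GPU: the same operation (square) is applied
// to every element of the array by a separate thread.
__global__ void squareAll(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] = data[i] * data[i];
}

// Task parallelism on the CPU: the host runs a different job (here, a
// placeholder sum) while the GPU kernel is still executing.
double someOtherHostTask() {
    double s = 0.0;
    for (int i = 0; i < 1000000; ++i) s += i;
    return s;
}

int main() {
    const int n = 1 << 22;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // Kernel launches are asynchronous: control returns to the CPU immediately.
    squareAll<<<(n + 255) / 256, 256>>>(d_data, n);

    // The CPU thread is free to do unrelated work in the meantime.
    double result = someOtherHostTask();

    cudaDeviceSynchronize();   // wait for the GPU's data-parallel job to finish
    printf("host task result: %f\n", result);

    cudaFree(d_data);
    return 0;
}
```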
How a GPU works
CPU and GPU architectures are also differentiated by the number of cores. A core is essentially a processor within the processor. Most CPUs have between four and eight cores, though some have up to 32. Each core can process its own tasks, or threads. Because some processors have multithreading capability -- in which the core is divided virtually, allowing a single core to process two threads -- the number of threads can be much higher than the number of cores. This can be useful in video editing and transcoding. CPUs can run two threads (independent instruction streams) per core (the independent processing unit). GPUs can have four to 10 threads per core.
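To get a rough sense of these numbers on real hardware, the sketch below queries the GPU's streaming multiprocessor (SM) count and per-SM thread capacity through the CUDA runtime, and the CPU's hardware thread count through the C++ standard library. Note that CUDA reports SMs rather than "cores" in the CPU sense, so the figures are not directly comparable; the snippet assumes the CUDA toolkit and at least one CUDA-capable device.

```
#include <cuda_runtime.h>
#include <cstdio>
#include <thread>

int main() {
    // GPU side: count streaming multiprocessors and the threads each one
    // can keep in flight at once.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("GPU: %s\n", prop.name);
    printf("  streaming multiprocessors (SMs): %d\n", prop.multiProcessorCount);
    printf("  max threads per SM:              %d\n", prop.maxThreadsPerMultiProcessor);
    printf("  threads per warp:                %d\n", prop.warpSize);

    // CPU side: hardware threads reported by the OS (cores x threads per core).
    printf("CPU hardware threads: %u\n", std::thread::hardware_concurrency());
    return 0;
}
```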