The clock frequency and single‑core computing power of general‑purpose processors topped out years ago. Heat dissipation and energy constraints forced chip makers to abandon the clock‑speed race and instead put multiple cores on a die. A quad‑core CPU can, in theory, execute four tasks at once; in practice a legacy single‑threaded program will only ever occupy one of those cores, leaving the rest idle.

Taking full advantage of the hardware means doing more work in software. Informatics — the science of information systems — already offers many of the building blocks, but there are still deep challenges to getting parallel programs right and efficient. The most important ones are:

1. Parallelizability

Not every problem can be split into independent pieces. Some embarrassingly parallel tasks, like ray‑tracing a 3‑D scene or applying the same transformation to every pixel, map cleanly onto multiple cores: divide the image into tiles and let each thread render its own piece. Other problems are inherently sequential: cryptographic hash chains, for example, are deliberately designed so that each step depends on the result of the previous one, and no amount of hardware changes that — “nine women cannot deliver a baby in one month.” Still other workloads are I/O‑bound, spending most of their time waiting for a disk or network, so adding more cores buys nothing.
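The tile‑per‑worker idea can be sketched in a few lines of Python. This is a minimal illustration, not a real renderer: the “image” is just a list of rows of integer pixels, and `brighten_row` stands in for whatever per‑pixel transformation is being applied. Threads are used here for simplicity; for genuinely CPU‑bound pixel work in CPython you would reach for `ProcessPoolExecutor` instead, since the GIL prevents threads from running Python bytecode in parallel.

```python
from concurrent.futures import ThreadPoolExecutor

def brighten_row(row):
    """Apply the same transformation to every pixel in one row."""
    return [min(pixel + 40, 255) for pixel in row]

def brighten_image(image, workers=4):
    """Split the image into rows and let each worker handle its own share.

    pool.map preserves order, so the output rows line up with the input.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(brighten_row, image))

# A toy 8x8 "image" with pixel values 0, 10, ..., 70 in each row.
image = [[10 * x for x in range(8)] for _ in range(8)]
bright = brighten_image(image)
```

Because every row is independent, no coordination is needed beyond splitting the input and collecting the results — this is exactly what makes the problem embarrassingly parallel.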

Sometimes the barrier is data dependency: one step needs the result of another (think producer‑consumer), so you can’t run them simultaneously. In other cases parallelization is possible but the parallel version is much harder to reason about or even slower than the sequential one. Finding the shortest path in a graph is a classic example where naive parallelism adds coordination overhead that swamps the gain.
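A producer‑consumer dependency can be made concrete with Python's thread‑safe `queue.Queue`. The squaring and incrementing below are placeholder work; the point is that the consumer cannot process an item before the producer has produced it, so the two stages overlap in time but each individual item still flows through them sequentially.

```python
import queue
import threading

def producer(q, items):
    for item in items:
        q.put(item * item)   # each result must exist before the consumer can use it
    q.put(None)              # sentinel: signals that no more work is coming

def consumer(q, results):
    while True:
        item = q.get()
        if item is None:
            break
        results.append(item + 1)

q = queue.Queue()
results = []
t1 = threading.Thread(target=producer, args=(q, range(5)))
t2 = threading.Thread(target=consumer, args=(q, results))
t1.start(); t2.start()
t1.join(); t2.join()
# results == [1, 2, 5, 10, 17]: FIFO queueing preserves the order
```

The queue handles all the locking internally, which is why producer‑consumer pipelines are one of the safer concurrency patterns to build by hand.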

2. Concurrency control

Once multiple threads or processes are active, they frequently need to communicate and share resources. Memory is the most common shared medium. Without proper synchronization, two threads might read and write the same data at the same time, leading to race conditions and corrupted state.
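The classic race is an unsynchronized read‑modify‑write on a shared counter. The sketch below shows the racy pattern: `counter += 1` is a read, an add, and a write, and two threads can both read the same old value so that one update is lost. Whether losses actually occur on a given run depends on the interpreter and scheduler (CPython's GIL makes them rarer but does not rule them out), which is precisely the problem — the result is nondeterministic.

```python
import threading

counter = 0

def naive_increment(n):
    global counter
    for _ in range(n):
        # NOT atomic: read, add, write. Two threads can read the same
        # old value, and one of the two increments silently disappears.
        counter += 1

threads = [threading.Thread(target=naive_increment, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is at most 400_000, but may be less if updates were lost
```

A program that only fails occasionally, under particular timings, is far harder to debug than one that fails every time — which is why races are so feared.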

Mutual‑exclusion mechanisms such as locks and semaphores prevent races but introduce contention; the critical sections they protect become bottlenecks, diminishing the benefits of parallelism. Designing correct and efficient concurrency primitives is a major research area in its own right, and using them correctly is one of the biggest headaches for everyday developers.
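Guarding the same counter with a `threading.Lock` makes the increment atomic and the result deterministic. The trade‑off described above is visible right in the code: the `with lock:` block is a critical section, so the four threads take turns through it and the increments are effectively serialized.

```python
import threading

counter = 0
lock = threading.Lock()

def safe_increment(n):
    global counter
    for _ in range(n):
        with lock:        # critical section: only one thread at a time
            counter += 1  # now the read-modify-write cannot be interleaved

threads = [threading.Thread(target=safe_increment, args=(50_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter is now exactly 200_000, on every run
```

Correctness is restored, but the lock has turned the hot loop into a bottleneck — a real program would shrink the critical section, for instance by having each thread accumulate a private subtotal and take the lock once at the end.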

3. Portability

Software portability has always been hard, but multi‑core systems introduce new wrinkles. The number of cores, cache hierarchy, memory consistency model and even support for atomic instructions vary across architectures. A program tuned to run well on a desktop CPU may behave poorly on an ARM‑based smartphone or a many‑core server.

High‑level languages and runtime environments provide abstractions (std::thread, OpenMP, the Go scheduler, etc.) that mask some differences, but they don’t eliminate the need for testing and tuning on every target platform. The space of possible hardware configurations is vast — as painful as testing a website across every browser, except that you can’t install thousands of different CPUs.
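One small but practical portability habit is to query the machine at runtime instead of hardcoding a core count. A sketch in Python, with `make_pool` as a hypothetical helper:

```python
import os
from concurrent.futures import ThreadPoolExecutor

def make_pool():
    # Ask the machine rather than assuming: a desktop, a phone, and a
    # many-core server will all report different numbers here.
    cores = os.cpu_count() or 1   # os.cpu_count() can return None
    return ThreadPoolExecutor(max_workers=cores)

pool = make_pool()
result = pool.submit(lambda: 6 * 7).result()
pool.shutdown()
```

This only adapts to the core count, of course — differences in cache hierarchy or memory model still require measuring on the actual target.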

4. Load balancing

Parallel performance is limited by the slowest worker. If one thread finishes its share early and sits idle while another grinds away on a large chunk of work, you lose the potential speed‑up. Distributing work evenly is trivial when the workload is homogeneous (e.g. processing an array of fixed‑size records), but much harder when tasks vary in size or dependencies prevent further subdivision.

Dynamic scheduling algorithms and work‑stealing runtimes help, but they add complexity and overhead. Writing a well‑balanced parallel algorithm often requires insight into the problem structure and, sometimes, an entirely different approach than the sequential version.
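The simplest form of dynamic scheduling is a shared work queue: instead of pre‑assigning each thread a fixed slice, idle workers pull the next task as soon as they finish the previous one, so a worker that draws small tasks automatically takes on more of them. A minimal sketch, with `sum(range(n))` standing in for a task of variable size:

```python
import queue
import threading

def worker(tasks, results, lock):
    # Pull tasks until the queue is drained. Fast workers come back
    # for more work sooner, which balances uneven task sizes.
    while True:
        try:
            n = tasks.get_nowait()
        except queue.Empty:
            return
        value = sum(range(n))      # stand-in for a variable-sized task
        with lock:
            results.append(value)

tasks = queue.Queue()
sizes = [10, 10_000, 100, 1_000_000, 50]   # wildly uneven workloads
for n in sizes:
    tasks.put(n)

results = []
lock = threading.Lock()
workers = [threading.Thread(target=worker, args=(tasks, results, lock))
           for _ in range(3)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

Note the overhead this buys: every task now involves a queue operation and a lock acquisition, which is why dynamic schedulers usually batch work into chunks rather than handing out single items.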

Despite all of these difficulties, parallel computing underpins modern software from AI and simulation to databases and search engines. Informatics continues to evolve tools and theories to make concurrency more accessible, but building fast, correct, portable multi‑core programs remains one of the discipline’s most fascinating challenges.