01-09-2021 | By Sam Brown
Recently, the Open Compute Project announced that it would look towards developing open-source silicon packaging and AI systems. What is the Open Compute Project, what did it announce, and how will this help future data centre services?
The Open Compute Project is an organisation made up of data-centric companies, including ASUS, ARM, Facebook, IBM, Google, and Cisco, whose goal is to develop open-source server equipment. Facebook started the Open Compute Project in 2009 under the name “Project Freedom”.
Facebook initially outsourced its data centres to third parties that used proprietary equipment, and it became apparent that such hardware would not scale with Facebook’s needs. Therefore, Facebook decided to explore the concept of open-source hardware, developing servers that could be easily expanded, were built from commonly available parts, and were accessible to anyone. Developments here would eventually lead to the public announcement of the project, which in turn became the Open Compute Project.
With interest in developing open-source hardware and in keeping with the times, the Open Compute Project recently announced that it would look towards silicon packaging, optical device integration, and AI. The idea behind the new direction is to standardise the hardware that will become critical in the next generation of data centres and servers alike, while also ensuring that cooling, racking, and networking are not interfered with.
Of the announcements, the concept of open silicon is the most interesting, as it relates to developing silicon devices that use an open architecture. Anyone involved in PCB design understands the trouble caused by semiconductor packages that are unique and follow no standard layout.
As soon as a new piece of silicon is designed, it can sometimes find itself in a new package (many CPU companies, such as Intel and AMD, do this). This also means that older systems may not be upgradable with new processors, modules, and devices, thus forcing data centres to replace perfectly functional racks.
Also outlined by the Open Compute Project is the desire for standard interfaces between different packages. These standard interfaces could refer to optical links, which have been proposed as an alternative method for data transmission on PCBs. An optical link is not subject to the same challenges faced by PCIe and other electrical buses: it is far more resilient to interference, transmits across a wide spectrum, and supports exceedingly high data rates. Optical links between processors could allow for high-speed, core-level communication between various devices, creating a massive performance boost as data is shared directly between processors.
Generally speaking, industries have accelerated in development after unified standards are laid out. An excellent example is the IBM PC architecture; because multiple manufacturers followed the IBM standard, hardware and software became standardised. Instead of targeting one specific computer developed by a single company, targeting the IBM PC platform provided companies with a vast customer base. The IBM PC architecture also led to systems that could exchange information with ease, allowed files to be transferred easily between machines, and enabled applications that everyone could access.
In the case of the Open Compute Project, creating common hardware and interfaces would help data centres scale and upgrade in a way never before possible. Racks that require processing upgrades could swap out their CPUs, new optical interfaces could be connected with ease, and the use of standard rack sizes would allow for modular design. Considering the success that open-source movements have enjoyed in the past, doing the same for data centres presents a golden opportunity for standardisation and rapid development.