Deep learning applications like ChatGPT or Stable Diffusion are hugely popular, but their impressive capabilities demand enormous amounts of computing power. That not only requires expensive hardware but also drives up energy consumption. Solving this combination of problems will take more than incremental improvements to today's semiconductor chips; it also calls for entirely new approaches.
Light as a solution
One of them is photonic computing, where calculations are performed not with electrons but with light, i.e. photons. MIT has now published a paper in exactly this area on an accelerator called Lightning. It aims to solve a fundamental problem of previous approaches: integration into a conventional electronic system.
Until now, photons could be used to perform calculations quickly and efficiently, but data could not be stored or read using light. Those steps are still handled by conventional electronic memory cells and control structures, which cannot keep up with the optical computing unit and therefore fail to exploit it fully. This is exactly where Lightning is meant to step in: through what the researchers call a count-action abstraction, the photonic core is supposed to start computing immediately, as soon as the data becomes available. The goal is to eliminate the electronic control logic as a bottleneck.
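To make the idea of "compute as soon as data arrives" more tangible, here is a minimal, purely illustrative Python sketch. It is not the Lightning datapath itself; all class and function names are invented for this example. It contrasts a controller-scheduled pipeline, which waits for a full batch, with a dataflow-style core that fires the moment an operand is ready.

```python
# Illustrative only: a toy dataflow model of "compute as soon as data arrives",
# not the actual Lightning hardware. All names here are hypothetical.

from collections import deque

def controller_scheduled(requests, batch_size=4):
    """Conventional style: an electronic controller collects a full batch
    before the compute core is allowed to start."""
    results, batch = [], []
    for x in requests:
        batch.append(x)
        if len(batch) == batch_size:              # wait until the batch is full
            results.extend(v * 2 for v in batch)  # stand-in for the real computation
            batch.clear()
    results.extend(v * 2 for v in batch)          # leftover work at the end
    return results

def dataflow_style(requests):
    """Dataflow style: each request is processed the moment it is available,
    so a fast compute core never idles waiting for control decisions."""
    queue = deque(requests)
    results = []
    while queue:
        x = queue.popleft()       # operand is ready -> fire immediately
        results.append(x * 2)     # stand-in for the real computation
    return results

if __name__ == "__main__":
    data = list(range(10))
    assert controller_scheduled(data) == dataflow_style(data)
    print("Both variants compute the same results; they differ only in when work starts.")
```

The point of the sketch is only the scheduling difference: the results are identical, but in the second variant no work is held back waiting for a controller.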
“Our synthesis and simulation studies show that Lightning reduces the power consumption of machine learning inference by an order of magnitude compared to modern accelerators.”
According to Mingran Yang, one of the co-authors, the approach should already be able to outperform current accelerators in efficiency by about an order of magnitude. The paper cites even higher figures: in the tests, Lightning is said to have consumed 352 times less power than Nvidia's A100, the predecessor of Nvidia's current flagship AI accelerator, the H100, while at the same time being 337 times faster.
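To put those factors into perspective, a quick back-of-the-envelope calculation helps. The 300 W board power assumed for the A100 below is a rough, typical figure chosen for this example, not a number from the paper.

```python
# Rough illustration of the reported factors; the 300 W A100 figure is an
# assumption for this example, not a value taken from the paper.

a100_power_w = 300.0    # assumed typical board power of an Nvidia A100
power_factor = 352      # "352 times less power", as reported
speed_factor = 337      # "337 times faster", as reported

lightning_power_w = a100_power_w / power_factor
print(f"Implied Lightning power draw: {lightning_power_w:.2f} W")           # roughly 0.85 W

# A task that takes 1 second on the A100 would, by the reported factor,
# take about 3 milliseconds on Lightning.
print(f"Implied runtime for a 1 s A100 task: {1.0 / speed_factor * 1000:.1f} ms")
```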
However, the new approach is not yet perfect; there are also drawbacks compared to existing electronic solutions. In the researchers' tests, for example, deep learning networks could not be trained to the accuracy of classical silicon semiconductors, because noise in the optical calculations can introduce computational errors. In image recognition, the best result achieved with the optical solution was 2.25 percent below the electronic baseline. In practice, that can translate into many additional misclassifications that would need to be avoided. More research is therefore required before optical computing can achieve a breakthrough, even in the field of artificial intelligence.
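What a gap of 2.25 percent means in absolute numbers can be illustrated with a short calculation, reading the figure as percentage points of accuracy; the workload size of one million images is an assumption chosen for this example, not a number from the paper.

```python
# Illustration of the reported 2.25 percent accuracy gap in image recognition.
# The workload size below is an assumption chosen for this example.

images_classified = 1_000_000   # hypothetical number of images processed
accuracy_gap = 0.0225           # 2.25 percentage points lower accuracy than the electronic baseline

additional_errors = images_classified * accuracy_gap
print(f"Additional misclassifications: {additional_errors:,.0f}")   # about 22,500
```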
Source: MIT (Paper, PDF) via Daily Technical Science