When presenting a new iPhone, Apple engineers juggle plenty of opaque terms, one of which is “neural processor”: every year it improves, with the number of operations climbing from several hundred billion to several trillion per second. Yet many users see no obvious benefit from the neural coprocessor, so not everyone knows what this module is for and what it is responsible for. Meanwhile, without it your smartphone would lack the lion’s share of its functions in 2022. Here is what the Neural Engine is and why it is needed.
What is the Neural Engine
Apple started developing its own processors in 2010, when the iPhone 4 shipped with the A4 chip. But the era of neural coprocessors began only in 2017 with the release of the iPhone X, which is not considered one of the most iconic devices in Apple’s history for nothing: it introduced the new A11 Bionic chip with the first-generation Neural Engine, which had two cores and performed about 600 billion operations per second. For comparison, the Neural Engine in the A16 Bionic has 16 cores, and performance has grown to 17 trillion operations per second.
Many did not understand why it was necessary – after all, the iPhone had managed without it for years. Apple, however, explained that the Neural Engine runs neural networks and is used for Face ID, augmented-reality features such as Animoji and Memoji, and other resource-intensive tasks. It turned out that some processes need neither the main CPU cores nor the graphics processor at all.
Why you need a Neural Engine
The first feature the Neural Engine powered was, of course, Face ID: using the coprocessor, the system builds a map of points on a person’s face for the most accurate identification when unlocking. Other Android smartphone makers tried to create something similar, but their versions worked differently (largely because of the camera hardware) and were not entirely secure, since the system could often be unlocked with a mere photo.
Later, the Neural Engine was adapted for other cool features, such as portrait photography, Siri, voice recognition, photo search, and reminders – yes, all of these work through neural networks and machine learning.
As mentioned above, neither the CPU nor the GPU is well suited to running neural networks. But why? Because when AI runs, it has to perform a huge number of fairly simple calculations in parallel. The Neural Engine takes on that parallel work instead of the CPU and graphics core, so they are not loaded with extra processes and the iPhone’s battery is used more efficiently.
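To see why this workload suits dedicated parallel hardware, here is a minimal sketch (illustrative only, not Apple’s implementation) of what a neural network actually computes: one fully connected layer reduces to many independent multiply-accumulate operations, and since each output neuron depends on no other, a chip like the Neural Engine can run thousands of them simultaneously.

```python
# Illustrative sketch: a neural-network layer is just many independent
# multiply-accumulate (MAC) operations. An NPU runs these in parallel
# in hardware; here we spell them out in plain Python.

def dense_layer(inputs, weights, biases):
    """Fully connected layer: out[j] = sum_i(inputs[i] * weights[j][i]) + biases[j].

    Each output neuron is computed independently of the others,
    which is exactly why this maps so well onto parallel hardware.
    """
    return [
        sum(x * w for x, w in zip(inputs, neuron_weights)) + b
        for neuron_weights, b in zip(weights, biases)
    ]

# Tiny example: 3 inputs feeding 2 output neurons.
inputs = [1.0, 2.0, 3.0]
weights = [
    [0.1, 0.2, 0.3],  # weights for neuron 0
    [0.4, 0.5, 0.6],  # weights for neuron 1
]
biases = [0.5, -0.5]

print(dense_layer(inputs, weights, biases))
```

A real model chains thousands of such layers with millions of weights, so offloading these repetitive MACs to a dedicated coprocessor frees the CPU and GPU entirely.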
Another advantage of the Neural Engine is that it works autonomously, without sending data to any server. Face ID, Night mode photography, photo sorting, background blur, and more all keep working even when you are offline.
Of course, Apple regularly updates iOS with neural networks in mind, so when choosing an iPhone it is worth looking not only at the number of CPU and GPU cores but also at the Neural Engine: it is what makes the lion’s share of the everyday functions you use run much faster.
Artificial intelligence in the iPhone
The appearance of the neural coprocessor is another trend set by Apple. Since then, nearly every mobile chip has gained its own engine for such calculations, so in 2022 it is not at all surprising that even an inexpensive Android smartphone offers some kind of AI-assisted photo enhancement.
Incidentally, even the Neural Engine in the A16 Bionic is far from the fastest out there: the Snapdragon 8+ Gen 1’s built-in Hexagon coprocessor is rated at up to 27 trillion operations per second, while the latest Neural Engine tops out at 17 trillion.
One of the main rivals to Apple’s neural processor is Google’s development – a next-generation tensor processing unit built into its in-house Tensor G2 chip. It is largely thanks to it that Pixel smartphones have cool features the iPhone will never have.
As mentioned above, the Neural Engine’s feature set was initially rather limited, and its AI was used only by built-in applications. Now, however, third-party developers can use it in their software as well – for example, for speech recognition, face and image recognition, and much more.
iOS 16 places great emphasis on neural networks: the system has a large number of functions powered by artificial intelligence. Text recognition in photos and videos, the ability to cleanly lift an object out of a photo, the depth effect on the lock-screen wallpaper – all of it relies on the Neural Engine.