Apple’s “By Innovation Only” Event … the silicon PoV


Another Apple Keynote has come and gone.  There seemed to be more “padding” this time.  Yes, not everything is going to be groundbreaking, but the oohs and aahs at every turn were a bit much.  That said, there were two things on the semiconductor front that deserve a few thought cycles.

We saw the first glimpses of the A13 Bionic, i.e. the new A-series processor.  We were told the various blocks perform 20 to 30% better while using up to 40% less energy.  The interesting caveat here: the A13 is fabricated with the same 7 nm process geometry as the A12.  What does this mean?  First, moving to a smaller feature size (e.g. 5 nm) would typically bring improvements in both performance and power.  That is not the case here.  Instead, we were told there are “new” transistors and that this is a 2nd generation 7 nm process.  This suggests the materials used to fabricate transistors may have changed, the metal interconnect dimensions or materials may have been tweaked, or maybe the inter-metal dielectric has evolved, decreasing resistive-capacitive (RC) delay.  All of these could improve the various metrics, but probably not by the stated amounts.
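For a sense of why a dielectric change matters, here is the textbook first-order relation (a rough sketch of standard interconnect scaling, not anything Apple or TSMC has disclosed):

```latex
% First-order wire delay: proportional to wire resistance times capacitance.
% A lower-k inter-metal dielectric reduces C, and with it the delay.
t_{\text{delay}} \propto R\,C, \qquad C \approx \frac{k\,\varepsilon_0\,A}{d}
```

Here R is the wire resistance, C the capacitance between neighboring wires, k the relative dielectric constant of the insulator, A the facing area, and d the spacing.  Shrink k and C drops directly, taking some of the delay with it.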

There is another explanation: the performance and efficiency gains must have a large design component.  An example of this would be the discussed ability to power down whole blocks, or bits of circuitry within blocks.

The last few iPhones have really focused (no pun intended) on the camera.  This is even more evident with the iPhone 11.  We were repeatedly told about the coordinated efforts of the CPU, GPU and NE (Neural Engine) within the A13 during photo/image processing.  For example, at one point it was stated that something like nine images are captured for each photo.  Then each of the 24 million pixels of one image is compared to the corresponding pixel of the other images, and the “best” pixel is selected for a final composite image.  A photograph has become an exercise in computing more than an artistic statement.  The key takeaway: there is an incredible amount of computing and image manipulation happening in the background.
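As a toy illustration of that per-pixel selection idea, here is a minimal Swift sketch.  The flat-array frame format and the score() heuristic are my own stand-ins for illustration; they are assumptions, not Apple's actual pipeline.

```swift
// Score a single pixel value.  This stand-in heuristic simply prefers
// well-exposed pixels by penalizing distance from mid-gray (128).
// A real pipeline would weigh noise, sharpness, motion, and more.
func score(_ value: UInt8) -> Int {
    return -abs(Int(value) - 128)
}

// Build a composite by picking, for each pixel index, the highest-scoring
// value across all captured frames.  Assumes every frame is the same size.
func composite(frames: [[UInt8]]) -> [UInt8] {
    guard let first = frames.first else { return [] }
    var result = first
    for i in 0..<first.count {
        var bestScore = score(first[i])
        for frame in frames.dropFirst() {
            let s = score(frame[i])
            if s > bestScore {
                bestScore = s
                result[i] = frame[i]
            }
        }
    }
    return result
}

// Example: three tiny 4-pixel "frames" from a burst.
let burst: [[UInt8]] = [
    [10, 200, 128, 90],
    [120, 130, 255, 100],
    [60, 140, 0, 250],
]
print(composite(frames: burst))  // picks the most mid-toned value per pixel
```

Even this naive version makes a scoring pass over every pixel of every frame: at nine frames of 24 million pixels each, that is over 200 million comparisons per shutter press, which is exactly the kind of workload you would want dedicated silicon for.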

The long and the short of it is that the A13 is designed for this functionality.  Some of the coordination will be done in software, and some will be done with “general” processing power, but more than a few aspects of the A13 will be specifically designed for such work.  Apple has been going down this road for some time now.

OK … that is all well and good.  Is there anything wrong with a design tailored to its functionality?  No.  That is exactly what the design should be.  But the iPhone 11 and A13 made me think of the never-ending discussion about moving the Mac ecosystem to Apple-designed processors.

The A13 is an iOS processor.  It is designed with knowledge of iOS and the end iDevice functionality.  Quite possibly, it would not work all that well in a Mac.  While iOS was originally based on Mac OS, it is unclear whether a processor designed for iOS would be a good fit for Mac OS.  Also, the Mac ecosystem offers markedly different functionality than an iDevice.  We only have to look at cameras and photo manipulation to see this.  A Mac OS device has at most one front-facing camera, and it is rather utilitarian.  It is hard to see when, if ever, Macs would need the same degree of photo processing as an iDevice.  At the moment, the same can be said for machine learning.

It really strikes me that this would have to be a whole new effort.  Any team would certainly have access to, learn from, and borrow from the A-series designs, but a Mac processor would be a different beast, customized to the Mac.  Apple would be embarking on a new journey.  Apple has some serious design chops, so we will have to wait and see what, if anything, is created.