Raghavendra BK
2 min read · Feb 8, 2021


Tiny AI

Researchers have reduced the size of AI models roughly by an order of magnitude, especially those that demand huge datasets and computational power, without significantly compromising their capabilities. These reduced models are termed Tiny AI or TinyML.

Let’s take the example of Google’s BERT. BERT is a neural-network-based natural language processing model that helps computers understand language much as humans do. It can make better writing suggestions to help users frame and complete a sentence. However, this model contains 340 million parameters. A single training run of BERT consumes roughly as much energy as an average American household uses in 50 days!

Now, this is where the need for Tiny AI, or TinyBERT in Google’s case, emerged. Researchers succeeded in making BERT 7.5 times smaller and 9.4 times faster while retaining about 96% of the original model’s accuracy.
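Compression on this scale is typically achieved through knowledge distillation, where a small "student" model is trained to mimic the output distribution of the large "teacher". Below is a minimal sketch of the core distillation objective, a temperature-softened KL divergence between teacher and student predictions; the function names and the two-class logits are illustrative, not from TinyBERT itself.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax: a higher temperature softens the
    # distribution, exposing the teacher's "dark knowledge" about
    # how similar the non-top classes are.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student
    # distributions -- the core training signal in distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

When the student matches the teacher exactly the loss is zero; the more its predictions diverge, the larger the penalty, so minimizing this loss pulls the small model toward the large one's behavior.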

Tiny AI can help researchers deploy complex algorithms on edge devices. This could allow, for example, an individual to analyse complex medical images using a smartphone. Tiny AI can also enable autonomous driving without relying on the cloud. With so many of these capabilities running directly on edge devices, users would gain greater data security and privacy.

The same principles can also be applied in cloud computing, building hyper-efficient AI systems around tiny data, tiny hardware, and tiny algorithms.

In summary, Tiny AI reduces computational cost and time while retaining high levels of accuracy, and it strengthens data protection and privacy.

The possibilities are vast, and the efficiency gains contribute to a greener future.
