An artificial intelligence (AI) created by humans is easy enough to accept, but an AI created by another AI sounds like something straight out of a futuristic science fiction movie.
Yet in May 2017, researchers at Google Brain, Google's deep learning AI project, made an announcement that seemed to defy logic: the creation of an AI capable of generating its own AIs. Don't believe us? Read on.
The Google Brain team recently presented AutoML, the AI progenitor capable of creating its own kind, with its biggest challenge to date: giving birth to another AI. The result was an AI "child" that outperformed all of its human-made counterparts.
Now, an AI created by another mega-AI, one that is far better than man-made AIs, evokes a feeling of great achievement, but it also feels unsettling when you consider what might happen if it became so smart that it could create hundreds of such AIs. But let's get back to reality and stop fantasising.
AutoML, which quite understandably stands for Automated Machine Learning, uses an approach called reinforcement learning, in which AutoML acts as a controller neural network (much like the human brain itself) that develops a child AI network for a specific task (more like the peripheral nervous system).
This particular child AI, fondly called NASNet by the researchers, was tasked with recognising objects, such as people, cars, traffic lights, handbags and backpacks, in a video in real time.
AutoML would evaluate NASNet's performance, much as a teacher or parent would, and use that information to improve its child AI. Only, unlike with humans, the cycle of evaluation and improvement was repeated several thousand times over.
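The controller-and-child loop described above can be sketched in a few lines. Everything in this toy example is an invented stand-in (the search space, the mock accuracy function, and the simple preference-update rule), not Google's actual method; it only illustrates the idea of a controller sampling candidate "child" designs, scoring them, and shifting probability toward choices that scored well.

```python
import random

# Hypothetical search space for a "child" network: two design decisions.
SEARCH_SPACE = {"filters": [32, 64, 128], "kernel": [3, 5, 7]}

def mock_accuracy(arch):
    # Stand-in for actually training and testing the child network;
    # we simply pretend 64 filters with a 3x3 kernel works best.
    return 0.5 + 0.2 * (arch["filters"] == 64) + 0.1 * (arch["kernel"] == 3)

def search(iterations=200, lr=0.1, seed=0):
    rng = random.Random(seed)
    # Controller state: one preference weight per option, per decision.
    prefs = {k: [1.0] * len(v) for k, v in SEARCH_SPACE.items()}
    baseline = 0.0
    for _ in range(iterations):
        # Controller samples a child architecture from its preferences.
        idx = {k: rng.choices(range(len(v)), weights=prefs[k])[0]
               for k, v in SEARCH_SPACE.items()}
        arch = {k: SEARCH_SPACE[k][i] for k, i in idx.items()}
        reward = mock_accuracy(arch)
        # Boost choices that beat the running baseline, dampen the rest.
        for k, i in idx.items():
            prefs[k][i] = max(1e-3, prefs[k][i] + lr * (reward - baseline))
        baseline = 0.9 * baseline + 0.1 * reward
    # Report the most-preferred option for each decision.
    return {k: SEARCH_SPACE[k][max(range(len(p)), key=p.__getitem__)]
            for k, p in prefs.items()}
```

The real system evaluates full neural networks rather than a toy scoring function, which is why each of the thousands of evaluation rounds is so computationally expensive.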
NASNet's recognition skills were tested on ImageNet Image Classification and COCO object detection data sets, which the Google researchers call "two of the most respected large-scale academic data sets in computer vision", and ultimately NASNet outperformed all other computer vision systems.
According to the researchers, NASNet was 82.7 percent accurate at recognising images on ImageNet's validation set, which is 1.2 percent better than any previously published result, and the system is also 4 percent more efficient, with a mean Average Precision (mAP) of 43.1 percent. Meanwhile, a less computationally demanding version of NASNet developed for mobile platforms outperformed the best similarly sized models by 3.1 percent.
Note: mAP is a standard accuracy metric for object detection. For each object class it averages a detector's precision across its full range of recall, then takes the mean of those averages over all classes.
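To make the note above concrete, here is a minimal sketch of average precision for a single class; the list of detections is made up for illustration. Detections are sorted by confidence, and precision is recorded at each rank where a true object is found; mAP is then just the mean of this value over all object classes.

```python
def average_precision(ranked_hits):
    """Average precision for one object class.

    ranked_hits is a list of booleans for a detector's outputs,
    sorted from most to least confident: True means the detection
    matched a real object, False means it was a miss.
    """
    hits = 0
    precisions = []
    for rank, is_hit in enumerate(ranked_hits, start=1):
        if is_hit:
            hits += 1
            # Precision at this rank: correct detections so far / rank.
            precisions.append(hits / rank)
    return sum(precisions) / len(precisions) if precisions else 0.0
```

For example, a detector whose first and third detections are correct scores (1/1 + 2/3) / 2, or about 0.83, for that class. Benchmark suites such as COCO refine this basic idea (e.g. averaging over overlap thresholds), but the core computation is the same.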
How does NASNet do that?
Machine learning is what gives many AI systems the ability to perform specific tasks. The concept is fairly simple: an algorithm learns by being fed lots and lots of data, but the process of feeding it and training it demands a great deal of time and human effort. Automating that process means the system can, in effect, feed and teach itself. AutoML is an automated machine learning algorithm, which means it could open up the field of machine learning and AI to non-experts as well.
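The "algorithm that learns by being fed data" idea can itself be shown in miniature. This toy sketch, with invented data points drawn from the line y = 2x, is only an illustration of learning from examples by gradient descent, not of what NASNet or AutoML actually do:

```python
# Learn the weight w in y = w * x from example (x, y) pairs.
def fit(samples, steps=100, lr=0.05):
    w = 0.0  # start knowing nothing
    for _ in range(steps):
        for x, y in samples:
            error = w * x - y          # how wrong the current guess is
            w -= lr * error * x        # nudge w to shrink the squared error
    return w

# "Feeding it data": examples sampled from the true relationship y = 2x.
data = [(1, 2), (2, 4), (3, 6)]
```

After enough passes over the data, `fit(data)` converges to roughly 2.0, the weight that explains the examples. The human effort in real systems goes into collecting such data, choosing the model, and tuning the training process, which is exactly the part AutoML aims to automate.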
As for NASNet, since it is a visual machine learning AI, it could, as one researcher suggested, be employed to help visually impaired people regain sight. Such AIs could also help designers improve self-driving car technologies: the faster an autonomous vehicle recognises objects in its path, the faster it can react to them, thereby increasing the safety of the vehicle.
"We hope that the larger machine learning community will be able to build on these models to address multitudes of computer vision problems we have not yet imagined," Google Brain researchers wrote in their blog post.
However large the applications of NASNet and AutoML may be, the creation of an AI that can build other AIs does raise some concerns. For example, what if AutoML creates systems so fast that human society can't keep up? One application where NASNet could safely be applied in the future, though, is automated surveillance systems.
Sourced from the original Futurism article by Dom Galeon and Christian Houser