Making Artificial Intelligence Even Smarter

This year, Rostock’s Eleven – the workshop bringing together eleven scientists and eleven journalists – once again featured a participant from Fraunhofer IGD: Mohamad Albadawi. The computer science graduate came from Syria to the Hanseatic city to pursue his master’s degree in Visual Computing and worked at Fraunhofer IGD in parallel. Since successfully defending his master’s thesis, he has been dedicating himself full-time to his favorite topic there: deep learning. He introduced the journalists to the depths of this subject with the following lecture:

© Fraunhofer IGD
Deep Learning Expert: Mohamad Albadawi

How We Can Make Artificial Intelligence Even Smarter…

Machine vision already delivers outstanding results today when it comes to clearly recognizing (detection) and categorizing (classification) objects. However, the performance of neural networks reaches its limits when two objects exhibit a striking similarity. In his master’s thesis, Mohamad Albadawi developed visual concepts within the framework of deep learning – the optimization of artificial intelligence – to enable much better differentiation between similar objects: Is it an edible mushroom or its deceptively similar poisonous “brother”? Is the object in the underwater image an old sea mine, or merely a stone of the same shape and size?

In his lecture, Mohamad Albadawi explains the advantages and possibilities of working with hierarchically structured neural networks. To this end, categories and sub-categories are defined manually, and multiple neural networks are trained, each responsible for exactly one sub-classification. The advantage: the more narrowly a network’s task is defined, the more accurately it can perform it. Unlike a conventional flat network, in which a single classifier must tell all categories apart at once, a hierarchical structure can detect even minimal differences between two images.
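The idea can be sketched in a few lines. This is a minimal illustration, not Fraunhofer code: the classifier functions and field names (`coarse_label`, `spore_print`, `metallic`) are hypothetical stand-ins for trained networks, chosen to show how a coarse top-level decision routes an image to exactly one narrow specialist.

```python
# Hierarchical classification sketch: a top-level "network" assigns a broad
# category, then a specialist responsible only for that category decides.
# All classifiers here are hypothetical stand-ins for trained neural networks.

def coarse_classify(image):
    """Top-level stage: assign the image to a broad category."""
    return image["coarse_label"]  # stand-in for a real network's prediction

def classify_mushroom(image):
    """Specialist that only distinguishes edible from poisonous mushrooms."""
    return "edible" if image["spore_print"] == "white" else "poisonous"

def classify_underwater_object(image):
    """Specialist that only separates sea mines from similarly shaped stones."""
    return "sea mine" if image["metallic"] else "stone"

# Each specialist handles exactly one sub-classification.
SPECIALISTS = {
    "mushroom": classify_mushroom,
    "underwater_object": classify_underwater_object,
}

def hierarchical_classify(image):
    """Route the image through the hierarchy: coarse category, then specialist."""
    coarse = coarse_classify(image)
    return coarse, SPECIALISTS[coarse](image)
```

Because each specialist sees only the images of its own branch, its decision boundary covers a much smaller problem, which is what allows it to pick up minimal differences between two similar objects.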

Another advantage of hierarchical structures is their modularity: if an additional category is added to the previously defined set, it can be integrated into the system like a modular building block. With traditional neural networks, training must be repeated entirely, even for categories that are already known. This makes hierarchical networks more flexible when changes occur.
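The modularity argument can also be sketched with a simple registry pattern. This is an assumed illustration of the "building block" idea, not the actual system: the registry and the tiny lambda classifiers are hypothetical, standing in for independently trained sub-networks.

```python
# Sketch of modular extension: each specialist sub-network is a building block
# that plugs into a registry. The classifiers below are hypothetical stand-ins.

specialists = {}

def register(category, classifier):
    """Plug a newly trained sub-network into the system."""
    specialists[category] = classifier

# Existing, already-trained specialist; it is never touched again:
register("mushroom",
         lambda img: "edible" if img["spore_print"] == "white" else "poisonous")

# Adding a new category only requires training its own small network and
# registering it; the mushroom classifier is not retrained.
register("coral",
         lambda img: "hard coral" if img["rigid"] else "soft coral")
```

In a traditional flat network, by contrast, the new category would change the output layer, forcing a full retraining run over all categories, including those that were already learned.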