A Domain Decomposition-Based CNN-DNN Architecture for Model Parallel Training Applied to Image Recognition Problems

Axel Klawonn, Martin Lanser, and Janine Weber, Department of Mathematics and Computer Science, University of Cologne

Deep neural networks (DNNs), and convolutional neural networks (CNNs) in particular, have brought significant advances to a wide range of modern application problems. However, the increasing availability of large datasets and of computational power leads to a steady growth in the size and complexity of DNN and CNN models, and thus to longer training times. Hence, various methods have been developed to accelerate and parallelize the training of complex network architectures. In this talk, a novel domain decomposition-based CNN-DNN architecture is presented that naturally supports a model parallel training strategy. Experimental results are shown for different 2D image classification problems, for a face recognition problem, and for a classification problem on 3D computed tomography (CT) scans. The results show that the proposed approach can significantly reduce the required training time compared with the global model and, additionally, can also help to improve the accuracy of the underlying classification problem.
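The core idea of a domain decomposition-based CNN-DNN model can be illustrated with a minimal sketch: the input image is split into subdomains, each subdomain is processed by its own small local model (trainable independently, hence in parallel), and a dense network combines the local outputs into a global classification. The code below is an illustrative toy in plain NumPy, not the architecture from the talk; the grid layout, the single-convolution "local CNN" stand-in, and all shapes are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def decompose(image, rows, cols):
    """Split a 2D image into a rows x cols grid of non-overlapping subdomains."""
    h, w = image.shape
    bh, bw = h // rows, w // cols
    return [image[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            for i in range(rows) for j in range(cols)]

def local_model(patch, kernel):
    """Toy stand-in for a local CNN: one valid 3x3 convolution,
    ReLU activation, and global average pooling to a scalar feature."""
    h, w = patch.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(patch[i:i + 3, j:j + 3] * kernel)
    return np.maximum(out, 0.0).mean()

# Toy 28x28 input decomposed into a 2x2 grid of subdomains.
image = rng.standard_normal((28, 28))
patches = decompose(image, 2, 2)

# One independent local model per subdomain; since the local models do not
# share parameters, their training could proceed in parallel.
kernels = [rng.standard_normal((3, 3)) for _ in patches]
local_features = np.array([local_model(p, k) for p, k in zip(patches, kernels)])

# A small dense (DNN) layer combines the local outputs into 10 class scores.
W = rng.standard_normal((10, len(local_features)))
scores = W @ local_features
print(scores.shape)
```

In this sketch the parallelism is model parallelism: each subdomain's parameters live in a separate local model, so the expensive convolutional work can be distributed across devices, with only the final combining layer acting on all local outputs.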