Title information
Grüne, Lars; Kleinberg, Konrad; Kruse, Thomas; Sperl, Mario:
Convexity and strict convexity for compositional neural networks in high-dimensional optimal control.
Bayreuth, 2025. - 21 pp.
DOI: https://doi.org/10.48550/arXiv.2511.05339
Project information
| | |
|---|---|
| Project title: | Nonlinear optimal feedback control with deep neural networks without the curse of dimensionality: Spatially decaying sensitivity and nonsmooth problems |
| Project ID: | 463912816 |
| Project funding: | Deutsche Forschungsgemeinschaft |
Abstract
Neural networks (NNs) have emerged as powerful tools for solving high-dimensional optimal control problems. In particular, their compositional structure has been shown to enable efficient approximation of high-dimensional functions, helping to mitigate the curse of dimensionality in optimal control. In this work, we build upon the theoretical framework developed by Kang & Gong (SIAM J. Control Optim. 60(2):786-813, 2022), particularly their results on NN approximations for compositional functions in optimal control. Theorem 6.2 in Kang & Gong (2022) establishes that, under suitable assumptions on the compositional structure and its associated features, optimal control problems with strictly convex cost functionals admit a curse-of-dimensionality-free approximation of the optimal control by NNs. We extend this result in two directions. First, we analyze the strict convexity requirement on the cost functional and demonstrate that reformulating a discrete-time optimal control problem with linear transitions and stage costs as a terminal cost problem ensures the necessary strict convexity. Second, we establish a generalization of Theorem 6.2 in Kang & Gong (2022) which provides weak error bounds for optimal control approximations by NNs when the cost functional is only convex rather than strictly convex.
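To make the first extension concrete, the following is a minimal sketch of the standard accumulated-cost augmentation that recasts running costs as a pure terminal cost. The symbols $A$, $B$, $\ell$, $g$ and the horizon $N$ are illustrative placeholders chosen here; the precise linearity and convexity assumptions under which this reformulation yields strict convexity are those stated in the paper.

```latex
% Discrete-time optimal control problem with stage costs (illustrative notation):
% minimize over (u_0, ..., u_{N-1})
\[
  J(u) \;=\; \sum_{k=0}^{N-1} \ell(x_k, u_k) \;+\; g(x_N),
  \qquad x_{k+1} = A x_k + B u_k, \quad x_0 \text{ given.}
\]
% Augment the state with a cost accumulator y_k:
\[
  y_{k+1} \;=\; y_k + \ell(x_k, u_k), \qquad y_0 = 0.
\]
% With the augmented state z_k = (x_k, y_k), the same objective becomes a
% pure terminal cost, since J(u) = G(z_N) with G(x, y) = g(x) + y:
\[
  \min_{u_0, \dots, u_{N-1}} G(z_N), \qquad G(x, y) \;=\; g(x) + y.
\]
```

Note that if the stage costs $\ell$ are linear alongside the linear transitions, the augmented dynamics in $z_k$ remain linear, which matches the setting indicated in the abstract; the exact conditions guaranteeing strict convexity of the resulting terminal-cost problem are given in the paper.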
