Title: Function and derivative approximation by shallow neural networks
Speaker: Prof. Shuai Lu (Fudan University)
Time: May 9, 2025, 15:30-16:30
Venue: Room 528, Gewu Building
Abstract:
We investigate a Tikhonov regularization scheme tailored explicitly to shallow neural networks in the context of a classic problem: approximating an unknown function and its derivatives on the unit cube from noisy measurements.
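The abstract does not display the regularization functional itself; as a sketch, a generic Tikhonov scheme of the kind described might read as follows, where the symbols ($\mathcal{F}_m$ for the shallow-network class, $\alpha$ for the regularization parameter, $\|\cdot\|_\ast$ for one of the network (semi)norms, and $\delta$ for the noise level) are illustrative assumptions, not notation taken from the talk:

```latex
f_\alpha \in \operatorname*{arg\,min}_{f \in \mathcal{F}_m}\;
\frac{1}{n}\sum_{i=1}^{n}\bigl(f(x_i) - y_i^{\delta}\bigr)^2
\;+\; \alpha\,\|f\|_\ast^2
```

Here $y_i^{\delta}$ denote the noisy samples of the unknown function, and the derivatives of the minimizer $f_\alpha$ serve as approximations to the derivatives of the target.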
The proposed Tikhonov regularization scheme incorporates a penalty term based on one of three distinct yet closely related network (semi)norms: the extended Barron norm, the variation norm, and the Radon-BV seminorm. The choice of penalty term depends on the specific architecture of the neural network being utilized. We establish connections between the various network norms, and in particular trace their dependence on the dimension, aiming to deepen our understanding of how these norms interact.
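For orientation, a commonly used definition of a Barron-type norm for shallow ReLU networks in the literature (the talk's extended Barron norm and Radon-BV seminorm may be defined differently) takes the infimum over all integral representations of $f$:

```latex
\|f\|_{\mathcal{B}}
= \inf_{\rho}\; \mathbb{E}_{(a,w,b)\sim\rho}
  \bigl[\,|a|\,\bigl(\|w\|_{1} + |b|\bigr)\bigr],
\qquad
f(x) = \mathbb{E}_{(a,w,b)\sim\rho}\bigl[a\,\sigma(w^{\top}x + b)\bigr],
```

with $\sigma$ the ReLU activation. For a finite-width network $f_m(x)=\sum_{j=1}^{m} a_j\,\sigma(w_j^{\top}x+b_j)$, the analogous discrete quantity is the path norm $\sum_{j=1}^{m}|a_j|\,(\|w_j\|_1+|b_j|)$, which is often used as a computable surrogate for the variation norm.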
We revisit the universality of function approximation with respect to these norms, establish a rigorous error-bound analysis for the Tikhonov regularization scheme, and explicitly elucidate the dependence on the dimension, providing a clearer understanding of how the dimension affects the approximation performance and how one designs a neural network for diverse approximation tasks. Numerical experiments verify the theoretical analysis. This is joint work with Yuanyuan Li (Fudan University).
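The scheme sketched above can be illustrated numerically. The following is a minimal NumPy sketch, not the method analyzed in the talk: it fits a one-dimensional shallow ReLU network by gradient descent on a least-squares data-fit term plus a path-norm penalty (as a proxy for the variation-type norms), with all hyperparameters chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)


def relu(z):
    return np.maximum(z, 0.0)


def predict(a, w, b, x):
    """Shallow ReLU network f(x) = sum_j a_j * relu(w_j * x + b_j)."""
    return relu(np.outer(x, w) + b) @ a


def fit_shallow(x, y, width=20, alpha=1e-3, lr=1e-2, steps=2000):
    """Gradient descent on 0.5*MSE + alpha * path norm sum_j |a_j|*|w_j|.

    The path-norm penalty stands in for the variation/Barron-type norms;
    width, alpha, lr, and steps are illustrative choices.
    """
    n = len(x)
    w = rng.normal(size=width)           # inner weights
    b = rng.normal(size=width)           # biases
    a = rng.normal(size=width) / width   # outer weights
    for _ in range(steps):
        z = np.outer(x, w) + b           # (n, width) pre-activations
        h = relu(z)                      # hidden activations
        r = h @ a - y                    # residuals
        act = (z > 0).astype(float)      # ReLU derivative
        # Gradients of the data-fit term plus subgradients of the penalty.
        grad_a = h.T @ r / n + alpha * np.sign(a) * np.abs(w)
        grad_w = a * ((act * x[:, None]).T @ r) / n + alpha * np.abs(a) * np.sign(w)
        grad_b = a * (act.T @ r) / n
        a -= lr * grad_a
        w -= lr * grad_w
        b -= lr * grad_b
    return a, w, b
```

Fitting noisy samples of a smooth target on [0, 1] and varying `alpha` shows the usual regularization trade-off between data fit and network norm.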
About the speaker:
Shuai Lu is a professor and doctoral supervisor at the School of Mathematical Sciences, Fudan University. His research focuses on computational methods and mathematical theory for inverse problems in mathematical physics, in particular convergence analysis of regularization methods and stability theory for inverse problems of partial differential equations. He has published more than sixty papers in leading journals including Inverse Problems, the SIAM journals, Numer. Math., and Math. Comp., and has co-authored an English research monograph. He received the National Science Fund for Distinguished Young Scholars in 2019 and the first prize of the Shanghai Natural Science Award in 2020. He serves on the editorial boards of several international journals, including Inverse Problems and Inverse Problems and Imaging, is a member of the Executive Committee of the Inverse Problems International Association, and sits on the Scientific Committee of the 2025 Applied Inverse Problems Conference.