Robustness and Interpretability of Machine Learning Models in Financial Forecasting

Authors

  • Samir Vinayak Bayani, Broadcom Inc, USA
  • Ikram Ahamed Mohamed, Salesforce, USA
  • Selvakumar Venkatasubbu, New York Technology Partners, USA

DOI:

https://doi.org/10.47672/ejt.2005

Keywords:

Boosting, Bagging, Forecasting, Neural Networks, Deep Learning, Penalized Regressions, Regularization, Nonlinear Models, Sieve Approximation, Statistical Learning Theory

Abstract

Purpose: Financial forecasting is an essential component of contemporary financial management and investment decision-making. Because of the complexity and volatility of financial markets, traditional forecasting techniques often fail to capture market dynamics adequately.

Methodology/Findings: Machine learning techniques are among the most effective means of improving the precision and effectiveness of financial forecasting, and the study examines their potential, limitations, and future prospects. Both linear and nonlinear alternatives are considered. The linear methods covered are penalized regressions and ensembles of models. The nonlinear techniques include tree-based methods such as boosted trees and random forests, as well as shallow and deep neural networks in both feed-forward and recurrent forms. The study also considers ensemble and hybrid models, which combine features from several kinds of alternatives.
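The abstract names penalized regressions alongside tree-based ensembles. As a minimal sketch (not taken from the paper), the snippet below contrasts the two model families on a simulated autoregressive series; the data, lag count, Lasso penalty, and forest size are all hypothetical choices for illustration.

```python
# Sketch: penalized regression (Lasso) vs. a tree-based ensemble
# (random forest) on a synthetic AR(2) series. Illustrative only;
# all parameters are assumptions, not the paper's configuration.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Simulate an AR(2) process: y_t = 0.6*y_{t-1} - 0.2*y_{t-2} + noise
n = 500
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.normal(scale=0.5)

# Build a lagged design matrix: column k holds lag k+1 of the target
p = 5
X = np.column_stack([y[p - k - 1 : n - k - 1] for k in range(p)])
target = y[p:]

# Chronological split -- never shuffle time-series data
split = int(0.8 * len(target))
X_tr, X_te = X[:split], X[split:]
y_tr, y_te = target[:split], target[split:]

lasso = Lasso(alpha=0.05).fit(X_tr, y_tr)  # penalized (L1) regression
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

mse_lasso = np.mean((lasso.predict(X_te) - y_te) ** 2)
mse_forest = np.mean((forest.predict(X_te) - y_te) ** 2)
print(f"Lasso MSE: {mse_lasso:.4f}  Forest MSE: {mse_forest:.4f}")
```

On a linear data-generating process like this one, the penalized linear model typically holds its own against the forest; the nonlinear methods surveyed in the paper earn their keep when the true dynamics are nonlinear.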

Implications to Theory, Practice and Policy: A brief overview of the tests used to measure superior predictive ability is provided. The final part of the article focuses on possible applications of machine learning in economics and finance and provides an example that makes use of high-frequency financial data (Benti, Chaka, and Semie, 2023).
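The abstract does not specify which predictive-ability tests are covered. A standard choice in this literature is the Diebold-Mariano test of equal forecast accuracy; the sketch below implements it on toy forecast errors, with the error series, loss function, and lag truncation all chosen for illustration.

```python
# Sketch: Diebold-Mariano test of equal predictive accuracy between
# two forecasts, under squared-error loss. Toy errors are simulated;
# this is an illustrative assumption, not the paper's procedure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
T = 200
e1 = rng.normal(scale=1.0, size=T)  # forecast errors of model 1
e2 = rng.normal(scale=1.2, size=T)  # forecast errors of model 2 (noisier)

d = e1**2 - e2**2                   # loss differential under squared loss
d_bar = d.mean()

# Long-run variance of d via a Bartlett (Newey-West style) kernel
h = 5                               # small lag truncation, chosen ad hoc
lrv = np.mean((d - d_bar) ** 2)
for k in range(1, h + 1):
    autocov = np.mean((d[k:] - d_bar) * (d[:-k] - d_bar))
    lrv += 2 * (1 - k / (h + 1)) * autocov

dm_stat = d_bar / np.sqrt(lrv / T)
p_value = 2 * (1 - stats.norm.cdf(abs(dm_stat)))
print(f"DM statistic: {dm_stat:.3f}  p-value: {p_value:.3f}")
```

A large negative statistic here would indicate that model 1's squared errors are systematically smaller than model 2's; the statistic is compared against a standard normal under the null of equal accuracy.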


References

Benti, N.E., Chaka, M.D. and Semie, A.G. (2023). Forecasting Renewable Energy Generation with Machine Learning and Deep Learning: Current Advances and Future Prospects. Sustainability, [online] 15(9), p.7087. doi:https://doi.org/10.3390/su15097087.

Carvalho, D.V., Pereira, E.M. and Cardoso, J.S. (2019). Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics, 8(8), p.832. doi:https://doi.org/10.3390/electronics8080832.

Li, X., Xiong, H., Li, X., Wu, X., Zhang, X., Liu, J., Bian, J. and Dou, D. (2022). Interpretable deep learning: interpretation, interpretability, trustworthiness, and beyond. Knowledge and Information Systems, 64(12), pp.3197-3234. doi:https://doi.org/10.1007/s10115-022-01756-8.

Linardatos, P., Papastefanopoulos, V. and Kotsiantis, S. (2020). Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy, [online] 23(1), p.18. doi:https://doi.org/10.3390/e23010018.

Masini, R.P., Medeiros, M.C. and Mendes, E.F. (2021). Machine learning advances for time series forecasting. Journal of Economic Surveys. doi:https://doi.org/10.1111/joes.12429.

Alangari, N., El Bachir Menai, M., Mathkour, H. and Almosallam, I. (2023). Exploring Evaluation Methods for Interpretable Machine Learning: A Survey. Information, 14(8), p.469. doi:https://doi.org/10.3390/info14080469.

Shah, V. and Konda, S.R. (2021). Neural Networks and Explainable AI: Bridging the Gap between Models and Interpretability. International Journal of Computer Science and Technology, [online] 5(2), pp.163-176. Available at: https://ijcst.com.pk/index.php/IJCST/article/view/387 [Accessed 10 Mar. 2024].

Published

2024-05-06

How to Cite

Bayani, S. V. ., Mohamed, I. A. ., & Venkatasubbu, S. . (2024). Robustness and Interpretability of Machine Learning Models in Financial Forecasting. European Journal of Technology, 8(2), 54–66. https://doi.org/10.47672/ejt.2005
