[1] Cohn JF, Kruez TS, Matthews I, et al. Detecting depression from facial actions and vocal prosody[C]// 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops. Amsterdam, Netherlands: IEEE, 2009: 1-7.
[2] Yang L, Jiang DM, Sahli H. Integrating deep and shallow models for multi-modal depression analysis: hybrid architectures[J]. IEEE Transactions on Affective Computing, 2018, 12(1): 239-253.
[3] Wang ZY, Chen LX, Wang LF, et al. Recognition of audio depression based on convolutional neural network and generative antagonism network model[J]. IEEE Access, 2020, 8: 101181.
[4] France DJ, Shiavi RG, Silverman S, et al. Acoustical properties of speech as indicators of depression and suicidal risk[J]. IEEE Transactions on Biomedical Engineering, 2000, 47(7): 829-837.
[5] Schumann I, Schneider A, Kantert C, et al. Physicians' attitudes, diagnostic process and barriers regarding depression diagnosis in primary care: a systematic review of qualitative studies[J]. Family Practice, 2012, 29(3): 255-263.
[6] Quatieri TF, Malyska N. Vocal-source biomarkers for depression: a link to psychomotor activity[C]// Interspeech 2012. Portland, OR: ISCA, 2012: 1059-1062.
[7] Hönig F, Wagner J, Batliner A, et al. Classification of user states with physiological signals: on-line generic features vs. specialized feature sets[C]// 2009 17th European Signal Processing Conference. Glasgow, Scotland: EUSIPCO, 2009: 2357-2361.
[8] Horwitz R, Quatieri TF, Helfer BS, et al. On the relative importance of vocal source, system, and prosody in human depression[C]// 2013 IEEE International Conference on Body Sensor Networks. Cambridge, MA: IEEE, 2013: 1-6.
[9] Valstar M, Gratch J, Ringeval F, et al. AVEC 2016: depression, mood, and emotion recognition workshop and challenge[C]// Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge. Amsterdam, Netherlands: ACM, 2016: 3-10.
[10] Ma XC, Yang HY, Chen Q, et al. DepAudioNet: an efficient deep model for audio based depression classification[C]// Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge. New York, NY: ACM, 2016: 35-42.
[11] 李金鸣, 付小雁. 基于深度学习的音频抑郁症识别[J]. 计算机应用与软件, 2019, 36(9): 161-167.
Li JM, Fu XY. Audio depression recognition based on deep learning[J]. Computer Applications and Software, 2019, 36(9): 161-167.
[12] 曹欣怡, 李鹤, 王蔚. 基于语料库的语音情感识别的性别差异研究[J]. 南京大学学报(自然科学版), 2019, 55(5): 758-764.
Cao XY, Li H, Wang W. A study on gender differences in speech emotion recognition based on corpus[J]. Journal of Nanjing University (Natural Sciences), 2019, 55(5): 758-764.
[13] Dhingra SS, Kroenke K, Zack MM, et al. PHQ-8 days: a measurement option for DSM-5 major depressive disorder (MDD) severity[J]. Population Health Metrics, 2011, 9(1): 11.
[14] Giannakopoulos T. pyAudioAnalysis: an open-source Python library for audio signal analysis[J]. PLoS ONE, 2015, 10(12): e0144610.
[15] Gu XQ, Ni TG, Wang HY. New fuzzy support vector machine for the class imbalance problem in medical datasets classification[J]. The Scientific World Journal, 2014, 2014: 536434.
[16] Cawley GC, Talbot NLC. On over-fitting in model selection and subsequent selection bias in performance evaluation[J]. Journal of Machine Learning Research, 2010, 11: 2079-2107.
[17] Nilsonne A, Sundberg J. Differences in ability of musicians and nonmusicians to judge emotional state from the fundamental frequency of voice samples[J]. Music Perception: An Interdisciplinary Journal, 1985, 2(4): 507-516.
[18] Breiman L, Friedman JH, Olshen RA, et al. Classification and regression trees[M]. Belmont, CA: Wadsworth, 1984.