[1] ZHU X. Semi-supervised learning literature survey[J]. Computer Science, 2008, 37(1):63-77.
[2] HINTON G E, OSINDERO S, TEH Y W. A fast learning algorithm for deep belief nets[J]. Neural Computation, 2006, 18(7):1527-1554.
[3] KRIZHEVSKY A, SUTSKEVER I, HINTON G E. ImageNet classification with deep convolutional neural networks[C]//Proceedings of the 25th International Conference on Neural Information Processing Systems. 2012:1097-1105.
[4] SZEGEDY C, LIU W, JIA Y, et al. Going deeper with convolutions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015:1-9.
[5] GIRSHICK R, DONAHUE J, DARRELL T, et al. Rich feature hierarchies for accurate object detection and semantic segmentation[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2014:580-587.
[6] REN S, HE K, GIRSHICK R, et al. Faster R-CNN:towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2017, 39(6):1137-1149.
[7] VINCENT P, LAROCHELLE H, LAJOIE I, et al. Stacked denoising autoencoders:learning useful representations in a deep network with a local denoising criterion[J]. Journal of Machine Learning Research, 2010, 11(12):3371-3408.
[8] RASMUS A, VALPOLA H, HONKALA M, et al. Semi-supervised learning with ladder networks[C]//Advances in Neural Information Processing Systems. 2015:3546-3554.
[9] VALPOLA H. From neural PCA to deep unsupervised learning[M]//Advances in Independent Component Analysis and Learning Machines. Academic Press, 2015:143-171.
[10] HUANG G, LIU Z, WEINBERGER K Q, et al. Densely connected convolutional networks[J]. arXiv preprint arXiv:1608.06993, 2016.
[11] RANZATO M, POULTNEY C, CHOPRA S, et al. Efficient learning of sparse representations with an energy-based model[C]//Advances in Neural Information Processing Systems. 2006:1137-1144.
[12] PEZESHKI M, FAN L, COURVILLE A, et al. Deconstructing the ladder network architecture[C]//International Conference on Machine Learning. JMLR Org., 2016:2368-2376.
[13] IOFFE S, SZEGEDY C. Batch normalization:accelerating deep network training by reducing internal covariate shift[C]//International Conference on Machine Learning. 2015:448-456.
[14] NAIR V, HINTON G E. Rectified linear units improve restricted Boltzmann machines[C]//Proceedings of the 27th International Conference on Machine Learning (ICML-10). 2010:807-814.
[15] SRIVASTAVA R K, GREFF K, SCHMIDHUBER J. Training very deep networks[C]//Advances in Neural Information Processing Systems. 2015:2377-2385.
[16] LARSSON G, MAIRE M, SHAKHNAROVICH G. FractalNet:ultra-deep neural networks without residuals[J]. arXiv preprint arXiv:1605.07648, 2016.
[17] HE K, ZHANG X, REN S, et al. Deep residual learning for image recognition[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016:770-778.
[18] DOWNS J J, VOGEL E F. A plant-wide industrial process control problem[J]. Computers & Chemical Engineering, 1993, 17(3):245-255.
[19] BLAKE C L, MERZ C J. UCI repository of machine learning databases[DB]. Irvine, CA:University of California, Department of Information and Computer Science, 1998.
[20] YIN S, DING S X, HAGHANI A, et al. A comparison study of basic data-driven fault diagnosis and process monitoring methods on the benchmark Tennessee Eastman process[J]. Journal of Process Control, 2012, 22(9):1567-1581.