BIOGRAPHY

Welcome to Wujie Zhou's homepage!

Profile:

Wujie Zhou (周武杰), born September 1983, is an Associate Professor, postdoctoral researcher, and master's supervisor; a provincial-level young talent; a council member of the Zhejiang Electronics Society; an IEEE Senior Member; a Senior Member of the China Institute of Communications; a CCF Member; a member of the Chinese Association for Artificial Intelligence; and leader of the Category-B direction of Zhejiang Province's first-class discipline "Computer Science and Technology". He was named a "Young Backbone Teacher" in 2012, selected for the "Outstanding Young Teachers Funding Program" in 2015, named a university "Young Talent" in 2016, included in Stanford University's list of the world's top 2% scientists in both 2022 and 2023, and selected as a Young Top Talent of the Zhejiang Provincial High-Level Talent Program in 2024. He completed postdoctoral research in Information and Communication Engineering at Zhejiang University and was a China Scholarship Council-sponsored visiting scholar at Nanyang Technological University, Singapore (advisor: Weisi Lin, IEEE Fellow).

His research focuses on artificial intelligence and deep learning, machine vision and pattern recognition, and image processing. In recent years he has published more than 70 papers as first author in authoritative SCI-indexed or core venues including AAAI, TIP, TNNLS, TCSVT, TMM, TII, TITS, JSTSP, TSMC, TBC, TGRS, IEEE IoT Journal, TASE, TCI, TIM, MIS, TCDS, TETCI, TIV, IEEE Sensors Journal, JSTARS, PR, Information Fusion, and SCIENCE CHINA. More than 60 of these are SCI-indexed (37 in CAS Q1 journals, 47 in IEEE Journals/Transactions/Magazines, 34 in CAA-A journals, 33 in CCF-A/B journals or conferences, 6 ESI hot papers, and more than 10 selected among the Top 50 Popular Articles of TIP, TCSVT, TMM, MIS, TETCI, and other journals). His h-index is 31 (Google Scholar), with 3500+ total citations (Google Scholar). He has filed more than 70 national invention patent applications, of which more than 50 have been granted and several transferred to production. He received a Municipal Science and Technology Award (Second Prize) and the Zhejiang Province Outstanding Paper Award for Young Science and Technology Workers.

He serves as a correspondence review expert for the National Natural Science Foundation of China (NSFC), an expert in the Zhejiang Provincial Science and Technology Expert Database, and a reviewer for Guangdong Provincial Foundation projects, and he reviews manuscripts for authoritative SCI journals including TIP, TNNLS, TCSVT, TCYB, TMM, TBC, JSTSP, TSMC, and SPL. He currently leads two NSFC grants (one General Program and one Young Scientists Program), two Zhejiang Provincial Natural Science Foundation grants, one China Postdoctoral Science Foundation grant, three major industry-commissioned projects, two key-laboratory open fund grants, and one Department of Education research project. Students under his supervision have won a Second Prize in the China Service Outsourcing Innovation and Entrepreneurship Competition.

E-mail: wujiezhou@163.com

Graduate Student Recruitment (including joint training, major transfers, etc.):

The Visual Intelligence Perception and Understanding Laboratory (built with special central-government funding for the reform and development of local universities, project no. 303011-2019-0008) recruits master's students (academic track: Advanced Manufacturing and Informatization; professional track: Mechanical Engineering, Applied Statistics). Main research directions: artificial intelligence and deep learning, machine vision and pattern recognition, image processing, and visual big-data statistics and applications. Some lab graduates have joined AI-related companies (starting salary above 15K RMB/month); others have pursued doctoral degrees at well-known universities in China and abroad (Peking University, University of Liverpool, University of North Texas, University of Technology Sydney, Tongji University, University of Science and Technology Beijing, Central South University, Northwest University, Nanjing University of Science and Technology, Shanghai University, Xiangtan University, Ningbo University, and others). Among supervised graduate students, 10 have won National Scholarships (20,000 RMB each), 4 have won Outstanding Student Scholarships (30,000 RMB each), and 1 was named the university's "Student of the Year". All supervised students (more than 10 graduates so far) have graduated on time, with no delayed graduations. To join the lab, please send your CV and undergraduate transcript (system screenshots are acceptable) to E-mail: wujiezhou@163.com

Lab "Outstanding Student Scholarship" video (the second student in the video — 吴君委): https://mp.weixin.qq.com/s/vYokNzDeHmtVKmIkOcpnnw

Lab "Student of the Year" video (the eighth student in the video — 刘劲夫): https://mp.weixin.qq.com/s/ALDUnCtIs8dbnKoGHvDd3Q

Research Projects

1. NSFC General Program, 62371422, Research on Visual-Cognition-Inspired Binocular Salient Object Detection Models, Principal Investigator

2. NSFC Young Scientists Program, 61502429, Research on Asymmetric-Distortion Visual Quality Assessment Models Based on Data Mining and Perceptual Analysis, Principal Investigator

3. National Key R&D Program of China, 2022YFE0196000, Key Technologies and Applications of Data- and Knowledge-Driven Carbonization and Green Sustainable Utilization of Urban Perishable Waste, Key Participant

4. Zhejiang Provincial Natural Science Foundation Young Scientists Program, LQ15F020010, Research on Objective Visual Quality Assessment Models for Asymmetric Distortion Based on Stereoscopic Perception Analysis, Principal Investigator

5. Zhejiang Provincial Natural Science Foundation General Program, LY18F020012, Research on Stereoscopic Video Quality Assessment Models Based on Binocular Vision Mechanism Mining, Principal Investigator

6. China Postdoctoral Science Foundation General Program, 2015M581932, Asymmetric-Distortion Visual Quality Assessment Models Based on Visual Perception Mining, Principal Investigator

7. Industry-commissioned project, 2020KJ073, Development of an Intelligent Information Management System for Smart Ocean Fishing Vessels, Principal Investigator

8. Industry-commissioned project, 2021KJ005, Development of an Intelligent Supervision System for Household Waste Disposal, Principal Investigator

9. Industry-commissioned project, 2021KJ130, Machine-Vision-Based Defect Image Recognition Algorithms for Crystal Oscillator Products, Principal Investigator

Selected Publications (CAS Q1, IEEE Transactions, or CCF-A)

[1] W. Zhou*(周武杰), J. Liu, J. Lei, L. Yu and J.-N. Hwang, “GMNet: Graded-Feature Multilabel-Learning Network for RGB-Thermal Urban Scene Semantic Segmentation,” IEEE Transactions on Image Processing, vol. 30, pp. 7790–7802, 2021. (CCF-A)

[2] W. Zhou*(周武杰), Y. Zhu*, J. Lei, R. Yang, L. Yu, “LSNet: Lightweight Spatial Boosting Network for Detecting Salient Objects in RGB-Thermal Images,” IEEE Transactions on Image Processing, vol. 32, pp. 1329–1340, 2023. (CCF-A)

[3] W. Zhou(周武杰), F. Sun, Q. Jiang, R. Cong, J.-N. Hwang, “WaveNet: Wavelet Network with Knowledge Distillation for RGB-T Salient Object Detection,” IEEE Transactions on Image Processing, vol. 32, pp. 3027–3039, 2023. (CCF-A)

[4] W. Zhou*(周武杰), L. Yu, Y. Zhou, W. Qiu, M.-W. Wu, and T. Luo, “Local and Global Feature Learning for Blind Quality Evaluation of Screen Content and Natural Scene Images,” IEEE Transactions on Image Processing, vol. 27, no. 5, pp. 2086–2095, May 2018. (CCF-A)

[5] W. Zhou*(周武杰), Y. Zhu, J. Lei, J. Wan, and L. Yu, “CCAFNet: Crossflow and cross-scale adaptive fusion network for detecting salient objects in RGB-D images,” IEEE Transactions on Multimedia, vol. 24, pp. 2192–2204, 2022. 

[6] W. Zhou*(周武杰), J. Wu, J. Lei, J.-N. Hwang and L. Yu, “Salient Object Detection in Stereoscopic 3D Images Using a Deep Convolutional Residual Autoencoder,” IEEE Transactions on Multimedia, vol. 23, pp. 3388–3399, 2021. 

[7] W. Zhou*(周武杰), X. Lin, J. Lei, L. Yu and J.-N. Hwang, “MFFENet: Multiscale Feature Fusion and Enhancement Network for RGB–Thermal Urban Road Scene Parsing,” IEEE Transactions on Multimedia, vol. 24, pp. 2526–2538, 2022. 

[8] W. Zhou*(周武杰), E. Yang, J. Lei, J. Wan, and L. Yu, “PGDENet: Progressive Guided Fusion and Depth Enhancement Network for RGB-D Indoor Scene Parsing,” IEEE Transactions on Multimedia, vol. 25, pp. 3483–3494, 2023.

[9] W. Zhou*(周武杰), L. Yu, “Binocular Responses for No-Reference 3D Image Quality Measurement,” IEEE Transactions on Multimedia, vol. 16, no. 6, pp. 1077–1084, 2016. 

[10] W. Zhou*(周武杰), Y. Cai, L. Zhang, W. Yan and L. Yu, "UTLNet: Uncertainty-aware Transformer Localization Network for RGB-Depth Mirror Segmentation," IEEE Transactions on Multimedia, vol. 26, pp. 4564–4574, 2024.

[11] W. Zhou*(周武杰), Q. Guo, J. Lei, L. Yu and J.-N. Hwang, “ECFFNet: Effective and Consistent Feature Fusion Network for RGB-T Salient Object Detection,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 32, no. 3, pp. 1224–1235, March 2022.

[12] W. Zhou*(周武杰), H. Zhang, W. Yan, and W. Lin, “MMSMCNet: Modal Memory Sharing and Morphological Complementary Networks for RGB-T Urban Scene Semantic Segmentation,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 12, pp. 7096–7108, Dec. 2023. 

[13] W. Zhou(周武杰), J. Hong, W. Yan and Q. Jiang, "Modal Evaluation Network via Knowledge Distillation for No-Service Rail Surface Defect Detection," IEEE Transactions on Circuits and Systems for Video Technology, early access, 2023, doi: 10.1109/TCSVT.2023.3325229.

[14] W. Zhou (周武杰), B. Jian, X. Dong and Q. Jiang, “DGPINet-KD: Deep Guided and Progressive Integration Network with Knowledge Distillation for RGB-D Indoor Scene Analysis,” IEEE Transactions on Circuits and Systems for Video Technology, doi: 10.1109/TCSVT.2024.3382354.

[15] W. Zhou(周武杰), C. Ji, and M. Fang, “Transmission Line Detection through Bidirectional Guided Registration with Knowledge Distillation,” IEEE Transactions on Industrial Informatics, vol. 20, no. 4, pp. 5671–5682, April 2024.

[16] W. Zhou*(周武杰), Q. Guo, J. Lei, L. Yu and J.-N. Hwang, “IRFR-Net: Interactive Recursive Feature-reshaping Network for Detecting Salient Objects in RGB-D Images,” IEEE Transactions on Neural Networks and Learning Systems, doi: 10.1109/TNNLS.2021.3105484. 

[17] W. Zhou*(周武杰), Y. Lv, J. Lei and L. Yu, “Global and Local-Contrast Guides Content-Aware Fusion for RGB-D Saliency Prediction,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 51, no. 6, pp. 3641–3649, June 2021. 

[18] W. Zhou*(周武杰), T. Gong, J. Lei and L. Yu, “DBCNet: Dynamic Bilateral Cross-Fusion Network for RGB-T Urban Scene-Understanding in Intelligent Vehicles,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 53, no. 12, pp. 7631–7641, Dec. 2023.

[19] W. Zhou*(周武杰), E. Yang, J. Lei, and L. Yu, “FRNet: Feature Reconstruction Network for RGB-D Indoor Scene Parsing,” IEEE Journal of Selected Topics in Signal Processing, vol. 16, no. 4, pp. 677–687, June 2022. 

[20] W. Zhou*(周武杰), J. Jin, J. Lei, and L. Yu, “CIMFNet: Cross-layer Interaction and Multiscale Fusion Network for Semantic Segmentation of High-Resolution Remote Sensing Images,” IEEE Journal of Selected Topics in Signal Processing, vol. 16, no. 4, pp. 666–676, June 2022.

[21] W. Zhou*(周武杰), Y. Zhang, W. Yan, L. Ye, “An Efficient RGB-D Indoor Scene-Parsing Solution via Lightweight Multi-flow Intersection and Knowledge Distillation,” IEEE Journal of Selected Topics in Signal Processing, doi: 10.1109/JSTSP.2024.3400030.

[22] W. Zhou*(周武杰), Y. Pan, L. Y, J. Lei, and L. Yu, “DEFNet: Dual-Branch Enhanced Feature Fusion Network for RGB-T Crowd Counting,” IEEE Transactions on Intelligent Transportation Systems, vol. 23, no. 12, pp. 24540–24549, Dec. 2022. 

[23] W. Zhou*(周武杰), Y. Lv, J. Lei, and L. Yu, “Embedded Control Gate Fusion and Attention Residual Learning for RGB–Thermal Urban Scene Parsing,” IEEE Transactions on Intelligent Transportation Systems, vol. 24, no. 5, pp. 4794–4803, May 2023. 

[24] W. Zhou*(周武杰), X. Yang, J. Lei, W. Yan and L. Yu, "MC3Net: Multimodality Cross-Guided Compensation Coordination Network for RGB-T Crowd Counting," IEEE Transactions on Intelligent Transportation Systems, doi: 10.1109/TITS.2023.3321328.

[25] W. Zhou(周武杰), J. Hong, X. Ran, W. Yan and Q. Jiang, "DSANet-KD: Dual Semantic Approximation Network via Knowledge Distillation for Rail Surface Defect Detection," IEEE Transactions on Intelligent Transportation Systems, doi: 10.1109/TITS.2024.3385744.

[26] W. Zhou*(周武杰), J. Jin, J. Lei, and J.-N. Hwang, “CEGFNet: Common Extraction and Gate Fusion Network for Scene Parsing of Remote Sensing Images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1–10, 2022, Art no. 5405110.

[27] W. Zhou(周武杰), X. Fan, W. Yan, S. Shan, Q. Jiang, and J.-N. Hwang, “Graph Attention Guidance Network with Knowledge Distillation for Semantic Segmentation of Remote Sensing Images,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–15, 2023, Art no. 4506015.

[28] W. Zhou(周武杰), Y. Li, J. Huang, W. Yan, M. Fang and Q. Jiang, “GSGNet-S*: Graph Semantic Guidance Network via Knowledge Distillation for Optical Remote Sensing Image Scene Analysis,” IEEE Transactions on Geoscience and Remote Sensing, vol. 61, pp. 1–12, 2023, Art no. 4508512.

[29] W. Zhou (周武杰), Y. Li, J. Huang, Y. Liu and Q. Jiang, "MSTNet-KD: Multilevel Transfer Networks Using Knowledge Distillation for the Dense Prediction of Remote-Sensing Images," IEEE Transactions on Geoscience and Remote Sensing, vol. 62, pp. 1-12, 2024, Art no. 4504612.

[30] W. Zhou(周武杰), X. Yang, X. Dong, “MJPNet-S*: Multistyle Joint-perception Network with Knowledge Distillation for Drone RGB-Thermal Crowd Density Estimation in Smart Cities,” IEEE Internet of Things Journal, doi: 10.1109/JIOT.2024.3369642.

[31] W. Zhou(周武杰), Y. Xiao, W. Yan, and L. Yu, “CMPFFNet: Cross-Modal and Progressive Feature Fusion Network for RGB-D Indoor Scene Semantic Segmentation,” IEEE Transactions on Automation Science and Engineering, 2023, doi: 10.1109/TASE.2023.3313122.

[32] W. Zhou (周武杰), J. Yang, et al. “RDNet-KD: Recursive Encoder, Bimodal Screening Fusion, and Knowledge Distillation Network for Rail Defect Detection,” IEEE Transactions on Automation Science and Engineering, 2024, doi: 10.1109/TASE.2024.3374387.

[33] W. Zhou*(周武杰), W. Qiu, M. Wu, “Utilizing Dictionary Learning and Machine Learning for Blind Quality Assessment of 3D Images,” IEEE Transactions on Broadcasting, vol. 63, no. 2, pp. 404–415, June 2017.

[34] W. Zhou*(周武杰), S. Dong, J. Lei, and L. Yu, “MTANet: Multitask-Aware Network with Hierarchical Multimodal Fusion for RGB-T Urban Scene Understanding,” IEEE Transactions on Intelligent Vehicles, vol. 8, no. 1, pp. 48–58, Jan. 2023. 

[35] W. Zhou(周武杰), S. Dong, M. Fang and L. Yu, "CACFNet: Cross-Modal Attention Cascaded Fusion Network for RGB-T Urban Scene Parsing," IEEE Transactions on Intelligent Vehicles, vol. 9, no. 1, pp. 1919–1929, Jan. 2024. 

[36] W. Zhou*(周武杰), J. Lei, T. Luo, “TSNet: Three-stream Self-attention Network for RGB-D Indoor Semantic Segmentation,” IEEE Intelligent Systems, vol. 36, no. 4, pp. 73–78, July-Aug. 2021.

[37] W. Zhou*(周武杰), S. Lv, J. Lei, and L. Yu, “RFNet: Reverse Fusion Network with Attention Mechanism for RGB-D Indoor Scene Understanding,” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 7, no. 2, pp. 598–603, April 2023.

[38] W. Zhou*(周武杰), Y. Zhu, J. Lei, J. Wan, and L. Yu, “APNet: Adversarial-Learning-Assistance and Perceived Importance Fusion Network for All-Day RGB-T Salient Object Detection,” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 6, no. 4, pp. 957–968, Aug. 2022.

[39] W. Zhou*(周武杰), S. Pan, J. Lei, and L. Yu, “TMFNet: Three-Input Multilevel Fusion Network for Detecting Salient Objects in RGB-D Images,” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 6, no. 3, pp. 593–601, June 2022.

[40] W. Zhou(周武杰), G. Xu, “ACENet: Auxiliary Context-Information Enhancement Network for RGB-D Indoor Scene Semantic Segmentation,” IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 8, no. 2, pp. 1125–1129, April 2024.

[41] W. Zhou*(周武杰), W. Liu, J. Lei, T. Luo, L. Yu, “Deep Binocular Fixation Prediction Using Hierarchical Multimodal Fusion Network,” IEEE Transactions on Cognitive and Developmental Systems, vol. 15, no. 2, pp. 476–486, June 2023.

[42] W. Zhou*(周武杰), J. Lei, Q. Jiang, L. Yu and T. Luo, “Blind Binocular Visual Quality Predictor Using Deep Fusion Network,” IEEE Transactions on Computational Imaging, vol. 6, pp. 883–893, 2020.

[43] W. Zhou*(周武杰), and J. Hong, “FHENet: Lightweight Feature Hierarchical Exploration Network for Real-Time Rail Surface Defect Inspection in RGB-D Images,” IEEE Transactions on Instrumentation and Measurement, vol. 72, pp. 1–8, 2023, Art no. 5005008.  

[44] W. Zhou(周武杰), C. Ji and M. Fang, “Effective Dual-Feature Fusion Network for Transmission Line Detection,” IEEE Sensors Journal, vol. 24, no. 1, pp. 101–109, Jan. 2024.

[45] W. Zhou*(周武杰), X. Fan, L. Yu, and J. Lei, “MISNet: Multiscale Cross-layer Interactive and Similarity Refinement Network for Scene Parsing of Aerial Images,” IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 16, pp. 2025–2034, 2023. 

[46] W. Zhou*(周武杰), C. Liu, J. Lei, and L. Yu, “Remaking learning: A Lightweight Network for Saliency Redetection on RGB-D Images,” SCIENCE CHINA Information Sciences, vol. 65, no. 5, Art. no. 160107, 2022. (CCF-A)

[47] W. Zhou*(周武杰), S. Dong, C. Xu, Y. Qian, “Edge-aware Guidance Fusion Network for RGB–Thermal Scene Parsing,” in Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI), vol. 36, no. 3, pp. 3571–3579, 2022. (CCF-A, top-tier AI conference)

[48] W. Zhou*(周武杰), Y. Yue, M. Fang, X. Qian, R. Yang, L. Yu, “BCINet: Bilateral Cross-Modal Interaction Network for Indoor Scene Understanding in RGB-D Images,” Information Fusion, vol. 94, pp. 32–42, 2023.

[49] W. Zhou*(周武杰), Y. Cai, X. Dong, F. Qiang, W. Qiu, “ADRNet-S*: Asymmetric depth registration network via contrastive knowledge distillation for RGB-D mirror segmentation,” Information Fusion, vol. 108, 2024, Art no. 102392.

[50] W. Zhou*(周武杰), L. Yu, Y. Zhou, W. Qiu, M.-W. Wu, Ting Luo, “Blind quality estimator for 3D images based on binocular combination and extreme learning machine,” Pattern Recognition, vol. 71, pp. 207–217, Nov. 2017. 

[51] W. Zhou*(周武杰), L. Yu, W. Qiu, Y. Zhou, M. Wu, “Local Gradient Patterns (LGP): an Effective Local Statistical Features Extraction Scheme for No-Reference Image Quality Assessment,” Information Sciences, vol. 397–398, pp. 1–14, Aug. 2017.

[52] S. Dong (graduate student), W. Zhou*, C. Xu, and W. Yan, "EGFNet: Edge-aware guidance fusion network for RGB–thermal urban scene parsing," IEEE Transactions on Intelligent Transportation Systems, vol. 25, no. 1, pp. 657–669, Jan. 2024.

[53] B. Wang (graduate student), W. Zhou*, W. Yan, Q. Jiang and R. Cong, “PENet-KD: Progressive Enhancement Network via Knowledge Distillation for Rail Surface Defect Detection,” IEEE Transactions on Instrumentation and Measurement, vol. 72, pp. 1–11, 2023, Art no. 5032811.

[54] X. Yang (graduate student), W. Zhou, W. Yan, X. Qian, “CAGNet: Coordinated attention guidance network for RGB-T crowd counting,” Expert Systems with Applications, vol. 243, 2024, Art no. 122753.

[55] X. Fan (graduate student), W. Zhou, X. Qian, W. Yan, “Progressive adjacent-layer coordination symmetric cascade network for semantic segmentation of multimodal remote sensing images,” Expert Systems with Applications, vol. 238, 2024, Art. no. 121999.

[56] J. Jin (graduate student), W. Zhou, L. Ye, J. Lei, L. Yu, X. Qian, T. Luo, “DASFNet: Dense-Attention–Similarity-Fusion Network for scene classification of dual-modal remote-sensing images,” International Journal of Applied Earth Observation and Geoinformation, vol. 115, 2022, Art. no. 103087.

[57] X. Guo (graduate student), W. Zhou, T. Liu, “Contrastive Learning-Based Knowledge Distillation for RGB-Thermal Urban Scene Semantic Segmentation,” Knowledge-Based Systems, doi: 10.1016/j.knosys.2024.111588.

[58] J. Wu (graduate student), W. Zhou, T. Luo, L. Yu, and J. Lei, “Multiscale multilevel context and multimodal fusion for RGB-D salient object detection,” Signal Processing, vol. 178, 2021, Art. no. 107766.

https://www.scholat.com/zhouwujie
Address: 318 Liuhe Road, Xihu District, Hangzhou, China