Virtual Try-On via 3D Mapping with 3D Model Segmentation
Hamed Fathi 1, Alireza Ahmadyfard 2, Hossein Khosravi 3
1 - Faculty of Electrical Engineering, Shahrood University of Technology, Shahrood, Iran
2 - Faculty of Electrical Engineering, Shahrood University of Technology, Shahrood, Iran
3 - Faculty of Electrical Engineering, Shahrood University of Technology, Shahrood, Iran
Keywords: virtual try-on, 3D mapping, Laplace-Beltrami, curvature descriptor
Abstract:
Virtual try-on is an attractive option for the online clothing industry. In this paper, we propose a method for mapping the 3D model of a selected garment onto the customer's 3D model. To this end, the point clouds of the customer and of a mannequin are captured with a Kinect camera. To ease matching, these models are segmented into corresponding parts using surface descriptors. The parts of the mannequin are then mapped individually onto the corresponding parts of the customer. Finally, the color information of the clothes on the mannequin is transferred to the customer's body point cloud. The proposed method has two main advantages over existing methods. First, no expert is needed to design 3D models in graphics software. Second, the customer can choose any style and texture of clothing. Experimental results demonstrate the capability of the proposed method compared with existing methods.
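The abstract outlines a multi-stage pipeline: point-cloud capture, segmentation into corresponding parts, part-wise mapping from mannequin to customer, and color transfer. The sketch below is not the authors' implementation; it only illustrates the final color-transfer idea under simplifying assumptions. It uses the Open3D library, hypothetical file paths, and a rigid ICP alignment in place of the paper's non-rigid, part-wise mapping, followed by nearest-neighbour color copying.

```python
# Minimal sketch (assumptions: Open3D available, colored point clouds on disk,
# rigid ICP as a stand-in for the paper's non-rigid part-wise mapping).
import numpy as np
import open3d as o3d

def transfer_clothing_colors(mannequin_path, customer_path, icp_threshold=0.02):
    # Load the colored mannequin cloud and the customer cloud (hypothetical paths).
    mannequin = o3d.io.read_point_cloud(mannequin_path)
    customer = o3d.io.read_point_cloud(customer_path)

    # Coarse rigid alignment of the mannequin onto the customer.
    reg = o3d.pipelines.registration.registration_icp(
        mannequin, customer, icp_threshold, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    mannequin.transform(reg.transformation)

    # For every customer point, copy the color of the closest mannequin point.
    tree = o3d.geometry.KDTreeFlann(mannequin)
    mannequin_colors = np.asarray(mannequin.colors)
    new_colors = np.empty((len(customer.points), 3))
    for i, p in enumerate(np.asarray(customer.points)):
        _, idx, _ = tree.search_knn_vector_3d(p, 1)
        new_colors[i] = mannequin_colors[idx[0]]
    customer.colors = o3d.utility.Vector3dVector(new_colors)
    return customer
```

In the paper's setting, the correspondence would instead come from the part-wise non-rigid mapping between segmented regions; the nearest-neighbour step here merely shows how per-point color is carried across once such a correspondence exists.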