About Me
Hi! My name is Jiacheng Zhang. I am a second-year Ph.D. student at the University of Michigan, advised by Professor Steve Oney.
My research aims to improve Human-AI interaction by identifying and addressing usability and conceptual gaps between users and the AI systems they interact with.
My recent research explores AI-assisted web agents that collaborate with users to handle online tasks such as information retrieval and decision-making.
I earned my Bachelor's degree in Computer Science from the University of Michigan, and I also hold a Bachelor's degree in Electrical and Computer Engineering from Shanghai Jiao Tong University.
During my undergraduate years, I was fortunate to work with Prof. Xinyu Wang and
Prof. Tianyi Zhang on web automation systems. I was also advised by
Prof. Andrew Owens on multimodal learning and computer vision.
I'm open to research collaborations. Feel free to drop me an email (jiache [at] umich [dot] edu) if you are interested in my research.
News
- [2023/07] "Generating Visual Scenes from Touch" was accepted to ICCV 2023.
- [2023/06] "MIWA: Mixed-Initiative Web Automation for Better User Control and Confidence" was accepted to UIST 2023.
- [2022/09] "Touch and Go: Learning from Human-Collected Vision and Touch" was accepted to NeurIPS 2022.
Education
- Ph.D. in Information Science
  University of Michigan, Ann Arbor, US
- B.S.E. in Computer Science | Minor in Statistics (Summa Cum Laude)
  University of Michigan, Ann Arbor, US
- B.S.E. in Electrical and Computer Engineering
  Shanghai Jiao Tong University, Shanghai, China
Research
Generating Visual Scenes from Touch
Fengyu Yang, Jiacheng Zhang, Andrew Owens
ICCV, 2023
We use diffusion to generate images from a touch signal (and vice versa).
[project page] [paper]
Touch and Go: Learning from Human-Collected Vision and Touch
Fengyu Yang*, Chenyang Ma*, Jiacheng Zhang, Jing Zhu, Wenzhen Yuan, Andrew Owens
NeurIPS (Datasets and Benchmarks Track), 2022
A dataset of paired vision-and-touch data collected by humans. We apply it to: 1) restyling an image to match a tactile input, 2) self-supervised representation learning, 3) multimodal video prediction.
[project page] [paper] [dataset]
MIWA: Mixed-Initiative Web Automation for Better User Control and Confidence
Weihao Chen, Xiaoyu Liu, Jiacheng Zhang, Zhicheng Huang, Ian Long Lam, Rui Dong, Xinyu Wang, Tianyi Zhang
UIST, 2023
We present MIWA, a mixed-initiative web automation system that enables users to create web scraping programs by demonstrating what content they want from the targeted websites.
[paper]