Curriculum Vitae
Vishal Chauhan
Doctoral Candidate (D3) · Tsukada Laboratory · Department of Creative Informatics
The University of Tokyo · Tokyo, Japan
vishalchauhan@outlook.sg · Scholar · GitHub · LinkedIn · Tsukada Lab
Summary
I work on human–machine interaction, autonomous vehicles, and robotics — designing and prototyping pedestrian–AV interaction approaches, then validating them through VR/AWSIM simulation and real-world experiments. In parallel, I support the Autoware engineering team at Tier IV: integrating the latest Autoware nearly daily, running scenario tests with physical robots, and contributing upstream improvements that strengthen scenario-test reliability across GitHub Actions CI.
Education
Graduate School of Information Science and Technology, The University of Tokyo
Fellowship for Creation of Intelligent World. Advisor: Prof. Manabu Tsukada.
The University of Tokyo · GPA 3.0/3.0
Todai Fellowship.
Vel Tech University (Chennai, India) and Nanyang Technological University (Singapore) · GPA 8.37/10
Prime Minister Scholarship · NTU Fellowship.
Experience
- Autoware Universe: created JSON schemas, updated docs/README, resolved PR feedback across multiple packages, maintained PR tracking and delivered merged updates.
- Pilot.Auto S1: updated pilot-auto.s1 workspace; executed planning simulation and scenario tests with physical robot; synced parameters with awf-latest and verified SWB3 simulations; implemented health-check / debug scripting.
- Evaluation: ran repeatable planning checks and documented failure patterns and reproduction notes to speed up triage and debugging.
- Upstream CI: contributed merged fixes to Autoware scenario-test GitHub Actions (workflow stability, dependency installation, scenario-test workflow / version updates), improving CI reliability.
- Implemented an end-to-end audio recording and transcription proof-of-concept in Flutter/Dart using flutter_sound, with OpenAI Whisper API integration.
- Built a SoundVisualizer widget for real-time amplitude visualization; delivered a tested feature branch plus setup documentation for pilot users on NAM mic and Even Reality AR glasses.
- Repository: github.com/vish0012/EvenDemoApp
- Built VR scenarios and evaluated human vs. LLM decision models for safety-critical pedestrian–AV interaction.
- Related publication: Towards the Future of Pedestrian–AV Interaction (IJHCS, 2025).
- Explored tensor-based learning methods (e.g., AlphaTensor) for advanced data processing.
- Studied applications of tensor learning to pedestrian–AV interaction and system performance.
- Analyzed and visualized satellite operations data using Grafana dashboards.
- Wrote automation scripts to optimize satellite capture scheduling and reporting.
- Developed walkable-region detection for mobile robots using Intel RealSense D435 RGB-D sensing.
- Improved tracking accuracy through ICP fusion of color and depth for hospital and commercial robot scenarios.
Open-source contributions
- Improved scenario-test CI reliability — workflow stability, health-check, dependency fixes — making regression runs more reproducible.
- Created and updated configuration schemas and developer documentation across Autoware Universe to support consistent setup and onboarding.
- Built Unity / AWSIM scenarios and scoring criteria to compare interaction behaviors and evaluation outcomes for eHMI research.
- Packaged the pipeline into a reusable, reproducible simulation testbed — scenarios, metrics, and documentation — enabling repeatable experiments and cross-scenario comparison.
Publications
Listed in reverse chronological order. See the publications page for thumbnails, abstracts, and links, or Google Scholar for the full list with citation metrics.
Awards & honours
Don't Worry, Just Follow Me
Peeking Ahead of the Field Study (VLM Personas)
Conference & seminar participation
Lake Tahoe, USA (2022) and Blonay, Switzerland (2025)
Cooperative perception, motion planning, AI/data for mobility, implicit and explicit human–AV interaction, infrastructure eHMI.
UTokyo × Doshisha, with industry mentors from Tier IV, Digital Agency, NeoCareer, ASMobility.
Teaching & mentoring
VR experiment design, AWSIM scenario authoring, field-study logistics for pedestrian–AV research.
Skills
- Programming: Python, C++, C#
- Robotics & simulation: ROS 2, Autoware, CARLA, AWSIM, Unity
- Tools: Git, Linux, Docker
- Data & analysis: Pandas, NumPy, Matplotlib, evaluation analysis, LLMs
- Testing & debugging: scenario regression testing, GitHub Actions (CI/CD), parameter validation
- Research: experimental design, eHMI prototyping, human-subject study preparation (ethics/IRB), questionnaire design & analysis
Certifications
Languages
- English: Fluent
- Hindi: Native
- Japanese: Basic
Updated May 2026.