ITS students in the Master of Engineering (MEng) program shared the results of their 9-month capstone projects at Engineering Today to Lead Tomorrow, the 8th annual UC Berkeley Master of Engineering Capstone Showcase, held Friday, May 10, 2019, from 5-7pm in the Pauley Ballroom at the ASUC Student Union.
The event featured 10 tracks and 85 teams presenting dynamic, collaborative engineering research projects from the academic year, created by students and advised by faculty. ITS faculty-advised projects appeared in the Autonomous Vehicles and Transportation: Aero-Space & Vehicular Traffic tracks.
Congratulations to ITS student Fangyu Wu for earning the M.Eng. Technical Contribution Award, recognizing superb leadership skills, combining research with excellence in technical and leadership coursework, demonstrating technical leadership in the field, and working across multiple teams (link to follow), and to Farah Hassan for earning an honorable mention for the Fung Institute Technical Contributions Award.
Other tracks included: Advances in Manufacturing; Robotics; IoT & Wireless Communication; Health & Well-Being: Data-Driven Insights; Health & Well-Being: New Tools & Therapies; Cryptocurrency, Financial Technologies & Strategy; Energy and Environment; Machine Learning: New Tools & Applications; and VR: New Tools & Applications. View project abstracts: https://funginstitute.berkeley.edu/capstone-showcase-2019/
“The MEng program was designed for students of all different backgrounds to collaborate with each other and apply their expertise to a wider scope of issues,” said Coleman Fung, Fung Institute Founder and Chairman, at the opening of the event, which highlighted social-impact solutions ranging from building hands for children to creating fuel for deep-space travel.
In addition to the student researchers and their advisers, the event drew attendees from across several industries, along with Berkeley students, faculty, and alumni, to hear about the latest innovations in engineering technology.
Groups with ties to ITS include:
Tracking the efficacy of airport infrastructure upgrades to see whether they reduce outage occurrences, flight delays, and cancellations under extreme weather conditions
Team: Eric Lo [IEOR], Bryan Li [CEE], Youyang Zhang [IEOR]
Advisor: Jasenka Rakas [CEE]
In this project, the team set out to determine whether fortifications have reduced the vulnerability of critical aviation infrastructure to severe weather events such as lightning strikes. The team began with a case analysis of Baltimore-Washington International Airport (BWI), examining facility reliability and flight performance. To obtain more convincing results, the team gradually expanded its expertise into data science and network science, proposing to investigate the relationships among all of the facilities located within an airport. The results of this project are building a strong foundation for a future prediction model to help the FAA make smarter investment decisions.
An operating system to expand the ground transport network into the sky
Team: Anirudh Chatty [IEOR], Hugo Roucau [IEOR], Jad Osseiran [IEOR], Ambroise Levrard [IEOR], Carl Mario Britto [IEOR], Arthur Aumont [CEE]
Advisor: Jasenka Rakas [CEE]
Due to the ever-increasing population of urban environments, the transportation system needs to be redesigned to mitigate congestion and create an environment that is more city-centric and needs-driven. A promising, yet unexplored, alternative is to treat the air as a new means of transit. Our project tackles one of the biggest issues in Urban Air Mobility: creating a scalable optimization model that gives the user locations for vertiports, platforms that allow aircraft to take off and land vertically, in the Bay Area. The model then optimizes the air traffic using these locations, subject to the other constraints set forth.
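Vertiport siting is a form of the classical facility-location problem. The team's actual model, candidate sites, and constraints are not described here, so the following is only an illustrative sketch: a brute-force p-median selection that picks k vertiport sites minimizing total distance from demand points to their nearest vertiport.

```python
from itertools import combinations

def place_vertiports(candidates, demand_points, k):
    """Pick k of the candidate sites minimizing total demand-to-nearest-site
    distance (a p-median objective). Coordinates are illustrative (x, y) pairs.

    Brute-force enumeration is only viable for tiny instances; a real
    Bay Area-scale model would be formulated as a mixed-integer program.
    """
    def dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    best_sites, best_cost = None, float("inf")
    for sites in combinations(candidates, k):
        # Each demand point is served by its nearest chosen vertiport.
        cost = sum(min(dist(d, s) for s in sites) for d in demand_points)
        if cost < best_cost:
            best_sites, best_cost = sites, cost
    return best_sites, best_cost
```

A production model would add capacity, airspace, and land-use constraints and hand the formulation to a MILP solver rather than enumerating combinations.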
Transforming the way we travel to make it safer, faster and more environmentally friendly
Team: Farah Hassan [CEE], Leon Wu [CEE], Simon Zhu [CEE], Will Xu [ME] and Xiaoran Wang [CEE]
Advisors: Alexander Skabardonis [CEE] and David Kan [CEE]
Around 1.25 million people die in road crashes every year, with 20-50 million more injured or disabled. Cars release 333 million tons of CO2 into the atmosphere annually, and people spend up to 210 additional hours on the road due to congestion. It is time to revolutionize urban transportation to ease these traffic woes. Connected and Automated Vehicles (CAVs) provide an exciting opportunity to enhance urban mobility while reducing the environmental impact of transportation. Our team has developed a microsimulation model of CAV behavior along a stretch of signalized intersections to validate and quantify the host of benefits that CAVs bring in terms of improved safety, reduced travel times and delays, and lower vehicle emissions.
Enhancing the public transit system by reducing travel time and improving reliability.
Team: Paula Carlosena [CEE], Ariel Jarvis [CEE], Priyanka Pandey [IEOR], Sherry Qian [CEE]
Advisor: Alexander Skabardonis [CEE]
Our team’s objective is to improve the public transportation service in the city of San Francisco by giving buses priority over other modes of transportation at traffic signals, also known as Transit Signal Priority (TSP). Active TSP modifies signal timings to benefit buses. We have partnered with the San Francisco Municipal Transportation Agency (SFMTA) to simulate active TSP along Geary Street, one of San Francisco’s busiest corridors. Successful implementation of active TSP would reduce delay for public transit buses without significantly delaying other modes of transportation. With simulations that effectively improve bus service, the SFMTA can move forward with real world implementation starting on Geary Street followed by other locations across the city. Successful implementation would benefit bus riders by reducing travel time and increasing bus reliability. By improving the bus service, the city is encouraging people to choose public transit over cars which results in a smaller carbon footprint.
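The core of active TSP can be illustrated with a simple green-extension rule: if a bus is predicted to arrive at the stop line shortly after the green phase would end, hold the green a few extra seconds so it clears the intersection. This is a generic sketch, not the SFMTA simulation itself; the parameters are illustrative.

```python
def adjusted_green_end(green_end_s, bus_eta_s, max_extension_s=10):
    """Return the (possibly extended) end time of the green phase, in
    seconds from now.

    green_end_s: scheduled end of the green phase.
    bus_eta_s: predicted bus arrival at the stop line.
    Extends the green just enough for the bus to make it, capped at
    max_extension_s so cross-street traffic is not starved.
    """
    if bus_eta_s <= green_end_s:
        return green_end_s  # bus already makes the green; no change
    extension = min(bus_eta_s - green_end_s, max_extension_s)
    return green_end_s + extension
```

Real deployments pair this with "early green" (truncating the conflicting red) and with recovery logic that repays the borrowed time on subsequent cycles.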
Smart transportation: Deep reinforcement learning to optimize traffic through autonomous vehicle and traffic light control
Team: Kaila Cappello [EECS], Lucas Fischer [IEOR], Anna Matsokina [CEE], Umang Sharaf [EECS], Fangyu Wu [EECS], Crystal Yan [EECS], Xiao Zhao [EECS]
Advisor: Alexandre M. Bayen [EECS and CEE]
With the introduction of autonomous driving, we can begin to explore the impacts of this new technology and the existing problems that can be solved as a result. Our team has narrowed its focus to solving the issues of traffic and safety. Using reinforcement learning techniques, we have developed AI algorithms that reduce traffic congestion and increase safety by controlling traffic lights and autonomous vehicles. By developing a city-scale scenario, our team was able to train our network to optimize throughput that can be scaled to improve traffic flow and address safety in an urban setting.
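The general idea of reinforcement-learning signal control can be sketched with tabular Q-learning on a toy two-approach intersection. Everything below is a hedged illustration, not the team's environment or algorithm: the arrival rates, queue caps, and reward are made up.

```python
import random

def step(queues, action):
    """Toy intersection: action 0 gives green to north-south, 1 to east-west.
    Each step a car arrives on each approach with probability 0.4, and the
    green direction discharges up to 2 cars. Reward = -(total queue length),
    so the agent is pushed to keep queues short."""
    ns, ew = queues
    ns += random.random() < 0.4
    ew += random.random() < 0.4
    if action == 0:
        ns = max(0, ns - 2)
    else:
        ew = max(0, ew - 2)
    return (min(ns, 5), min(ew, 5)), -(ns + ew)

def train(episodes=2000, steps=30, alpha=0.1, gamma=0.9, eps=0.1):
    """Tabular Q-learning over (ns_queue, ew_queue) states."""
    random.seed(0)
    Q = {(ns, ew): [0.0, 0.0] for ns in range(6) for ew in range(6)}
    for _ in range(episodes):
        s = (random.randrange(6), random.randrange(6))  # random start state
        for _ in range(steps):
            # Epsilon-greedy action selection.
            a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
            s2, r = step(s, a)
            # Standard Q-learning update toward the bootstrapped target.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

After training, the greedy policy serves whichever approach has the longer queue. City-scale work replaces the table with a neural network and the toy dynamics with a microsimulator, but the learning loop has the same shape.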
Saving your precious time through efficient traffic control strategies
Team: Jeff Peng [IEOR], Samuel Lin [IEOR], I-Chen Lee [IEOR], Uday Kumar Rathenahalli Krishnappa [IEOR], Zhe Zhou [ME], Qifan Yao [ME]
Advisor: Gabriel Gomes [ME]
Inefficient freeway traffic management based on outdated traffic models contributes significantly to congestion. We fed traffic flow data from the California Department of Transportation into our Open Traffic Model to build a simulation of the portion of the I-210 freeway most prone to congestion. By applying reinforcement learning to control the model, we have devised better traffic control strategies that optimize traffic flow.
Street sign recognition: make driving easier and safer
Team: Robbie Hu [ME], Meng Pan [ME], Haoyuan Tan [ME], Louis Tilloy [IEOR]
Advisors: Gabriel Gomes [ME], Lars Tornberg [Volvo], Sohini Roy Chowdhury [Volvo]
Imagine that you are driving on the highway into the sun, and the light shines brightly in your eyes. You do not see the speed limit sign, and a police officer on the side of the road with a speed detector catches you speeding. You did not do it on purpose, but you have to pay the $200 fine. It could be worse: on average, about 3,000 fatalities are caused by stop-sign accidents alone. In other words, missing a street sign can be far more serious. Our project is to train AI in a robust way to help drivers detect street signs in all weather conditions, which will help safe autonomous driving become a reality. With our project, we will be able to reduce traffic accidents and save your money and, more importantly, your life.
Accelerating autonomous driving research through improved data collection
Team: Jack He [EECS], Chester Mu [EECS], Hasith Rajakarunanayake [EECS], Antoine Roux [EECS], Likai Wei [EECS], Craig Young [EECS]
Advisors: Trevor Darrell [EECS], Gary Chen [EECS], Fisher Yu [EECS]
Autonomous vehicles have the potential to make transportation safer and more efficient. One challenge to their development, however, is that research requires huge amounts of data to train and test the algorithms, and the process of collecting this data is complicated and resource-intensive. To address this challenge, we have developed a system that enables researchers to collect data by remotely piloting a vehicle while viewing real-time images and information from the sensors on the car. Our solution encompasses the whole data collection process: sensor information is forwarded through the car’s onboard computer to the user via an online web server. It collects data such as GPS position, live video from the cameras, and LiDAR measurements. This enables the user to see what the car is experiencing in a human-friendly way, without the cost and inconvenience of running the experiments in person. Our hope is that researchers, especially those without the funding for an autonomous car prototype, can benefit from our system. More efficient data collection will accelerate research in autonomous driving and bring us safer, faster transportation.
Creating a faster and more cost-efficient open-source pipeline for the creation of large-scale image sets.
Team: Louis Renaux [EECS], Matthew Yu [EECS], Griffin Wu [EECS], Ryan Peck [EECS], Junsheng Pei [EECS], Daniel Benniah John [EECS]
Advisors: Trevor Darrell [EECS], Gary Chen [EECS], Fisher Yu [EECS]
The technology of autonomous driving stands to greatly benefit humanity. Teaching the on-board computer to classify objects in pictures of the car’s surroundings is one of the most important technical challenges in autonomous driving. Current machine learning algorithms are good at this but need more data to continue to improve. Publicly available image sets such as ImageNet cannot meet the demands of the industry, so we are working to create a large-scale database for the open-source community. Our project uses active learning to partially automate our data collection. With this magnification of human effort, this dataset will be unique in its size relative to other open-source projects and will provide the community with an important base upon which they can customize their own systems to further the field of autonomous driving.
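In labeling pipelines, active learning typically means uncertainty sampling: route only the images the current model is least confident about to human annotators, and auto-label the rest. The team's actual selection criterion is not specified, so this entropy-based ranking is purely an illustrative sketch.

```python
import math

def entropy(probs):
    """Shannon entropy of a predicted class distribution (nats).
    High entropy = the model is unsure which class the image belongs to."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(predictions, k):
    """Pick the k unlabeled images the model is least sure about.

    predictions: {image_id: [class probabilities]} from the current model.
    Returns the ids with the highest prediction entropy; these go to human
    annotators, while confident predictions can be auto-labeled and the
    model retrained on the growing set.
    """
    ranked = sorted(predictions, key=lambda i: entropy(predictions[i]), reverse=True)
    return ranked[:k]
```

Iterating this loop concentrates expensive human effort on the examples that most improve the model, which is where the "magnification of human effort" comes from.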
Smarter Sensing for Autonomous Urban Driving
Team: Scout Heid [ME], Byron Ho [ME], Haonan Xu [ME], Layton Hu [ME]
Advisor: Francesco Borrelli [ME]
The two most difficult aspects of autonomous driving are obstacle detection and control. We have developed a platform on the Robot Operating System (ROS) that combines radar, lidar, and camera data to categorize and analyze objects in real time, then feeds the data into a Model Predictive Controller (MPC). We work closely with the Berkeley MPC lab to integrate our system with their research goals, and use a Hyundai G80 to test and validate results. The end result is a real-time pedestrian and vehicle detection system and controller with an interactive UI to guide the car autonomously around a city block. We hope that this platform and data can be used to enhance future controllers and work towards cars sharing this data to further optimize their path planning and safety features.
Leave Google Maps errors behind: autonomous driving using sensor fusion for localization and plotting destinations
Team: Patrick Tomsky [ME], Mattia Pelissou [ME], Jiakai Liu [ME], Jiawei Song [ME], Alfred Zhao [ME]
Advisor: Francesco Borrelli [ME]
Many current autonomous vehicles suffer from poor obstacle detection when relying on a single sensor, as well as from incorrect navigation information provided by different map applications. Our team is dedicated to developing sensor fusion technology, which analyzes and integrates data from LIDAR, radar, and cameras. This approach will improve the accuracy and reliability of object detection and localization for autonomous vehicles.
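A standard building block of sensor fusion is combining independent estimates of the same quantity, weighted by each sensor's confidence. As an illustration only (the sensor variances below are invented, and the team's actual fusion pipeline is not described), inverse-variance weighting gives the optimal linear combination for independent Gaussian errors:

```python
def fuse_measurements(measurements):
    """Fuse independent estimates of the same quantity (e.g. range to an
    obstacle from lidar, radar, and camera) by inverse-variance weighting.

    measurements: list of (value, variance) pairs.
    Returns (fused_value, fused_variance); the fused variance is always
    smaller than that of the best single sensor, which is why fusing
    complementary sensors beats relying on any one of them.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(measurements, weights)) / total
    return value, 1.0 / total
```

A full pipeline wraps this idea in a Kalman or particle filter so that estimates are also propagated and fused over time, but the per-update arithmetic is the same.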