UC Berkeley
Electrical Engineering and Computer Science
Software Engineer | 2018 - Present
Building products for transparency in digital advertising.
UC Berkeley
Undergraduate Student Instructor (CS 188) | Spring 2018
I lead discussion sections, hold office hours, and manage homework administration for CS 188: Artificial Intelligence.
Software Engineering Intern | Summer 2017
I worked on the Data Center Tools team to build a comprehensive debugging tool for our suite of data center planning and design applications. The tool goes beyond unit tests by replaying prerecorded actions that cover the main use cases and validating the applications' states afterward. This lets developers see immediately whether a software change compromises application functionality.
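The record/replay idea described above can be sketched as follows. This is a minimal illustration, not the actual tool: the `AppState`, action tuples, and `replay` helper are all invented names, standing in for a real application and its recorded user interactions.

```python
# Minimal sketch of record/replay validation: prerecorded actions are
# replayed against a fresh application state, and the final state is
# compared to a known-good expected result. All names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AppState:
    """Toy application state: a list of placed items."""
    items: list = field(default_factory=list)

    def apply(self, action):
        kind, payload = action
        if kind == "add":
            self.items.append(payload)
        elif kind == "remove" and payload in self.items:
            self.items.remove(payload)

def replay(recording, expected_items):
    """Replay a recording from a fresh state; validate the final state."""
    state = AppState()
    for action in recording:
        state.apply(action)
    return state.items == expected_items

# A recording covering one main use case, with its expected final state.
recording = [("add", "rack-1"), ("add", "rack-2"), ("remove", "rack-1")]
print(replay(recording, ["rack-2"]))  # True: behavior is intact
```

A change that alters how any replayed action behaves would make `replay` return `False`, flagging the regression without hand-written assertions for each use case.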
Software Engineering Intern | Summer 2016
I worked on the optical network design team, developing software that integrates new optical components and architectures into existing design tools. My project models and visualizes the power and noise propagation of signals through proposed optical link designs.
Stony Brook University
Research Intern | 2014 - 2015
At the SBU Computer Vision Lab, I conducted research on automatic action classification in images using human gaze data. See the "Research" section for more information.
Personal Website
You're at my personal website right now! This site was my first dive into web development. It contains links to my resume and other relevant documents, descriptions of my projects and research, and contact information.
PleaseTutorMe is a web application designed to connect clients with available tutors within minutes. Users can search for tutors by subject, or offer to tutor others in subjects they are qualified to teach. A Google Maps UI allows users to find nearby available tutors and decide on a convenient meetup location. When a user requests a tutoring session, the tutor is notified via text using Twilio. Created at HackingEDU 2015 in San Mateo, CA.
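The notification step might look something like the sketch below. The message format, phone numbers, and the `build_sms`/`notify_tutor` helpers are hypothetical, not PleaseTutorMe's actual code; the only real API referenced is Twilio's `client.messages.create`, which is left optional here so the sketch runs without credentials.

```python
# Hedged sketch of the tutor-notification step. Helper names, numbers, and
# message wording are made up; only Twilio's messages.create call is real,
# and it is only invoked if a configured client is passed in.
def build_sms(student, subject, location):
    return (f"{student} requested a tutoring session in {subject}. "
            f"Suggested meetup: {location}.")

def notify_tutor(tutor_number, body, client=None):
    """Send the SMS via Twilio if a client is provided; return the body."""
    if client is not None:  # e.g. twilio.rest.Client(sid, token)
        client.messages.create(to=tutor_number,
                               from_="+15550000000",  # placeholder sender
                               body=body)
    return body

msg = notify_tutor("+15551234567",
                   build_sms("Alex", "CS 188", "Moffitt Library"))
print(msg)
```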
Bearmaps is a mapping and route-finding web application I created for my data structures and algorithms course. It maps the city of Berkeley and surrounding areas in the East Bay using tile images and location data from the OpenStreetMap project.
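The route-finding core of an application like this is a shortest-path search over a graph of intersections. Below is a hedged sketch using Dijkstra's algorithm on a tiny made-up graph; Bearmaps itself works over OpenStreetMap location data, and the node names and distances here are illustrative.

```python
# Dijkstra's algorithm over a small adjacency-list graph. Edge weights
# stand in for road distances between intersections; the graph is toy data.
import heapq

def shortest_path(graph, start, goal):
    """Return (distance, path) from start to goal, or (inf, []) if none."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == goal:  # reconstruct the path by walking back
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return float("inf"), []

# Toy graph: each entry maps a node to (neighbor, distance) pairs.
graph = {
    "A": [("B", 1.0), ("C", 4.0)],
    "B": [("C", 2.0), ("D", 5.0)],
    "C": [("D", 1.0)],
}
print(shortest_path(graph, "A", "D"))  # (4.0, ['A', 'B', 'C', 'D'])
```

A production router would typically add an A* heuristic (straight-line distance to the goal) to prune the search, but the structure is the same.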
Action Classification in Still Images Using Human Gaze
Under the guidance of Kiwon Yun and Profs. Dimitris Samaras and Greg Zelinsky at the SBU Image Analysis Lab, I researched the application of human gaze data to computer vision algorithms for action recognition in still images. Using gaze data collected with a tower-mounted eye tracker, I created novel gaze features that, when used to train SVM classifiers, improved on classifiers trained with state-of-the-art visual features. Moreover, I identified behaviorally meaningful groups of action classes that elicit similar gaze patterns. By blurring the border between psychology and computer vision, I uncovered new insights into the way humans interpret images while opening a new direction for the advancement of image analysis algorithms.
Convolutional neural network (CNN) features were derived from images to train a baseline action classifier, while novel gaze features were derived from subject data to create a gaze action classifier. Confidence scores from both classifiers were weighted and combined to make a final classification decision.
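The fusion step described above can be sketched as a weighted sum of per-class confidence scores, with the final label taken by argmax. The weight, class names, and score values below are illustrative placeholders, not the actual values used in the research.

```python
# Weighted late fusion of two classifiers' confidence scores. The weight
# w and all example scores/classes are made up for illustration.
def fuse_scores(visual_scores, gaze_scores, w=0.6):
    """Weight w on the visual (CNN-feature) classifier, 1 - w on gaze."""
    return [w * v + (1 - w) * g
            for v, g in zip(visual_scores, gaze_scores)]

classes = ["reading", "phoning", "running"]
visual = [0.5, 0.3, 0.2]   # baseline classifier confidences
gaze = [0.2, 0.7, 0.1]     # gaze classifier confidences

fused = fuse_scores(visual, gaze)
prediction = classes[max(range(len(fused)), key=fused.__getitem__)]
print(prediction)  # phoning: the gaze classifier tips the decision
```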

Action Recognition in Still Images Using Human Eye Movements
Gary Ge, Kiwon Yun, Dimitris Samaras, and Gregory J. Zelinsky
The 2nd Vision Meets Cognition Workshop at the Conference on Computer Vision and Pattern Recognition (CVPR) 2015, Boston, USA

How We Look Tells Us What We Do: Action Recognition Using Human Gaze
Kiwon Yun, Gary Ge, Dimitris Samaras, and Gregory J. Zelinsky
Vision Sciences Society (VSS) 2015, Florida, USA
[Abstract] [Poster]


Ask about my work and plans, meet me for coffee, or just say hi. Send a message to connect, or explore the links below to find out more.