Projects
Welcome to the project page of Brainhack Western 2022!
Check out projects from 2021 here.
The projects are ordered by when participants added them. Here's the list if you're just browsing.
Safer Multimodal Teleoperation of (Simulated) Robots
Pranshu Malik (@pranshumalik14)
Being confused, freezing, or panicking while trying hard to stop, redirect, or stabilize a drone (or any such robot/toy) in a sudden, counter-intuitive pose or environmental condition is likely a relatable experience for all of us. The idea here is to enhance the expression of our intent while controlling a robot remotely, either in real life or on a computer screen (simulation). Rather than replacing the primary instrument of control (modality), we would also integrate our brain states (thought) into the control loop, as measured, for example, through EEG. Specifically, within the scope of the hackathon, this could mean developing a brain-machine interface that automatically assists the operator in emergencies with "smart" control-command reflexes or "takeovers". Such an approach can be beneficial in high-risk settings, such as the remote handling of materials in nuclear facilities, and it can also aid the supervision of autonomous control, say in the context of self-driving cars, to ultimately increase safety.
For now, we could pick a particular type of simulated robot (industrial arm, RC car, drone, etc.) and focus on designing and implementing a paradigm for characterizing intended motion, and surprise during undesired motion, in both autonomous cases (no user control, only the robot's self- and environmental influences) and semi-autonomous cases (including the user's control commands). That is, we can aim to measure intent and surprise given the user's control commands, brain states, and robot states during algorithmically curated episodes of robot motion. This will help us detect such situations and infer the desired reactions, so that we can adjust control commands accordingly during emergencies and, more generally, augment real-time active control to match the desired motion. We can strive to keep the approach general enough to carry over to robots of other types and/or morphologies and to more unusual environments.
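To make the shared-control idea concrete, here is a minimal TypeScript sketch of the kind of loop described above. Everything in it is a hypothetical placeholder, not the project's actual design: the `Command` shape, the takeover threshold, and the assumption that an EEG pipeline exposes a scalar "surprise" score in [0, 1].

```typescript
// Hypothetical shared-control loop: blends the operator's command with an
// autonomous safety command based on an EEG-derived "surprise" score in [0, 1].
// All types, names, and thresholds here are illustrative assumptions.

interface Command {
  linear: number;  // forward/backward velocity
  angular: number; // turn rate
}

// Assumed to be provided by an EEG decoding pipeline (e.g., a surprise or
// error-potential classifier); here it is just an opaque function.
type SurpriseEstimator = () => number;

// Assumed autonomous controller that proposes a stabilizing command from the
// robot's current state (e.g., hover in place for a drone).
type SafetyController = () => Command;

const TAKEOVER_THRESHOLD = 0.8; // above this, the "reflex" fully takes over

function blendCommands(operator: Command, safety: Command, surprise: number): Command {
  // Below the threshold, interpolate smoothly between the operator's and the
  // safety controller's commands; above it, hand over control entirely.
  const w = surprise >= TAKEOVER_THRESHOLD ? 1 : surprise;
  return {
    linear: (1 - w) * operator.linear + w * safety.linear,
    angular: (1 - w) * operator.angular + w * safety.angular,
  };
}

function controlStep(
  operatorCommand: Command,
  estimateSurprise: SurpriseEstimator,
  safetyController: SafetyController,
): Command {
  return blendCommands(operatorCommand, safetyController(), estimateSurprise());
}
```

In a real system the blending policy would need to be tuned (or learned) per robot and per operator; the linear interpolation above is only a stand-in for whatever arbitration the project ends up designing.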
Brain3DVis: AR/MR MRI Visualizer
Liam Bilbie (@LiamBilbie)
An application with a web-based front end where users can submit brain MRI volumes. The data is then processed on the backend, a 3D model is generated, and the result becomes accessible in an AR/VR environment. We are using the MERN stack for the web component and C#/Unity for the AR/VR application. We would love to work with people who have experience with MRI volumetric segmentation or an interest in VR/AR data visualization. 🙂
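As a rough sketch of the upload path on the MERN backend, here is what the Express endpoint might look like. The route, field name, output format, and `generateModel` step are assumptions for illustration, not the project's actual API:

```typescript
// Minimal sketch of an MRI-volume upload endpoint on the Express backend.
// Route names, field names, and the processing step are hypothetical.
import express from "express";
import multer from "multer";

const app = express();
const upload = multer({ dest: "uploads/" }); // store raw MRI volumes on disk

// Hypothetical processing step: segment the volume and generate a 3D mesh
// that the Unity AR/VR client can later fetch.
async function generateModel(filePath: string): Promise<string> {
  // ...segmentation and meshing would happen here...
  return `${filePath}.glb`; // assumed output format for the AR/VR viewer
}

app.post("/api/volumes", upload.single("mri"), async (req, res) => {
  if (!req.file) {
    return res.status(400).json({ error: "No MRI volume uploaded" });
  }
  const modelPath = await generateModel(req.file.path);
  res.status(201).json({ model: modelPath });
});

app.listen(3000, () => console.log("Brain3DVis backend listening on :3000"));
```

Serving the generated mesh to the Unity client could then be a plain static-file route or a MongoDB-backed download endpoint, depending on how the team wires up the rest of the stack.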