AbhikChowdhury6/MLPortfolio
Table of contents

  1. Data and ML
    1. Fine-tuning LLMs for Improved Emotional Reasoning
    2. Future Task Rewards in Multiagent Reinforcement Learning
    3. Continuous Glucose Monitoring Modeling
    4. Wrist-worn Camera Object Identification
    5. Sleep Tracker Data Analysis
    6. CPR Practice App Algorithm Development
  2. Talks and Awards
    1. Vision Hack Computer Vision Hackathon in Moscow (Russia) - Most Innovative Award
    2. Bias in NLP Talk for LA Tech for Good Equity in Data Workshop Series
    3. “DIY Wearables” Workshop Conducted at Media Lab
  3. Stakeholder Engagement and Business
    1. Eyes On - Augmenting Nonvisual Travel Using Haptic Interfaces
    2. Wearable Technology Class in The Fashion School at ASU
  4. Other Work on Wearable Devices
    1. Resonea - Wearable Microphone for Sleep Analysis
    2. Compression Sleeve for Basketball Analysis
    3. Sound Responsive Dress (writeup in progress)
  5. Data Engineering (writeup in progress)
    1. Kubernetes Network Configuration
    2. Hybrid Cloud Setup
  6. Modeling Dynamics (writeup in progress)
    1. Unsafe Trajectory Modeling
    2. Optimal Stock Policy Simulation

Machine Learning and Data Analysis

Fine-tuning LLMs for Improved Emotional Reasoning

TL;DR We found that fine-tuning LLMs on responses from empathetic human listeners improved their performance on emotional reasoning tasks and that fine-tuning on poor listeners degraded their performance.

Emotionally contextualized responses from chatbots have been shown to improve users’ trust in systems. In this work, we fine-tuned four different LLMs on the PAIR (Prompt-Aware margIn Ranking) dataset, which consists of conversational responses annotated by expert counselors as reflecting “good listening” or “bad listening.”

We then evaluated those models’ performance on a random sample of 500 questions from the SocialIQA dataset. We found that fine-tuning on interactions with good listeners improved accuracy across all models, and accuracy was degraded when fine-tuning on interactions with poor listeners.
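The PAIR acronym (Prompt-Aware margIn Ranking) points at a ranking objective over paired good and bad responses to the same prompt. As an illustration only (the function names and scores below are hypothetical, not the project's actual training code), the core hinge-style ranking idea can be sketched as:

```python
# Minimal sketch of a margin-ranking objective: given model scores for a
# "good listener" and a "bad listener" response to the same prompt, the loss
# is zero once the good response outscores the bad one by at least `margin`,
# and grows linearly with how badly the pair is misranked.

def margin_ranking_loss(score_good: float, score_bad: float, margin: float = 1.0) -> float:
    """Hinge-style ranking loss for one (good, bad) response pair."""
    return max(0.0, margin - (score_good - score_bad))

def batch_loss(pairs, margin: float = 1.0) -> float:
    """Average the ranking loss over a batch of (score_good, score_bad) pairs."""
    return sum(margin_ranking_loss(g, b, margin) for g, b in pairs) / len(pairs)
```

With a margin of 1.0, a correctly ranked pair that clears the margin contributes nothing, so training pressure concentrates on prompts where the model prefers the poor listener's response.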

The code can be found here: https://github.com/AbhikChowdhury6/LLMsModelingGoodListeners

Future Task Rewards in Multiagent Reinforcement Learning (RL)

TL;DR We extended an RL paper from DeepMind, which proposed incorporating rewards from potential future tasks to incentivize an agent not to generate negative side effects. Our extension was to apply this technique to multiagent environments.

When building agents to carry out complex real-world tasks, it is often difficult to program how to complete a desired task directly, so the field of Reinforcement Learning (RL) instead provides a framework in which agents discover how to complete tasks, given their capabilities in an environment, by collecting rewards. While RL can be a powerful tool that enables agents to discover skills that would be intractable to program manually, the learned behaviors can also lead to a number of unintended consequences. One category of these unintended consequences is referred to as Negative Side Effects (NSE).

The problem of NSEs arises because, when describing the change you would like an agent to cause, you must also ensure the agent knows about all the changes you would not like it to cause. For example, if the shortest path to a described goal involves knocking over a vase, breaking a wall, or crashing a car along the way, the agent may do those things, since it was never explicitly told everything not to do.

To address this, the original paper proposed the future-task approach, which treats the current task as part of a sequence of unknown tasks and provides an auxiliary reward to the agent for preserving its potential to complete various future tasks. We rebuilt the environment, agents, and reward functions using OpenAI Gym and its PettingZoo multiagent framework.
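The intuition behind the auxiliary reward can be sketched in a few lines (names and the specific shaping formula here are hypothetical, not the paper's or our implementation): the agent earns a bonus for keeping hypothetical future tasks achievable, so irreversible side effects like a broken vase carry an implicit cost.

```python
# Illustrative sketch of future-task reward shaping: the per-step reward is
# the ordinary task reward plus a bonus proportional to the fraction of
# candidate future tasks that remain achievable from the current state.

def shaped_reward(task_reward: float,
                  future_tasks: list,
                  achievable: set,
                  beta: float = 0.5) -> float:
    """Task reward plus beta times the fraction of future tasks still achievable."""
    if not future_tasks:
        return task_reward
    still_possible = sum(1 for t in future_tasks if t in achievable) / len(future_tasks)
    return task_reward + beta * still_possible
```

Under this shaping, an agent that smashes the vase on its way to the goal forfeits part of the bonus, because "move the vase" drops out of the achievable set.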

We found that introducing another agent into the environment did not degrade the agents' ability to avoid certain negative side effects, though aspects of the original environment's reward system cast doubt on the validity of those results for the team.

The code can be found here: https://github.com/wgrier-asu/teamSAi

The presentation can be found here: https://docs.google.com/presentation/d/1iIqESR-mCCwjwUynaICx2lK2DRKtYrp4ygw-gEcjrXI

Continuous Glucose Monitoring Modeling

This project aimed to classify whether a window of blood-sugar data contained a meal.

The training data was originally formatted as a time series of blood-sugar levels with markers for meal times, which I segmented into meal and non-meal windows. Because the dataset was small, I generated standard features for each window to assist the classifier, then tested an SVM classifier with various kernels using k-fold cross-validation.
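As a rough sketch of the kind of standard per-window features involved (the exact feature set here is hypothetical, not the project's): summary statistics plus the steepest single-step rise, which tends to spike after a meal.

```python
# Hypothetical per-window feature extraction for meal/non-meal classification:
# each window of glucose readings is reduced to a few summary features that a
# kernel SVM can separate even with little training data.

def window_features(glucose: list) -> dict:
    """Summary features for one window of blood-sugar readings."""
    n = len(glucose)
    mean = sum(glucose) / n
    var = sum((g - mean) ** 2 for g in glucose) / n
    diffs = [b - a for a, b in zip(glucose, glucose[1:])]
    return {
        "mean": mean,
        "std": var ** 0.5,
        "range": max(glucose) - min(glucose),
        "max_rise": max(diffs) if diffs else 0.0,  # steepest single-step increase
    }
```

In practice these feature vectors would be fed to something like scikit-learn's SVC, with the kernel and regularization chosen by k-fold cross-validation.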

The code can be found here: https://github.com/AbhikChowdhury6/CGMcomp

Wrist-worn Camera Object Identification

I turned a concept into a series of devices that capture camera views from the wrist. We developed an end-to-end object recognition demo using TensorFlow, validating the device's use as a memory augmentation. This work has so far enabled over $500k in grants to continue the research, along with multiple patents.

The code can be found here: https://github.com/AbhikChowdhury6/handcam

Sleep Tracker Data Analysis

I aligned sleep data from three wrist-based sleep trackers, a smart ring, and an under-mattress tracker, then analyzed inconsistencies in their predicted sleep stages and their performance at tracking polyphasic sleep.

At the time, the Amazon Halo band (purple) and the Pillow sleep-tracking app on the Apple Watch (blue) did not register naps; the Oura Ring 2 (green) did not attempt to predict sleep stages during naps; the Fitbit Charge 4 (orange) would attempt sleep-stage tracking only for longer naps of around two hours or more; and the Withings Sleep pad (red) tracked well, but of course could not sense naps taken outside of bed.
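Comparing trackers like this comes down to aligning each device's predicted sleep intervals on a common timeline. A minimal sketch (the data format is hypothetical, not the analysis code): each tracker reports (start, end) minutes, and the overlap tells you how much of a nap two devices agreed was sleep.

```python
# Illustrative alignment of two trackers' predicted sleep intervals: each
# interval is (start_minute, end_minute); the overlap is the total time both
# devices report as sleep. Intervals within one tracker are assumed disjoint.

def overlap_minutes(intervals_a, intervals_b) -> int:
    """Total minutes where both trackers report sleep."""
    total = 0
    for a_start, a_end in intervals_a:
        for b_start, b_end in intervals_b:
            total += max(0, min(a_end, b_end) - max(a_start, b_start))
    return total
```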

The presentation can be found here: https://docs.google.com/presentation/d/1ItZ8jAxl0TGqR20lN6VzB8wObBdyKx0Q8bgiz4pZ7_0/edit?usp=sharing and code here: https://github.com/AbhikChowdhury6/qsPrez

CPR Practice App Algorithm Development

I contributed to a mobile app used to train hundreds of people in hands-only CPR; my contribution was the accelerometer signal processing used to derive the depth and frequency of compressions.
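The frequency half of that processing can be sketched simply (this is a simplified illustration, not the app's production algorithm): count local maxima in the vertical-axis acceleration trace and convert the peak count into compressions per minute.

```python
# Simplified compression-rate estimate from an accelerometer trace: detect
# local maxima above a threshold, then scale the peak count by the window
# duration to get compressions per minute.

def compression_rate(accel: list, sample_rate_hz: float, threshold: float = 1.0) -> float:
    """Compressions per minute, estimated from peaks in the acceleration signal."""
    peaks = [
        i for i in range(1, len(accel) - 1)
        if accel[i] > threshold and accel[i] > accel[i - 1] and accel[i] >= accel[i + 1]
    ]
    duration_min = len(accel) / sample_rate_hz / 60.0
    return len(peaks) / duration_min
```

The guideline target for hands-only CPR is roughly 100–120 compressions per minute, so a rate estimate like this gives the user immediate pacing feedback; depth requires additional processing (e.g., double-integrating acceleration), which is omitted here.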

Talks and Awards

Vision Hack Computer Vision Hackathon in Moscow (Russia) - Most Innovative Award

I represented Arizona State University at the VisionHack computer vision hackathon in Moscow, Russia. The competition focused on identifying objects relevant to self-driving cars in video clips. Our team prioritized classical machine learning techniques for explainability and won the Most Innovative award.

Bias in NLP Talk for LA Tech for Good Equity in Data Workshop Series

In 2021, I presented on common biases in natural language processing systems, and common approaches to addressing them, for LA Tech For Good's spring cohort. The presentation can be found here: https://docs.google.com/presentation/d/1zwXn2MowSf9qtVjamijvSSwvJ7sJleuu-Hu3Di8ZAOw/

“DIY Wearables” Workshop Conducted at Media Lab

For the 2019 Community Biosummit at the MIT Media Lab, I ran a workshop where participants made a breathing-sensor chest strap. The sensor exhibited a variety of transient effects, so I wrote code to process the signal.
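One common way to handle slow transient drift in a stretch sensor like this is to subtract a moving-average baseline so only the faster breathing oscillation remains. A minimal sketch (illustrative, not the workshop's exact code):

```python
# Illustrative baseline removal for a drifting stretch-sensor signal:
# subtract a centered moving average (window clamped at the edges) so slow
# transients are removed while breathing oscillations are preserved.

def detrend(signal: list, window: int = 5) -> list:
    """Subtract a centered moving average from each sample."""
    half = window // 2
    out = []
    for i, x in enumerate(signal):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        baseline = sum(signal[lo:hi]) / (hi - lo)
        out.append(x - baseline)
    return out
```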

Stakeholder Engagement and Business

Eyes On - Augmenting Nonvisual Travel Using Haptic Interfaces

I implemented multiple devices in collaboration with Bryan Duarte, whose PhD research involved building and testing haptic wearables to augment nonvisual travel. During the course of the research, we ran human-factors studies with individuals who were blind.

Wearable Technology Class in The Fashion School at ASU

In the spring of 2020, I co-instructed a wearable technology class in The Fashion School at Arizona State University. Funded by a grant from the Global Sports Institute, students developed minimum viable products and pitched solutions to sport-related problems.

Other Work on Wearable Devices

Resonea - Wearable Microphone for Sleep Analysis

Resonea, a biosignal processing company, developed an algorithm to detect various sleep features from audio recorded by users' smartphones. I built a wearable sensor platform that integrated into their existing signal-processing pipeline.

Compression Sleeve for Basketball Analysis

I built a data-logging compression sleeve as a platform for algorithms that analyze free throws, and I wrote the firmware and data-validation software. (Pictured is a different motion logger I designed using similar sensors.)

Data Engineering (writeup in progress)

Kubernetes Network Configuration

Hybrid Cloud Setup
