154 commits
0a5156b
headfixed2FC_task
seasonqiu May 3, 2023
3e8f75c
headfixed2FC_task
seasonqiu May 4, 2023
4e87eb0
Merge pull request #50 from SjulsonLab/deviation
seasonqiu May 11, 2023
7d90e4b
new tube_fit after new tubing replacement
seasonqiu May 16, 2023
14b5934
calibration
seasonqiu May 23, 2023
7da41ca
new task with headfixed_independent_reward
seasonqiu May 23, 2023
4d4dc00
new task with headfixed_independent_reward
seasonqiu May 23, 2023
15c9b59
new task with headfixed_independent_reward
seasonqiu May 23, 2023
a8b3355
new task with headfixed_independent_reward
seasonqiu May 23, 2023
88e9ff5
new task with headfixed_independent_reward
seasonqiu May 23, 2023
17efbb6
new task with headfixed_independent_reward
seasonqiu May 23, 2023
52d3c1c
new task with headfixed_independent_reward
seasonqiu May 23, 2023
2756158
new task with headfixed_independent_reward
seasonqiu May 23, 2023
cb1bdee
new task with headfixed_independent_reward
seasonqiu May 23, 2023
84fd937
calibration issue
seasonqiu May 23, 2023
b7cde61
calibration issue
seasonqiu May 23, 2023
f17c7b0
current_card
seasonqiu May 23, 2023
651fc66
current_card
seasonqiu May 23, 2023
1f109f5
changes related to exit reward
seasonqiu May 28, 2023
631be27
error repeat restructure
seasonqiu May 28, 2023
6f1bbda
first_trial_of_the_session
seasonqiu May 28, 2023
8eca2e2
session_info['wait_for_choice']
seasonqiu May 28, 2023
a57f9c3
HeadfixedIndependentRewardTask
seasonqiu May 29, 2023
c3cd6de
current_card index out of bound
seasonqiu May 29, 2023
b84968c
self.side_mice
seasonqiu May 29, 2023
b14e7b4
self.side_choice
seasonqiu May 29, 2023
74c2d47
self.side_mice
seasonqiu May 29, 2023
35d7ba1
reward_size
seasonqiu May 29, 2023
8c17372
reward_size
seasonqiu May 29, 2023
b903c59
reward_size
seasonqiu May 29, 2023
0b391f5
reward_size
seasonqiu May 29, 2023
fbf2cea
cue_state
seasonqiu May 29, 2023
f8ae77b
cue_state
seasonqiu May 29, 2023
b190995
cue_state
seasonqiu May 29, 2023
81b625d
reward_check print wrong variable reward_size
seasonqiu May 29, 2023
46d54f1
reward_check wrong choice error
seasonqiu May 29, 2023
e014bc1
self.wrong_choice_error
seasonqiu May 30, 2023
36d245b
no_choice_error
seasonqiu May 30, 2023
0bade5a
all reward_size
seasonqiu May 30, 2023
0766fe0
all reward_size
seasonqiu May 30, 2023
1f2b1fd
Merge pull request #54 from SjulsonLab/main
seasonqiu Jun 2, 2023
e777da2
Merge pull request #55 from SjulsonLab/independent_reward
seasonqiu Jun 2, 2023
02eeebc
new calibration branch
seasonqiu Jun 2, 2023
b6c6eea
Merge pull request #57 from SjulsonLab/main
seasonqiu Jun 2, 2023
50790cd
Merge pull request #58 from SjulsonLab/new_calibration
seasonqiu Jun 2, 2023
4d95df3
Merge pull request #59 from SjulsonLab/independent_reward
seasonqiu Jun 5, 2023
56f25bd
run_headfixed_independent_reward_task: avoid negative value emerge fr…
seasonqiu Jun 5, 2023
6182782
add log for reward coefficient
seasonqiu Jun 6, 2023
b98d5ae
show current trial reward condition
seasonqiu Jun 6, 2023
a344719
new round for reward delivery
seasonqiu Jun 6, 2023
1a98913
new round for reward delivery
seasonqiu Jun 6, 2023
a9b817e
new round for reward delivery
seasonqiu Jun 6, 2023
72fbd03
session_info reward coeff
seasonqiu Jun 6, 2023
24a8abe
session_info reward coeff
seasonqiu Jun 6, 2023
9c7ed2e
Merge pull request #60 from SjulsonLab/independent_reward
seasonqiu Jun 6, 2023
e931730
syringe_pump_debug
seasonqiu Jun 7, 2023
36a5484
self.reward_size_offset
seasonqiu Jun 8, 2023
3a4df90
reward_check bypass issue correction, please refer to the bug report …
seasonqiu Jun 11, 2023
00ad25d
redundant cue_off
seasonqiu Jun 11, 2023
e5dc5ed
behavbox keypress logging event specification
seasonqiu Jun 12, 2023
c300623
including the reward offset in the current_reward logging information
seasonqiu Jun 12, 2023
b985c71
session_info include the key_reward_amount
seasonqiu Jun 12, 2023
fb0f6c5
behavbox keypress logging event specification
seasonqiu Jun 12, 2023
49a7b7f
Merge pull request #61 from SjulsonLab/main
seasonqiu Jun 12, 2023
0793829
eliminating the bracket in reward logging information
seasonqiu Jun 14, 2023
af7e542
Merge branch 'independent_reward' of https://github.com/SjulsonLab/RP…
seasonqiu Jun 14, 2023
641c6d6
eliminating the bracket in reward logging information
seasonqiu Jun 14, 2023
83174bc
probability of drawing both side: 75%
seasonqiu Jun 16, 2023
9e4c641
redundant task.trial_number += 1
seasonqiu Jun 21, 2023
e6fd755
current_reward_size print on the error repeat trial
seasonqiu Jun 21, 2023
c56446b
Merge pull request #62 from SjulsonLab/independent_reward
seasonqiu Jun 21, 2023
ca10297
self.error_count += 1
seasonqiu Jun 22, 2023
e711063
early_lick_error
seasonqiu Jun 22, 2023
903d81a
punishment timeout and early_lick_error + cue_state_error conflict
seasonqiu Jun 22, 2023
fe767b7
incooperate the forced choice phase into the headfixed_independent_re…
seasonqiu Jun 22, 2023
1f3fc93
free choice 50%
seasonqiu Jun 23, 2023
28a9a03
free choice 50%
seasonqiu Jun 23, 2023
3921482
task_information_independent_reward list out of bound error
seasonqiu Jun 23, 2023
e100cdf
sine wave reward function incooperation
seasonqiu Jun 26, 2023
bc90a41
headfixed_independent_reward_task sine wave reward
seasonqiu Jun 27, 2023
cd3126b
headfixed_independent_reward_task sine wave reward
seasonqiu Jun 27, 2023
37f42ab
headfixed_independent_reward_task sine wave reward
seasonqiu Jun 27, 2023
9f1bc4b
Merge pull request #63 from SjulsonLab/independent_reward
seasonqiu Jun 27, 2023
afab496
reward_timeout
seasonqiu Jun 28, 2023
59993d5
Merge pull request #64 from SjulsonLab/independent_reward
seasonqiu Jun 28, 2023
4db05f9
correct trial number and total trial number differentiation
seasonqiu Jun 28, 2023
cc41b91
Merge pull request #65 from SjulsonLab/independent_reward
seasonqiu Jun 29, 2023
7174b6c
organization change
seasonqiu Jun 29, 2023
4318bd7
reorganization
seasonqiu Jun 29, 2023
54fa7df
reorganization
seasonqiu Jun 29, 2023
873d242
__init__.py in every directory
seasonqiu Jun 29, 2023
7339ad2
reorganization update: import essential
seasonqiu Jul 11, 2023
d65cbf6
ADS1x15
seasonqiu Jul 11, 2023
2fecc20
updating all the tasks with new organization
seasonqiu Jul 11, 2023
92c43df
update the debug directory with new reorganization
seasonqiu Jul 11, 2023
7a5dc3d
Update README.md
seasonqiu Jul 11, 2023
f135ef1
README update
seasonqiu Jul 12, 2023
5e58e55
README update
seasonqiu Jul 12, 2023
944f0a0
README update
seasonqiu Jul 12, 2023
11c31d4
adding foraging reward condition
soyounk Jul 12, 2023
651b4b4
Update run_headfixed_independent_reward_task.py
soyounk Jul 12, 2023
96fde11
Update run_headfixed_independent_reward_task.py
soyounk Jul 12, 2023
878ce68
Update run_headfixed_independent_reward_task.py
soyounk Jul 12, 2023
2905011
Update session_info_headfixed_independent_reward.py
soyounk Jul 13, 2023
362b61b
Update session_info_headfixed_independent_reward.py
soyounk Jul 13, 2023
520bbb6
Update run_headfixed_independent_reward_task.py
soyounk Jul 13, 2023
c1b506b
Update run_headfixed_independent_reward_task.py
soyounk Jul 13, 2023
7f94503
add fraction of free choices in the session info
soyounk Jul 13, 2023
596a941
Update run_headfixed_independent_reward_task.py
soyounk Jul 13, 2023
169b74e
Update task_information_independent_reward.py
soyounk Jul 13, 2023
000d691
adding free choice fraction
soyounk Jul 13, 2023
9d9c9a1
Update session_info_headfixed_independent_reward.py
soyounk Jul 13, 2023
11fa4a7
Update run_headfixed_independent_reward_task.py
soyounk Jul 14, 2023
7a40fd0
Merge pull request #66 from SjulsonLab/headfixed_soyounk
seasonqiu Jul 26, 2023
2bce35c
merging new changes Soyoun added for the independent_reward_task
seasonqiu Jul 27, 2023
6347726
merging new changes Soyoun added for the independent_reward_task
seasonqiu Jul 27, 2023
7d94047
Merge branch 'reorganization' into headfixed_soyounk
seasonqiu Jul 29, 2023
bfc6bc8
Merge pull request #67 from SjulsonLab/headfixed_soyounk
seasonqiu Jul 29, 2023
b4c190a
Merge pull request #68 from SjulsonLab/reorganization
seasonqiu Jul 29, 2023
1df9b75
gitattributes file for line ending normalization
mattchin35 Oct 26, 2023
e66574d
Remove obsolete directory
mattchin35 Oct 27, 2023
764462d
refactored for separation of different task types, matching main bran…
mattchin35 Nov 13, 2023
f7002e6
removed unneccesary TODO
mattchin35 Nov 14, 2023
fb6c422
dummy behavbox to run code without physical setup
mattchin35 Nov 17, 2023
3686434
updating with sample session files
mattchin35 Dec 8, 2023
d284db7
Debugging of saving .mat file and slight refactor of alternating late…
mattchin35 Jan 23, 2024
87eaa46
Minor updates; incomplete refactoring
mattchin35 Jan 24, 2024
9d31317
device-free debugging of the alternating_latent and latent_inference_…
mattchin35 Jan 26, 2024
a892649
More debugging of latent_inference task. Prep for physical debugging.
mattchin35 Jan 29, 2024
faf5a97
requirements file added
mattchin35 Jan 30, 2024
17a8aa1
Merge pull request #71 from mattchin35/matt-behavior
mattchin35 Jan 30, 2024
07aa7a3
Revert "Merge pull request #71 from mattchin35/matt-behavior"
mattchin35 Jan 30, 2024
21666bf
Add files via upload
juliabenville Sep 5, 2024
8385047
Merge pull request #179 from SjulsonLab/juliabenville_CueIVSA
juliabenville Sep 5, 2024
ad9c925
Update run_julia_test_self_admin.py
juliabenville Sep 17, 2024
3dd443a
Update session_info_julia_test_self_admin.py
juliabenville Sep 17, 2024
0d2fe7b
Update session_info_julia_test_self_admin.py
juliabenville Sep 25, 2024
89a8847
Update julia_test_self_admin.py
juliabenville Sep 25, 2024
4d0bb8b
Create julia_DCL_self_admin.py
juliabenville Sep 26, 2024
db47911
Update julia_DCL_self_admin.py
juliabenville Sep 26, 2024
020f867
Update julia_DCL_self_admin.py
juliabenville Sep 26, 2024
f0a26c9
Update julia_DCL_self_admin.py
juliabenville Sep 26, 2024
62e8da6
Create run_julia_IVSA.py
juliabenville Sep 26, 2024
561e490
Create session_info_julia_IVSA
juliabenville Sep 26, 2024
369225b
Update julia_DCL_self_admin.py
juliabenville Sep 26, 2024
d464209
Update julia_DCL_self_admin.py
juliabenville Sep 30, 2024
0cd9a23
Rename session_info_julia_IVSA to session_info_julia_IVSA.py
juliabenville Oct 2, 2024
5afc011
Delete session_info_julia_test_self_admin.py
juliabenville Nov 11, 2024
48a8a0c
Delete session_info_julia_IVSA.py
juliabenville Nov 11, 2024
26afdc0
Delete run_julia_test_self_admin.py
juliabenville Nov 11, 2024
de8aff4
Delete julia_DCL_self_admin.py
juliabenville Nov 11, 2024
0d04b68
Delete julia_test_self_admin.py
juliabenville Nov 11, 2024
38d5a2d
Delete run_julia_IVSA.py
juliabenville Nov 11, 2024
b98cbb3
add sound files in the sound folder
soyounk Mar 24, 2026
23 changes: 23 additions & 0 deletions README.md
@@ -1,2 +1,25 @@
# RPi4_behavior_boxes
BehavBox is a Raspberry Pi-based system that provides a foundation for building animal behavior training and experiments.

# Quick Start
All task examples are in `task_protocol`. Each task has a run file, a task file, a task information file, and an example session_information file.

To run a task, first create a session_information directory and session_information file in a path outside the git repo (the home directory is suggested), since the file has to be reconfigured every day for training purposes.

After configuring the session_information path so it points to the session_information file for the current date, run `python3 run_x_task.py` to start the task.

# Example
To start running the task `headfixed_task.py`:
1. Set up and configure the `session_information_year-month-date.py` file:
first, create the experimental_data directory in the home directory: <br />
`$ mkdir ~/experimental_data`<br />
second, create the necessary subdirectories for the specific task: <br />
`$ mkdir ~/experimental_data/headfixed_task`<br />
`$ mkdir ~/experimental_data/headfixed_task/session_info`<br />
then, copy the session_information example file corresponding to the specific task: <br />
`$ cp ~/RPi4_behavior_boxes/task_protocol/headfixed_task/session_info_headfixed_independent_reward.py ~/experimental_data/headfixed_task/session_info` <br />
2. Modify the newly created session_info file: <br />
`$ sudo nano ~/experimental_data/headfixed_task/session_info/session_info_headfixed_independent_reward.py` <br />
**KEEP IN MIND**: after configuring the session_information file, manually change the date field to the day of the experiment session and rename the file from `session_info_headfixed_independent_reward.py` to `session_info_year-month-date.py`. Otherwise an error will occur when running the task file.
3. After setting up the session_info file, run the task: <br />
`$ python3 ~/RPi4_behavior_boxes/task_protocol/headfixed_task/run_headfixed_task.py`
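The mkdir and cp steps above can be condensed into a short shell sketch. The `BASE` variable and the `touch` placeholder (standing in for the `cp` of the example file, since the repo may not be present) are illustrative additions, not part of the repo:

```shell
# Condensed sketch of steps 1-2; BASE defaults to the README's suggested home directory.
BASE="${BASE:-$HOME}/experimental_data/headfixed_task"
mkdir -p "$BASE/session_info"
# step 2 requires renaming the copied example to today's date
TODAY="$(date +%Y-%m-%d)"
touch "$BASE/session_info/session_info_${TODAY}.py"
ls "$BASE/session_info"
```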
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
45 changes: 45 additions & 0 deletions debug/generate_reward_trajectory.py
@@ -0,0 +1,45 @@
import numpy as np

def cumsum_positive(input_list):
    # cumulative sum that reflects at zero so the running total stays non-negative
    for index in range(len(input_list)):
        if index == 0 and input_list[index] < 0:
            input_list[index] = -input_list[index]
        elif input_list[index] + input_list[index - 1] < 0:
            # reflect the would-be-negative running sum back above zero
            input_list[index] = -(input_list[index] + input_list[index - 1])
        else:
            input_list[index] = input_list[index] + input_list[index - 1]
    return input_list

def generate_reward_trajectory(scale=0.5, offset=3.0, change_point=20, ntrials=200):
    # initial reward (needs to be randomized)
    rewards_L = [1]
    rewards_R = [1]
    for _ in np.arange(np.round(ntrials / change_point)):
        temp = np.random.randn(change_point) * scale
        rewards_L.append(cumsum_positive(temp) + offset)
        temp = np.random.randn(change_point) * scale
        rewards_R.append(cumsum_positive(temp) + offset)
    rewards_L = np.hstack(rewards_L)
    rewards_R = np.hstack(rewards_R)
    # plt.plot(rewards_L, 'b'); plt.plot(rewards_R, 'r--')
    reward_LR = np.transpose(np.array([rewards_L, rewards_R]))
    reward_LR = reward_LR[0:ntrials, :]
    return reward_LR

scale = 0.5
offset = 3.0
change_point = 20
ntrials = 100

reward_distribution_list = generate_reward_trajectory(scale, offset, change_point, ntrials)

print(reward_distribution_list)
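The intent of `cumsum_positive` — a cumulative sum reflected at zero so reward values never go negative — can be sketched standalone. The step values below are hypothetical:

```python
def reflected_cumsum(steps):
    # running sum of `steps`, reflected at zero whenever it would go negative
    out = []
    total = 0.0
    for step in steps:
        total += step
        if total < 0:
            total = -total  # reflect instead of going negative
        out.append(total)
    return out

traj = reflected_cumsum([-1.0, -2.0, 0.5, -0.25])
print(traj)  # [1.0, 1.0, 1.5, 1.25] — never negative
```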
File renamed without changes.
File renamed without changes.
3 changes: 3 additions & 0 deletions pump_debug.py → debug/pump_debug.py
@@ -1,6 +1,9 @@
# pump_debug.py: tests the Pump class and the pump hardware itself
import sys
import time
# updated with reorganization (on 7/11/2023)
sys.path.insert(0, '/home/pi/RPi4_behavior_boxes/essential')
from behavbox import Pump
# from fake_session_info import fake_session_info
side = str(sys.argv[1])
File renamed without changes.
File renamed without changes.
File renamed without changes.
24 changes: 24 additions & 0 deletions debug/reward_distribution.py
@@ -0,0 +1,24 @@
import numpy as np

def generate_reward_trajectory(scale=0.5, offset=3.0, change_point=20, ntrials=200):
    # initial reward (needs to be randomized)
    rewards_L = [1]
    rewards_R = [1]
    for _ in np.arange(np.round(ntrials / change_point)):
        temp = np.random.randn(change_point) * scale
        rewards_L.append(np.cumsum(temp, axis=-1) + offset)
        temp = np.random.randn(change_point) * scale
        rewards_R.append(np.cumsum(temp, axis=-1) + offset)
    rewards_L = np.hstack(rewards_L)
    rewards_R = np.hstack(rewards_R)
    # plt.plot(rewards_L, 'b'); plt.plot(rewards_R, 'r--')
    reward_LR = np.transpose(np.array([rewards_L, rewards_R]))
    reward_LR = reward_LR[0:ntrials, :]
    print(reward_LR)
    return reward_LR

# generate_reward_trajectory()
# session_information['reward']['scale']
# session_information['reward']['offset']
# session_information['reward']['change_point']
# session_information['reward']['ntrials']
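The per-block random-walk construction above can be checked standalone with a seeded generator. This is a sketch, not the repo's code: it omits the initial `[1]` reward element and draws both sides from one generator.

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility
scale, offset, change_point, ntrials = 0.5, 3.0, 20, 100

def walk(n_blocks):
    # each block is a cumulative sum of scaled Gaussian steps, shifted by `offset`
    return np.hstack([np.cumsum(rng.standard_normal(change_point) * scale) + offset
                      for _ in range(n_blocks)])

n_blocks = int(np.round(ntrials / change_point))
reward_LR = np.column_stack([walk(n_blocks), walk(n_blocks)])[:ntrials]
print(reward_LR.shape)  # (100, 2): one reward trajectory per side
```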
93 changes: 93 additions & 0 deletions debug/syringe_pump_pygame_debug.py
@@ -0,0 +1,93 @@
import sys
import time
import logging
import pygame
from gpiozero import LED  # each pump line is driven as a simple GPIO output

# updated with reorganization (on 7/11/2023); a local copy of the Pump class is
# defined below for standalone debugging, so the behavbox Pump import is not needed
sys.path.insert(0, '/home/pi/RPi4_behavior_boxes/essential')

class Pump(object):
    def __init__(self, session_info=None):  # session_info is unused in this debug copy
        self.pump1 = LED(19)
        self.pump2 = LED(20)
        self.pump3 = LED(21)
        self.pump4 = LED(7)
        self.pump_air = LED(8)
        self.pump_vacuum = LED(25)
        # a list of (pump_x, reward_amount) tuples recording reward history
        # for data visualization
        self.reward_list = []
    def reward(self, which_pump, reward_size):
        # calibration coefficients (slope, intercept); normally imported from session_info
        coefficients = {"1": [0.13, 0], "2": [0.13, 0], "3": [0.13, 0], "4": [0.13, 0]}
        duration_air = 1
        duration_vac = 1
        pumps = {"1": self.pump1, "2": self.pump2, "3": self.pump3, "4": self.pump4}

        if which_pump in pumps:
            coeff = coefficients[which_pump]
            # linear calibration: on-time (s) = slope * volume (mL) + intercept
            duration = round((coeff[0] * (reward_size / 1000) + coeff[1]), 5)
            pumps[which_pump].blink(duration, 0.1, 1)
            self.reward_list.append(("pump" + which_pump + "_reward", reward_size))
            logging.info(";" + str(time.time()) + ";[reward];pump" + which_pump +
                         "_reward(reward_coeff: " + str(coeff) +
                         ", reward_amount: " + str(reward_size) +
                         ", duration: " + str(duration) + ")")
        elif which_pump == "air_puff":
            self.pump_air.blink(duration_air, 0.1, 1)
            self.reward_list.append(("air_puff", reward_size))
            logging.info(";" + str(time.time()) + ";[reward];air_puff_" + str(reward_size))
        elif which_pump == "vacuum":
            self.pump_vacuum.blink(duration_vac, 0.1, 1)
            logging.info(";" + str(time.time()) + ";[reward];pump_vacuum_" + str(duration_vac))
try:
    pump = Pump(None)  # no session_info is needed for this debug script
    pygame.init()
    DISPLAYSURF = pygame.display.set_mode((200, 200))
    pygame.display.set_caption("pygame debugging")
except Exception as error_message:
    print(str(error_message))

run = True
reward_size = float(input("Input reward_size: "))

while run:
    for event in pygame.event.get():
        if event.type == pygame.KEYDOWN and event.key == pygame.K_ESCAPE:
            run = False
        if event.type == pygame.KEYDOWN:
            if event.key == pygame.K_q:
                print("Q down: syringe pump 1")
                pump.reward("1", reward_size)
            if event.key == pygame.K_w:
                print("W down: syringe pump 2")
                pump.reward("2", reward_size)
            if event.key == pygame.K_e:
                print("E down: syringe pump 3")
                pump.reward("3", reward_size)
            if event.key == pygame.K_r:
                print("R down: syringe pump 4")
                pump.reward("4", reward_size)
            if event.key == pygame.K_a:
                print("A down: air puff on")
                pump.reward("air_puff", reward_size)
            if event.key == pygame.K_s:
                print("S down: vacuum on")
                pump.reward("vacuum", 1)
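The pump on-time in `reward()` is a linear calibration from reward volume. A standalone sketch of that arithmetic, where the `(0.13, 0)` coefficients are the script's placeholder values (normally supplied by session_info) and volume is passed in microliters:

```python
def pump_duration(reward_size_ul, coeff=(0.13, 0.0)):
    # on-time (s) = slope * volume (mL) + intercept; input volume is in uL
    return round(coeff[0] * (reward_size_ul / 1000) + coeff[1], 5)

print(pump_duration(5.0))    # 0.13 * 0.005 mL = 0.00065 s
print(pump_duration(100.0))  # 0.13 * 0.1 mL = 0.013 s
```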
File renamed without changes.
1 change: 1 addition & 0 deletions environment/__init__.py
@@ -0,0 +1 @@

1 change: 1 addition & 0 deletions essential/ADC/__init__.py
@@ -0,0 +1 @@

File renamed without changes.
File renamed without changes.
File renamed without changes.
1 change: 1 addition & 0 deletions essential/RTC/__init__.py
@@ -0,0 +1 @@

File renamed without changes.
File renamed without changes.
1 change: 1 addition & 0 deletions essential/__init__.py
@@ -0,0 +1 @@
