A new paper led by Wojciech Zajkowski and Dominik Krezeminski has been published in Computational Brain & Behavior. This study examined the neurocognitive processes underlying voluntary decisions by integrating cognitive modelling of behavioural responses with EEG recordings in a probabilistic reward task. Human participants made binary choices between pairs of unambiguous cues associated with identical reward probabilities at different levels. Higher reward probability shortened response times (RTs), and at each probability level participants chose one cue faster and more frequently than the other. The behavioural effects on RT persisted in simple reactions to single cues. Using hierarchical Bayesian parameter estimation for an accumulator model, we showed that the probability and preference effects were independently associated with changes in the speed of evidence accumulation, but not with visual encoding or motor execution latencies. Time-resolved multivariate pattern analysis (MVPA) of EEG-evoked responses identified significant representations of reward certainty and preference as early as 120 ms after stimulus onset, with spatial relevance patterns maximal over middle central and parietal electrodes. Furthermore, EEG-informed computational modelling showed that the rate of change between the N100 and P300 event-related potentials modulated accumulation rates on a trial-by-trial basis. Our findings suggest that reward probability and spontaneous preference jointly shape voluntary decisions between equal options, providing a mechanism that prevents indecision or random behaviour.

The paper is now available online.

This week we welcome our new PhD student Isabella Colic. Isabella will work on MEG and decision-making processes.

The COVID-19 pandemic has impacted almost every aspect of our lives, including the way we conduct research. This article shares our experience of running behavioural experiments online, with some hints on how to make the process smoother.

There are commercial products available for online testing (e.g., Testable, Inquisit, or Gorilla). However, if you opt for open-source solutions, there is no single tool that solves the whole problem. So, once you have designed your experiment, you need to make a few choices:

1. Programming toolkit for your experiment

Online experiments typically run in a web browser, so they need to be implemented in a programming language that the browser understands, such as JavaScript (JS) together with HTML and CSS. Fortunately, there are several JS modules that provide ready-to-use components for online studies:

Lab.js – probably the easiest option for people with little web-development experience. It has a nice web interface but still requires some programming.

PsychoJS – recommended for those already familiar with PsychoPy, as PsychoPy Builder can automatically convert an existing experiment into JS, albeit with limited functionality.

jsPsych – our current choice, as it is easier to use when designing an experiment from scratch. Its website provides good tutorials and many predefined plugins, e.g., a random dot motion (RDK) task. The community is also quite active on the discussion forum.
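To give a flavour of jsPsych, here is a minimal sketch of an experiment timeline (jsPsych 6-style syntax; the stimulus text and button labels are made up for illustration):

```javascript
// A jsPsych timeline is just an array of plain trial objects.
// 'html-keyboard-response' and 'html-button-response' are built-in plugins.
var welcome = {
  type: 'html-keyboard-response',
  stimulus: '<p>Welcome! Press any key to begin.</p>'
};

var choice = {
  type: 'html-button-response',
  stimulus: '<p>Which cue do you prefer?</p>',
  choices: ['Left cue', 'Right cue']
};

var timeline = [welcome, choice];

// In the browser, jsPsych is loaded via a <script> tag; the guard lets
// this sketch also run outside a browser without error.
if (typeof jsPsych !== 'undefined') {
  jsPsych.init({ timeline: timeline });
}
```

Because trials are plain objects, the same timeline can be assembled programmatically (e.g., pushing one trial object per stimulus pair in a loop).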

2. Hosting platform for the experiment

You can host the experiment wherever you want: on a university server, GitHub Pages, or another hosting provider. Our choice is Pavlovia, as it provides a nice ecosystem for hosting experiment code and saving data, built on top of git. After you share your experiment publicly, Pavlovia saves the data on its server, so you can download it with a single command: git pull.
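The workflow then looks roughly like this (the repository path is a placeholder; your experiment's actual URL is shown on its Pavlovia project page):

```shell
# Clone your experiment's repository from Pavlovia's GitLab instance once:
git clone https://gitlab.pavlovia.org/<username>/<experiment>.git
cd <experiment>

# As participants complete the study, their result files accumulate on the
# server; fetch the latest data with a single command:
git pull
```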

Here is a tutorial on how to integrate jsPsych with Pavlovia: https://pavlovia.org/js-psych

Note that Pavlovia requires a paid license or the purchase of storage credits.

3. Recruitment platform

Finally, it is time to recruit your participants. For pilots, you may recruit from local participant panels in the usual way (e.g., SONA, as used in many UK universities). One advantage of online experiments is that you can use existing services to reach a large participant pool. We evaluated two solutions: Amazon Mechanical Turk and Prolific. We eventually chose Prolific because of its easy-to-use prescreening procedure.
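Prolific passes each participant's ID to your study URL as a query parameter (PROLIFic_PID appears in the URL as PROLIFIC_PID), which you will want to record alongside your data. A minimal, toolkit-agnostic sketch of parsing it (the helper name getProlificId is ours):

```javascript
// Extract the Prolific participant ID from a URL query string.
// URLSearchParams is standard in browsers (and Node.js >= 10).
function getProlificId(queryString) {
  var params = new URLSearchParams(queryString);
  return params.get('PROLIFIC_PID'); // null if the parameter is absent
}

// Example: the ID below is made up for illustration.
// In the browser you would pass window.location.search instead.
var pid = getProlificId('?PROLIFIC_PID=5f1a2b3c4d5e6f');
```

In jsPsych the returned value can then be attached to every trial, e.g. via jsPsych.data.addProperties.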

Prolific gives you a unique completion code, which participants submit at the end of the experiment to confirm that they finished the task. In jsPsych, the final trial of your experiment can look like this:

var theend = {
  type: 'html-button-response',
  stimulus: "<p>This is the end of the experiment.<br/>" +
    "You can come back to Prolific now by clicking the link below:</p>",
  // Replace <EXPERIMENT UNIQUE CODE> with the completion code from your Prolific study page
  choices: ["<a href='https://app.prolific.co/submissions/complete?cc=<EXPERIMENT UNIQUE CODE>'>Press to finish</a>"]
};
timeline.push(theend);

With all this you are basically ready to go! But there are some important details you need to be careful with.


A new paper by Dominik Krezeminski has been published in Network Neuroscience. We extended an energy landscape method, derived from a pairwise maximum entropy model (pMEM), to quantify the occurrence probability of network states in resting-state MEG oscillatory power. The pMEM provided a good fit to the binarized MEG oscillatory power in both patients with juvenile myoclonic epilepsy (JME) and controls. Patients with JME exhibited fewer local minima of the energy and elevated energy values compared with controls, predominantly in the fronto-parietal network across multiple frequency bands. Furthermore, multivariate features constructed from the energy landscapes allowed significant single-patient classification. Our results highlight the pMEM as a descriptive, generative, and predictive model for characterizing atypical functional network properties in brain disorders.

The paper is now available online.

Congratulations to Dr Szul on passing his PhD viva this week. Well done!

And we are enjoying a group Christmas dinner.