
Decoding Design: A Guide to Understanding and Applying the Principles of Design



Maggie is past president of the Communication Artists of New Mexico, teaches logo design and symbolism as visual literacy for designers at the University of New Mexico/Albuquerque, speaks at conferences and universities, and gives workshops on creating more effective, engaging, and aesthetic visual communications based on universal principles.




Decoding Design



There are two basic problems in the statistical analysis of neural data. The "encoding" problem concerns how information is encoded in neural spike trains: can we predict the spike trains of a neuron (or population of neurons), given an arbitrary stimulus or observed motor response? Conversely, the "decoding" problem concerns how much information is in a spike train, in particular, how well we can estimate the stimulus that gave rise to the spike train. This chapter describes statistical model-based techniques that in some cases provide a unified solution to these two coding problems. These models can capture stimulus dependencies as well as spike history and interneuronal interaction effects in population spike trains, and are intimately related to biophysically based models of integrate-and-fire type. We describe flexible, powerful likelihood-based methods for fitting these encoding models and then for using the models to perform optimal decoding. Each of these (apparently quite difficult) tasks turns out to be highly computationally tractable, due to a key concavity property of the model likelihood. Finally, we return to the encoding problem to describe how to use these models to adaptively optimize the stimuli presented to the cell on a trial-by-trial basis, so that we can infer the optimal model parameters as efficiently as possible.
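
To make the likelihood-based fitting concrete, here is a minimal sketch of fitting a linear-nonlinear-Poisson encoding model by maximizing a concave log-likelihood. The stimulus matrix, spike counts, and exponential nonlinearity below are illustrative assumptions, not the chapter's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical example data: each row of X is the stimulus features for
# one time bin; y holds the observed spike count in that bin.
rng = np.random.default_rng(0)
T, D = 1000, 5
X = rng.normal(size=(T, D))
true_k = np.array([0.5, -0.3, 0.8, 0.0, 0.2])
y = rng.poisson(np.exp(X @ true_k))

def neg_log_likelihood(k, X, y):
    """Negative Poisson log-likelihood (up to a constant) with an
    exponential nonlinearity. With exp(), this objective is convex in k,
    so gradient-based optimization finds the global optimum -- the
    concavity property mentioned above."""
    linear_drive = X @ k
    return np.sum(np.exp(linear_drive) - y * linear_drive)

result = minimize(neg_log_likelihood, x0=np.zeros(D), args=(X, y))
k_hat = result.x  # fitted stimulus filter
```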


Decoders optimized offline to reconstruct intended movements from neural recordings sometimes fail to achieve optimal performance online when they are used in closed loop as part of an intracortical brain-computer interface (iBCI). This is because typical decoder calibration routines do not model the emergent interactions between the decoder, the user, and the task parameters (e.g. target size). Here, we investigated the feasibility of simulating online performance to better guide decoder parameter selection and design. Three participants in the BrainGate2 pilot clinical trial controlled a computer cursor using a linear velocity decoder under different gain (speed scaling) and temporal smoothing parameters and acquired targets with different radii and distances. We show that a user-specific iBCI feedback control model can predict how performance changes under these different decoder and task parameters in held-out data. We also used the model to optimize a nonlinear speed scaling function for the decoder. When used online with two participants, it increased the dynamic range of decoded speeds and decreased the time taken to acquire targets (compared to an optimized standard decoder). These results suggest that it is feasible to simulate iBCI performance accurately enough to be useful for quantitative decoder optimization and design.
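
For reference, a linear velocity decoder with gain and exponential temporal smoothing of the kind described above can be written in a few lines. The update rule and variable names below are a plausible sketch of the standard formulation, not the exact implementation used in the study.

```python
import numpy as np

def decode_velocity(neural_features, D, gain, alpha):
    """Linear velocity decoder with exponential temporal smoothing.

    neural_features: (T, N) array of binned neural features
    D:               (2, N) linear decoding matrix (neural -> 2D velocity)
    gain:            scalar speed scaling applied to the decoded velocity
    alpha:           smoothing coefficient in [0, 1); higher = smoother

    Assumed formulation: the instantaneous decoded velocity is low-pass
    filtered with an exponential smoother and scaled by the gain.
    """
    T = neural_features.shape[0]
    v = np.zeros((T, 2))
    for t in range(1, T):
        raw = D @ neural_features[t]                 # instantaneous estimate
        v[t] = alpha * v[t - 1] + (1 - alpha) * gain * raw
    return v
```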


Here, we develop a similar simulation-based approach to predict online performance and validate it by comparing its predictions to held-out closed-loop iBCI data from three clinical trial participants. Our approach improves upon the OPS study by being able to run entirely on the computer (requiring no input from a human volunteer) and by enabling user-specific performance predictions. This expands the utility of the approach by enabling a rapid search across more parameters than would be possible with human volunteers. It also allows the simulation approach to be used in a clinical setting to customize the decoding parameters to suit a given iBCI user. Although several other studies have also successfully employed computer simulations of iBCI control to gain qualitative insights (e.g.22,25,26), we are aware of no prior work that has demonstrated an ability to simulate iBCI control with the accuracy required for quantitative parameter selection and design.


Supplemental Section 5 shows the same data but for each session separately, confirming that the PLM can accurately predict within-session variance (due solely to gain and smoothing) in addition to explaining across-session variance (due partly to differences in participants and day-to-day changes in decoding noise28). We also confirmed that good performance across all participants could still be obtained regardless of which blocks from each session were used to fit the PLM. To do so, we compared model performance when the PLM was fit on the lowest-gain block of each session, the highest-gain block, and the median-gain block. The average model FVAF and MAE did not vary appreciably (Supplemental Section 3).
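
As a point of reference, the two fit metrics can be computed as follows. These are the generic definitions of fraction of variance accounted for (FVAF) and mean absolute error, assumed here to match the paper's usage.

```python
import numpy as np

def fvaf(y_true, y_pred):
    """Fraction of variance accounted for: 1 - SSE / total variance.
    Equals 1 for perfect predictions; can go negative for poor fits."""
    sse = np.sum((y_true - y_pred) ** 2)
    sst = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - sse / sst

def mae(y_true, y_pred):
    """Mean absolute error between measured and predicted performance."""
    return np.mean(np.abs(y_true - y_pred))
```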


Here, we test whether the PLM can predict how online performance will be affected when task parameters change (e.g. when the target distance or radius changes). This is important for two reasons: (1) the model should be able to predict optimal decoder parameters under a large variety of task settings (e.g. target distances and sizes), and (2) predicting how performance changes as a function of task parameters may enable principled optimization of the task (e.g. the sizes and placement of buttons on a virtual keyboard), complementing existing design approaches to user interfaces29.


The PLM implicitly takes into account user-specific and task-specific aspects when predicting the optimal gain and smoothing parameters. Figure 7B illustrates this by showing how the gain and smoothing parameters chosen by the model vary as a function of decoding noise variance, feedback delay and target radius. Here, the optimal parameters were defined as those that were predicted to minimize the total movement time. In general, in more challenging settings (e.g. more decoding noise, longer feedback delays, and smaller target radii), lower gain and higher smoothing values are better. In less demanding settings, higher gain and lower smoothing values are better. The PLM can take these factors into account to find the right parameter values for the specific situation. Note that, if optimal parameters are desired for a flexible task with varying target sizes and distances, these can be found by simulating trajectories towards targets with different radii and distances in whatever proportion the user would experience them and then averaging the performance metrics over these trajectories, as sketched below.
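
Procedurally, this parameter search amounts to simulating trajectories under each candidate (gain, smoothing) pair and picking the pair with the lowest average movement time. The sketch below assumes a hypothetical simulate_movement_time function standing in for the user-specific PLM simulation.

```python
import itertools
import numpy as np

def optimize_decoder_params(simulate_movement_time, gains, alphas,
                            targets, n_trials=50):
    """Grid search over gain/smoothing, averaging simulated movement time
    over a representative mix of targets.

    simulate_movement_time(gain, alpha, radius, distance) -> seconds
    is a stand-in for the user-specific PLM simulation described above.
    targets: list of (radius, distance) pairs, in the proportions the
    user would actually experience them.
    """
    best = None
    for gain, alpha in itertools.product(gains, alphas):
        times = [simulate_movement_time(gain, alpha, r, d)
                 for _ in range(n_trials) for (r, d) in targets]
        mean_time = np.mean(times)
        if best is None or mean_time < best[0]:
            best = (mean_time, gain, alpha)
    return best  # (expected movement time, optimal gain, optimal alpha)
```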


Next, we give an example of how the PLM can be used to help optimize a typing interface (Fig. 7C). It was recently shown that iBCIs can restore the ability to communicate at record speeds by combining 2D cursor control with an onscreen keyboard laid out as a grid (shown in the top left of Fig. 7C)18. One method for key selection is to require the cursor to dwell on top of a key for a specified amount of time. How long should this dwell time be to maximize the information throughput? The dwell time presents a trade-off: shorter dwell times enable faster but less accurate selections, while longer dwell times enable slower but more accurate selections. In addition to the dwell time, the decoder parameters must also be optimized, and the optimal decoder parameters may change as a function of dwell time. This presents a difficult optimization problem that would be infeasible to fully explore without simulation. Moreover, the optimal dwell time and decoder parameters are likely to change from user to user since each user has a different amount of decoding noise. Thus, even if time were spent finding good values for one user through trial and error, those values would not necessarily be good for other users.
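
Simulation makes this joint search tractable: for each candidate dwell time, re-optimize the decoder parameters and score the combination by expected information throughput. The sketch below assumes a hypothetical simulate_selection routine returning selection time and accuracy, and uses a simple bits-per-second proxy for throughput; neither is taken from the paper.

```python
import numpy as np

def optimize_dwell_time(simulate_selection, dwell_times, gains, alphas,
                        n_keys=36):
    """Jointly search dwell time and decoder parameters to maximize an
    (assumed) information throughput: correct-selection bits per second.

    simulate_selection(dwell, gain, alpha) -> (mean_time_s, accuracy)
    is a hypothetical stand-in for a PLM simulation of the keyboard task.
    """
    bits_per_key = np.log2(n_keys)
    best = None
    for dwell in dwell_times:
        for gain in gains:
            for alpha in alphas:
                t, acc = simulate_selection(dwell, gain, alpha)
                # Simple throughput proxy: expected correct bits per
                # second (errors are simply discarded here; a fuller
                # analysis might also model correction time).
                throughput = acc * bits_per_key / t
                if best is None or throughput > best[0]:
                    best = (throughput, dwell, gain, alpha)
    return best  # (throughput, dwell time, gain, alpha)
```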


We demonstrated the utility of a simulation-based optimization approach by using the PLM to design such a transform. The resulting function scales up higher speeds (allowing quick movements to the target) while still mapping a wide range of decoded speeds to lower values (to retain the same or better stopping precision). Online results from participants T5 and T8 show that this approach is indeed one possible way to achieve a greater dynamic range of speed and precision than what a linear decoder can provide. Other solutions to this same problem have also been proposed, including attenuating the cursor speed when the cursor is changing directions more quickly32, using a nonlinear two-state decoder that can switch between a postural decoder and a movement decoder33, and using a hidden Markov model to detect a stopping state34. Combining the nonlinear transform with these other improvements might yield greater gains in performance.
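
Although the fitted transform itself is not reproduced here, a nonlinear speed scaling of the kind described can be sketched as a power-law transform on the decoded speed: it compresses a wide band of low speeds toward zero (preserving stopping precision) while amplifying high speeds (for fast transit to the target). The functional form and parameters below are illustrative assumptions.

```python
import numpy as np

def nonlinear_speed_transform(v, v_max=1.0, exponent=2.0, gain=1.5):
    """Apply a power-law scaling to the decoded speed while leaving the
    decoded direction unchanged.

    exponent > 1 maps a wide range of low decoded speeds to small output
    speeds (better stopping precision) and amplifies speeds near v_max
    (faster target acquisition). This is a generic sketch, not the
    PLM-optimized transform from the paper.
    """
    speed = np.linalg.norm(v)
    if speed == 0:
        return v
    direction = v / speed
    scaled = gain * v_max * (min(speed, v_max) / v_max) ** exponent
    return scaled * direction
```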


We showed that the PLM can predict closed-loop user behavior accurately when simulating a linear velocity decoder with exponential smoothing dynamics. Although this is a commonly used type of decoder that can achieve a high level of performance relative to its simplicity1,2,3,4,6,13,15,16,17,20,35,36,37, we anticipate that the field will ultimately move towards more general nonlinear decoders (e.g. neural networks33,38,39,40,41) that have a greater capacity to leverage patterns in the neural activity that lack a linear relationship to movement velocity. The same simulation-based design approach used here could be expanded to optimize more general kinds of decoders. Indeed, more complex decoders might stand to benefit even more from a simulation-based approach, since they have more free parameters that can be difficult to tune through trial and error alone.

