On Ebola

Medscape on Ebola for US Clinicians, October 2014


  • One of the difficulties in identifying potential cases of Ebola infection is the nonspecific presentation of most patients. Fever/chills and malaise are usually the initial symptoms, so all medical personnel should maintain a high index of suspicion in these cases.
  • The most common symptoms of patients in the current outbreak of Ebola include fever (87%), fatigue (76%), vomiting (68%), diarrhea (66%), and loss of appetite (65%).
  • Other symptoms may include chest pain, shortness of breath, headache or confusion, conjunctival injection, hiccups, and seizures.
  • Bleeding does not affect every patient with Ebola and usually presents as small subcutaneous bleeds rather than frank hemorrhage.
  • More severe symptoms at presentation with Ebola infection predict a higher risk for mortality. Most patients with fatal disease die of complications between days 6 and 16.
  • The case-fatality rate of the current outbreak is approximately 71%.
  • Patients who survive infection with Ebola generally begin to improve around day 6 of the infection.
  • Multiple tests can be used to diagnose Ebola provided they are ordered in a timely fashion. Antigen-capture enzyme-linked immunosorbent assay (ELISA), IgM ELISA, polymerase chain reaction, and virus isolation may be employed to make the diagnosis during the first few days of symptoms. Patients identified later during the disease course may be diagnosed with serum antibody levels.
  • There is no cure for Ebola infection; treatment is largely supportive. Therefore, prevention of the spread of Ebola in healthcare facilities is particularly important.
  • Patients with fever, even subjective fever, or other symptoms associated with Ebola infection along with a history of travel to an Ebola-affected area within the past 21 days need to be identified in triage.
  • If such a patient is identified, she/he needs to be isolated immediately in a single room with access to a bathroom. The door to the room should remain closed.
  • Hospital infection control and local health departments should be contacted immediately in the case of suspected Ebola disease.
  • Standard, contact, and droplet precautions should be enforced immediately.
  • Personal protective equipment (PPE) must be worn at all times when in the patient room and must include a gown, facemask, eye protection, and gloves. Shoe or leg covers should be worn if there is a high risk of soiling on the ground, and an N95 respirator is necessary for procedures with possible airborne contact.
  • A "buddy system" (application and removal of PPE with a witness) should be employed to ensure that PPE protocols are carried out appropriately.
  • PPE should be discarded with the utmost caution to avoid contamination, and hand hygiene is necessary after PPE is removed.
  • A log should be maintained of all persons entering the room of a patient with suspected Ebola disease.
  • Invasive procedures, including phlebotomy, should be limited to what is medically necessary.
  • A contact assessment should be performed for all patients with suspected Ebola infection. High-risk contacts include individuals with direct contact with the patient's skin or bodily fluids. Low-risk contacts include household members and others who have had no more than casual contact with the patient. Healthcare workers in the area of a patient with Ebola who do not use PPE are also considered low-risk contacts.

CLINICAL IMPLICATIONS

  • Fever is the most common presenting symptom of the current outbreak of Ebola disease, followed by fatigue and vomiting.
  • Measures to reduce the risk of transmission of Ebola in healthcare settings include the identification of suspected cases in triage; immediate patient isolation in a room with access to a bathroom; and PPE featuring a gown, facemask, eye protection, and gloves.

More on Rough Sets and Species Identity, With Application to the sorta Operator

According to Zdzisław Pawlak (the inventor of the mathematical concept of rough sets, from whose writings I will be quoting liberally below, via an introductory book he has posted online here), a rough set, which is a formal approximation of an ordinary or crisp set, can be defined mathematically via a combination of two ordinary sets, the lower approximation R_*(X) and the upper approximation R^*(X), which define the approximation of the rough set X within the universal domain set U. He defines the rough set as follows:

The rough set concept can be defined quite generally by means of topological operations, interior and closure, called approximations. Let us describe the problem more precisely. Suppose we are given a set of objects U called the universe and an indiscernibility relation R ⊆ U × U, representing our lack of knowledge about elements of U. For the sake of simplicity we assume that R is an equivalence relation. Let X be a subset of U. We want to characterize the set X with respect to R. To this end we will need the basic concepts of rough set theory given below.
  • The lower approximation of a set X with respect to R is the set of all objects, which can be for certain classified as X with respect to R (are certainly X with respect to R).
  • The upper approximation of a set X with respect to R is the set of all objects which can be possibly classified as X with respect to R (are possibly X in view of R).
  • The boundary region of a set X with respect to R is the set of all objects, which can be classified neither as X nor as not-X with respect to R.
  • Now we are ready to give the definition of rough sets.
  • Set X is crisp (exact with respect to R), if the boundary set is empty.
  • Set X is rough (inexact with respect to R), if the boundary set is nonempty.
  • Thus a set is rough (imprecise) if it has nonempty boundary region; otherwise the set is crisp (precise). This is exactly the idea of vagueness proposed by Frege.

    Pawlak goes on to define useful regions of the rough set X as:

    Formal definitions of approximations and the boundary region are as follows:
  • R-lower approximation of X [is] R_*(X) = ⋃ { R(x) : x ∈ U, R(x) ⊆ X }
  • R-upper approximation of X [is] R^*(X) = ⋃ { R(x) : x ∈ U, R(x) ∩ X ≠ ∅ }
  • R-boundary region of X [is] RN_R(X) = R^*(X) − R_*(X)
  • and later he notes several properties of the sets R_*(X), R^*(X), and X in U, such as that

  • R_*(−X) = −R^*(X)
  • R^*(−X) = −R_*(X)
  • Which are notable because they allow rough sets to avoid Gareth Evans' challenge to vague identity, as I previously blogged about here, since the complement of possible identity is not thereby definite identity.

    Furthermore, there is a definition of accuracy with regard to a rough set, if B is a criterion for defining the rough set:

    [A] Rough set can be also characterized numerically by the following coefficient
    α_B(X) = |B_*(X)| / |B^*(X)|
    called accuracy of approximation, where |X| denotes the cardinality of X. Obviously 0 ≤ α_B(X) ≤ 1. If α_B(X) = 1, X is crisp with respect to B (X is precise with respect to B), and otherwise, if α_B(X) < 1, X is rough with respect to B (X is vague with respect to B).
    Finally, I note that Wikipedia says that:

    Clearly, when the upper and lower approximations are equal (i.e., boundary region empty), then α_B(X) = 1, and the approximation is perfect; at the other extreme, whenever the lower approximation [B_*(X)] is empty, the accuracy is zero (regardless of the size of the upper approximation).
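    Pawlak's definitions are concrete enough to check on a toy example. Below is a minimal sketch (my own illustration; the universe, equivalence classes, and target set X are invented for the example) that computes the lower and upper approximations, the boundary region, and the accuracy coefficient α from a partition of U:

```python
# Rough-set approximations over a finite universe, following the
# definitions quoted above. U, the partition (equivalence classes of R),
# and X are invented purely for illustration.

def lower_approx(partition, X):
    # R_*(X): union of equivalence classes wholly contained in X
    return set().union(*[c for c in partition if c <= X])

def upper_approx(partition, X):
    # R^*(X): union of equivalence classes that intersect X
    return set().union(*[c for c in partition if c & X])

def boundary(partition, X):
    # RN_R(X) = R^*(X) - R_*(X)
    return upper_approx(partition, X) - lower_approx(partition, X)

def accuracy(partition, X):
    # alpha(X) = |R_*(X)| / |R^*(X)|; an empty upper approximation means crisp
    upper = upper_approx(partition, X)
    return len(lower_approx(partition, X)) / len(upper) if upper else 1.0

U = set(range(8))
partition = [{0, 1}, {2, 3}, {4, 5}, {6, 7}]  # equivalence classes of R
X = {0, 1, 2, 6}                              # the set we try to characterize

print(lower_approx(partition, X))   # {0, 1}: certainly X
print(upper_approx(partition, X))   # {0, 1, 2, 3, 6, 7}: possibly X
print(boundary(partition, X))       # {2, 3, 6, 7}: the vague zone, so X is rough
print(accuracy(partition, X))       # 2/6 = 0.333...

# The complement properties quoted above also hold:
assert lower_approx(partition, U - X) == U - upper_approx(partition, X)
assert upper_approx(partition, U - X) == U - lower_approx(partition, X)
```

    When the boundary is empty the two approximations coincide and α = 1, matching the crisp case just described.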


    Anyway, I've been intrigued by how the idea of a species as a rough set can be used to analyze uses of the species category concept in other domains, like consciousness studies, where there may be discussion of the possibility of a truly intelligent and conscious computer. Rough sets of the type used to define Darwin's species concept are defined by paired sets with a well-defined inner "definitely is" boundary and an outer "definitely is not" boundary, which between them delimit the rough set's zone of uncertainty.

    As applied to organisms in biology, we would say that any given individual organism can certainly be classified within some broad range, but it might be only approximately classified as to species, since its measured characteristics might place it in the vague boundary of a species instead of well within or without a given species definition.

    For example, let's look at the coyote and dog species. Dogs (Canis lupus familiaris) and coyotes (Canis latrans) are separate canine species. How do they differ, and is this difference one where vagueness applies? Let's look at the difference between the species:

    Dogs are of the species Canis lupus familiaris. A dog is a mammal, a ground-dwelling quadruped carnivore-omnivore with prominent canine teeth, which places it in the Canis genus of which it is the taxonomic archetype. Dog breeding has created the largest diversity of types of dogs seen within any mammalian species. In general, dogs have deep chests, white nail beds, pale tail tips, and have elbow joints placed above the height of the sternum. The ears of a dog may be upright or droop, but tend to be thin-skinned. Dogs usually run with their tail up or level.

    Coyotes are Canis latrans. Coyotes are ground-dwelling quadruped omnivores with prominent canine teeth, also placing them in the Canis genus. Coyotes have narrowed skulls and less muscled jaws than dogs or wolves, and their elbows are below their chests, since they have shallower chests and lungs than dogs. Coyotes have long, thick upright ears, dark tail tips, and dark nail beds.

    What characteristics, then, do we have for species classification (not counting those based on DNA sequencing)? Here is a chart:

    Characteristic    Dog                    Coyote                Abbrev.
    legs              4                      4                     L4+
    diet              omni                   omni                  Do+
    teeth             prominent canines      prominent canines     Tc+
    elbow position    above sternum          below sternum         Ad, Ac
    ears              thin, variably floppy  thick, upright, long  Ed, Ec
    tail tip          light                  dark                  Td, Tc
    nail beds         light                  dark                  Nd, Nc
    Note that there are very pale coyotes with pale tail and nails, and that there are dogs with dark nail beds and dark tails. German shepherds arguably have thick upright ears, and greyhounds have elbows below the sternum, so, looking at a large group of dogs, we might get the following:
    Characteristics              Count
    L4+ Do+ Tc+ Ad Ed Td Nd      74884
    L4+ Do+ Tc+ Ac Ed Td Nd         36
    L4+ Do+ Tc+ Ad Ec Td Nd        950
    L4+ Do+ Tc+ Ad Ed Tc Nd         21
    L4+ Do+ Tc+ Ad Ed Td Nc         15
    L4+ Do+ Tc+ Ad Ed Tc Nc        319
    L4+ Do+ Tc+ Ad Ec Tc Nd         11
    So, the accuracy of the criteria above would be 74884 / (36+950+21+15+319+11+74884) = 0.982, or 98.2% accuracy for the rough set's criteria in classifying these dogs as Canis lupus familiaris.
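    That arithmetic can be checked directly, treating the all-typical row as the lower approximation and all seven rows together as the upper approximation of the (invented) counts above:

```python
# Accuracy of approximation for the dog-trait counts in the table above:
# lower approximation = dogs matching every typical dog trait;
# upper approximation = those plus all the boundary rows.
lower = 74884
boundary_rows = [36, 950, 21, 15, 319, 11]
upper = lower + sum(boundary_rows)

alpha = lower / upper
print(upper)            # 76236
print(round(alpha, 3))  # 0.982
```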

    An important point to make here is that dogs and coyotes are close relatives: they are both canids (that is, of genus Canis) and have even, on rare occasions, been known to interbreed. A canid that seems to be not quite a dog, because it is a little like a coyote, is still in almost all respects like other canids which we are sure are dogs. We'll return to this fact later.

    In several writings over the last few years, including his recent book Intuition Pumps and Other Tools for Thinking (2014), philosopher Daniel Dennett has described a sorta operator, which he introduces by example as an artificial intelligence analog of evolutionary gradualism:

    What we might call the sorta operator is, in cognitive science, the parallel of Darwin's gradualism in evolutionary processes. Before there were bacteria there were sorta bacteria, and before there were mammals there were sorta mammals and before there were dogs there were sorta dogs, and so forth. We need Darwin's gradualism to explain the huge difference between an ape and an apple, and we need Turing's gradualism to explain the huge difference between a humanoid robot and hand calculator. (p. 96)

    The problem with talking about apes and apples with regard to the sorta operator, however, is that we don't consider an ape to be sorta an apple, though we might consider a coyote to be sorta a dog! Evolution, as a theory of the origins of life and of the major categories of organisms, NEVER says that a multicellular animal is gradually a multicellular plant. Rather, it says that unicellular organisms that were sorta unicellular plants and sorta unicellular animals diverged at some point, and it's probable that during that divergence the proto-plant cells were vaguely like the proto-animal cells. Because some aspects of evolutionary specialization appear to be one-way in their effects, there is no sorta path from apple to ape! So the apple-to-ape analogy fails here, and I think that Dennett's inadequate grasp of paleobiology is sorta leading him into a category error. Ironically, since it is the historical, empirical course of evolutionary history that contradicts Dennett's gradualism, if the creationists were right and our current understanding of evolutionary history were wrong, Dennett's ideas here might be closer to valid.

    The analogy in Intuition Pumps and Other Tools for Thinking between species vagueness, taken in this blog as a rough set, and machine/human intelligence fails to work well in at least one more respect. Consider our working example of an intelligent and conscious object, a human and the human brain:

    Now consider computers and computer processors:

    Do we see a gradual merging in classification of these two kinds of objects, especially looking at consciousness? No. As we move away from conscious humans, we move either toward non-conscious humans, such as persons asleep or in coma, or, across species, toward apes and monkeys. Not toward silicon machines! On the computer side, we move upward from simple calculators to more sophisticated non-conscious, non-intentional AI, but we never have any examples of truly conscious AI. Dennett admits as much, since he says Turing's gradualism is between "humanoid robot and hand calculator." Unfortunately, we have no examples of a conscious, intelligent, humanoid robot. So, in rough set terms, for computing machines that are conscious, |R_*| = 0. And thus, by the accuracy measure above, the accuracy of Dennett's sorta operator in classifying AI as intelligent is zero.

    To put this another way, rough set theory tells us that we cannot say any form of computer AI or other computing simulation of consciousness or intelligence is sorta conscious until we have an example that we can classify as definitely conscious. Once we have that, we can find a sorta neighborhood of that truly intelligent AI, just as we can define a sorta conscious region around a set of normal waking humans. But we have no empirical evidence for any sorta neighborhood that contains an existing conscious machine AI.

    Since we have no examples of truly conscious AI, we must take humans as composing our R_*, and place current computer models of AI in a region (within our universe of objects) of things that are definitely not conscious, in U outside of R^*. The accuracy of a sorta operator in identifying true conscious machine intelligence is zero, at least until we actually have a machine consciousness to give us an example of what that machine sorta would be.

    So, while it might make a good intuition pump for the imagination, as a way of pointing to the possibility of conscious computers it looks like the sorta operator is sorta wrong.

    Medicine Nobel Prize 2014

    The Nobel Prize in Physiology or Medicine 2014 was divided, one half awarded to John O'Keefe, the other half jointly to May-Britt Moser and Edvard I. Moser "for their discoveries of cells that constitute a positioning system in the brain."

    Congratulations to Doctor O'Keefe and to the Doctors Moser!

    Here's a map update on the Big Island lava flow from the Kilauea volcano's Pu'u 'O'o vent (courtesy USGS's Hawaii Volcano Observatory). Mostly stalled, but still creeping toward the little village of Pahoa.

    EEG Reading of Emotions: A skeptical look at the emotional valence EEG literature.

    Countless science fiction movies have depicted wires attached to the head as either reading thoughts, controlling the person's thoughts or behavior, or all of these. Yet what is the reality?

    The electroencephalogram, or EEG, is a device that measures small (microvolt) changes in the electrical field of the scalp. Since brain tissue is electrically active, the brain generates a constantly changing, tiny, but measurable electrical field that can be detected with a sufficiently sensitive system of electronic amplifiers. The brain's electrical activity changes significantly according to the overall state of alertness, so that coma, sleep, waking with eyes closed, and waking with eyes open and evaluation of the environment can usually be distinguished reliably. Because a sudden emotional shock can alert us, sudden large changes in the state of awareness, such as those caused by surges of emotion, can also often be seen on the EEG, depending on the baseline alertness at the time of the emotion.

    Thus, emotional state changes can cause a change in brain state by changing our level of alertness, which in turn can change the EEG. However, such a change may not be seen with every emotional change. For example, someone who is already fully awake and thinking could have the same state of alertness with many different types of emotional reactions, if the person stays equally alert throughout all the different emotional states. Therefore, there are significant limits to the resolution of such EEG measurement in determining the kind of feeling a person is having.

    Because of various difficulties with categorizing emotions in research animals, there has been a tendency in animal research to correlate approach and avoidance behaviors with a single positive-negative scale of emotion called valence. Higher positive valence states go with more approach behaviors, and higher negative valence states go with more avoidance behaviors. Thus, happiness, joy, satisfaction, hope, and approval would be positive valence, whereas fear and disgust would be negative valence. (Anger is more difficult to place on an approach-avoidance scale, since anger motivates approach in order to eliminate the same threat the animal might, in fear, avoid, even though the emotional state would otherwise be negative in its quality from a human perspective.)

    Reading specific thoughts seems impossible with surface EEG. Since the surface EEG reflects mostly the averaged activity of billions of cells in the closest electrically active tissues, especially the brain's outer cortex, telling people's thoughts from surface EEG is like reading the words of a book by weighing the ink on each page. That is very hard to do, and likely impossible without a huge amount of background knowledge to narrow the possibilities for interpretation. It is telling that experts using lie detector equipment may rely on skin impedance, also a measure of sudden emotional surges via autonomic nervous system output from brain to skin, more in their evaluations than on any EEG waves they may measure. Skin impedance of the arm may be more reliable in telling whether one is lying than the brain's EEG!

    Soon after the invention of EEG, researchers tried to correlate various psychiatric disorders, including mood disorders, with the EEG. Although early correlation research was published and taught for decades, essentially all of it was found to be poorly reproducible and of no practical or clinical utility. Says Felix Schirmann here, regarding the EEG assessments of psychopathy in the 1940s:

    In this period, “the wondrous eyes” of EEG wandered over immoral persons' brains without spotting significant characteristics. The findings were inconclusive. There were no comprehensive EEG-based theories that connected the results or satisfactorily explained human badness. The new technology failed to deliver the hoped for revelations regarding diagnosis, classification, etiology, and therapy. In general, the contribution of EEG to psychiatry proved disappointing.

    --The wondrous eyes of a new technology: A history of the early electroencephalography (EEG) of psychopathy, delinquency, and immorality. (Front Hum Neurosci, April 2014.)

    Current standards of EEG interpretation in medicine suggest that the EEG reader avoid assertions regarding the emotional state of the patient, and that psychiatric use of the EEG is best kept to the seeking of a specific non-psychiatric illness in differential diagnosis.

    In the past decade, with the availability of more sophisticated computer equipment for analyzing EEG data and with the apparent rush to report success of fMRI in studying emotional states and detecting repeated patterns of thinking in the human brain's metabolism, there has been a second look for emotional and psychiatric correlates of the EEG. Researchers have tried to duplicate fMRI's early apparent success in analyzing cognition and emotional states. Let's look at four such studies.

    The first study is Schuster et al's "EEG-based Valence Recognition: What do we Know About the influence of Individual Specificity?" in IEEE Trans Biomed Eng. 2010 July. This study claims in its abstract that "Support vector machine classifications based upon intra-individual data showed significantly higher classification rates[F(19.498),p<.001] than global ones." This line of the abstract likely reflects a wish to report at least one significant P value among the results, since the discussion notes merely that "although statistical analysis was not able to show a difference between the classes positive and negative, classification rates were mostly above chance level." The problem with "above chance level" is that flipping a coin 10 times and getting 6 heads is above chance level, but it does not give a significant P value.
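    To make the coin-flip point concrete, here is a quick calculation (mine, not the paper's) of the chance of getting at least 6 heads in 10 fair flips:

```python
from math import comb

# One-sided binomial P value for 6 or more heads in 10 fair coin flips:
# "above chance level", but nowhere near statistical significance.
n, threshold = 10, 6
p_value = sum(comb(n, k) for k in range(threshold, n + 1)) / 2**n
print(p_value)  # 0.376953125, i.e. P ≈ 0.38, far from p < .05
```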

    The second study is Hiyoshi-Taniguchi et al's "EEG Beta Range Dynamics and Emotional Judgments of Face and Voices" from Cognitive Computation, July 2013. This study paired a face expressing an emotion with a voice which said a word in a tone either consistent or discordant with the emotion. The study's results, though significant, can be explained as measuring differences in arousal based on the surprise value of a discordant pairing (for example, a happy voice and an angry facial expression), so actual reading of emotion from the EEG was not clearly demonstrated.

    The third study is Lin et al's "EEG-based emotion recognition in music listening," in IEEE Trans Biomed Eng., 2010. This study looked at EEG correlates with music and the subjects' self-reported emotions induced by listening to various musical pieces. The researchers did show that EEG can differentiate between different pieces of music, but they did not show that the EEG correlated with mood rather than music. Why? Rhythmic musical stimuli are well documented to entrain the EEG to echo their rhythm, especially in the occipital cortex with visual stimuli but also with auditory stimuli. Such entrainment effects are the basis for using EEG-derived visual and auditory evoked potentials in the evaluation of neurological conditions. To show that they are not merely detecting a difference in brain response to the music itself, independent of the music's emotional influence, the investigators would have to show that their EEG changes also predict responses to emotional stimuli of another type, such as visual ones. That has not been done.

    The fourth and last study, our most recent, is Liu et al's "Emotion recognition from single-trial EEG based on kernel Fisher's emotion pattern and imbalanced quasiconformal kernel support vector machine" in Sensors, 2014 Jul. Even though those researchers claimed significant statistical results, when I looked at the actual data tables, these showed that the "balance ratio" of the emotional valence data in their group of 10 subjects was 1.04, almost 1, or no significant difference between high and low valence, but that the arousal data ratio was 1.73, showing a consistent overall difference between high and low arousal states among the subjects. The researchers then used statistical analysis to find a group of EEG features that correlated and classified well among the subjects, but failed to validate their derived statistical measure on a second group of subjects. This measure, the "kernel Fisher's emotion pattern" or KFEP, is, tellingly, based on configuring repeated Fisher tests on parts of the data set until a good correlation was found. Such a synthetic empirical measure must always be validated on other subjects, since with many different tests, some are going to correlate with emotional valence in the data by coincidence. That the KFEP measure in this study can be used successfully in other subjects as a predictive measure is therefore a hypothesis generated by the data, not a validated correlation between EEG and emotions.

    What can be concluded from these reports? First, that detailed reading of emotions from the EEG is difficult and likely not practicable even with modern computer signal processing. Second, that emotions do change the EEG signal, but not in a way that allows us to judge the qualities of an emotion the way we, or a computer, might read a facial expression. Third, that many publications draw overlarge conclusions from mere correlations in EEG data, just as the psychiatrists and criminologists did 75 years ago.


    ----------------------------------------------

    ABSTRACTS

    EEG-based Valence Recognition: What do we Know About the influence of Individual Specificity?

    Timo Schuster, Sascha Gruss, Stefanie Rukavina, Steffen Walter & Harald C. Traue

    The fact that training classification algorithms in a within-subject design is inferior to training on between subject data is discussed for an electrophysiological data set. Event related potentials were recorded from 18 subjects, emotionally stimulated by a series of 18 negative, 18 positive and 18 neutral pictures of the International Affective Picture System. In addition to traditional averaging and group comparison of event related potentials, electroencephalographical data have been intra- and inter-individually classified using a Support Vector Machine for emotional conditions. Support vector machine classifications based upon intraindividual data showed significantly higher classification rates [F(19.498),p<.001] than global ones. An effect size was calculated (d = 1.47) and the origin of this effect is discussed within the context of individual response specificities. This study clearly shows that classification accuracy can be boosted by using individual specific settings.


    ----------------------------------------------

    Title: EEG Beta Range Dynamics and Emotional Judgments of Face and Voices

    Authors: K. Hiyoshi-Taniguchi, M. Kawasaki, T. Yokota, H. Bakardjian, H. Fukuyama, F. B. Vialatte and A. Cichocki

    Abstract: The purpose of this study is to clarify multi-modal brain processing related to human emotional judgment. This study aimed to induce a controlled perturbation in the emotional system of the brain by multi-modal stimuli, and to investigate whether such emotional stimuli could induce reproducible and consistent changes in the brain dynamics. As we were especially interested in the temporal dynamics of the brain responses, we studied EEG signals. We exposed twelve subjects to auditory, visual, or combined audio-visual stimuli. Audio stimuli consisted of voice recordings of the Japanese word ‘arigato’ (thank you) pronounced with three different intonations (Angry - A, Happy - H or Neutral - N). Visual stimuli consisted of faces of women expressing the same emotional valences (A, H or N). Audio-visual stimuli were composed using either congruent combinations of faces and voices (e.g. H x H) or non-congruent (e.g. A x H). The data was collected with a 32-channel Biosemi EEG system. We report here significant changes in EEG power and topographies between those conditions. The obtained results demonstrate that EEG could be used as a tool to investigate emotional valence and discriminate various emotions.


    ----------------------------------------------

    IEEE Trans Biomed Eng. 2010 Jul;57(7):1798-806. doi: 10.1109/TBME.2010.2048568. Epub 2010 May 3.

    EEG-based emotion recognition in music listening. Lin YP1, Wang CH, Jung TP, Wu TL, Jeng SK, Duann JR, Chen JH.

    Ongoing brain activity can be recorded as electroencephalograph (EEG) to discover the links between emotional states and brain activity. This study applied machine-learning algorithms to categorize EEG dynamics according to subject self-reported emotional states during music listening. A framework was proposed to optimize EEG-based emotion recognition by systematically 1) seeking emotion-specific EEG features and 2) exploring the efficacy of the classifiers. Support vector machine was employed to classify four emotional states (joy, anger, sadness, and pleasure) and obtained an averaged classification accuracy of 82.29% +/- 3.06% across 26 subjects. Further, this study identified 30 subject-independent features that were most relevant to emotional processing across subjects and explored the feasibility of using fewer electrodes to characterize the EEG dynamics during music listening. The identified features were primarily derived from electrodes placed near the frontal and the parietal lobes, consistent with many of the findings in the literature. This study might lead to a practical system for noninvasive assessment of the emotional states in practical or clinical applications.


    ----------------------------------------------

    Sensors (Basel). 2014 Jul 24;14(8):13361-88. doi: 10.3390/s140813361.

    Emotion recognition from single-trial EEG based on kernel Fisher's emotion pattern and imbalanced quasiconformal kernel support vector machine.

    Liu YH1, Wu CT2, Cheng WT3, Hsiao YT4, Chen PM5, Teng JT6.

    Electroencephalogram-based emotion recognition (EEG-ER) has received increasing attention in the fields of health care, affective computing, and brain-computer interface (BCI). However, satisfactory ER performance within a bi-dimensional and non-discrete emotional space using single-trial EEG data remains a challenging task. To address this issue, we propose a three-layer scheme for single-trial EEG-ER. In the first layer, a set of spectral powers of different EEG frequency bands are extracted from multi-channel single-trial EEG signals. In the second layer, the kernel Fisher's discriminant analysis method is applied to further extract features with better discrimination ability from the EEG spectral powers. The feature vector produced by layer 2 is called a kernel Fisher's emotion pattern (KFEP), and is sent into layer 3 for further classification where the proposed imbalanced quasiconformal kernel support vector machine (IQK-SVM) serves as the emotion classifier. The outputs of the three layer EEG-ER system include labels of emotional valence and arousal. Furthermore, to collect effective training and testing datasets for the current EEG-ER system, we also use an emotion-induction paradigm in which a set of pictures selected from the International Affective Picture System (IAPS) are employed as emotion induction stimuli. The performance of the proposed three-layer solution is compared with that of other EEG spectral power-based features and emotion classifiers. Results on 10 healthy participants indicate that the proposed KFEP feature performs better than other spectral power features, and IQK-SVM outperforms traditional SVM in terms of the EEG-ER accuracy. Our findings also show that the proposed EEG-ER scheme achieves the highest classification accuracies of valence (82.68%) and arousal (84.79%) among all testing methods.

    The floating arm trick and motor tricks in dystonia: could they share an underlying mechanism?

    The "floating arm trick" is a phenomenon of a slow, tonic movement reflex that seems uniquely confined in normal humans to the deltoid muscles of the shoulders. To try this demonstration on yourself, follow the instructions here, or just stand in a doorway and press the backs of your hands firmly against the door frame, elbows straight and tensing around the shoulders, for at least 30 seconds to contract the deltoid muscles, then release your hands to drop to your sides. As you do so, you will feel a slight upward pull of your hands as the deltoids try to pull your arms up again in a slow, almost dystonic contraction.

    This phenomenon, called the Kohnstamm phenomenon, was investigated by a team of scientists in Zurich, who found that in order to override the reflex, the body has to send an additional, opposing motor signal to the arm: voluntary control does not stop the original signal to the muscle, which continues alongside the voluntary cancelling. This was true even though, by transcranial magnetic stimulation, they were able to show the reflex movement comes from the motor cortex, not the spine, where the muscle stretch reflexes originate.

    People with dystonias such as "wryneck" or writer's cramp have similar involuntary tonic movements. Perhaps the ability of such patients to briefly hold back their involuntary movement is similar in nature. So, perhaps there may be a treatment beyond Botox for dystonia in the floating arm trick. The dystonias have been shown to sometimes respond to "sensory tricks." Could there be a motor trick out there for dystonia? (HT: Science Magazine's Sifter.)

    ------------------------------------

    ABSTRACT

    Proc Biol Sci. 2014 Nov 7;281(1794). pii: 20141139. doi: 10.1098/rspb.2014.1139.

    Using voluntary motor commands to inhibit involuntary arm movements.

    Ghosh A, Rothwell J, Haggard P.

    A hallmark of voluntary motor control is the ability to stop an ongoing movement. Is voluntary motor inhibition a general neural mechanism that can be focused on any movement, including involuntary movements, or is it mere termination of a positive voluntary motor command? The involuntary arm lift, or 'floating arm trick', is a distinctive long-lasting reflex of the deltoid muscle. We investigated how a voluntary motor network inhibits this form of involuntary motor control. Transcranial magnetic stimulation of the motor cortex during the floating arm trick produced a silent period in the reflexively contracting deltoid muscle, followed by a rebound of muscle activity. This pattern suggests a persistent generator of involuntary motor commands. Instructions to bring the arm down voluntarily reduced activity of deltoid muscle. When this voluntary effort was withdrawn, the involuntary arm lift resumed. Further, voluntary motor inhibition produced a strange illusion of physical resistance to bringing the arm down, as if ongoing involuntarily generated commands were located in a 'sensory blind-spot', inaccessible to conscious perception. Our results suggest that voluntary motor inhibition may be a specific neural function, distinct from absence of positive voluntary motor commands.

    You are what you eat?

    Well, maybe your microbiome is about 3% exactly what you ate, at least.