Humans have a nose for gender: Chemical cues influence perceptions of movement as more masculine or feminine

The human body produces chemical cues that communicate gender to members of the opposite sex, according to researchers who report their findings in the Cell Press journal Current Biology on May 1. Whiffs of the active steroid ingredients (androstadienone in males and estratetraenol in females) influence our perceptions of movement as being either more masculine or more feminine. The effect, which occurs completely without awareness, depends on both our biological sex and our sexual orientation.

“Our findings argue for the existence of human sex pheromones,” says Wen Zhou of the Chinese Academy of Sciences. “They show that the nose can sniff out gender from body secretions even when we don’t think we smell anything on the conscious level.”

Earlier studies showed that androstadienone, found in male semen and armpits, can promote positive mood in females but not in males. Estratetraenol, first identified in female urine, has similar effects on males. But it wasn’t clear whether those chemicals were truly acting as sexual cues.

In the new study, Zhou and her colleagues asked males and females, both heterosexual and homosexual, to watch what are known as point-light walkers (PLWs) move in place on a screen. PLWs consist of 15 dots representing the 12 major joints in the human body, plus the pelvis, thorax, and head. The task was to decide whether those digitally morphed gaits were more masculine or feminine.
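In such displays, the gender morph is typically implemented as a weighted blend between a male and a female walking template, with a single parameter sliding the gait from clearly masculine to clearly feminine. The Python sketch below is only a minimal illustration of that idea under assumed data; the array shapes, the random placeholder templates, and the function name morph_walker are assumptions for the example, not the authors' actual stimulus code.

```python
import numpy as np

rng = np.random.default_rng(0)
N_FRAMES, N_POINTS = 120, 15   # one gait cycle; 15 dots = 12 major joints + pelvis, thorax, head

# Placeholder motion templates: (frames, points, xy) joint positions per frame.
# In a real experiment these would come from motion-captured male and female walkers;
# random arrays are used here only to show the shapes involved.
male_walker = rng.random((N_FRAMES, N_POINTS, 2))
female_walker = rng.random((N_FRAMES, N_POINTS, 2))

def morph_walker(alpha):
    """Blend the templates: alpha = 0 gives the male gait, alpha = 1 the female gait.

    Intermediate values yield the ambiguous, digitally morphed gaits that
    observers judge as more masculine or more feminine.
    """
    return (1 - alpha) * male_walker + alpha * female_walker

ambiguous = morph_walker(0.5)   # a maximally ambiguous walker
print(ambiguous.shape)          # (120, 15, 2): frame, dot, (x, y)
```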

Individuals completed that task over a series of days while being exposed to androstadienone, estratetraenol, or a control solution, all of which smelled like cloves. The results revealed that smelling androstadienone systematically biased heterosexual females, but not males, toward perceiving walkers as more masculine. By contrast, the researchers report, smelling estratetraenol systematically biased heterosexual males, but not females, toward perceiving walkers as more feminine.

Interestingly, the researchers found that homosexual males responded to gender pheromones more like heterosexual females did. Bisexual or homosexual female responses to the same scents fell somewhere in between those of heterosexual males and females.

“When the visual gender cues were extremely ambiguous, smelling androstadienone versus estratetraenol produced about an eight percent change in gender perception,” Zhou says. That shift is highly statistically significant.

“The results provide the first direct evidence that the two human steroids communicate opposite gender information that is differentially effective to the two sex groups based on their sexual orientation,” the researchers write. “Moreover, they demonstrate that human visual gender perception draws on subconscious chemosensory biological cues, an effect that has been hitherto unsuspected.”

Source: Sciencedaily.com

Chemosensory Communication of Gender through Two Human Steroids in a Sexually Dimorphic Manner

Wen Zhou, Xiaoying Yang, Kepu Chen, Peng Cai, Sheng He, Yi Jiang

Highlights

  • Human steroid androstadienone conveys masculinity to straight women and gay men
  • Human steroid estratetraenol conveys femininity to straight men
  • The effects take place in the absence of awareness
  • Human gender perception draws on subconscious chemosensory biological cues

Summary

Recent studies have suggested the existence of human sex pheromones, with particular interest in two human steroids: androstadienone (androsta-4,16-dien-3-one) and estratetraenol (estra-1,3,5(10),16-tetraen-3-ol). The current study takes a critical step to test the qualification of the two steroids as sex pheromones by examining whether they communicate gender information in a sex-specific manner. By using dynamic point-light displays that portray the gaits of walkers whose gender is digitally morphed from male to female [1, 2], we show that smelling androstadienone systematically biases heterosexual females, but not males, toward perceiving the walkers as more masculine. By contrast, smelling estratetraenol systematically biases heterosexual males, but not females, toward perceiving the walkers as more feminine. Homosexual males exhibit a response pattern akin to that of heterosexual females, whereas bisexual or homosexual females fall in between heterosexual males and females. These effects are obtained even though the olfactory stimuli are not explicitly discriminable. The results provide the first direct evidence that the two human steroids communicate opposite gender information that is differentially effective to the two sex groups based on their sexual orientation. Moreover, they demonstrate that human visual gender perception draws on subconscious chemosensory biological cues, an effect that has been hitherto unsuspected.

Source: Cell.com

Evolution of mosquito preference for humans linked to an odorant receptor

Female mosquitoes are major vectors of human disease and the most dangerous are those that preferentially bite humans. A ‘domestic’ form of the mosquito Aedes aegypti has evolved to specialize in biting humans and is the main worldwide vector of dengue, yellow fever, and chikungunya viruses. The domestic form coexists with an ancestral, ‘forest’ form that prefers to bite non-human animals and is found along the coast of Kenya. We collected the two forms, established laboratory colonies, and document striking divergence in preference for human versus non-human animal odour. We further show that the evolution of preference for human odour in domestic mosquitoes is tightly linked to increases in the expression and ligand-sensitivity of the odorant receptor AaegOr4, which we found recognizes a compound present at high levels in human odour. Our results provide a rare example of a gene contributing to behavioural evolution and provide insight into how disease-vectoring mosquitoes came to specialize on humans.

Source: Nature.com

A juvenile mouse pheromone inhibits sexual behaviour through the vomeronasal system

Animals display a repertoire of different social behaviours. Appropriate behavioural responses depend on sensory input received during social interactions. In mice, social behaviour is driven by pheromones, chemical signals that encode information related to age, sex and physiological state [1]. However, although mice show different social behaviours towards adults, juveniles and neonates, sensory cues that enable specific recognition of juvenile mice are unknown. Here we describe a juvenile pheromone produced by young mice before puberty, termed exocrine-gland secreting peptide 22 (ESP22). ESP22 is secreted from the lacrimal gland and released into tears of 2- to 3-week-old mice. Upon detection, ESP22 activates high-affinity sensory neurons in the vomeronasal organ, and downstream limbic neurons in the medial amygdala. Recombinant ESP22, painted on mice, exerts a powerful inhibitory effect on adult male mating behaviour, which is abolished in knockout mice lacking TRPC2, a key signalling component of the vomeronasal organ [2, 3]. Furthermore, knockout of TRPC2 or loss of ESP22 production results in increased sexual behaviour of adult males towards juveniles, and sexual responses towards ESP22-deficient juveniles are suppressed by ESP22 painting. Thus, we describe a pheromone of sexually immature mice that controls an innate social behaviour, a response pathway through the accessory olfactory system and a new role for vomeronasal organ signalling in inhibiting sexual behaviour towards young. These findings provide a molecular framework for understanding how a sensory system can regulate behaviour.

Source: Nature.com

In search of self

Several years ago, Tom Wolfe wrote an essay about the rise of cognitive neuroscience, which he entitled “Sorry, your soul just died”. Although reports of the soul’s death may be premature, Wolfe was surely right in suggesting that advances in understanding how our brains work will pose an unprecedented challenge to our sense of who we are.

Last month, some of these challenges were explored at an unusual meeting hosted by the New York Academy of Sciences. “The Self: From Soul to Brain” brought together a range of experts in neuroscience, psychology, philosophy, theology and anthropology, to discuss the extent to which our sense of self can be explained in the language of neuroscience. A few years ago, a conference like this would have been considered a fringe event, but the questions it sought to address are now firmly within the mainstream of scientific inquiry.

Cognitive neuroscience is of course in its infancy, but there can be no disputing its relevance to understanding our sense of self; as philosopher Daniel Dennett memorably put it, a brain transplant is the only operation for which it’s better to be the donor. Identifying the brain as the place to look is a good start, but the field is still a very long way from being able to answer the deep questions of human existence. Perhaps the most significant general conclusion to be drawn from the current state of knowledge is that our intuitions are a very poor guide to how our minds actually work. Most of us share a strong intuition that our own self is an irreducible whole, that there must be some place in our brains where our perceptions and thoughts all come together and where our future actions are decided. Yet this view is now known to be incorrect—different mental processes are mediated by different brain regions, and there is nothing to suggest the existence of any central controller. The study of split-brain patients reinforces the point, by showing that perception and action can be mediated independently by the two hemispheres; if for example two different visual stimuli are presented to the two hemispheres, the right hemisphere will direct an appropriate behavioral response to the stimulus it sees, while the left hemisphere, which controls language but has no access to the stimulus seen by the right hemisphere, will confabulate a plausible alternative explanation for the behavior, based on the unrelated stimulus to which it does have access. Clearly, the idea of a single locus of perception and decision-making is untenable.

If there is no single brain structure that embodies the self, how can the field progress, and indeed what questions should it be asking? Although there is no consensus yet, some critical issues are emerging, including the neural basis of perception and decision-making, the origin of our sense of free will, the storage and retrieval of autobiographical memories, the origin of ethical precepts, and the basis of self-awareness, including the ability to explain one’s own actions. One particularly promising approach to this last question, highlighted by the condition of autism, may be to explore the basis of social cognition. Autism has been called ‘mind-blindness’, and is often characterized as a lack of insight into the minds of other people. But autistic people also show a lack of insight into their own minds, and it seems plausible that the two deficits are causally linked; understanding one’s own mind may also be an important source of knowledge about other people. It is possible to ask what brain regions are specifically activated by tasks that require such insights, and recent evidence suggests that self-knowledge and knowledge of others both involve at least some of the same regions, notably the anterior cingulate cortex.

These questions are of more than purely intellectual interest. As discussed by Martha Farah in a commentary on page 1123 of this issue, our growing ability to understand and manipulate our own brains is creating a wide range of new ethical dilemmas that will require practical answers. How society resolves these questions will depend in large part on the public’s conception of neuroscience and its ability to address questions about self and soul. There is an obvious parallel with the debate over stem cells and cloning. Among the general public, it is widely believed that the soul arises at conception, despite the obvious counter-arguments about twinning, nuclear transplantation and so forth. This discrepancy between what most biologists believe and what many lay people believe has led (particularly in the United States) to bitter public controversy, culminating in regulations for human stem cell research that most biologists regard as inappropriately restrictive.

It is understandable that people are drawn to a simple account in which the appearance of a new individual corresponds to the sharply defined event of fertilization. The alternative is to accept that the self is not an indivisible all-or-nothing entity, but that it instead emerges gradually, and that its origin must be explained in terms of the complexities of developmental neuroscience and psychology. That is not an easy message to convey even to highly educated people, so it is encouraging that an increasing number of prominent neuroscientists and psychologists are willing to make the attempt through popular books (including Joe LeDoux, the organizer of the NYAS conference, whose book The Synaptic Self is reviewed on page 1115). That our own identity can be dissected into its component parts, that these components can be studied separately, and that many of our intuitions about our own mental lives will prove wrong, these are revolutionary ideas that will require patient explanation, and we should not expect them to be accepted easily or quickly.

Source: Nature.com

Is prefrontal white matter enlargement a human evolutionary specialization?

Chet C Sherwood (1), Ralph L Holloway (2,3), Katerina Semendeferi (4) & Patrick R Hof (3,5)
  1. Department of Anthropology and School of Biomedical Sciences, Kent State University, Kent, Ohio 44242, USA.
  2. Department of Anthropology, Columbia University, New York, New York 10025, USA.
  3. New York Consortium in Evolutionary Primatology, New York, New York, USA.
  4. Department of Anthropology, University of California, San Diego, La Jolla, California 92093, USA.
  5. Department of Neuroscience, Mount Sinai School of Medicine, New York, New York 10029, USA.

Correspondence to: Chet C Sherwood (1), e-mail: csherwoo@kent.edu

Using a comparative volumetric analysis of MRI scans of brains from 11 primate species (including monkeys, apes and humans), Schoenemann and colleagues [1] claim that the prefrontal white matter of humans is enlarged compared to that of other primates. This would suggest that the evolution of human cognitive capacities mediated by prefrontal circuitry relies on enhanced interconnectivity.

Source: Nature.com

Deceiving the law

Lie-detection tests have not been scientifically proven to reliably detect deception at an individual level, yet they are being marketed by several companies and have even been admitted as evidence in an Indian court. This calls for a critical appraisal of these technologies and regulatory measures to prevent misuse.

A court near Mumbai, India, recently became the first to admit a neuroscience-based lie-detection technique as evidence. Although the ability of this technology to detect lies accurately is dubious, the court used evidence from an electroencephalography-based technique to convict a suspect of murder. The judge cited this test, administered to the suspect by a state forensics laboratory, as proof that the suspect’s brain held ‘experiential knowledge’ about the crime that only the killer could have. The verdict: a life sentence in prison.

Although such ‘evidence’ is currently not admissible in US or European courts, several companies are already developing and marketing the use of neuroscience-based lie-detection technology. The classic polygraph has long been discredited as a reliable biomarker of lying and is almost universally inadmissible in court. There is little evidence to indicate that the newer lie-detection technologies, whether based on electroencephalographic (EEG) techniques or functional magnetic resonance imaging (fMRI), work well enough to detect deception accurately on an individual level with an error rate that is low enough to be anywhere near acceptable in court. The case in India should be a call for action for an objective assessment of these technologies and a serious appraisal of whether their current state of efficacy and safety requires tighter regulation of their use.

EEG-based techniques noninvasively measure electrical potentials over the scalp. The theory behind this is that the brain processes familiar information differently from new information. Proponents of their use for lie-detection rely on the assumption that the EEG patterns of guilty suspects should reveal that a crime scene is familiar to them. There are several reasons why this method cannot be used to detect deception at an individual level. First, many experts would agree that there is no established marker for ‘familiarity’. More critically, however, experts also agree that this technique produces false positives, with some putting the error rate at 15–30%. Moreover, single-trial analysis of EEGs is almost impossible given the low signal-to-noise ratio; one would need to average many such trials to get any kind of result.
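To see why averaging matters, consider the toy simulation below (Python). The numbers are assumptions chosen only for illustration, not values from any of the studies discussed: a weak, fixed response is buried in background noise far larger than itself, so a single trial looks like noise, while averaging many trials shrinks the noise by roughly the square root of the trial count and lets the response emerge.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy numbers: a 5-microvolt "familiarity" response peaking 400 ms after
# the probe, buried in 30 microvolts of background EEG noise.
fs = 250                                   # sampling rate in Hz
t = np.arange(0, 1.0, 1 / fs)              # one-second epoch
response = 5e-6 * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
noise_sd = 30e-6

def simulate_epoch():
    """One trial: the fixed response plus independent background noise."""
    return response + rng.normal(0.0, noise_sd, size=t.shape)

def response_to_noise(epoch):
    """Amplitude at the expected latency (0.4 s) relative to the pre-response noise level."""
    return epoch[int(0.4 * fs)] / epoch[: int(0.2 * fs)].std()

single = simulate_epoch()
averaged = np.mean([simulate_epoch() for _ in range(1000)], axis=0)

print(f"single trial:        response/noise = {response_to_noise(single):.2f}")    # indistinguishable from noise
print(f"1000-trial average:  response/noise = {response_to_noise(averaged):.2f}")  # the response emerges clearly
```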

These manifold problems do not deter the proponents of this technique. The use of EEG-based technologies to test ‘guilty knowledge’ was first championed by a US scientist, Larry Farwell, who founded a company called Brain Fingerprinting Laboratories (http://www.brainwavescience.com/) to market this idea. Although Farwell’s claims are disputed by the scientific community, an Iowa district court admitted ‘brain fingerprinting’ evidence in a court ruling. The district court rejected the appeal despite this testimony, and the Iowa Supreme Court, reconsidering the appeal, wrote that it did not give the brain fingerprinting data any consideration [1]. The company’s website, however, continues to hype the technique as having an error rate close to zero and disturbingly claims that the test can even help to identify terrorists.

More recent attempts to develop neuroscience-based lie-detectors focus on the use of fMRI. Two companies in the US now market these lie-detection techniques: No Lie MRI (http://www.noliemri.com/) and CEPHOS (http://www.cephoscorp.com/). They are based on scientific work showing that, in a group of people in a research setting, it is possible to tease out brain activity patterns that correspond to deception. fMRI relies on measuring the hemodynamic response to increased neural activity. As blood flow is affected by numerous other factors, including an individual’s fitness, age and medications, many variables could affect the results. There are also other technical issues that make the signal-to-noise ratio so low that individual variability could swamp any real results.

More fundamentally, there are no hard data to show that we can actually detect lies (particularly at the level of individual subjects) with great accuracy. Reports of finding brain patterns of activation corresponding to ‘deception’ almost always use subjects (often university students) who are told to lie about something (usually a relatively unimportant matter). Equating the lies told in such an artificial setting to the kinds of lies people tell in reality is pure fantasy at this point. Moreover, it is not obvious how an experiment could be designed that would take into account all these major confounds.

Given these inherent limitations, it is hard to imagine a scenario in which these technologies could ever be accurate enough to be used in critical situations such as convictions in murder or terrorism trials. Nonetheless, No Lie MRI claims that it is working toward having its tests allowed as evidence in US courts. CEPHOS claims that its technology probably meets the minimum requirements for admissibility in court.

Many neuroscientists have watched with some amusement claims of mind reading in the arena of neuromarketing or claims of being able to divine voters’ preferences. The stakes with lie-detection are much higher, as erroneous results could have devastating consequences for individuals; we have an obligation to speak up and flag the many caveats associated with these technologies. Stanford law professor Hank Greely and neuroethicist Judy Illes have called for much tighter regulation of lie-detection technology, suggesting a ban on nonresearch uses of lie-detection, unless the method is proved to be safe and effective to the satisfaction of a regulatory agency and fully vetted by the scientific establishment [1]. Although there are many issues to consider when formulating such regulation, more discussion of such options is very welcome.

Source: Nature.com