Feed aggregator

Facebook Syndication Error

Diabolus Rex - 12 min 53 sec ago
This feed URL is no longer valid. Visit this page to find the new URL, if you have access: https://www.facebook.com/profile.php?id=157259387630213

Sirius Business: Parallel Man & Dinoworld

Disinformation - 1 hour 45 min ago

At this point, my research into Siriusian influence on Earth doesn’t pay the bills, so I need to supplement this work with other writing. One of these gigs is reviewing genre media for a variety of outlets. Usually this involves films or books, but recently a comic book landed on my assignment blotter. The comic book is Parallel Man, published by Future Dude, and concerns a dimension-jumping revolutionary out to save not one but many Earths spread throughout branching dimensions.

Parallel Man #1 Cover

The comic itself features great art by Christopher Jones and Zac Atkinson; the book was written by publisher Jeffrey Morris and Fredrick Haugen. I’m not writing a review in this column, though. What struck me about this story was that a significant portion of it takes place on a very well-imagined planet inhabited by evolved reptilian humanoids. Interestingly, the creators take this world beyond the usual stock details of past pulp stories and the Sleestaks of Land of the Lost. I decided to take some time out of my research into Siriusian history to look at this exciting new development. Mr. Morris was kind enough to agree to an interview about the origins of the comic book.

Q: Okay, Mr. Morris, I read the first issue of Parallel Man and the background blogs by Rob Callahan. I was struck by how much thought went into the world building of Dinoworld. You went into more detail than you might see in classic pulp adventures. There’s a lot of stuff going on in just the first issue, but I want to focus on the Dino Riders. Do you have a name for the Saurians?

Morris: First off, thanks for providing a forum for the truth and the opportunity to connect with you and your readers. It’s great to have a place to finally get this off my chest. From what I’ve been told, the actual self-defined name of the Saurians, as you call them, is unpronounceable by humans.

Q: I should get some legal stuff out of the way before we go too deep into this. Just a couple of quick questions to put this into context. First, are you in any way affiliated with a disclosure campaign orchestrated by any government agency?

Morris: I am not affiliated with the government in any way. I have, however, felt ‘compelled’ to share the idea of humanoid dinosaurs through my stories.

Q: Are you in any way affiliated with a misinformation campaign by a government agency?

Morris: Absolutely not. I feel the truth is important in any situation, regardless of any possible danger it may pose to myself and my organization.

Q: Have you knowingly communicated with either Reptilians (Siriusian Saurians) or Reptoids (Terran Saurians)?

Morris: I would not say that I have been in a conscious fashion. Although I felt deeply ‘inspired’ in my imagination to create the DinoWorld. So inspired that it has, at times, made me question the possibility of outside influence.

Q: Have you unknowingly communicated with either Reptilians or Reptoids?

Morris: That depends. I have interacted with odd individuals recently, mostly at comic book and science fiction conventions. Does that count?

Q: Great. We appreciate your candor. In your story, Dinoworld exists on an alternate Earth where there was no extinction event. I understand you’re working from the official story of extinction, even though evidence suggests real Saurians continued to evolve long after the historical event. Anyway, I think Rob Callahan covered the technological story very well, so I won’t repeat that. The Dino Riders are immediately hostile to both sets of “visitors” in the book, so I was wondering: what is the psychological or moral state of the Dinoworld Saurians? Are they “monsters”?

Morris: I think they are more misunderstood by the Ascendancy soldiers as opposed to being actual monsters. I see them as warriors who are akin to many tribal cultures. In a science fiction context, think of them as ancient Klingons. The fact that the Ascendancy technology is so foreign to them leads to immediate attack. Shoot first, ask questions later is their philosophy. To me it makes sense. I would find a bunch of HoverCycles skipping into a herd of my riding animals to be a threat. Wouldn’t you?

Parallel Man #1: Art by Christopher Jones & Zac Atkinson

Q: Are they a planetary or tribal culture?

Morris: There are many millions of them, spread across their world in tribes. They do have wars.

Q: The Dino Riders have a distinct look, very different from drawings based on Reptoid encounters and other folk illustrations. Do you feel your design is closer to what actual Saurians look like?

Morris: Simply put… Yes.

Parallel Man #1: Art by Christopher Jones & Zac Atkinson

Q: What happened to the mammals on DinoWorld?

Morris: That didn’t work out so well as the years progressed. I hear they tasted good for a while, though.

Q: I love how the bridled dinosaurs have only vestigial stumps. Is this evolution or part of their domestication?

Morris: I would say domestication. Selective breeding definitely played a role.

Q: Why don’t they have feathers?

Morris: I was told that…

(This answer was unreadable due to file corruption. When asked for further clarification, Mr. Morris became uncharacteristically silent on the topic. I can only speculate that this subject is rather touchy to whatever “inspires” the writing.)

Q: Are any of the folks on “our Earth” going to be revealed as Reptoid shapeshifters?

Morris: No. But we are not finished with the Dinoworld yet. We are planning an entire special issue set there for next year. The DinoMan and his steed that were ‘shot’ by Mackenzie Cartwright are not dead. You will see them again in their own comic book adventure next year.

Q: Any plans to make action figures?

Morris: Not yet. But when there are, you will be the first to know!

 

Parallel Man is an interesting book in its own right, as it incorporates some very contemporary concepts into a classic science fiction adventure format. I think that Dinoworld is the most exciting development in recent depictions of Saurian lifeforms, and I’ll follow it closely as it goes on. Of course, science fiction is full of reptilian aliens, but this story represents one of the few opportunities we’ve had to look at an earlier stage in the development of their world. Dinoworld is an alternate Earth, but I think it points to some possibilities in the actual development of Reptoids and Reptilians.

I think it is interesting that Parallel Man ignores the somewhat controversial issue of Reptoid shapeshifting. This is, of course, one of the more radical aspects of Saurian theory, and I like that they have simplified things down to a basic level of alternative evolution. The millions of years before the rise of mammals are very important to the history of our planet, and I think Dinoworld offers very well considered speculation about what this era may have looked like.

If you’re interested in following Dinoworld and Parallel Man, copies are available for pre-order at your local comic shop or online distributor.

***

This doesn’t warrant an entire article, but as an interesting sidebar I’d like to show a couple of pictures I took in the Wisconsin Dells. We had a couple of free passes to Chula Vista Water Park, so we spent a few hours there. During my first trip down the lazy river, I noticed this unmistakable image of an ancient Reptoid. There is also a gigantic snake wrapped around one of their larger fake trees. Unlike Denver International Airport, I don’t see any particular larger meaning behind this artifice. However, it does illustrate how embedded Saurians are within our popular understanding of ancient worlds.

Reptoids of Chula Vista

Reptoids of Chula Vista (detail)

 

 

The post Sirius Business: Parallel Man & Dinoworld appeared first on disinformation.

Do Gut Bacteria Rule Our Minds? In an Ecosystem within Us, Microbes Evolved to Sway Food Choices

Disinformation - 2 hours 44 min ago

This image illustrates the relationship between gut bacteria and unhealthy eating. Credit: Courtesy of UC San Francisco

via ScienceDaily:

It sounds like science fiction, but it seems that bacteria within us — which outnumber our own cells about 100-fold — may very well be affecting both our cravings and moods to get us to eat what they want, and often are driving us toward obesity.

In an article published this week in the journal BioEssays, researchers from UC San Francisco, Arizona State University and University of New Mexico concluded from a review of the recent scientific literature that microbes influence human eating behavior and dietary choices to favor consumption of the particular nutrients they grow best on, rather than simply passively living off whatever nutrients we choose to send their way.

Bacterial species vary in the nutrients they need. Some prefer fat, and others sugar, for instance. But they not only vie with each other for food and to retain a niche within their ecosystem — our digestive tracts — they also often have different aims than we do when it comes to our own actions, according to senior author Athena Aktipis, PhD, co-founder of the Center for Evolution and Cancer with the Helen Diller Family Comprehensive Cancer Center at UCSF.

While it is unclear exactly how this occurs, the authors believe this diverse community of microbes, collectively known as the gut microbiome, may influence our decisions by releasing signaling molecules into our gut. Because the gut is linked to the immune system, the endocrine system and the nervous system, those signals could influence our physiologic and behavioral responses.

“Bacteria within the gut are manipulative,” said Carlo Maley, PhD, director of the UCSF Center for Evolution and Cancer and corresponding author on the paper. “There is a diversity of interests represented in the microbiome, some aligned with our own dietary goals, and others not.”

Fortunately, it’s a two-way street. We can influence the compatibility of these microscopic, single-celled houseguests by deliberately altering what we ingest, Maley said, with measurable changes in the microbiome within 24 hours of a diet change.

“Our diets have a huge impact on microbial populations in the gut,” Maley said. “It’s a whole ecosystem, and it’s evolving on the time scale of minutes.”

There are even specialized seaweed-digesting bacteria found in the guts of people in Japan, where seaweed is a popular part of the diet.

Research suggests that gut bacteria may be affecting our eating decisions in part by acting through the vagus nerve, which connects 100 million nerve cells from the digestive tract to the base of the brain.

“Microbes have the capacity to manipulate behavior and mood through altering the neural signals in the vagus nerve, changing taste receptors, producing toxins to make us feel bad, and releasing chemical rewards to make us feel good,” said Aktipis, who is currently in the Arizona State University Department of Psychology.

In mice, certain strains of bacteria increase anxious behavior. In humans, one clinical trial found that drinking a probiotic containing Lactobacillus casei improved mood in those who were feeling the lowest.

Maley, Aktipis and first author Joe Alcock, MD, from the Department of Emergency Medicine at the University of New Mexico, proposed further research to test the sway microbes hold over us. For example, would transplantation into the gut of the bacteria requiring a nutrient from seaweed lead the human host to eat more seaweed?

The speed with which the microbiome can change may be encouraging to those who seek to improve health by altering microbial populations. This may be accomplished through food and supplement choices, by ingesting specific bacterial species in the form of probiotics, or by killing targeted species with antibiotics. Optimizing the balance of power among bacterial species in our gut might allow us to lead less obese and healthier lives, according to the authors.

“Because microbiota are easily manipulatable by prebiotics, probiotics, antibiotics, fecal transplants, and dietary changes, altering our microbiota offers a tractable approach to otherwise intractable problems of obesity and unhealthy eating,” the authors wrote.

The authors met and first discussed the ideas in the BioEssays paper at a summer school conference on evolutionary medicine two years ago. Aktipis, who is an evolutionary biologist and a psychologist, was drawn to the opportunity to investigate the complex interaction of the different fitness interests of microbes and their hosts and how those play out in our daily lives. Maley, a computer scientist and evolutionary biologist, had established a career studying how tumor cells arise from normal cells and evolve over time through natural selection within the body as cancer progresses.

In fact, the evolution of tumors and of bacterial communities are linked, points out Aktipis, who said some of the bacteria that normally live within us cause stomach cancer and perhaps other cancers.

“Targeting the microbiome could open up possibilities for preventing a variety of disease from obesity and diabetes to cancers of the gastro-intestinal tract. We are only beginning to scratch the surface of the importance of the microbiome for human health,” she said.

The co-authors’ BioEssays study was funded by the National Institutes of Health, the American Cancer Society, the Bonnie D. Addario Lung Cancer Foundation and the Institute for Advanced Study, in Berlin.

What I’m wondering is if the Illuminati’s gut bacteria control their minds.

The post Do Gut Bacteria Rule Our Minds? In an Ecosystem within Us, Microbes Evolved to Sway Food Choices appeared first on disinformation.

Darpa signs on for Pentagon’s “Space Plane”

Disinformation - 3 hours 43 min ago

DARPA is about to make a reusable spacecraft for the Pentagon. “It would be a spacecraft that most resembles what people see in the movies,” former Air Force command officer Brian Weeden said.

Ten launches in as many days, fully autonomous, carrying a mid-sized satellite payload.

What do you think the payload will be for the Pentagon? Something to win the hearts and minds with? Democracy or Freedom? Nikes? Can’t wait to find out what you think.

via The Daily Caller:

Aerospace and defense contractor Northrop Grumman recently unveiled its concept for the Pentagon’s new space plane, the XS-1 — an unmanned drone-shuttle capable of carrying small and medium-sized satellites into orbit cheaply and autonomously.

“It would be a spacecraft that most resembles what people see in the movies,” former Air Force Space Command Officer Brian Weeden told War is Boring about the concept craft, which is being headed up by the Defense Advanced Research Projects Agency. Northrop is competing with Boeing and Masten Space Systems for the contract to build the final product.

“If we could pull it off, it would enable much cheaper and faster access to space,” Weeden said. “Something that many people see as the key to opening up space development.”

While a typical single-use rocket-launched satellite takes months to plan at a typical cost of more than $50 million per launch, DARPA wants the XS-1 to be capable of deploying within hours, and able to execute 10 launches in just as many days at a cost of $5 million each, according to the report.

Designing such a craft carries a host of engineering obstacles, including adding permanent systems like landing equipment, fuel tanks and boosters. The latter two are typically shed during the course of a multi-stage rocket launch.

More weight also means more necessary power to launch the vehicle, which in turn means more fuel — bringing us back to weight again.

Reusable spacecraft like the space shuttle also require extraordinary maintenance as a result of the extreme heat and vibration the craft sustains during launch and reentry.

Continue reading.

The post Darpa signs on for Pentagon’s “Space Plane” appeared first on disinformation.

British Jihadis to Undergo Forced De-Radicalization

Disinformation - 4 hours 43 min ago

Although the British government’s motives may be honorable, deprogramming anyone sounds scarily close to brainwashing. Breitbart has the story:

Britain’s governing coalition has agreed that potential Jihadis who return to Britain from abroad should be forced to undergo a de-radicalisation programme when they arrive back in the UK.

Photo: Menendj (CC)

The proposal is one of a series of measures agreed by the Conservative Party and the Liberal Democrats who form Britain’s coalition government. Also proposed is a new law to force airlines to share full passenger lists with police and security agencies, and plans to temporarily suspend the passports of UK citizens fighting for ISIS, preventing them from coming home.

According to the Sun, talks over the measures were “tense” and going on well into the night last night, ahead of the Prime Minister’s emergency statement in the House of Commons this afternoon.

The proposals come after the UK’s official terror threat level was raised last week to severe, meaning that an attack is “highly likely”. Security sources have also said that there is a risk that ISIS sympathisers may perform a “marauding terrorist firearms attack” (MTFA), which involves opening fire in a crowded place with automatic weapons.

Although the Prime Minister is determined to plug any gaps in the UK’s defences, Deputy Prime Minister Nick Clegg is under pressure from senior members of his own party to block any further clampdown. One former leader, Paddy Ashdown, said that it was “the job of politicians to act but as jealous protectors of our liberties,” while another, Sir Menzies Campbell, said that stripping UK jihadis of their citizenship could “constitute illegality”…

[continues at Breitbart]

The post British Jihadis to Undergo Forced De-Radicalization appeared first on disinformation.

The Science of Laziness

Disinformation - 5 hours 43 min ago

The ASAPScience video about coffee that I shared the other day received a lot of positive attention, so I dug around for another. I came across “The Science of Laziness.” Enjoy.

via the YouTube page:

Why are some people so lazy? Is there a couch-potato gene?

The post The Science of Laziness appeared first on disinformation.

Neanderthals Created Cave Art

Disinformation - 6 hours 44 min ago

A new discovery at Gorham’s Cave in Gibraltar suggests that Neanderthals were, contrary to their poor reputation, cave artists (and created the hashtag). Report via Chicago Tribune:

Belying their reputation as the dumb cousins of early modern humans, Neanderthals created cave art, an activity regarded as a major cognitive step in the evolution of humankind, scientists reported Monday in a paper describing the first discovery of artwork by this extinct species.

Gorham’s Cave. Photo by Gibmetal77 (CC)

The discovery is “a major contribution to the redefinition of our perception of Neanderthal culture,” said prehistorian William Rendu of the French National Center for Scientific Research, who was not involved in the work. “It is a new and even stronger evidence of the Neanderthal capacity for developing complex symbolic thought” and “abstract expression,” abilities long believed exclusive to early modern humans.

In recent years researchers have discovered that Neanderthals buried their dead, adorned themselves with black and red pigments, wore shell and feather jewelry and cared for the elderly and infirm, all evidence of complex thought. But no unambiguously Neanderthal art was ever found.

The new study, published in Proceedings of the National Academy of Sciences, could change that.

Researchers from 11 European institutions reported that deep in Gorham’s Cave in Gibraltar, overlooking the Mediterranean Sea, they found carvings that resemble nothing so much as a rococo Twitter hashtag: eight partially crisscrossing lines with three shorter lines on the right and two on the left, incised on a shelf of bedrock jutting out from the wall about 16 inches above the cave floor.

The engraving is covered by undisturbed sediment that contains 294 previously discovered stone tools. The tools are in a style long known as the signature of Neanderthals, who had reached Europe from Africa some 300,000 years ago.

Standard techniques had dated the tools at 39,000 years old, about when Neanderthals went extinct, meaning the art below it must be older.

Modern humans, who painted the famous caves at Lascaux in France and Altamira in Spain, by then had not reached the region where Gorham’s Cave is located…

[continues at the Chicago Tribune]

The post Neanderthals Created Cave Art appeared first on disinformation.

Ignore the IQ test: your level of intelligence is not fixed for life

Disinformation - 7 hours 43 min ago

“intelligence (‘cognition’) is a vector” by Gisela Giardino via Flickr.

This article was originally published on The Conversation.
Read the original article.

By Bryan Roche, National University of Ireland Maynooth

We’re getting more stupid. That’s one point made in a recent article in the New Scientist, reporting on a gradual decline in IQs in developed countries such as the UK, Australia and the Netherlands. Such research feeds into a long-held fascination with testing human intelligence. Yet such debates are too focused on IQ as a life-long trait that can’t be changed. Other research is beginning to show the opposite.

The concept of testing intelligence was first successfully devised by French psychologists in the early 1900s to help describe differences in how well and quickly children learn at school. But it is now frequently used to explain that difference – that we all have a fixed and inherent level of intelligence that limits how fast we can learn.

Defined loosely, intelligence refers to our ability to learn quickly and adapt to new situations. IQ tests measure our vocabulary, our ability to problem-solve, reason logically and so on.

But what many people fail to understand is that if IQ tests measured only our skills at these particular tasks, no one would be interested in our score. The score is interesting only because it is thought to be fixed for life.

Who is getting smarter?

Standardised IQ tests used by clinical psychologists for diagnostic purposes, such as the Wechsler scale, are designed in such a way that it is not easy to prepare for them. The contents are kept surprisingly secret and they are changed regularly. The score given to an individual is a relative one, adjusted based on the performance of people of the same age.

But even as we become better educated and more skillful at the types of tasks measured on IQ tests (a phenomenon known as the “Flynn effect”, after James Flynn who first noted it), our IQs stay pretty much the same. This is because the IQ scoring system takes into account the amount of improvement expected over time, and then discounts it. This type of score is called a “standardised score” – it hides your true score and merely represents your standing in relation to your peers, who have also been getting smarter at about the same rate.

This apparent stability in IQ scores makes intelligence look relatively constant, whereas in fact we are all becoming more intelligent across and within our lifetimes. The IQ test and the IQ scoring system are constantly adjusted to ensure that the average IQ remains at 100, despite a well-noted increase in intellectual ability worldwide.
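The re-norming described above can be shown with a toy calculation. This is purely illustrative: the raw scores and cohort statistics below are invented for the example, not actual test norms.

```python
def standardised_iq(raw_score, cohort_mean, cohort_sd):
    """Convert a raw test score to an IQ-style standardised score
    (mean 100, SD 15) relative to the test-taker's own cohort."""
    z = (raw_score - cohort_mean) / cohort_sd
    return 100 + 15 * z

# Suppose everyone's raw performance rises over a generation (the
# Flynn effect), and the test is re-normed against the new cohort:
iq_1980 = standardised_iq(raw_score=40, cohort_mean=40, cohort_sd=8)
iq_2010 = standardised_iq(raw_score=48, cohort_mean=48, cohort_sd=8)

# Both scores come out at exactly 100.0: because each cohort is
# measured against its own average, the re-norming absorbs the
# absolute improvement and the gain disappears from the score.
print(iq_1980, iq_2010)
```

Only a score relative to one's own cohort survives the standardisation, which is why rising raw performance leaves the published average pinned at 100.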

Politics of IQ testing

Psychologists are aware that intelligence scores are somewhat subject to cultural influence and social opportunity, but some have still insisted that we cannot raise our IQ by much. This is because our general intelligence (or “g”) is a fixed trait that is insensitive to education, “brain training”, diet, or other interventions. In other words, they say, we are all biologically limited in our intelligence levels.

The idea that IQ is fixed for life is built into the questionable politics of IQ testing. The most serious consequence of this is the use of IQ tests to blame educational difficulties on students rather than on teaching systems.

But it is the job of psychologists to find better ways to teach, not to find better ways to justify the poor performance of students. This particular use of IQ tests has caused one leader in the field of intelligence research, Robert Sternberg, to refer to IQ testing as “negative psychology” in a 2008 article.

All is not lost

Those who hang dearly onto the notion that IQ is fixed for life have managed to ignore decades of published research in the field of applied behaviour analysis. This has reported very large IQ gains in children with autism who have been exposed to early intensive behavioural interventions once they have been diagnosed with learning difficulties.

Another 2009 Norwegian study examined the effects of an increase in the duration of compulsory schooling in Norway in the 1960s which lengthened the time in education for Norwegians by two years. The researchers used records of cognitive ability taken by the military to calculate the IQ of each individual in the study. They found that IQ had increased by 3.7 points for every extra year of education received.

More recent studies by John Jonides and his colleagues at the University of Michigan reported improvements in objective measures of intelligence for those who practised a brain-training task called the “n-back task” – a kind of computerised memory test.

My own research, in the field of relational frame theory, has shown that understanding relations between words, such as “more than”, “less than” or “opposite” is crucial for our intellectual development. One recent pilot study showed that we can considerably raise standard IQ scores by training children in relational language skills tasks over a period of months. Again, this finding challenges the idea that intelligence is fixed for life.

So it’s about time we reconsidered our ideas about the nature of intelligence as a trait that cannot be changed. Undoubtedly, there may be some limits to the development of our intellectual skills. But in the short term, the socially responsible thing to do is not to feel bound by those limits, but to help every child work towards and even exceed them.

Bryan Roche is a director of Relational Frame Training ltd. trading as raiseyouriq.

 

The post Ignore the IQ test: your level of intelligence is not fixed for life appeared first on disinformation.

Filmmaker Lars von Trier’s Newest Project Will Be a TV Show

Disinformation - 8 hours 43 min ago

Director Lars von Trier leaving the press conference of the film “Nymphomaniac” at the 2014 Berlin Film Festival. By Siebbi via Wikimedia Commons

Unfortunately, not much has been revealed, but I’m excited nonetheless. Did anyone ever watch The Kingdom (Riget)?

via IndieWire:

It could be argued that all of Lars von Trier‘s efforts are “without precedent,” singular visions from the mind of a filmmaker that is truly like no other. Because really, who else would’ve put together a five-and-a-half hour epic about a woman addicted to sex that starts with her being found beaten in an alley? And even as von Trier closes the book on “Nymphomaniac,” with director’s cuts of both volumes screening for the first time together at the Venice Film Festival, he’s got another big project on the way.

The director — who vowed never to speak to the press following his Nazi comment controversy at the Cannes Film Festival — appeared via video link at Venice over the weekend during the press conference for “Nymphomaniac Vol II — Director’s Cut” (check out three NSFW new clips here) and revealed his next project. Sort of. He didn’t say anything (though at one point Stellan Skarsgard held up a sign that said “lifeline” and called von Trier on his cellphone to help answer a question, relaying what the director said) but the filmmaker’s producer Louise Vesth did reveal what was next from the provocateur.

“Now, when he’s not able to speak, so he cannot say that it’s not true, I’m happy to announce that the next Lars von Trier project will be a TV series in the English language,” she said. “He has a really, really good idea which I cannot tell more about right now. He wants a huge cast and from what I heard I’m sure that it will be something that you have never seen before and you will definitely never see it again.”

The series will be called “The House That Jack Built,” and producer Peter Aalbæk Jensen adds that the show will be “without precedent” and warned everyone that, “You better hold your breath.” The plan is to bring together a big international cast, and of course, plots details aren’t being revealed but that may also be because the thing is still in early stages — Lars von Trier is writing it at the moment, with filming only to begin in 2016. So you’ll be waiting a while for this.

Continue reading.

The post Filmmaker Lars von Trier’s Newest Project Will Be a TV Show appeared first on disinformation.

When Whites Just Don’t Get It

Disinformation - 20 hours 13 min ago

Some shocking facts support Nicholas Kristof’s op-ed at the New York Times, such as “the net worth of the average black household in the United States is $6,314, compared with $110,500 for the average white household”:

Many white Americans say they are fed up with the coverage of the shooting of Michael Brown in Ferguson, Mo. A plurality of whites in a recent Pew survey said that the issue of race is getting more attention than it deserves.

Bill O’Reilly of Fox News reflected that weariness, saying: “All you hear is grievance, grievance, grievance, money, money, money.”

Indeed, a 2011 study by scholars at Harvard and Tufts found that whites, on average, believed that anti-white racism was a bigger problem than anti-black racism.

Yes, you read that right!

So let me push back at what I see as smug white delusion. Here are a few reasons race relations deserve more attention, not less:

• The net worth of the average black household in the United States is $6,314, compared with $110,500 for the average white household, according to 2011 census data. The gap has worsened in the last decade, and the United States now has a greater wealth gap by race than South Africa did during apartheid. (Whites in America on average own almost 18 times as much as blacks; in South Africa in 1970, the ratio was about 15 times.)

• The black-white income gap is roughly 40 percent greater today than it was in 1967.

• A black boy born today in the United States has a life expectancy five years shorter than that of a white boy.

• Black students are significantly less likely to attend schools offering advanced math and science courses than white students. They are three times as likely to be suspended and expelled, setting them up for educational failure…

[continues at the New York Times]

The post When Whites Just Don’t Get It appeared first on disinformation.

Race is a Social Concept, Not a Scientific One

Disinformation - Mon, 09/01/2014 - 17:00

“silent diversity” by DryHundredFear via Flickr.

via Live Science:

Beyond the Ferguson, Mo., media reports on the “racial divide,” the facts require some correction: Despite notions to the contrary, there is only one human race. Our single race is independent of geographic origin, ethnicity, culture, color of skin or shape of eyes — we all share a single phenotype, the same or similar observable anatomical features and behavior.

Science highlights these similarities in our embryonic development, physiology (our organ-based systems), biochemistry (our metabolites and reactions), and more recently, genomics (our genetic makeup). As a molecular biologist, this last one is indeed the most important to me — data show that the DNA of any two human beings is 99.9 percent identical, and we all share the same set of genes, scientifically validating the existence of a single biological human race and one origin for all human beings. In short, we are all brothers and sisters.

Biologically speaking, one clear example is that most diseases afflict all of us — diseases like cancers and cardiovascular and neurological disorders, as well as viral, microbial and parasitic infections. Obviously, there are differences in how individual humans respond to various diseases or infections; some never suffer from cancer and may be immune to assorted infections. This may be due to factors such as diet, exercise, overall health or environmental conditions. However, the fact that a human population, irrespective of geography or ethnicity, is susceptible to the same diseases, coupled with the existence of multiple pandemics, is a clear indication of how identical we are.

Genetically speaking, studies have shown that there is much greater genetic variation within a given human population (e.g., Africans, Caucasians, or Asians) than between populations (Africans vs. Caucasians), indicating that human variation cannot be subdivided into discrete races.

It is history, not science, that reveals how the concept of different human “races” arose, how the term has become widely misused, and how it continues to pervade our planet. In fact, the word race has come to symbolize the division of humanity into segments, divisions that often lead to conflicts. Over centuries, people have used the word to divide us into black, white, yellow, red, and other distinctions in order to fulfill selfish goals and objectives. Whether those goals were to subjugate various groups of humans, deem them inferior or simply discriminate against them, the reality is that billions of people have been directly affected as a result of the misuse of the word race.

Continue reading.

The post Race is a Social Concept, Not a Scientific One appeared first on disinformation.

Are we heading for technological unemployment? An Argument

Disinformation - Mon, 09/01/2014 - 13:45

Altar of technology by zeitfaenger.at via Flickr.

This piece was first published on Philosophical Disquisitions. We’re all familiar with the headlines by now: “Robots are going to steal our jobs”, “Automation will lead to joblessness”, and “AI will replace human labour”. It seems like more and more people are concerned about the possible impact of advanced technology on employment patterns. Last month, Lawrence Summers worried about it in the Wall Street Journal but thought maybe the government could solve the problem. Soon after, Vivek Wadhwa worried about it in the Washington Post, arguing that there was nothing the government could do. Over on the New York Times, Paul Krugman has been worrying about it for years.

But is this really something we should worry about? To answer that, we need to distinguish two related questions:

The Factual Question: Will advances in technology actually lead to technological unemployment?

The Value Question: Would long-term technological unemployment be a bad thing (for us as individuals, for society etc)?


I think the answer to the value question is a complex one. There are certainly concerns one could have about technological unemployment — particularly its tendency to exacerbate social inequality — but there are also potential boons — freedom from routine drudge work, more leisure time and so on. It would be worth pursuing these issues further. Nevertheless, in this post I want to set the value question to one side. This is because the answer to that question is going to depend on the answer to the factual question: there is no point worrying or celebrating technological unemployment if it’s never going to happen.

So what I want to do is answer the factual question. More precisely, I want to try to evaluate the arguments for and against the likelihood of technological unemployment. I’ll start by looking at an intuitively appealing, but ultimately naive, argument in favour of technological unemployment. As I’ll point out, many mainstream economists find fault with this argument because they think that one of the assumptions it rests on is false. I’ll then outline five reasons for thinking that the mainstream view is wrong. This will leave us with a more robust argument for technological unemployment. I will reach no final conclusion about the merits of that argument. As with all future-oriented debates, I think there is plenty of room for doubt and disagreement. I will, however, suggest that the argument in favour of technological unemployment is a plausible one and that we should definitely think about the possible future to which it points.

My major reference point for all this will be the discussion of technological unemployment in Brynjolfsson and McAfee’s The Second Machine Age. If you are interested in a much longer, and more detailed, assessment of the relevant arguments, might I suggest Mark Walker’s recent article in the Journal of Evolution and Technology?


1. The Naive Argument and the Luddite Fallacy
To start off with, we need to get clear about the nature of technological unemployment. In its simplest sense, technological unemployment is just the replacement of human labour by machine “labour” (where the term “machine” is broadly construed and where one can doubt whether we should call what machines do “labour”). This sort of replacement happens all the time, and has happened throughout human history. In many cases, the unemployment that results is temporary: either the workers who are displaced find new forms of work, or, even if those particular workers don’t, the majority of human beings do, over the long term.

Contemporary debates about technological unemployment are not concerned with this temporary form of unemployment; instead, they are concerned with the possibility of technology leading to long-term structural unemployment. This would happen if displaced workers, and future generations of workers, cannot find new forms of employment, even over the long-term. This does not mean that there will be no human workers in the long term; just that there will be a significantly reduced number of them (in percentage terms). Thus, we might go from a world in which there is a 10% unemployment rate, to a world in which there is a 70, 80 or 90% unemployment rate. The arguments I discuss below are about this long-term form of technological unemployment.

So what are those arguments? In many everyday conversations (at least the conversations that I have) the argument in favour of technological unemployment takes an enthymematic form. That is to say, it consists of one factual/predictive premise and a conclusion. Here’s my attempt to formulate it:

(1) Advances in technology are replacing more and more forms of existing human labour.

(2) Therefore, there will be technological unemployment.


The problem with this argument is that it is formally invalid. This is the case with all enthymemes. We are not entitled to draw that conclusion from that premise alone. Still, formal invalidity will not always stop someone from accepting an argument. The argument might seem intuitively appealing because it relies on a suppressed or implied premise that people find compelling. We’ll talk about that suppressed premise in a moment, and why many economists doubt it. Before we do that though, it’s worth briefly outlining the case for premise (1).

That case rests on several different strands of evidence. The first is just a list of enumerative examples, i.e. cases in which technological advances are replacing existing forms of human labour. You could probably compile a list of such examples yourself. Obviously, many forms of manufacturing and agricultural labour have already been replaced by machines. This is why we no longer rely on humans to build cars, plough fields and milk cows (there are still humans involved in those processes, to be sure, but their numbers are massively diminished when compared with the past). Indeed, even those forms of agricultural and manufacturing labour that have remained resistant to technological displacement — e.g. fruit pickers — may soon topple. There are other examples too: machines are now replacing huge numbers of service sector jobs, from supermarket checkout workers and bank tellers, to tax consultants and lawyers; advances in robotic driving seem likely to displace truckers and taxi drivers in the not-too-distant future; doctors may soon see diagnostics outsourced to algorithms; and the list goes on and on.

In addition to these examples of displacement, there are trends in the economic data that are also suggestive of displacement. Brynjolfsson and McAfee outline some of this in chapter 9 of their book. One example is the fact that recent data suggests that in the US and elsewhere, capital’s share of national income has been going up while labour’s share has been going down. In other words, even though productivity is up overall, human workers are taking a reduced share of those productivity gains. More is going to capital, and technology is one of the main drivers of this shift (since technology is a form of capital). Another piece of evidence comes from the fact that since the 1990s recessions have, as per usual, been followed by recoveries, but these recoveries have tended not to significantly increase overall levels of employment. This means that productivity gains are not matched by employment gains. Why is this happening? Again, the suggestion is that businesses find that technology can replace some of the human labour they relied on prior to the recession. There is consequently no need to rehire workers to spur the recovery. This seems to be especially true of the post-2008 recovery.

So premise (1) looks to be solid. What about the suppressed premise? First, here’s my suggestion for what that suppressed premise looks like:

(3) Nowhere to go: If technology replaces all existing forms of human labour, and there are no other forms of work for humans to go to, then there will be technological unemployment.


This plugs the logical gap in the initial argument. But it does so at a cost. The cost is that many economists think that the “nowhere to go” claim is false. Indeed, they even have a name for it. They call it the “Luddite fallacy”, inspired in that choice of name by the Luddites, who protested against the automation of textile work during the Industrial Revolution. History seems to suggest that the Luddite concerns about unemployment were misplaced. Automation has not, in fact, led to increased long-term unemployment. Instead, human labour has found new uses. What’s more, there appear to be sound economic reasons for this, grounded in basic economic theory. The reason why machines replace humans is that they increase productivity at a reduced cost. In other words, you can get more for less if you replace a human worker with a machine. This in turn reduces the costs of economic outputs on the open market. When costs go down, demand goes up. This increase in demand should spur the need or desire for more human workers, either to complement the machines in existing industries, or to assist entrepreneurial endeavours in new markets.

So embedded in the economists’ notion of the Luddite Fallacy are two rebuttals to the suppressed premise:

(4) Theoretical Rebuttal: Economic theory suggests that the increased productivity from machine labour will reduce costs, increase demand, and expand opportunities for existing or novel forms of human labour.

(5) Evidential Rebuttal: Accumulated evidence, over the past 200 years, suggests that technological unemployment is at most a temporary problem: humans have always seemed to find other forms of work.

 



Are these rebuttals any good? There are five reasons for thinking they aren’t.


2. Five Reasons to Question the Luddite Fallacy
The five reasons are drawn from Brynjolfsson and McAfee’s book. I will refer to them as “problems” for the mainstream approach. The first is as follows:

(6) The Inelastic Demand Problem: The theoretical rebuttal assumes that demand for outputs will be elastic (i.e. that reductions in price will lead to increases in demand), but this may not be true. It may not be true for particular products and services, and it may not be true for entire industries. Historical evidence seems to bear out this point.


Let’s go through this in a little more detail. The elasticity of demand is a measure of how sensitive demand is to changes in price. The higher the elasticity, the higher the sensitivity; the lower the elasticity, the lower the sensitivity. If a particular good or service has a demand elasticity of one, then for every 1% reduction in price, there will be a corresponding 1% increase in demand for that good or service. Demand is inelastic when it is relatively insensitive to changes in price. In other words, consumers tend to demand about the same over time (an elasticity close to zero).

The claim made by proponents of the Luddite fallacy is that the demand elasticity for human labour, in the overall economy, is around one, over the long haul. But as McAfee and Brynjolfsson point out, that isn’t true in all cases. There are particular products for which there is pretty inelastic demand. They cite artificial lighting as an example: there is only so much artificial lighting that people need. Increased productivity gains in the manufacture of artificial lighting don’t result in increased demand. Similarly, there are entire industries in which the demand elasticity for labour is pretty low. Again, they cite manufacturing and agriculture as examples of this: the productivity gains from technology in these industries do not lead to increased demand for human workers in those industries.
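The elasticity arithmetic described above can be made concrete with a short sketch. This is purely illustrative (the prices and quantities are made-up numbers, not figures from Brynjolfsson and McAfee), and it reports elasticity as a magnitude, as is conventional:

```python
def pct_change(new, old):
    # Fractional change from old to new, e.g. 0.1 for a 10% rise.
    return (new - old) / old

def demand_elasticity(p0, p1, q0, q1):
    # Price elasticity of demand: the % change in quantity demanded
    # divided by the % change in price, reported as a magnitude.
    return abs(pct_change(q1, q0) / pct_change(p1, p0))

# Unit-elastic demand: a 10% price cut brings a 10% rise in quantity.
print(demand_elasticity(100, 90, 1000, 1100))  # → 1.0

# Inelastic demand (think artificial lighting): the same 10% price cut
# brings only a 2% rise in quantity, so elasticity is roughly 0.2.
print(demand_elasticity(100, 90, 1000, 1020))
```

With elasticity near one, the extra demand unleashed by falling prices roughly offsets the labour displaced by the productivity gain; with elasticity near zero it does not, which is the nub of the inelastic demand problem.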

Of course, lovers of the Luddite fallacy will respond to this by arguing that it doesn’t matter if the demand for particular goods or services, or even particular industries, is inelastic. What matters is whether human ingenuity and creativity can find new markets, i.e. new outlets for human labour. They argue that it can, and, more pointedly, that it always has. The next two arguments against the Luddite fallacy give reason to doubt this too.

(7) The Outpacing Problem: The theoretical rebuttal assumes that the rate of technological improvement will not outpace the rate at which humans can retrain, upskill or create new job opportunities. But this is dubious. It is possible that the rate of technological development will outpace these human abilities.


I think this argument speaks for itself. For what it’s worth, when JM Keynes first coined the term “technological unemployment”, it was this outpacing problem that he had in mind. If machines displace human workers in one industry (e.g. manufacturing) but there are still jobs in other industries (e.g. computer programming), then it is theoretically possible for those workers (or future generations of workers) to train themselves to find jobs in those other industries. This would solve the temporary problem of automation. But this assumes that humans will have the time to develop those skills. In the computer age, we have witnessed exponential improvements in technology. It is possible that these exponential improvements will continue, and will mean that humans cannot redeploy their labour fast enough. Thus, I could encourage my children to train to become software engineers, but by the time they developed those skills, machines might be better software engineers than most humans.

The third problem is perhaps the most significant:

(8) The Inequality Problem: The technological infrastructure we have already created means that less human labour is needed to capture certain markets (even new ones). Thus, even if people do create new markets for new products and services, it won’t translate into increased levels of employment.


This one takes a little bit of explanation. There are two key trends in contemporary economics. First is the fact that an increasing number of goods and services are being digitised (with the advent of 3D printing, this now includes physical goods). Digitisation allows for those goods and services to be replicated at near zero marginal cost (since it costs relatively little for a digital copy to be made). If I record a song, I can have it online in an instant, and millions of digital copies can be made in a matter of hours. The initial recording and production may cost me a little bit, but the marginal cost of producing more copies is virtually zero. A second key trend in contemporary economics is the existence of globalised networks for the distribution of goods and services. This is obviously true of digital goods and services, which can be distributed via the internet. But it is also true of non-digital goods, which can rely on vastly improved transport networks for near-global distribution.

These two trends have led to more and more “winner takes all” markets. In other words, markets in which being the second (or third or fourth…) best provider of a good or service is not enough: all the income tends to flow to one participant. Consider services like Facebook, YouTube, Google and Amazon. They dominate particular markets thanks to globalised networks and cheap marginal costs. Why go to the local bookseller when you have the best and cheapest bookstore in the world at your fingertips?

The fact that the existing infrastructure makes winner takes all markets more common has pretty devastating implications for long-term employment. If it takes less labour input to capture an entire market — even a new one — then new markets won’t translate into increased levels of employment. There are some good recent examples of this. Instagram and WhatsApp have managed to capture near-global markets for photo-sharing and free messaging, but with relatively few employees. (Note: there is some hyperbole in this, but the point still holds. Even if the best service provider doesn’t capture the entire market, there is still less opportunity for less-good providers to capture a viable share of the market. This still reduces likely employment opportunities.)

The fourth problem with the Luddite fallacy has to do with its reliance on historical data:

(9) The Historical Data Problem: Proponents of the Luddite fallacy may be making unwarranted inferences from the historical data. It may be that, historically, technological improvements were always matched by corresponding improvements in the human ability to retrain and find new markets. But that’s because we were looking at the relatively linear portion of an exponential growth curve. As we now enter a period of rapid growth, things may be different.


In essence, this is just a repeat of the point made earlier about the outpacing problem. The only difference is that this time it is specifically targeted at the use of historical data to support inferences about the future. That said, Brynjolfsson and McAfee do suggest that recent data support this argument. As mentioned earlier, since the 1990s job growth has “decoupled” from productivity: the number of jobs being created is not matching the productivity gains. This may be the first sign that we have entered the period of rapid technological advance.

The fifth and final problem is essentially just a thought experiment:

(10) The Android Problem: Suppose androids could be created. These androids could do everything humans could do, only more efficiently (no illness, no boredom, no sleep) and at a reduced cost. In such a world, every rational economic actor would replace human labour with android labour. This would lead to technological unemployment.


The reason why this thought experiment is relevant here is that there doesn’t seem to be anything unfeasible about the creation of androids: it could happen that we create such entities. If so, there is reason to think technological unemployment will happen. What’s more, this could arise even if the androids are not perfect facsimiles of human beings. It could be that there are one or two skills that the androids can’t compete with humans on. Even still, this will lead to a problem because it will mean that more and more humans will be competing for jobs that involve those one or two skills.




3. Conclusion
So there you have it: an argument for technological unemployment. At first, it was naively stated, but when defended from criticism, it looks more robust. It is indeed wrong to assume that the mere replacement of existing forms of human labour by machines will lead to technological unemployment, but if the technology driving that replacement is advancing at a rapid rate; if it is built on a technological infrastructure that allows for “winner takes all” markets; and if ultimately it could lead to the development of human-like androids, then there is indeed reason to think that technological unemployment could happen. Since this will lead to a significant restructuring of human society, we should think seriously about its implications.

At least, that’s how I see it right now. But perhaps I am wrong? There are a number of hedges in the argument — we’re predicting the future after all. Maybe technology will not outpace human ingenuity? Maybe we will always create new job opportunities? Maybe these forces will grind capitalism to a halt? What do you think?

The post Are we heading for technological unemployment? An Argument appeared first on disinformation.

What If Everything We Know About Treating Depression Is Wrong?

Disinformation - Mon, 09/01/2014 - 11:41

“How to Overcome Depression” by Kevin Dooley via Flickr

Could it be that we’re treating the wrong part of the brain?

via AlterNet:

A new study is challenging the relationship between depression and an imbalance of serotonin levels in the brain, and brings into doubt how depression has been treated in the U.S. over the past 20 years.

Researchers at the John D. Dingell VA Medical Center and Wayne State University School of Medicine in Detroit have bred mice that cannot produce serotonin in their brains, which should theoretically make them always depressed. But researchers instead found that the mice showed no signs of depression, but instead acted aggressively and exhibited compulsive personality traits.

This study backs recent research that indicates that selective serotonin reuptake inhibitors, or SSRIs, may not be effective in lifting people out of depression. These commonly used antidepressants, such as Prozac, Paxil, Celexa, Zoloft, and Lexapro, are taken by some 10% of the U.S. population and nearly 25% of women between 40 and 60 years of age. More than 350 million people suffer from depression, according to the World Health Organization, and it is the leading cause of disability across the globe.

The study was published in the journal ACS Chemical Neuroscience. Professor Donald Kuhn, the lead author of the study, set out to find what role, if any, serotonin played in depression. To do this, Kuhn and his associates bred mice that lacked the ability to produce serotonin in their brains, and ran a battery of behavioral tests on them.

In addition to being compulsive and extremely aggressive, the mice that could not produce serotonin showed no signs of depression-like symptoms. The researchers also found, to their surprise, that under stressful conditions, the serotonin-deficient mice behaved normally.

A subset of the mice that couldn’t produce serotonin were given antidepressant medications, and they responded to those medications in much the same way as normal mice did. Altogether, the study found that serotonin is not a major player in depression, and science should look elsewhere to identify other factors that might be involved. These results could greatly reshape depression research, say the authors, and shift the focus of the search for depression treatments.

The study joins others in directly challenging the notion that depression is related to lower levels of serotonin in the brain. One study has shown that some two-thirds of those who take SSRIs remain depressed while taking them, while another has even found them clinically insignificant.

Critics of common antidepressants claim that they’re not much better than a placebo, yet may still have unwanted side effects.

Continue reading.

The post What If Everything We Know About Treating Depression Is Wrong? appeared first on disinformation.

Inequality and the USA: A Nation in Denial?

Disinformation - Mon, 09/01/2014 - 09:30

Photo: Alex Proimos (CC)

Sam Pizzigati of Inequality.org writes at PressTV that “the United States… doesn’t just have the world’s most unequal major developed economy… [it] has the most people in denial about the inequality they live amid”:

Every August, for most of the last four decades, top central bankers from around the world have been making their way to the Wyoming mountain resort of Jackson Hole for an invitation-only blue-ribbon economic symposium.

This year’s Jackson Hole hobnob, once again hosted by the Federal Reserve Bank of Kansas City, last week attracted the usual assortment of central bankers, finance ministers, and influential business journalists. But this year’s gathering also attracted something else: protesters.

For the first time ever, activists converged on Jackson Hole — to let the Fed’s central bankers know, as protest organizers put it, that “it’s not just the rich who are watching them.”

Over 70 groups and unions backed the protest and signed onto an open letter that calls on America’s central bankers to start nurturing an economy that works for workers. At one point, early on in the Jackson Hole gathering, protesters actually had a brief exchange with Federal Reserve Board chair Janet Yellen.

“We understand the issues you’re talking about,” Yellen told them, “and we’re doing everything we can.”

But that “everything” remains distinctly dispiriting. Many of Yellen’s fellow central bank officials, protesters note, are pushing the Fed “to put the brakes on growth so wages don’t rise” and, so the fear goes, stimulate inflation.

Those “brakes”— higher interest rates — are definitely coming, Kansas City Fed president Esther George told protest leaders in another Jackson Hole exchange. America needs them, she added, to better “balance” the economy.

But America, the protesters point out, needs a balancing of an entirely different sort. The nation’s top-heavy economy needs to become less top-heavy. The nation can’t afford to be a place where far too many “struggle to secure even basic levels of dignity” while “the wealthiest Americans are richer than ever.”

The green-shirted protesters at Jackson Hole had plenty of support for that stance at last week’s other blue-ribbon global gathering of economic dignitaries, Germany’s fifth Lindau Meeting on Economic Sciences…

[continues at PressTV]

The post Inequality and the USA: A Nation in Denial? appeared first on disinformation.

We’re self-obsessed – but do we understand the nature of the self?

Disinformation - Mon, 09/01/2014 - 07:30

Narcissism by Kevin Simpson via Flickr.

This article was originally published on The Conversation.
Read the original article.

By Michael Allen Fox, University of New England

We live in an age of self-obsession. Everywhere we look, we encounter a preoccupation with self-interest, self-development, self-image, self-satisfaction, self-love, self-expression, self-confidence, self-help, self-acceptance … the list goes on.

An internet headline sounds a warning: “Facebook and Twitter are creating a vain generation of self-obsessed people with childlike need for feedback, warns top scientist”. In 2013 no less august an organ than the Oxford English Dictionary chose “selfie” as its “Word of the Year”.

Ask yourself whether any other time but the present could boast of successful print magazines called i or Me or Self?

“The self” is actually quite a problematic notion. Given that the destiny of the self concerns us so much, we could all benefit from a little insight into its nature.

I’ll present here the view that the self is an entity which constitutes itself through its own acts of choosing and doing, and which grows (and grows responsible) in the process.

The great philosophical sceptic David Hume puzzled about the self in his Treatise of Human Nature (1739):

For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never catch myself at any time without a perception, and never can observe anything but the perception.

He inferred that what passes for the self is “nothing but a bundle or collection of different perceptions”.

Hume was limited by his devotion to the Empiricist principle that the world must be understood solely as a construct derived from bits of experience woven together in various ways. Yet he raised a real issue: Where do we get the concept of self from and how do we justify it? Is the self to be included among the furniture of the universe, or is it merely an elusive fantasy (as Buddhism teaches)?

How much insight do we have into the philosophical problem of the self? Photo: MattysFlicks

Although there is much to debate concerning the self, belief in it can be defended.

Does this entail that it is part of our basic ontology, that is part of the inventory of what we believe to exist? Yes, and for two reasons:

  1. A sense of being someone and of having conscious awareness that is one’s own and no one else’s is essential to normal functioning in the world.
  2. Even if the concept of being a personal self is displaced within some broader spiritual or theoretical perspective, the self is no less immediately real to us right now than are physical objects, which can also be viewed at a deeper level as made up of particles and/or esoteric forms of energy.

So what is a self?

Hume is surely correct to state that it is not some thing that we can “stumble on” in our inner experience, or that is constantly present to our consciousness. Yet there is still room to argue that the self is a network of relationships forged by significant experiences (past and present), actions, and connections with people, places, things and events. The self therefore becomes “a bundle of relationships.”

And what we make of these relationships and do about them manifests who we are.

A self comes into being and perpetuates itself by means of choices and actions that create connections — relationships — with the world around us and with ourselves, which we integrate into our self-understanding as we move through life. The venerable 20th-century philosopher Bertrand Russell points out, in The Problems of Philosophy (1912), that relationships of various kinds are genuine aspects of the world that define the nature of individual entities.

One of these is what we call “reflexive self-awareness”, in which we make ourselves objects of our own scrutiny and judgement.

We might say that the self creates itself out of nothing; but more accurately, it arises out of our innate tendencies, which each of us develops in different ways, according to choice. Experiments on pre-linguistic children have shown that cooperative relationships with others are apparent at a very early age. See, for example, the work of Michael Tomasello in his book A Natural History of Human Thinking. These tendencies begin to help determine a self, even at the first stage of human development.

I suggest, then, that a self is the organised complex of relationships with people and other entities, events, states of affairs, memories and so on, established over time, and that the beliefs and values we adhere to are built up from our responses to what goes on outside as well as within our own self-consciousness. Strangely enough, this entails that the self exists within us and also extends outward spatially.

The self is, paradoxically, a self-made being.

For this reason, the question is:

What kind of self do I choose to be; and is there room in this self for the concerns of others to gain recognition and response – especially those in relation to whom I have become who I am?

Surely, confronting this question seriously is the cure for self-obsession.

Michael Allen Fox does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

The post We’re self-obsessed – but do we understand the nature of the self? appeared first on disinformation.

It’s personal: why leaders don’t turn climate knowledge into action

Disinformation - Mon, 09/01/2014 - 05:45

By Nicola Jones via Flickr

This article was originally published on The Conversation.
Read the original article.

By Simon Divecha

There is an abundance of profitable business opportunity to be found in addressing sustainability issues. These stand out against the difficulties we face implementing effective change. Globally, the World Bank recently found that tackling climate change would help to grow the world’s economy by US$1.8 to 2.6 trillion a year.

Private sector investors argue for action as well. One prominent example is the Carbon Disclosure Project which represents 767 institutional investors holding US$92 trillion in assets worldwide. Its programs reward and promote companies acting on climate change.

There is detailed analysis, alongside successfully implemented examples, across nearly every industry sector, showing that environmental impact per dollar of economic output can be cut by 80%. This is not necessarily even a case of implementing new technology. Planning and design help to deliver similar outcomes – for example, in residential developments.

So why is there so much resistance to change, and why is the prime minister’s chief business adviser distracting business with warnings about global cooling?

Of course, some of the reasons are financial – an illustration is the estimated AU$8 billion that would flow to coal electricity, at the expense of other businesses, if Australia’s renewable energy target is cut. However, it would be falling into a trap, similar to Newman’s simple cooling analysis, to imagine that such numbers explain everything.

An extraordinary paradox is that unrealised, profitable, low-risk change opportunities have existed for decades. Business has simply not acted to maximise its profits, and this is particularly apparent with respect to energy efficiency.

What’s stopping business?

Business leaders are always planning for the future to some degree, in order to manage market trends and investor expectations, among a host of other reasons. But many trends and issues demand our attention, and business is challenged by ever-increasing levels of complexity. Consequently, when Maurice Newman expounds on global cooling you may feel relief – at last, something that makes life simpler!

You may wonder how anyone, especially someone holding such an important business and national leadership position, could be so irresponsible. Never mind the article’s selective simple science, what about the squandered opportunities, the billions his opinion implicitly de-prioritises? It does this to our individual and collective detriment.

Business, and the country, is much better served by promoting strategy and action based on the risks and opportunities arising from climate change.

Part of the problem is that we are privileging financial and other measures to the detriment of our real motivators, personal values and cultural cares.

Understanding ‘action logics’

The good news is we have some powerful models that can help us navigate our more subjective perspectives. One is called “action logics”.

An analogy is that each action logics perspective is a different coloured lens through which someone views the world. It colours how we know, and make sense of, environmental imperatives.

Action logics show us that adults tend to express sustainability concepts in markedly different ways, mirroring distinct stages of ever-increasing mental complexity. Consequently, important motivators for some individuals may be far less prominent for others.

For example, at what we call the “expert” level, adults work effectively with abstract models. The person’s expert knowledge helps to solve defined problems. However, this often happens within the narrow boundaries of that expertise, and the importance of other perspectives or approaches is often disregarded as irrelevant.

In contrast, the next action logic, “achiever”, values (among other things) mutualism. It is correlated with a step change in mental and cognitive capacity, such that many diverse fields of expertise are likely to be weighed and evaluated against each other.

Adults can continue to develop through these stages, to later and more complex capacities and capabilities. Importantly, these action logics are highly relevant in today’s complex and volatile business world (for example, with respect to leadership and organisational success).

This sort of later-stage complexity is associated with valuing and managing across abundant information. That is, success is far more than just financial practicality: the risks and opportunities related to business and climate change spread into ethical, social and cultural dilemmas. How we know – as modelled by action logics – is as important as what we know, if not more so. We need to be able to join up many dots: social acceptability, financial viability, workforce alignment, innovation and motivation, and the future outlook and expectations we wish to set for our business and society’s wellbeing, to name just a few.

Action logics shine a light on why simplistic scientific opinions can appeal to many. They go part of the way to explaining why business leaders are still struggling to integrate values and economics. Personally, they help us map what we may need to become in order to meet important challenges.

Simon Divecha does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

The post It’s personal: why leaders don’t turn climate knowledge into action appeared first on disinformation.

“Crossfaded”: What happens when you’re drunk and stoned at the same time?

Disinformation - Sun, 08/31/2014 - 17:30

By Elvert Barnes via Flickr.

Luckily, Popular Science has the answer.

via Popsci:

The intoxicating effects of alcohol and of marijuana have been widely studied, but their combined effect—getting “cross-faded”—is woefully underexplored scientific territory. Here’s a look at what we know about how pot and booze together affect the brain.

First, the basics: Marijuana contains THC (tetrahydrocannabinol), which acts on the brain’s cannabinoid receptors. Alcohol depresses the central nervous system. Trying to compare the two isn’t even like comparing apples and oranges, says Gary Wenk, a professor of psychology and neuroscience at Ohio State University. “It’s apples and vegetables. They’re very different drugs.” An extremely simplified explanation would be to say that THC largely has cognitive effects, like paranoia and a distorted sense of time, while alcohol mainly affects motor skills, making it hard to walk in a straight line and causing slurred speech.

So does combining weed and alcohol just add their respective effects together? Not quite, says Scott Lukas, who teaches at Harvard Medical School and has researched the interaction of various drugs. In a study published in 2001, Lukas found that after individuals smoked marijuana and drank a large dose of alcohol, the equivalent to a couple of shots, the THC levels in their blood plasma nearly doubled compared with people who smoked pot and consumed a placebo drink. The buzzed people in the study also detected the effects of marijuana sooner than those who only got stoned, and rated their high as subjectively “better.” This suggests that getting boozed up causes more THC to reach the brain, via the bloodstream, within the first few minutes of ingestion. One explanation for this finding is that alcohol may cause changes in blood vessels that boost the absorption of inhaled THC.

Lukas isn’t worried that the combination could be lethal, but he says that getting cross-faded could be more risky than just getting drunk or high alone. With more THC hitting the brain thanks to the ethanol in alcoholic drinks, the usual effects of marijuana—like impaired judgment and increased heart rate—are stronger. Which means accidents like drownings and car crashes could be more likely, Lukas says. He also points out that the amounts of drugs approved for his research were lower than the levels that people often use while out partying.

Continue reading.

The post “Crossfaded”: What happens when you’re drunk and stoned at the same time? appeared first on disinformation.

$1,000 For New Swear Word: A Prankster from 1910

Disinformation - Sun, 08/31/2014 - 13:30

I am utterly baffled by this newspaper clipping:

h/t Weird Universe

The post $1,000 For New Swear Word: A Prankster from 1910 appeared first on disinformation.

A short documentary on Cryonics: We Will Live Again

Disinformation - Sun, 08/31/2014 - 11:30

h/t The Daily Grail

We Will Live Again from Brooklyn Underground Films on Vimeo.

via the Vimeo page:

WE WILL LIVE AGAIN looks inside the unusual and extraordinary operations of the Cryonics Institute. The film follows Ben Best and Andy Zawacki, the caretakers of 99 deceased human bodies stored at below freezing temperatures in cryopreservation. The Institute and Cryonics Movement were founded by Robert Ettinger who, in his nineties and long retired from running the facility, still self-publishes books on cryonics, awaiting the end of his life and eagerly anticipating the next.

The post A short documentary on Cryonics: We Will Live Again appeared first on disinformation.
