Thursday, April 14, 2016

Friday Thinking 15 April 2016

Hello – Friday Thinking is curated on the basis of my own curiosity and offered in the spirit of sharing. Many thanks to those who enjoy this. 

In the 21st Century curiosity will SKILL the cat.


thinking is interactions of brain and body with the world. Those interactions are not evidence of, or reflections of, underlying thought processes. They are instead the thinking processes themselves.
Edwin Hutchins - The role of cultural practices in the emergence of modern human intelligence


One avenue through which meta-awareness might impact well-being lies in its relation to mind wandering. Mind wandering has been found to consume as much as 50% of our waking life and is tied to our sense of well-being. If training in attentional forms of meditation does strengthen meta-awareness, we might expect this to impact both the incidence and impact of mind wandering. Recent studies have found that meditation training alters patterns of task-unrelated thought, showing that even brief trainings in mindfulness meditation decrease the behavioral indicators of mind wandering. Although meta-awareness and self-referential processes are difficult to operationalize, a few recent studies seem to indicate that brain regions associated with self-referential processing, such as the medial prefrontal cortex and the posterior cingulate cortex, may be downregulated by mindfulness related practices.
Reconstructing and deconstructing the self: cognitive mechanisms in meditation practice


Virtual assistants have also received a boost from major advances in subsets of artificial intelligence known as machine learning and natural language processing, or the ability for computers to understand speech. Accuracy of word recognition reached something of a tipping point in recent years, going from 80 percent in 2009 to 95 percent in 2014, said Christopher Manning, a Stanford computer science professor and natural language expert.

The rise of this technology is evident in a wave of new jobs at the intersection of human and artificial intelligence. By 2025, 12.7 million new U.S. jobs will involve building robots or automation software; by 2019, more than one-third of the workforce will work side by side with such technologies, according to Forrester Data.
The next hot job in Silicon Valley is for poets


So what followed sovereignty was technocratic order. Once some borders got hashed out, and some alliances broke down, you can have universe bureaucrats come in and actually run some things.

So I think sovereignty allowed unique types of player alliances to form, which turned it less from a game about one-off battles into these systems where you’re watching huge, top-to-bottom logistics operations be in place, behind the battles, to reinforce the lines. And then to run those, you need very impressive leaders, who are going to give speeches to rally the workers to continue building ships or go out on the front lines of a fight that is relatively futile. And that was fascinating.

...It’s one of those things that you find when you start looking at EVE Online for long enough—the people who play, and the people who succeed at the game, are genuinely brilliant. When you talk to the leaders who run these organizations, you’ll ask them casually while a call is wrapping up, so what do you do for a living? And this one woman said, oh, I run a nuclear reactor up in Portland. Another guy was like, oh, I run an international shipping and logistics company.

And that has been borne out by some of the leaders in the game’s history—they’re so busy running their alliances, they don’t have time to log in and actually play the game. So they don’t. They’re playing the game through Google Docs, spreadsheets, IRC channels. They’re holding meetings with all the other leaders; they’re having diplomatic conferences in chat rooms.

Leaders start to figure out that they don’t have to fight their enemies. And if they don’t, they can start to chip away at their enemies’ morale because those players have showed up to fight. They’ve showed up to participate in a battle. So if you don’t give them that battle, if you don’t show up unless you absolutely have to—in the community they would call it “blueballsing” the enemy fleet. These players have showed up, they’ve given up six hours of their Saturday, and they didn’t even get to fight. And that has profound implications for the morale of a fleet, how well they’ll listen to their commanders, and whether you can get those players to show up next time.
How to Write a History of Video Game Warfare - Eve Online


a project undertaken or a product built not solely to fulfill some constructive goal, but with some wild pleasure taken in mere involvement, was called a “hack.”

This latter term may have been suggested by ancient MIT lingo— the word “hack” had long been used to describe the elaborate college pranks that MIT students would regularly devise, such as covering the dome that overlooked the campus with reflecting foil. But as the TMRC people used the word, there was serious respect implied.

While someone might call a clever connection between relays a “mere hack,” it would be understood that, to qualify as a hack, the feat must be imbued with innovation, style, and technical virtuosity.

Shaving off an instruction or two was almost an obsession with them. McCarthy compared these students to ski bums. They got the same kind of primal thrill from “maximizing code” as fanatic skiers got from swooshing frantically down a hill.

So the practice of taking a computer program and trying to cut off instructions without affecting the outcome came to be called “program bumming,” and you would often hear people mumbling things like “Maybe I can bum a few instructions out and get the octal correction card loader down to three cards instead of four.”
The Tech Model Railroad Club
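"Bumming" translates directly into any modern language. Here is a toy sketch in Python (my own illustration, not from Levy's book) of the same move - cutting instructions without affecting the outcome:

    # "Bumming" a routine: two versions with identical output, the second cheaper.
    # Illustrative only -- the original hacks were hand-tuned TX-0/PDP-1 assembly.

    def sum_of_squares_naive(n):
        """Straightforward version: roughly n multiplications and additions."""
        total = 0
        for i in range(1, n + 1):
            total += i * i
        return total

    def sum_of_squares_bummed(n):
        """Closed form: the whole loop bummed down to a handful of operations."""
        return n * (n + 1) * (2 * n + 1) // 6

    assert sum_of_squares_naive(100) == sum_of_squares_bummed(100) == 338350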


The first quote in this Friday Thinking refers to an emerging theory of embodied cognition in which thinking emerges from culture-environment entanglement. The digital environment will not only enable us to have ‘new senses’ but extend those ‘real senses’ throughout the digital environment. During a Twitter conversation with Karl Schroeder, he coined the term ‘distributed realism’. Rather than confusing ourselves with fragmented slices of the future via terms like ‘virtual’, ‘augmented’ and ‘mixed’ reality - or notions of AI-ssistants as intelligence amplification (IA) - we could refer to ‘Distributed Reality’ (DR). This brings very significant new ‘percepts’ into possibility - with even more affordances - and, perhaps most significantly, challenges many current conceptions of what a ‘mind’ is.
One industry that almost every adult knows about is very interested in the development discussed in this article - it should be a must read for anyone interested in the future of ‘digital connectivity’.
“A fundamental question in this project is what role haptic stimuli play in perception,” says Ernst. With the term ‘haptic stimuli,’ the cognitive scientist is referring to the sensations that arise from touch. “A special feature of our finger pads is that they are fleshy – they can ‘deform’ by giving way when touching something,” says Marc Ernst. For instance, when a person touches a sponge, she feels its composition and consistency through the tactile sensors in her skin.
Finger Fooled: New Perceptual Illusion Discovered
Fingers are a human’s most important tactile sensors, but they do not always sense accurately and can even be deceived. Researchers at the Cluster of Excellence Cognitive Interaction Technology (CITEC) of Bielefeld University demonstrated this in a new study in which they ‘outwit’ human perception. Test subjects placed their index finger in an apparatus and touched an object whose softness changed at random without the person noticing. While touching the object, the test subjects were under the illusion that it was the position of their finger that changed, not the softness of the object. The curious thing here was that the test subjects felt an “illusory” finger displacement, much larger in extent than the actual, “real” displacement. The researchers published their findings this Thursday, 7 April in the scientific journal “Current Biology.”


This is another must read/view Nature article - if only for the infographics.
A world where everyone has a robot: why 2040 could blow your mind
Technological change is accelerating today at an unprecedented speed and could create a world we can barely begin to imagine.
In March 2001, futurist Ray Kurzweil published an essay arguing that humans found it hard to comprehend their own future. It was clear from history, he argued, that technological change is exponential — even though most of us are unable to see it — and that in a few decades, the world would be unrecognizably different. “We won’t experience 100 years of progress in the 21st century — it will be more like 20,000 years of progress (at today’s rate),” he wrote, in ‘The Law of Accelerating Returns’.

Fifteen years on, Kurzweil is a director of engineering at Google and his essay has acquired a cult following among futurists. Some of its predictions are outlandish or over-hyped — but technology experts say that its basic tenets often hold. The evidence, they say, lies in the exponential advances in a suite of enabling technologies ranging from computing power to data storage, to the scale and performance of the Internet (see ‘Onwards and upwards’). These advances are creating tipping points — moments at which technologies such as robotics, artificial intelligence (AI), biology, nanotechnology and 3D printing cross a threshold and trigger sudden and significant change. “We live in a mind-blowingly different world than our grandparents,” says Fei-Fei Li, head of the Stanford Artificial Intelligence Laboratory in California, and this will be all the more true for our children and grandchildren (see 'Future focus').
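The ‘20,000 years’ figure is compound-growth arithmetic. As a back-of-envelope sketch (my reconstruction under stated assumptions, not Kurzweil’s published derivation): assume the rate of progress doubles every decade, and count each decade of the century at its end-of-decade rate.

    # Toy model of the 'Law of Accelerating Returns' arithmetic.
    # Assumptions are mine, for illustration: the rate of progress doubles every
    # decade, and each decade is counted at its end-of-decade rate.

    TODAYS_RATE = 1   # one year of progress per calendar year, at today's rate
    DECADES = 10      # the 21st century

    progress = sum(10 * TODAYS_RATE * 2 ** k for k in range(1, DECADES + 1))
    print(progress)   # 20460 -- roughly "20,000 years of progress at today's rate"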


Here’s a 2 ½ min video that graphically presents the current state of the world. Worth the view. Data sources are listed under the video.
If The World Were 100 People | GOOD Data
Published on 14 Mar 2016
If the population of the world was only 100 people, what would society look like?
Produced and Written by Gabriel Reilich


This is an open call from the folks who support the Creative Commons license.
Help Us Build Creative Commons Certificates – Open Community Call
With Creative Commons now being used by people all over the world to openly license over a billion pieces of content, a good working knowledge of what Creative Commons is and how it works is critical.

Creative Commons is developing a series of certificates to provide organizations and individuals with a range of options for increasing knowledge and use of Creative Commons.

The Creative Commons Master Certificate will define the full body of knowledge and skills needed to master CC. This master certificate will be of interest to those who need a broad and deep understanding of all things Creative Commons.

In addition, custom certificates are being designed for specific types of individuals and organizations. Initially Creative Commons is focusing on creating a specific CC Certificate for 1. educators, 2. government, and 3. librarians. The CC Certificate for each of these will include a subset of learning outcomes from the overall CC Master Certificate along with new learning outcomes specific to each role.


This is a must see 20 min TED Talk - what is truly amazing is the office from which Linus guides the world’s greatest open source software, and how he doesn’t really ‘love’ other people - but has come to love people. :) Both Linux and Git were projects that Linus needed so he wouldn’t have to work with lots of people. He says, “I’m not a visionary, I’m an engineer - I’m not looking at the stars, I’m concerned with fixing the pothole ahead of me before I fall in.”
Linus Torvalds: The mind behind Linux
Linus Torvalds transformed technology twice — first with the Linux kernel, which helps power the Internet, and again with Git, the source code management system used by developers worldwide. In a rare interview with TED Curator Chris Anderson, Torvalds discusses with remarkable openness the personality traits that prompted his unique philosophy of work, engineering and life. "I am not a visionary, I'm an engineer," Torvalds says. "I'm perfectly happy with all the people who are walking around and just staring at the clouds ... but I'm looking at the ground, and I want to fix the pothole that's right in front of me before I fall in."


Where do ‘hackers’ come from? Much of the public perceives ‘hacking’ as nefarious willful interference - rather than willful tinkering to understand and make things better. This is a nice article (19 min read) giving some history of the original ‘hackers’.
LOGIC ELEMENTS: the term seems to encapsulate what drew Peter Samson, son of a mill machinery repairman, to electronics. The subject made sense. When you grow up with an insatiable curiosity as to how things work, the delight you find upon discovering something as elegant as circuit logic, where all connections have to complete their loops, is profoundly thrilling. Peter Samson, who early on appreciated the mathematical simplicity of these things, could recall seeing a television show on Boston’s public TV channel, WGBH, which gave a rudimentary introduction to programming a computer in its own language. It fired his imagination: to Peter Samson, a computer was surely like Aladdin’s lamp—rub it, and it would do your bidding.
The Tech Model Railroad Club
The first computer wizards who called themselves hackers started underneath a toy train layout at MIT’s Building 20
Just why Peter Samson was wandering around in Building 26 in the middle of the night is a matter that he would find difficult to explain. Some things are not spoken. If you were like the people whom Peter Samson was coming to know and befriend in this, his freshman year at the Massachusetts Institute of Technology in the winter of 1958–59, no explanation would be required.


Why are the creative commons so important? No serious innovation can happen when all past creative works are behind a paywall. If we want to unleash human creative capacity, people need to be able to access creative content to re-combine and rework into new creations. There are real alternatives to current business models that require everything to be made artificially scarce or rival. Imagine Uber, but with every driver also an owner - a distributed-ownership Uber would actually represent the sharing economy.
Record numbers of self-employed enter new tax year… and the co-operative model is here to help
“Working alone can be aspirational, but it can also be lonely and anxious. There is an extraordinary opportunity for new co-operative solutions for self-employed people, giving them the freedom of freelancing with the muscle of mutuality.”
Cooperatives UK have just released an in-depth report full of examples of best practices for co-operatives collaborating to meet the needs of a growing class of dispossessed workers – over 70% of whom in the UK are in poverty. We will cover various aspects of the report in the following days and you can also read the full report here.


This is definitely something to consider, including the potential for renewable solar and wind energy to change energy geopolitics and drive an economic-learning infrastructure for a digital revolution. But we in the developed world also have to re-imagine our digital infrastructure - beyond privatized access to the Internet.
How can Africa master the digital revolution?
Digital connectivity has the potential to do for Africa what railroads did for Western economies in the 19th century. The digital revolution is not just about communication. It is about recognizing that information is the currency of all economic activities.

Unfortunately, despite the rapid adoption of mobile phones, Africa lags behind other regions in its use of core digital platforms such as the internet. This is compounded by the high prices charged for critical digital services such as broadband.


This is a fantastic innovation - perfect for Canadian winters (heat) and creating a distributed electricity commons - plus these could serve as mobile and Internet hotspots - part of the future of the ‘smart city’.
London transparent about its new solar bus shelters
Dark dreary bus shelters could soon be a thing of the past after London's first transparent solar bus shelter was switched on at the Canary Wharf business district on Friday. The shelter is reportedly capable of producing enough electricity to power a standard London home for a year and will be used to power signage and other transport infrastructure around the Canary Wharf estate.

London is no stranger to solar-powered bus shelters, having installed solar-powered E-ink displays last year. Transparent solar panels are also something we've seen before, but early prototypes only offered around 1 percent conversion efficiency. UK solar technology company Polysolar has developed a new solar-photovoltaic technology with a whopping 6-12 percent conversion efficiency, dependent on the film's level of transparency. This new design allows for a transparent shelter that operates in low and ambient light.


The price of identity theft - stolen identities have become a commodity, and the price of selling one is crashing.
The price of stealing an identity is crashing, with no bottom in sight
Crooks are also acting more like normal businesses. Eager to please customers in a competitive marketplace, Russian hackers have started promoting round-the-clock customer service and placing ads promoting “free-trial attacks” and their “huge abilities.” This year’s hit product was upgraded ATM “skimmers” that capture the magnetic stripe data on the back of a debit or credit card, priced at $1,775. Dell even observed new versions being manufactured with 3D printers, along with bluetooth and micro-camera accessories. For the aspiring thief on a budget, hacking tutorials were at the lowest price in three years—just $19.99.


This may be a bit of hyperbole - but it really is coming to senior management and leadership cadres near you … soon. Now does that mean there will be less of a rationale to pay them the ‘big bucks’? Or that there will be less opportunity for malfeasance and other vices - because ethical constraints can be coded and the advice provided can be recorded?
• Exposing the long-term implications of short-term decisions. Computers can enhance systemic thinking, uncovering the far-reaching (and often unintended) consequences of executive decisions.
• Experimenting to uncover new sources of value. Computers could, for example, be used to simulate the impact of large events, such as the potential acquisition of a rival.
• Augmenting human judgment. Computers can free executives to do what they do best – exercise their business and ethical judgment. They can also help people avoid common decision-making traps, like the tendency toward groupthink.
How intelligent machines are helping the C-suite
Computers are not just passive decision-support tools. They are becoming active advisers and partners in the C-Suite
How should Volkswagen rebuild its brand image after the widespread damage inflicted by the recent emissions scandal? Should Apple have fought its encryption battle against the US government? Would Yahoo be better off divesting its core operations? These are the types of momentous decisions that keep senior executives up late at night, but help – in the form of advanced computers – is on its way.

Today, we are moving well beyond the era of computers being merely passive assistants. We are now entering a period when the collaboration between humans and intelligent machines will be a source of competitive advantage for businesses. Computers are now becoming active advisers and partners, and that includes their presence at the highest levels of an organisation. Which means the prospect of intelligent machines in the C-Suite is not as far-fetched as it might first seem. In a survey of 800 executives conducted by The World Economic Forum, almost half of the respondents said they expect the first AI machine to be on a company’s board by the year 2025.
  • From Incrementalism to Active Experimentation
  • The Shaping of Strategy
  • No More Sacred Cows
  • Reaping Maximum Benefits


Here is another industry facing looming disruption.
Death of a real estate broker: 10 ways the industry is changing
What do real estate brokers, brokerage clerks and telemarketers have in common?
They will all lose their jobs in the near future. According to researchers at Oxford University, the potential for artificial intelligence computer algorithms to replace these jobs is estimated at between 97 and 99%.

The technological disruption that is happening in the real estate sector will go far beyond taking away “what you can sell”; it is set to radically transform the real estate profession. Recent research indicates that it is not just brokers but the entire real estate industry that has to rethink how new technologies as well as shifts in demographics and behaviour will impact upon real estate jobs, skills and business models.
The real estate industry as we know it will disappear.


Intimately related to real estate is the financial and banking sector - what’s coming for this sector certainly casts a ‘looming’ shadow, but it’s hard to guess how exactly things will play out. What will significant job cuts entail? More inequality, as a few people at the top of finance harvest all the spoils? Or more equity? What makes it impossible to predict is that this is a highly regulated domain, and the question for the future is not less regulation - but who shapes regulation to favor whose interests. Let’s hope the Panama Papers contribute to initiating a wave of progressive reforms.
What does the rise of fintech mean for banking?
European and US banks may be on the brink of an "Uber moment", as the explosion of fintech disrupts the industry and leads to massive job cuts over the next decade, a new report predicts.

Up to 30% of current employees in banks across Europe and the US may lose their jobs to technology by 2025, according to the report by Citigroup, which forecasts that around 1.8 million positions will go – mainly as a result of the automation of retail banking.


This really is a MUST SEE - a few graphs that illustrate the accelerating speed of technology-based change. This is not just about the camera, but mobile technology writ large.
This is What the History of Camera Sales Looks Like with Smartphones Included
A few months ago, we shared a chart showing how sales in the camera market have changed between 1947 and 2014. The data shows that after a large spike in the late 2000s, the sales of dedicated cameras have been shrinking by double-digit figures each of the following years. Mix in data for smartphone sales, and the chart can shed some more light on the state of the industry.

Photographer Sven Skafisk decided to see what the same chart would look like with smartphone sales factored in. Here’s the chart he came up with using data from Gartner Inc.  


Intel has declared Moore’s Law dead - maybe, and maybe not.
A $2 Billion Chip to Accelerate Artificial Intelligence
A new chip design from Nvidia will allow machine-learning researchers to marshal larger collections of simulated neurons.
The field of artificial intelligence has experienced a striking spurt of progress in recent years, with software becoming much better at understanding images, speech, and new tasks such as how to play games. Now the company whose hardware has underpinned much of that progress has created a chip to keep it going.

On Tuesday Nvidia announced a new chip called the Tesla P100 that’s designed to put more power behind a technique called deep learning.

At a company event in San Jose, Nvidia CEO Jen-Hsun Huang said, “For the first time we designed a [graphics-processing] architecture dedicated to accelerating AI and to accelerating deep learning.” Nvidia spent more than $2 billion on R&D to produce the new chip, said Huang. It has a total of 15 billion transistors, roughly three times as many as Nvidia’s previous chips. Huang said an artificial neural network powered by the new chip could learn from incoming data 12 times as fast as was possible using Nvidia's previous best chip.


The thing about more is that more can become different - and this takes us to how Moore’s Law becomes different.
Bringing Big Neural Networks to Self-Driving Cars, Smartphones, and Drones
Engineers are trying to squeeze outsize AI into mobile systems
Artificial intelligence systems based on neural networks have had quite a string of recent successes: One beat human masters at the game of Go, another made up beer reviews, and another made psychedelic art. But taking these supremely complex and power-hungry systems out into the real world and installing them in portable devices is no easy feat. This February, however, at the IEEE International Solid-State Circuits Conference in San Francisco, teams from MIT, Nvidia, and the Korea Advanced Institute of Science and Technology (KAIST) brought that goal closer. They showed off prototypes of low-power chips that are designed to run artificial neural networks that could, among other things, give smartphones a bit of a clue about what they are seeing and allow self-driving cars to predict pedestrians’ movements.

Until now, neural networks—learning systems that operate analogously to networks of connected brain cells—have been much too energy intensive to run on the mobile devices that would most benefit from artificial intelligence, like smartphones, small robots, and drones. The mobile AI chips could also improve the intelligence of self-driving cars without draining their batteries or compromising their fuel economy.


Did I just say Moore’s Law becomes Different?
Scientists just made the world's smallest diode out of DNA
Electronics on the molecular scale.
Researchers have shrunk down one of the fundamental components of modern electronics, creating the world's smallest diode out of a single molecule of DNA. In fact, it's so tiny, you can't even see it using a conventional microscope.

Diodes are electronic devices that make it easy for current to flow in one direction, but not another. In other words, they're responsible for moving current around a lot of common electronics, and are printed by the millions onto modern-day silicon chips. But to increase the processing power of these chips, we need to make diodes a lot smaller, which is where DNA comes into it.

"For 50 years, we have been able to place more and more computing power onto smaller and smaller chips, but we are now pushing the physical limits of silicon," said lead researcher Bingqian Xu, from the University of Georgia. "Our discovery can lead to progress in the design and construction of nanoscale electronic elements that are at least 1,000 times smaller than current components."
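For readers unfamiliar with why one-way current matters, the asymmetry is easy to see in the textbook Shockley model of an ideal diode - a generic illustration, not a model of the DNA device’s measured behavior:

    # Shockley ideal-diode equation: i = Is * (exp(v / vT) - 1).
    # Textbook physics for illustration only, with typical small-signal constants.
    import math

    I_S = 1e-12    # saturation current, amperes
    V_T = 0.02585  # thermal voltage at room temperature, volts

    def diode_current(v):
        """Current through an ideal diode at bias voltage v (volts)."""
        return I_S * (math.exp(v / V_T) - 1)

    print(diode_current(+0.5))  # ~2.5e-4 A: forward bias conducts
    print(diode_current(-0.5))  # ~-1e-12 A: reverse bias is effectively blocked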


And talk about Google Glass - remember that? This is not in production - but here comes the participatory panopticon...
Samsung Patents Contact Lenses With Built-In Camera
Samsung has been granted a patent in South Korea for contact lenses with a display that projects images directly into the wearer’s eyes.

According to SamMobile, the patent includes a “contact lens equipped with a tiny display, a camera, an antenna, and several sensors that detect movement and the most basic form of input using your eyes: blinking.”

A smartphone will be required for the device to work, according to documents.
The “smart” contact lenses could prove to be a substantial upgrade from so-called “smart glasses”, posing a threat to what will be its main competitor in the market, Google Glass.


This isn’t 3D printing as manufacturing, but it points to new ways to produce drugs locally and less expensively. It’s part of the change in the conditions of change.
Pharmacy on demand
New, portable system can be configured to produce different drugs.
MIT researchers have developed a compact, portable pharmaceutical manufacturing system that can be reconfigured to produce a variety of drugs on demand.

Just as an emergency generator supplies electricity to handle a power outage, this system could be rapidly deployed to produce drugs needed to handle an unexpected disease outbreak, or to prevent a drug shortage caused by a manufacturing plant shutdown, the researchers say.

“Think of this as the emergency backup for pharmaceutical manufacturing,” says Allan Myerson, an MIT professor of the practice in the Department of Chemical Engineering. “The purpose is not to replace traditional manufacturing; it’s to provide an alternative for these special situations.”

Such a system could also be used to produce small quantities of drugs needed for clinical trials or to treat rare diseases, says Klavs Jensen, the Warren K. Lewis Professor of Chemical Engineering at MIT.

Traditional drug manufacturing, also known as “batch processing,” can take weeks or months. Active pharmaceutical ingredients are synthesized in chemical manufacturing plants and then shipped to other sites to be converted into a form that can be given to patients, such as tablets, drug solutions, or suspensions. This system offers little flexibility to respond to surges in demand and is susceptible to severe disruption if one of the plants has to shut down.

Many pharmaceutical companies are now looking into developing an alternative approach known as flow processing — a continuous process that is done all in one location. Five years ago, an MIT team that included Jamison, Jensen, and Myerson demonstrated a larger prototype (24 by 8 by 8 feet) for the continuous integrated manufacturing of drugs from chemical synthesis to tablets. That project has ended, but the continuous manufacturing initiative, funded by Novartis, is still underway as the researchers develop new methods for synthesis, purification, and formulation.   


This is very interesting progress in 3D printing.
3D-printed hydraulic robot 'can practically walk right out of the printer'
The bot's creation shows how 3D printing can advance from making individual components to whole active systems.
Researchers from MIT have used a new 3D-printing method that works with both solids and liquids to create a six-legged, hydraulically-powered robot. The team from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) created the bot using a commercially-available 3D printer. Several sets of tiny bellows power the robot's legs, and are filled with liquid deposited during the 22-hour printing process.

CSAIL director Daniela Rus said in a press release that the research was a step towards "the rapid fabrication of functional machines." She added: "All you have to do is stick in a battery and motor, and you have a robot that can practically walk right out of the printer." The resulting bot weighs about 1.5 pounds and is just under six inches long. Future applications for such cheap robots could include exploring disaster sites where humans cannot easily go.


This is totally fascinating - it should be a MUST READ for all social scientists - the 21st century is the century of complexity, the domestication of DNA & matter, and mass migrations to virtual worlds - and who knows what else? This article reveals a new domain for research - the history of virtual worlds. Equally important in understanding these massively multiplayer online games is that they are new research platforms for social experimentation at all levels - something all social scientists should become familiar with, offering affordances for new research tools, methods and domains.
“It’s one of the first times we can point to something and see an ideologically fueled conflict taking place on the Internet between tens of thousands of networked individuals.”
“They don’t view it as a game. They view it as a very real part of their lives, and a very real part of their accomplishments as people.”
At the end of the day, no one is doomed to work for any alliance. They can, at any time, pick up their bag and leave. So inspiration and morale become these huge resources for these organizations, and that’s fun, because it means propaganda starts to matter. I have examples of these wonderful propaganda speeches, where the leaders of these alliances start talking like they’re Winston Churchill. And you get the very real sense they believe it.
Would it be ‘unimaginable’ to think of ‘real’ conflict occurring in virtual domains? The concept of distributed realism enables a massive participatory platform generating embodied sensation, meaning, and implication.
I got to know EVE because when you’re just in the games community, when you follow games as a niche medium, you tend to hear these tip-of-the-iceberg tales about what goes on in EVE. Whispers get around of this battle that had 4,000 players in it, or this incredible Ponzi scheme someone ran that duped 10,000 people throughout the game’s history.
How to Write a History of Video Game Warfare
A journalist has assembled the first chronology of the largest war yet fought on the Internet—the Great War of EVE Online.
Imagine the Star Wars universe is real, and you live in it.
You have lived in it, every day, for the past 13 years. You’re the captain of a nimble fighter, working for money or sheer thrill; or you operate a modest mining ship, plying your trade in the nearest asteroid belt. For all that time, you have been scraping by in the final frontier—evading pirates, settling scores, and enduring the Machiavellian rise and fall of space empires. Sometimes you even pledge allegiance to a warlord.

For the more than 500,000 people who play EVE Online, this isn’t a fantasy. It’s real life. EVE Online is a massive multiplayer online game—a single environment shared by thousands of players, like World of Warcraft or Second Life—that has been in continual operation since 2003. It contains all the set pieces of space opera—moons, distant outposts, mighty dreadnoughts—but it is no ordinary video game. In fact, it is like little else on the Internet in its ability to mirror the functioning complexity of the real world.

Thursday, April 7, 2016

Friday Thinking 8 April 2016

Hello – Friday Thinking is curated on the basis of my own curiosity and offered in the spirit of sharing. Many thanks to those who enjoy this. 

In the 21st Century curiosity will SKILL the cat.


At Medium, the experiment in self-organisation continues. Medium has abandoned Holacracy, but it hasn’t abandoned its pursuit of a horizontal management system. According to Doyle, Medium’s problem with Holacracy was functional rather than philosophical. ‘Many of the principles we value most about Holacracy are already embedded in the organisation through how we approach our work, collaborate, and instigate change’, he claims. Medium is currently moving forward by articulating these principles and assembling a team ‘to translate [them] into a functional system’. The company remains committed to ‘pioneering new ways to operate’. Doyle puts this in perspective: ‘The management model that most companies employ was developed over a century ago. Information flows too quickly — and skills are too diverse — for it to remain effective in the future’. Realistically, any company concerned to be around in ten years’ time should be exploring new organizational operating systems. Technology and culture are evolving far too rapidly for companies to bank on maintaining their position without continuous self-reinvention.

This is the crux of the matter. Digital technologies make it absurdly easy to share information and coordinate collaborative work. While they do not drive (or ‘want’) openness and collaboration, these technologies make self-organisation so simple, it is foolish not to explore it.

A 2015 Deloitte survey of more than 7000 companies revealed that the majority of companies are moving away from top down, command and control structures towards flexible structures based in teams. Only 38% of companies surveyed retain a traditional structure. Of these companies, 92% cite organisational redesign as ‘the top priority’.
Medium’s Experiment with Holacracy Failed. Long Live the Experiment!


Economic rents are obtained when someone is able to extract wealth or excessive returns despite no additional contribution to productivity, or what could be called socially useless activity.
REPORT: 74% OF BILLIONAIRE WEALTH FROM RENT-SEEKING


the particular features of email activity that were most predictive of low satisfaction. For work-life balance, it was the fraction of emails sent out of working hours: more is worse. For managerial satisfaction, it was manager response time: slower is worse. And for perceptions of company-wide collaboration, it was the size of the manager’s email network: smaller is worse….

...Insights like these are of immediate interest to both employees and managers. In particular, because predictions based on email sending behavior can be made in real time, HR can obtain more timely feedback than surveys allow. Moreover, modern statistical modeling approaches such as ours can help managers in complex situations where many different factors could be at play — e.g., by showing which of many plausible explanations are supported by the evidence, and by cautioning against “one size fits all” solutions. Finally, employees could also benefit from tools that help them quantify their work activity in the same way that personal fitness trackers help them quantify physical activity.
The Organizational Spectroscope


Problem-based, cooperative work is best expressed organizationally through emergent, responsive communities. The mainstream business approach is still predictive grouping and an ex ante organizational structure. It is typically a process organization designed and controlled by the expert/manager. This is based on the presuppositions that we know (1) all the linkages that are needed beforehand, and (2) what the right sequential order in acting is. Neither of these beliefs is correct any more.

The variables of creative work have increased beyond systemic models of process design.

The Internet is the best architecture for the open and loosely coupled work systems of the future. When it comes to work the Internet hasn’t really started yet
Esko Kilpi - Problem-based work - The lessons from Google


The traditional business model involved payment for a product or service upfront, regardless of whether or not it was ever used. Customers are less and less willing to tolerate this form of payment and are increasingly expecting to pay for actual usage

As customers gain more power, they won’t be satisfied with paying for usage. They’ll want to pay based on value created, rather than simple usage. What if I use a product or service and create very little value from that usage – should I really have to pay for simple usage? We are already seeing value based billing emerge in certain parts of the professional services world.

This is obviously a far more challenging expectation because it requires the ability to measure and monitor value creation for the customer, rather than simple usage. But technology is rapidly evolving to give us a much richer view of the context of usage and the impact created by that usage. And, if we can quantify the value created for the customer from usage of the product, that customer would be much more willing to pay for value received.
John Hagel - The Big Shift in Business Models


“Many economists, for better or worse, have come to see themselves as doing ‘positive research,’ in a philosophical sense. What isn’t always explicit is that the so-called positive research is nestled in a set of normative axiomatic assumptions about how the world works. One such assumption is that the basic rules of capitalism and democracy are immutable. I’ve learned that those who seek debate over the validity of  these assumptions carry the burden of establishing conclusively that the assumptions are violated in practice. If we’re going to make progress on the economic theory of the firm as it applies to corporate political activity, we need more economics-based empirical research on the harms, if any, of corporate lobbying.”
Is There a Crisis in the Economic Theory of the Firm? Participants at Harvard Business School Conference Agree: Firms Try to Change the Rules of the Game


This is a must read by Duncan Watts - the future of social science applied to organizations. Although it doesn’t talk about sociometric badges and apps, it expands the methods of data gathering, disrupting the primacy of reliance on traditional surveys.
The Organizational Spectroscope
Using digital data to shed light on team satisfaction and other questions about large organizations
For several decades sociologists have speculated that the performance of firms and other organizations depends as much on the networks of information flow between employees as on the formal structure of the organization.

This argument makes intuitive sense, but until recently it has been extremely difficult to test using data. Historically, employee data has been collected mostly in the form of surveys, which are still the gold standard for assessing opinions, but reveal little about behavior such as who talks to whom. Surveys are also expensive and time consuming to conduct, hence they are unsuitable for frequent and comprehensive snapshots of the state of a large organization.

Thanks to the growing ubiquity of productivity software, however, this picture is beginning to change. Email logs, web-based calendars, and co-authorship of online documents all generate digital traces that can be used as proxies for social networks and their associated information flows. In turn, these network and activity data have the potential to shed new light on old questions about the performance of teams, divisions, and even entire organizations.

Recognizing this opportunity, my colleagues Jake Hofman, Christian Perez, Justin Rao, Amit Sharma, Hanna Wallach, and I — in collaboration with Office 365 and Microsoft’s HR Business Insights unit — have embarked on a long-term project: the Organizational Spectroscope.

The Organizational Spectroscope combines digital communication data, such as email metadata (e.g., time stamps and headers), with more traditional data sources, such as job titles, office locations, and employee satisfaction surveys. These data sources are combined only in ways that respect privacy and ethical considerations. We then use a variety of statistical modeling techniques to predict and explain outcomes of interest to employees, HR, and management.
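The kinds of features described above are straightforward to derive once the metadata is assembled. Here is a minimal sketch of the work-life balance signal - the fraction of a person’s messages sent outside working hours - where the field names and the nine-to-five window are my own assumptions, not the project’s actual pipeline:

    # Sketch: an out-of-hours email fraction derived from message timestamps.
    # The 9-to-5 weekday window is an assumption, for illustration only.
    from datetime import datetime

    def out_of_hours_fraction(send_times, start_hour=9, end_hour=17):
        """Fraction of messages sent outside weekday working hours (more is worse)."""
        def out_of_hours(t):
            return t.weekday() >= 5 or not (start_hour <= t.hour < end_hour)
        flags = [out_of_hours(t) for t in send_times]
        return sum(flags) / len(flags) if flags else 0.0

    sent = [datetime(2016, 4, 11, 8, 30),   # Monday 08:30 -- out of hours
            datetime(2016, 4, 11, 14, 0),   # Monday 14:00 -- in hours
            datetime(2016, 4, 16, 22, 15)]  # Saturday 22:15 -- out of hours
    print(out_of_hours_fraction(sent))      # 0.666...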


Here’s a must view 56 min video by the always thought provoking and entertaining Kevin Kelly - for anyone interested in the future.
Kevin Kelly | 12 Inevitable Tech Forces That Will Shape Our Future | SXSW Interactive 2016
In a few years we’ll have artificial intelligence that can accomplish professional human tasks. There is nothing we can do to stop this. In addition our lives will be totally 100% tracked by ourselves and others. This too is inevitable. Indeed much of what will happen in the next 30 years is inevitable, driven by technological trends which are already in motion, and are impossible to halt without halting civilization. Some of what is coming may seem scary, like ubiquitous tracking, or robots replacing humans. Other innovations seem more desirable, such as an on-demand economy, and virtual reality in the home. And some of what is coming, like network crime and anonymous hacking, will be society’s new scourges. Yet both the desirable good and the undesirable bad of these emerging technologies all obey the same formation principles.


At the risk of being too Kevin Kelly focused - this is another must view 58 min video, given by Kevin at Singularity University, for anyone who is serious about thinking about the future - or, more precisely, serious about how to think in better ways about the future. Futurism is essential to the human condition.
Kevin Kelly - Tricks For Predicting The Future


This is a MUST READ by Cory Doctorow (Canadian Internet Activist, journalist and sci-fi writer) - The Dark Side of the Internet-of-Things looms in the discussion - especially if we don’t get business models and corresponding digital rights appropriately developed and implemented.
The last-millennium Digital Millennium Copyright Act has managed to stay on the books because we still think of it as a way to pull off small-potatoes ripoffs like forcing you to re-buy the movies you own on DVD if you want to watch them on your phone. In reality, the DMCA's anti-circumvention rules are a system that makes corporations into the only "people" who get to own property -- everything you "buy" is actually a license, dictated by terms of service that you've never read and certainly never agreed to, which give companies the right to reach into your home and do anything they want with the devices you've paid for.
Google reaches into customers' homes and bricks their gadgets
Revolv is a home automation hub that Google acquired 17 months ago; yesterday, Google announced that as of May 15, it will killswitch all the Revolvs in the field and render them inert. Section 1201 of the DMCA -- the law that prohibits breaking DRM -- means that anyone who tries to make a third-party OS for Revolv faces felony charges and up to 5 years in prison.

Revolv is apparently being killswitched because it doesn't fit in with Google's plan for Nest, the other home automation system it acquired. Google's FAQ tells its customers that this is OK because their warranties have expired, and besides, this is all covered in the fine-print they clicked through, or at least saw, or at least saw a link to.

This isn't the earthquake, it's the tremor. From your car to your lightbulbs to your pacemaker, the gadgets you own are increasingly based on networked software. Remove the software and they become inert e-waste. There is no such thing as a hardware company: the razor-thin margins on hardware mean that every funded hardware company is a service and data company, and almost without exception, these companies use DRM to acquire the legal right to sue competitors who provide rival services or who give customers access to their own data on "their" devices.

We are entering the era where dishwashers can reject third-party dishes, and their manufacturers can sue anyone who makes "third-party dishes" out of existence. Selling you a toaster has never afforded companies the power to dictate your bread choices, nor has making a record player given a company the right to control which records get made.


And here’s another IoT shadow - scarier than the one above. It is vital that we begin to think about digital infrastructure as a public good - well regulated, with better provisions for transparency regarding user controls and interoperability in a device ecology.
Oculus Rift terms and conditions allow Facebook to monitor users’ movements and use it for advertising
The Facebook-owned company’s VR headset installs a piece of software that keeps watch of when people are using it — and can send that off to other firms


Whenever we think of the future we have to consider how we understand time - this is a wonderful 32 min video that explores exactly how we perceive time. Well worth the view. If anyone is interested in the possibility of ‘bullet time’ (remember The Matrix?), he explores the phenomenon of slowed-down time. This is a fascinating account.
What is time to the brain? Perception of time dilation
Setting time aright - Investigating the nature of time

1. The Flash-lag Effect
2. Time perception recalibrates
2.5 illusory reversal of cause and effect
3. Can subjective time run in slow motion?

We live in the past (perception of present can depend on what happens next), Temporal order recalibrates (can reverse perception of causation), Time is not one thing, Subjective duration indexes neural energy.


Here’s a MUST SEE 20 min TED Talk - about the world beyond the one our human senses construct - and how technology has given us, and will continue to give us, access to a much larger universe from which to construct new realities for ourselves - we will have a vast choice of peripheral sense inputs. We will soon be able to feel crowds and events in real time - whether we are present or not.
David Eagleman: Can we create new senses for humans?
Published on 18 Mar 2015
As humans, we can perceive less than a ten-trillionth of all light waves. “Our experience of reality,” says neuroscientist David Eagleman, “is constrained by our biology.” He wants to change that. His research into our brain processes has led him to create new interfaces to take in previously unseen information about the world around us.


And considering senses - with better feedback from inexpensive technology, we can come to sense how our own brain’s unconscious processes are doing. This is an interesting account of some of this popular quantified-self wearable technology.
A FITBIT FOR YOUR BRAIN IS AROUND THE CORNER
“Anxiety, depression, schizophrenia, dementia, Alzheimer’s, Parkinson’s, autism…” Le ticks off a litany of neurological disorders, then continues: “Most of these conditions are developmental in nature. The markers probably exist decades before the symptoms manifest themselves. We need more early intervention and early monitoring.” According to Le, the way we do neuroscience nowadays is fundamentally flawed. “For the most part, we only study brains when something goes wrong.” People with supposedly healthy brains almost never have their brains scanned—in part because, until this decade, obtaining readable results from EEG has been time- and labor-intensive.

The rise in popularity of wearable health technology, Le says, has “opened up the opportunity for us to monitor, track and learn about the brain and to start to build better models of the brain across a broad spectrum of users,” not just people who are ill. In other words, if enough people start using the Insight (Emotiv’s new EEG rig) and let Emotiv collect data about their minds, maybe that amassed data could allow neuroscientists to finally know what a healthy brain looks like when it deals with various everyday stimuli. That information could allow neuroscientists to better identify “early biomarkers for a variety of neurological disorders,” says Le.


New insights and developments in understanding our brain and links to disease continue to emerge - this is a fascinating article discussing the emergence of a deeper understanding of the brain’s support cell processes.
The Rogue Immune Cells That Wreck the Brain
Beth Stevens thinks she has solved a mystery behind brain disorders such as Alzheimer’s and schizophrenia.
Microglia are part of a larger class of cells—known collectively as glia—that carry out an array of functions in the brain, guiding its development and serving as its immune system by gobbling up diseased or damaged cells and carting away debris. Along with her frequent collaborator and mentor, Stanford biologist Ben Barres, and a growing cadre of other scientists, Stevens, 45, is showing that these long-overlooked cells are more than mere support workers for the neurons they surround. Her work has raised a provocative suggestion: that brain disorders could somehow be triggered by our own bodily defenses gone bad.

In one groundbreaking paper, in January, Stevens and researchers at the Broad Institute of MIT and Harvard showed that aberrant microglia might play a role in schizophrenia—causing or at least contributing to the massive cell loss that can leave people with devastating cognitive defects. Crucially, the researchers pointed to a chemical pathway that might be targeted to slow or stop the disease. Last week, Stevens and other researchers published a similar finding for Alzheimer’s.

This might be just the beginning. Stevens is also exploring the connection between these tiny structures and other neurological diseases—work that earned her a $625,000 MacArthur Foundation “genius” grant last September.


And another fascinating development with huge implications related to our ongoing domestication of DNA and the process of transforming the expression of our DNA heritage - while we live - thus deepening the meaning of personal transformation.
Biological mechanism passes on long-term epigenetic 'memories'
Researchers discover the on/off button for inheriting responses to environmental changes
According to epigenetics -- the study of inheritable changes in gene expression not directly coded in our DNA -- our life experiences may be passed on to our children and our children's children. Studies on survivors of traumatic events have suggested that exposure to stress may indeed have lasting effects on subsequent generations. But how exactly are these genetic "memories" passed on?


This is an excellent account of one company’s experiment with ‘Holacracy’ - one attempt to create an organizational architecture that is not hierarchical - however, I would suggest that it remains quite bureaucratic. That said, this should be a must read for anyone interested in developing better forms of organization.
Medium’s Experiment with Holacracy Failed. Long Live the Experiment!
The organisations of the future are being invented today. They are densely connected, human-centered, agile, and intrinsically innovative. The question for business leaders is not if they should shift to a more flexible, self-organizing structure, but how.
In March 2016, Medium abandoned Holacracy. The carnivores of the business press, who had been circling the blog publishing company since it started using the management ‘operating system’ three years earlier, closed in for the kill. ‘Well, waddaya know?’ Paul Carr gloated in the tech journal Pando. ‘Medium drops Holacracy, because Holacracy is “time consuming and divisive”’. This misrepresents Andy Doyle’s claim in his announcement of Medium’s decision. Doyle’s actual point is that Holacracy was problematic ‘for larger initiatives, which require coordination across functions’. But why let the truth get in the way of a trouncing?

...When Medium abandoned Holacracy, the status quo rejoiced. The disruptor is dead. Order is restored. We can all sleep more soundly knowing that the organisational forms of the 19th century are alive and well, unchallenged by the pretensions of the ‘bossless organisation’.

Not so fast. Medium is not the only company experimenting with Holacracy. The operating system is currently used by about 70 companies around the world, including the online shoe retailer Zappos. Zappos CEO Tony Hsieh shifted the entire company to Holacracy in 2013. Hsieh’s view is that command and control structures are death. Self-organizing structures are not only more resilient, they become more innovative as they expand. Many of Hsieh’s employees failed to see the upside. Confronted with protracted resistance, Hsieh told employees to accept Holacracy or quit. In the end, 260 employees (roughly 18% of the company) accepted Hsieh’s redundancy offer and left. The majority of media coverage failed to note that Zappos’ standard annual turnover is around 20%. Plus it was a superb payout. Hsieh half jokingly suggests that, given the size of the redundancy package, ‘the headline really should be “82% of employees chose NOT to take the offer”’.


Based on the Business Model Canvas, this is a very interesting site and idea to help bureaucracies flesh out new initiatives. It is well worth the view for anyone struggling to develop models for innovating in a bureaucracy.
The GovLab Academy Canvas
Use the GovLab Public Problem Solving Canvas to create and develop your public interest project. These twenty questions are designed to help you refine your understanding of the problem and those whom it affects; express your Big Idea; and turn that idea into an actionable strategy in the real world to the end of improving people's lives.


The need for creativity, originality and innovation also depends on individuals - this is an entertaining 15 min TED Talk.
Adam Grant: The surprising habits of original thinkers
How do creative people come up with great ideas? Organizational psychologist Adam Grant studies "originals": thinkers who dream up new ideas and take action to put them into the world. In this talk, learn three unexpected habits of originals — including embracing failure. "The greatest originals are the ones who fail the most, because they're the ones who try the most," Grant says. "You need a lot of bad ideas in order to get a few good ones."

After years of studying the dynamics of success and productivity in the workplace, Adam Grant discovered a powerful and often overlooked motivator: helping others.


And in the realm of self-driving vehicles and drones - here’s one glimpse of the future of the Navy. There is a very short video as well.
DARPA starts speed testing its submarine-hunting drone ship
Open-water tests will follow this summer.
DARPA's 130-foot unmanned ship is almost ready to take on rogue submarines. Its christening isn't slated to take place until April 7th, but it's now in the water near its construction site in Portland, Oregon -- the agency has even begun conducting speed tests. The drone, called ACTUV (Anti-Submarine Warfare Continuous Trail Unmanned Vessel), has successfully reached the top speed its creators were expecting (31 mph) during the preliminary tests. It was, however, designed to do much more than traverse the oceans at 31 mph. ACTUV can use long- and short-range sonar to detect foreign submarines, even stealthy diesel-electric ones that don't make noise.

It can then follow those submarines around in an effort to spook their operators and drive them off. If needed, the vessel can also deliver supplies and be sent on reconnaissance missions with absolutely no human on board. Before it can do the tasks it was made for, though, it still has to undergo open-water testing in California sometime this summer.


This is an amazing new type of sensor.
Tiny gravity sensor could detect drug tunnels, mineral deposits
A new device the size of a postage stamp can detect 1-part-per-billion changes in Earth’s gravitational field—equivalent to what the gizmo would experience if it were lifted a mere 3 millimeters. The technology may become so cheap and portable it could one day be mounted on drones to spot everything from hidden drug tunnels to valuable mineral deposits.

Physicist Giles Hammond of the University of Glasgow and his colleagues set out to build a smaller, cheaper spring-based gravimeter. The heart of their device is a postage stamp–sized bit of silicon, carved so that at its center a 25-milligram bit of material is left suspended by three stiff, fiber-like structures that are each about 5 micrometers across (less than one-third the diameter of the finest human hair). Together, these act as the spring. As the gravitational field surrounding the device changes - as it would if it passed over a large underground cavern or a dense deposit of minerals, because of the sudden change of density in the underlying rocks - the tiny bit of silicon bobs up and down in response, Hammond says. Those movements are tracked by monitoring the silicon's shadow as it moves across a light detector.

The team’s gravimeter is so sensitive it can track the up-and-down motion of Earth’s surface caused by the changing positions of the sun and moon, the researchers report online today in Nature. (These so-called “Earth tides” are real and measurable, but they are much smaller than the tides seen in the seas because rock is stiffer than water.)
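A quick sanity check of the sensitivity claim above (my arithmetic, not the article’s): near Earth’s surface, gravity weakens with height at the standard “free-air” rate of roughly 3.1 × 10⁻⁶ m/s² per meter, so a 3-millimeter lift should indeed change g by about one part per billion. A minimal sketch in Python:

```python
# Back-of-the-envelope check of the "1 part per billion ~ 3 mm lift"
# equivalence quoted above. The free-air gradient is the standard
# textbook value, not a figure taken from the article.

G_SURFACE = 9.81               # nominal surface gravity, m/s^2
FREE_AIR_GRADIENT = 3.086e-6   # decrease in g per meter of height, (m/s^2)/m

lift = 0.003                   # 3 millimeters, expressed in meters
delta_g = FREE_AIR_GRADIENT * lift

print(f"delta g:           {delta_g:.2e} m/s^2")        # ~9.3e-09
print(f"fractional change: {delta_g / G_SURFACE:.2e}")  # ~9.4e-10, i.e. ~1 ppb
```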


The future is not a linear flourishing of the past - but more often is a proliferation into emergent ecologies of diverse possibles and plausibles.
While you’re charging your EV, BMW is preparing for a hydrogen future
Don’t let $2 gas fool you: the alternative-fuel revolution is well under way. The vast majority of car makers offer at least one hybrid model, and the number of electric cars on the market grows annually. Hydrogen technology, however, is still lagging behind, plagued by an array of setbacks that include an underdeveloped infrastructure.

Digital Trends sat down with Merten Jung, BMW’s head of fuel cell development, to get insight on where the technology stands today, what will change in the coming years, and when we can expect to see a hydrogen-powered car in a BMW showroom.


This is an interesting Bloomberg article reflecting on the current state of renewable-energy investment. The graphs and gifs are must-views - they say it all.
Government subsidies have helped wind and solar get a foothold in global power markets, but economies of scale are the true driver of falling prices: The cost of solar power has fallen to 1/150th of its level in the 1970s, while the total amount of installed solar has soared 115,000-fold.
The reason solar-power generation will increasingly dominate: It’s a technology, not a fuel. As such, efficiency increases and prices fall as time goes on. What's more, the price of batteries to store solar power when the sun isn't shining is falling in a similarly stunning arc.
The best minds in energy keep underestimating what solar and wind can do. Since 2000, the International Energy Agency has raised its long-term solar forecast 14 times and its wind forecast five times. Every time global wind power doubles, there's a 19 percent drop in cost, according to BNEF, and every time solar power doubles, costs fall 24 percent.
Wind and Solar Are Crushing Fossil Fuels
Record clean energy investment outpaces gas and coal 2 to 1.
Wind and solar have grown seemingly unstoppable.
While two years of crashing prices for oil, natural gas, and coal triggered dramatic downsizing in those industries, renewables have been thriving. Clean energy investment broke new records in 2015 and is now seeing twice as much global funding as fossil fuels.

One reason is that renewable energy is becoming ever cheaper to produce. Recent solar and wind auctions in Mexico and Morocco ended with winning bids from companies that promised to produce electricity at the cheapest rate, from any source, anywhere in the world, said Michael Liebreich, chairman of the advisory board for Bloomberg New Energy Finance (BNEF).  

"We're in a low-cost-of-oil environment for the foreseeable future," Liebreich said during his keynote address at the BNEF Summit in New York on Tuesday. "Did that stop renewable energy investment? Not at all."


Talk about a phase transition in energy geopolitics - planning for strategic shifts is getting ‘chic’ (bad pun, I know). Still, this is interesting and should make other oil-heavy investment approaches pause to re-think. It would be convenient if the availability of oil were reduced in order to raise the price - at least until the sales are made.
“IPOing Aramco and transferring its shares to PIF will technically make investments the source of Saudi government revenue, not oil,” the prince said in an interview at the royal compound in Riyadh that ended at 4 a.m. on Thursday. “What is left now is to diversify investments. So within 20 years, we will be an economy or state that doesn’t depend mainly on oil.”
….The sale of Aramco, or Saudi Arabian Oil Co., is planned for 2018 or even a year earlier, according to the prince. The fund will then play a major role in the economy, investing at home and abroad. It would be big enough to buy Apple Inc., Google parent Alphabet Inc., Microsoft Corp. and Berkshire Hathaway Inc. -- the world’s four largest publicly traded companies.
Saudi Arabia Plans $2 Trillion Megafund for Post-Oil Era: Deputy Crown Prince
Saudi Arabia is getting ready for the twilight of the oil age by creating the world’s largest sovereign wealth fund for the kingdom’s most prized assets.

Over a five-hour conversation, Deputy Crown Prince Mohammed bin Salman laid out his vision for the Public Investment Fund, which will eventually control more than $2 trillion and help wean the kingdom off oil. As part of that strategy, the prince said Saudi will sell shares in Aramco’s parent company and transform the oil giant into an industrial conglomerate. The initial public offering could happen as soon as next year, with the country currently planning to sell less than 5 percent.


The idea of the domestication of DNA arises with the digital environment and the realization that biology has become an information science - that DNA is code. This is moving ever closer to reality.
However, designing each circuit is a laborious process that requires great expertise and often a lot of trial and error. "You have to have this really intimate knowledge of how those pieces are going to work and how they're going to come together," Voigt says.
Users of the new programming language, however, need no special knowledge of genetic engineering.
"You could be completely naive as to how any of it works. That's what's really different about this," Voigt says. "You could be a student in high school and go onto the Web-based server and type out the program you want, and it spits back the DNA sequence."
A programming language for living cells
MIT biological engineers have created a programming language that allows them to rapidly design complex, DNA-encoded circuits that give new functions to living cells.

Using this language, anyone can write a program for the function they want, such as detecting and responding to certain environmental conditions. They can then generate a DNA sequence that will achieve it.

"It is literally a programming language for bacteria," says Christopher Voigt, an MIT professor of biological engineering. "You use a text-based language, just like you're programming a computer. Then you take that text and you compile it and it turns it into a DNA sequence that you put into the cell, and the circuit runs inside the cell."

Voigt and colleagues at Boston University and the National Institute of Standards and Technology have used this language, which they describe in the April 1 issue of Science, to build circuits that can detect up to three inputs and respond in different ways. Future applications for this kind of programming include designing bacterial cells that can produce a cancer drug when they detect a tumor, or creating yeast cells that can halt their own fermentation process if too many toxic byproducts build up.

The researchers plan to make the user design interface available on the Web.
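For a feel for what “compiling” a program into a cell involves: the circuits described in the Science paper are assembled from repressor-based NOT and NOR gates, since NOR gates are straightforward to build in DNA and are logically complete. Below is a toy Python sketch - emphatically not the MIT tool itself, and the three-input “program” is an invented example - showing how an arbitrary boolean specification reduces to NOR operations:

```python
# Toy illustration: reduce a boolean specification to NOR gates, the
# primitive these genetic circuits are built from. This models only the
# logic, not the DNA-level implementation.

def NOR(a: bool, b: bool) -> bool:
    return not (a or b)

# Standard reductions of NOT/AND/OR to NOR:
def NOT(a):    return NOR(a, a)
def AND(a, b): return NOR(NOR(a, a), NOR(b, b))
def OR(a, b):  return NOR(NOR(a, b), NOR(a, b))

# Hypothetical three-input program: respond only when sensors A and B
# both fire while sensor C stays silent (think: two tumor markers
# present, a toxicity signal absent).
def circuit(a, b, c):
    return AND(AND(a, b), NOT(c))

# Exhaustive truth-table check against the plain specification.
for a in (False, True):
    for b in (False, True):
        for c in (False, True):
            assert circuit(a, b, c) == (a and b and not c)
print("NOR-only circuit matches the specification")
```

A design tool like the one described then has to assign each logical gate to a distinct repressor so the gates don’t cross-talk inside the cell, which is part of why doing this by hand demands the intimate knowledge Voigt describes.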


This is an hour-long reality-TV docudrama exploring what is called social compliance - in particular, whether this tendency to ‘listen to authority’ can be used to ‘push’ people to commit murder. This may not be for everyone, but it explores in a very dramatic way the ideas implied by the Milgram experiments. I think people will either love or hate this program. What is interesting is that each step in creating the conditions of compliance is carefully explained.
Derren Brown - Pushed to the Edge


For Fun
Here is an academically developed game aimed at helping kids learn positive social skills.
Crystals of Kaydor
A team of researchers and game designers, led by Richard Davidson and Center Collaborator Constance Steinkuehler, developed the video game Crystals of Kaydor from the ground up aimed at teaching children pro-social behaviors, including recognizing others’ emotions.


This is a fantastic list by Charles Stross - for anyone interested in science fiction it is a must-read.
Towards a taxonomy of cliches in Space Opera
So I'm chewing over the idea of eventually returning to writing far future SF-in-spaaaace, because that's what my editors tell me is hot right now (subtext: "Charlie, won't you write us a space opera?"). A secondary requirement is that it has to be all new—no sequels to earlier work need apply. But I have a headache, because the new space opera turns 30 this year, with the anniversary of the publication of "Consider Phlebas" (or maybe "Schismatrix")—or even 40 (with the anniversary of the original "Star Wars"). There's a lot of prior art, much of it not very good, and the field has accumulated a huge and hoary body of cliches.

This is not an exhaustive list—it's merely a start, the tip of a very large iceberg glimpsed on the horizon. And note that I'm specifically excluding the big media franchise products—Star Wars, Star Trek, Firefly, and similar—from consideration: any one of them could provide a huge cliche list in its own right, but I'm interested in the substance of the literary genre rather than in what TV and film have built using the borrowed furniture of the field.


This is one of my favorite April Fool’s jokes - the video is 1.5 minutes. The smile is worth the view.
Sonified Higgs data show a surprising result
Scientists at CERN have been using new techniques to try and learn more about the tiniest particles in our universe. One unusual method they’ve utilised is to turn data from the Large Hadron Collider (LHC) into sounds – using music as a language to translate what they find.
This is exactly what happened this week when physicists at CERN sonified the Higgs boson data. They were shocked when, after listening to random notes as the data played its random tune, a bump in the graph translated into a well-known pattern of recognisable notes.