Thursday, May 10, 2018

Friday Thinking 11 May 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9


While the building blocks have begun to emerge, the principles for putting these blocks together have not yet emerged, and so the blocks are currently being put together in ad-hoc ways.

Thus, just as humans built buildings and bridges before there was civil engineering, humans are proceeding with the building of societal-scale, inference-and-decision-making systems that involve machines, humans and the environment. Just as early buildings and bridges sometimes fell to the ground — in unforeseen ways and with tragic consequences — many of our early societal-scale inference-and-decision-making systems are already exposing serious conceptual flaws.

And, unfortunately, we are not very good at anticipating what the next emerging serious flaw will be. What we’re missing is an engineering discipline with its principles of analysis and design.

… let us conceive broadly of a discipline of “Intelligent Infrastructure” (II), whereby a web of computation, data and physical entities exists that makes human environments more supportive, interesting and safe. Such infrastructure is beginning to make its appearance in domains such as transportation, medicine, commerce and finance, with vast implications for individual humans and societies. This emergence sometimes arises in conversations about an “Internet of Things,” but that effort generally refers to the mere problem of getting “things” onto the Internet — not to the far grander set of challenges associated with these “things” capable of analyzing those data streams to discover facts about the world, and interacting with humans and other “things” at a far higher level of abstraction than mere bits.

It is not hard to pinpoint algorithmic and infrastructure challenges in II systems that are not central themes in human-imitative AI research. II systems require the ability to manage distributed repositories of knowledge that are rapidly changing and are likely to be globally incoherent. Such systems must cope with cloud-edge interactions in making timely, distributed decisions and they must deal with long-tail phenomena whereby there is lots of data on some individuals and little data on most individuals. They must address the difficulties of sharing data across administrative and competitive boundaries. Finally, and of particular importance, II systems must bring economic ideas such as incentives and pricing into the realm of the statistical and computational infrastructures that link humans to each other and to valued goods. Such II systems can be viewed as not merely providing a service, but as creating markets. There are domains such as music, literature and journalism that are crying out for the emergence of such markets, where data analysis links producers and consumers. And this must all be done within the context of evolving societal, ethical and legal norms.

We need to realize that the current public dialog on AI — which focuses on a narrow subset of industry and a narrow subset of academia — risks blinding us to the challenges and opportunities that are presented by the full scope of AI, IA (Intelligence Augmentation) and II.

Artificial Intelligence — The Revolution Hasn’t Happened Yet



As recently as two decades ago, most people would have thought it absurd to countenance a free and open encyclopaedia, produced by a community of dispersed enthusiasts primarily driven by other motives than profit-maximisation, and the idea that this might displace the corporate-organised Encyclopaedia Britannica and Microsoft Encarta would have seemed preposterous. Similarly, very few people would have thought it possible that the top 500 supercomputers and the majority of websites would run on software produced in the same way, or that non-coercive cooperation using globally shared resources could produce artifacts as effectively as those produced by industrial capitalism, but more sustainably. It would have been unimaginable that such things should have been created through processes that were far more pleasant than the work conditions that typically result in such products.

Commons-based production goes against many of the assumptions of mainstream, standard-textbook economists. Individuals primarily motivated by their interest to maximise profit, competition and private property are the Holy Grail of innovation and progress – more than that: of freedom and liberty themselves. One should never forget these two everlasting ‘truths’ if one wants to understand the economy and the world, we are told. These are the two premises of the free-market economics that have dominated the discourse until today.

Already a decade ago (when smartphones were a novelty), Benkler argued in The Wealth of Networks (2006) that a new mode of production was emerging that would shape how we produce and consume information. He called this mode ‘commons-based peer production’ and claimed that it can deliver better artifacts while promoting another aspect of human nature: social cooperation. Digitisation does not change the human person (in this respect), it just allows her to develop in ways that had previously been blocked, whether by chance or design.

No matter where they are based, people today can use the internet to cooperate and globally share the products of their cooperation as a commons. Commons-based peer production (usually abbreviated as CBPP) is fundamentally different from the dominant modes of production under industrial capitalism. In the latter, owners of means of production hire workers, direct the work process, and sell products for profit-maximisation. Think how typical multinational corporations are working. Such production is organised by allocating resources through the market (pricing) and through hierarchical command. In contrast, CBPP is in principle open to anyone with the relevant skills to contribute to a common project: the knowledge of every participant is pooled.

Utopia now




For those who just got here, "Old people in big cities afraid of the sky" is my one-line description of mid-21st century life.  "Aging demographics, global urbanization, climate disasters." Already here, will be much further distributed

Bruce Sterling on Twitter




I’m still asking myself the same question that I asked myself ten years ago: "What is going on in my community?" I work in the foundations of physics, and I see a lot of strange things happening there. When I look at the papers that are being published, many of them seem to be produced simply because papers have to be produced. They don’t move us forward in any significant way. I get the impression that people are working on them not so much because it’s what they’re interested in but because they have to produce outcomes in a short amount of time. They sit on short-term positions and have short-term contracts, and papers must be produced.

If that is the case, then you work on what’s easy to do and what can quickly be finished. Of course, that is not a new story. I believe it explains a lot of what I see happening in my field and in related fields. The ideas that survive are the ideas that are fruitful in the sense of quickly producing a lot of publications, and that’s not necessarily correlated with these ideas being important to advancing science.

...For this we need science to work properly. First of all, to get this done will require that we understand better how science works. I find it ironic that we have models for how political systems work. We have voting models. We have certain understanding for how these things go about.

We also have a variety of models for the economic system and for the interaction with the political system. But we pretty much know nothing about the dynamics of knowledge discovery. We don’t know how the academic system works, for how people develop their ideas, for how these ideas get selected, for how these ideas proliferate. We don’t have any good understanding of how that works. That will be necessary to solve these problems. We will also have to get this knowledge about how science works closer to the people who do the science. To work in this field, you need to have an education for how knowledge discovery works and what it takes to make it work properly. And that is currently missing.

Looking in the Wrong Places




This is an excellent account of our current media environment - the message can no longer be controlled - and the environment is rife with what seems like self-replicating meme-propaganda. Perhaps honesty can survive if we can sustain conversations that promote better narratives for a global world.

Memes That Kill: The Future Of Information Warfare

Memes and social networks have become weaponized, while many governments seem ill-equipped to understand the new reality of information warfare. How will we fight state-sponsored disinformation and propaganda in the future?
In 2011, a university professor with a background in robotics presented an idea that seemed radical at the time.

After conducting research backed by DARPA — the same defense agency that helped spawn the internet — Dr. Robert Finkelstein proposed the creation of a brand new arm of the US military, a “Meme Control Center.”

In internet-speak the word “meme” often refers to an amusing picture that goes viral on social media. More broadly, however, a meme is any idea that spreads, whether that idea is true or false.

It is this broader definition of meme that Finkelstein had in mind when he proposed the Meme Control Center and his idea of “memetic warfare.”

Finkelstein’s presentation can be found here.


And here’s a more recent exploration from RAND into the challenge of the accelerating ‘infobahn’ and the need for rapid ‘response-ability’.
"The single biggest change that I experienced in almost 25 years . . . was in the area of speed," said Blinken, a RAND adjunct researcher who shared his expertise on the project. "Nothing had a more profound effect on government and the challenges of government."

Can Humans Survive a Faster Future?

Life is moving faster and faster. Just about everything—transportation, weapons, the flow of information—is accelerating. How will decisionmakers preserve our personal and national security in the face of hyperspeed?
As the velocity of information—and just about everything else—accelerates, leaders face immense pressure to act or respond quickly. To help them adapt, researchers at the RAND Corporation are studying the phenomenon of speed as part of a special project, known as Security 2040, which looks over the horizon to evaluate future threats.

In his former roles as Deputy Secretary of State and Deputy National Security Adviser, Antony Blinken was one of those leaders responding to speed-driven crises.


This is a great signal of the emergence of the ‘Smart Nation’ - well worth the view. The interactive website provides lots of information and examples.

we have built a digital society and so can you

Named ‘the most advanced digital society in the world’ by Wired, ingenious Estonians are pathfinders, who have built an efficient, secure and transparent ecosystem that saves time and money. e-Estonia invites you to follow the digital journey.

Ambitious Future
Successful countries need to be ready to experiment. Building e-Estonia as one of the most advanced e-societies in the world has involved continuous experimentation and learning from our mistakes. Estonia sees the natural next step in the evolution of the e-state as moving basic services into a fully digital mode. This means that things can be done for citizens automatically and in that sense invisibly.

In order to remain an innovative, effective and successful Northern country that leads by example, we need to continue executing our vision of becoming a safe e-state with automatic e-services available 24/7.


The change in conditions of change also involves a massive phase transition in population demographics - the unprecedented reversal of the classic age pyramid and the increase of life expectancy and age inflation. One consequence is…
"This means the arc of our lives must be re-examined. Future jobs will be filled by healthy, vibrant people over 75, perhaps in non-profit work (such as my 78-year-old father) or just helping out with the family. Women will be able to have children into their 40s with new technologies, allowing them to postpone starting a family."

The next great workplace challenge: 100-year careers

Scientists expect people to live routinely to 100 in the coming decades, and as long as 150. Which also suggests a much longer working life lasting well into the 70s, 80s, and even 100, according to researchers with Pearson and Oxford University.

Quick take: Thinkers of various types are absorbed in navigating the age of automation and flat wages, but their challenge will be complicated by something few have considered — a much-extended bulge of older workers.

That includes an even harder time balancing new blood and experience, and sussing out the best basic education for lives probably traversing numerous professions. "How will we ever prepare someone in 16 years for a 100-year career?" Pearson's Amar Kumar tells Axios.

In researching the future of work, the Pearson-Oxford team began with a question — if a child were starting school today, what skills would he or she ideally learn in order to be ready for a possibly century-long career (the list they came up with is below)?


This is a summary of an OECD study (the one I wanted to post was behind the Economist paywall) - given the 100+ year life - how many careers will be part of such a life? This is a more complex and challenging question than ‘how many jobs will a person have?’

Study finds nearly half of jobs are vulnerable to automation

Job-grabbing robots are no longer science fiction. In 2013 Carl Benedikt Frey and Michael Osborne of Oxford University used—what else?—a machine-learning algorithm to assess how easily 702 different kinds of job in America could be automated. They concluded that fully 47% could be done by machines “over the next decade or two”.

A new working paper by the OECD, a club of mostly rich countries, employs a similar approach, looking at other developed economies. Its technique differs from Mr Frey and Mr Osborne’s study by assessing the automatability of each task within a given job, based on a survey of skills in 2015. Overall, the study finds that 14% of jobs across 32 countries are highly vulnerable, defined as having at least a 70% chance of automation. A further 32% were slightly less imperilled, with a probability between 50% and 70%. At current employment rates, that puts 210m jobs at risk across the 32 countries in the study.

The pain will not be shared evenly. The study finds large variation across countries: jobs in Slovakia are twice as vulnerable as those in Norway. In general, workers in rich countries appear less at risk than those in middle-income ones. But wide gaps exist even between countries of similar wealth.
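The quoted percentages and the 210m figure hang together arithmetically. The sketch below checks them; the total employment across the 32 countries is not stated in the excerpt and is inferred here from the study’s own numbers, so treat it only as an estimate.

```python
# Back-of-envelope check of the OECD figures quoted above.
HIGH_RISK_SHARE = 0.14   # jobs with at least a 70% chance of automation
MID_RISK_SHARE = 0.32    # jobs with a 50-70% chance of automation

at_risk_share = HIGH_RISK_SHARE + MID_RISK_SHARE   # 46% of all jobs
AT_RISK_JOBS_MILLIONS = 210                        # from the study

# Implied total employment across the 32 countries (inferred, not stated):
implied_total_millions = AT_RISK_JOBS_MILLIONS / at_risk_share  # ~457m
```

Working backwards like this suggests the study covers roughly 457 million jobs in total - a useful sense of scale when reading the country-level comparisons that follow.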


There are many signals related to the development of a new economic paradigm - one suited to the massive and still-emerging collaborative commons and to managing the homeostasis of our environmental conditions. This is a good summary of one contribution, ‘Doughnut Economics’. The actual talk is not yet available, but the graphics in this article provide good insight into these ideas.
“The goal of the 21st century economy should be to meet the needs of all within the means of the planet.” In today’s world we are addicted to growth. That’s why Kate Raworth, renegade economist and author of Financial Times and Forbes book of the year ‘Doughnut Economics’, proposes a new 21st century economy fit for our future.

An economist gave the most compelling design talk at TED—about doughnuts

There were plenty of great design talks at the TED conference this year. Over the five-day ideas conference held in Vancouver last week, revered architects, urbanists, engineers, and a winsome illustrator took turns regaling the audience about the power of design to create a beautiful, more humane future.

But it was an economist who arguably gave the most compelling and consequential design talk of all.

In a 15-minute lecture, Oxford University researcher Kate Raworth traced today’s economic ills to one obsolete graphic: the growth chart. Used by every government and corporation as the single metric for progress since the 1960s, the growth chart instills the fantasy of unending growth without regard for the finite amount of resources available. Specifically, every government thinks that the solution to all problems lies in more and more GDP growth, she said.

Here is her 2014 TED Talk

Why it's time for 'Doughnut Economics' | Kate Raworth | TEDxAthens

Economic theory is centuries out of date and that's a disaster for tackling the 21st century's challenges of climate change, poverty, and extreme inequality. Kate Raworth flips economic thinking on its head to give a crash course in alternative economics, explaining in three minutes what they'll never teach you in three years of a degree. Find out why it's time to get into the doughnut...

And here is her most recent 20 min video presentation

Kate Raworth, Doughnut Economics | Fixing the future, CCCB, Barcelona 2018

The bad news: the world is broken. The good? We can fix it. And now for the ugly: it’s going to get messy. Luckily there are plenty of people who are happy to get stuck in. Having now mapped over 500 planet-changing projects, Atlas of the Future sees our role as providing a window to the work of these innovators. On 13 March 2018 a future-‘supergroup’ gathered at the CCCB in Barcelona, ‘City of the Possible’, for our first event: ‘Fixing the future: adventures in a better tomorrow’.


The sound examples in this blog post are a MUST HEAR - for anyone who wants to get a sense of how our personal AI-ssistant will help us in the very near future - this is STUNNING. This anticipates how we interact with our phones and ‘Google home’ in the very near future.
While sounding natural, these and other examples are conversations between a fully automatic computer system and real businesses.

The Google Duplex technology is built to sound natural, to make the conversation experience comfortable. It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months.

Exclusive: Google's Duplex could make Assistant the most lifelike AI yet

Experimental technology, rolling out soon in a limited release, makes you think you’re talking to a real person.
This could be the next evolution of the Assistant, Google's rival to Amazon's Alexa, Apple's Siri and Microsoft's Cortana. It sounds remarkably -- maybe even eerily -- human, pausing before responding to questions and using verbal tics, like "um" and "uh." It says "mm hmm" as if it's nodding in agreement. It elongates certain words as though it's buying time to think of an answer, even though its responses are instantaneously programmed by algorithms.

Built with technology Google calls "Duplex" -- and developed by engineers and product designers in Tel Aviv, New York and Mountain View -- the AI sounds as though the future of voice assistants has arrived.
Well, almost arrived.

The original Google post is here with other stunning examples

Google Duplex: An AI System for Accomplishing Real World Tasks Over the Phone




This is another signal in the blending of basic science, the digital environment and the emerging new economy - new paradigms, new accounting systems, new analytics - to enable new ways to value the flows of our values.
“Having an actual physical model and showing that this is a naturally occurring process might open up new ways to think about those functions,”

Stanford physicist finds that swirling liquids work similarly to bitcoin

The physics involved with stirring a liquid operate the same way as the mathematical functions that secure digital information. This parallel could help in developing even more secure ways of protecting digital information.
Fluid dynamics is not something that typically comes to mind when thinking about bitcoin. But for one Stanford physicist, the connection is as simple as stirring your coffee.

In a study published April 23 in Proceedings of the National Academy of Sciences, Stanford applied physics doctoral student William Gilpin described how swirling liquids, such as coffee, follow the same principles as transactions with cryptocurrencies such as bitcoin. This parallel between the mathematical functions governing cryptocurrencies and natural, physical processes may help in developing more advanced digital security and in understanding physical processes in nature.
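The parallel Gilpin draws rests on a shared property: like a cryptographic hash, chaotic mixing is extremely sensitive to its inputs. The toy sketch below (my own illustration, not the model from the PNAS paper) uses the logistic map in its chaotic regime to show how a tiny change in the starting state yields a completely different final state.

```python
# Toy illustration of input sensitivity - the property chaotic mixing
# shares with cryptographic hash functions. This uses the logistic map,
# not the fluid-dynamics model from Gilpin's paper.
def chaotic_digest(x: float, rounds: int = 64) -> float:
    """Iterate the logistic map at r = 4, its fully chaotic regime."""
    for _ in range(rounds):
        x = 4.0 * x * (1.0 - x)
    return x

a = chaotic_digest(0.2000000)
b = chaotic_digest(0.2000001)  # differs only in the 7th decimal place
# a and b end up uncorrelated, much as flipping one input bit
# scrambles a hash digest
```

A real hash function, of course, has properties chaos alone does not supply (fixed-size discrete output, collision resistance); the sketch only shows the sensitivity parallel.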


While most people accept that we live in a time of accelerating change, and some even grasp that we are living through a change in the conditions of change, the paradox we live every day is that some of our assumptions haven’t even caught up with the Newtonian world (Look, the sun is going down). Many know of Einstein, yet few can claim to truly grasp the notion of ‘space-time curvature’ because we still breathe the idea of Newtonian gravity. Even harder to grasp is the evaporation of an ‘objective frame of reference’ that follows from Einstein’s theory of relativity. And now we are enacting the sciences of quantum theory, which are on the verge of becoming part of our everyday technology (which derives from techne = knowledge as ‘know-how’).
The article is a lightweight and accessible account of this rapidly developing frontier - and it’s still mind-boggling.
Entanglement - may be an emerging metaphor for connectedness and complexity.
entanglement can also help link and combine the computing power of systems in different parts of the globe. It is easy to see how that makes it a crucial aspect of quantum computation. Another promising avenue is truly secure communications. That’s because any attempt to interfere with systems involving entangled particles immediately disrupts the entanglement, making it obvious that a message has been tampered with.

Scientists Discover How to Harness the Power of Quantum Spookiness by Entangling Clouds of Atoms

From tunneling through impenetrable barriers to being in two places at the same time, the quantum world of atoms and particles is famously bizarre. Yet the strange properties of quantum mechanics are not mathematical quirks—they are real effects that have been seen in laboratories over and over.

One of the most iconic features of quantum mechanics is “entanglement”—describing particles that are mysteriously linked regardless of how far away from each other they are. Now three independent European research groups have managed to entangle not just a pair of particles, but separated clouds of thousands of atoms. They’ve also found a way to harness their technological potential.

When particles are entangled they share properties in a way that makes them dependent on each other, even when they are separated by large distances. Einstein famously called entanglement “spooky action at a distance,” as altering one particle in an entangled pair affects its twin instantaneously—no matter how far away it is.


This is a strong signal of the acceleration of the emerging domestication of DNA.

The Genome Project-write (GP-write)

The Genome Project-write (GP-write) is an open, international research project led by a multi-disciplinary group of scientific leaders who will oversee a reduction in the costs of engineering and testing large genomes in cell lines more than 1,000-fold within ten years.

GP-write will include whole genome engineering of human cell lines and other organisms of agricultural and public health significance. Thus, the Human Genome Project-write (HGP-write) will be a critical core activity within GP-write focused on synthesizing human genomes in whole or in part. It will also be explicitly limited to work in cells, and organoids derived from them only. Because of the special challenges surrounding human genomes, this activity will include an expanded examination of the ethical, legal and social implications of the project.

The overarching goal of such an effort is to understand the blueprint for life provided by the Human Genome Project (HGP-read).

HGP-read aimed to “read” a human genome. Successfully completed in 2003, HGP-read is now widely recognized as one of the great feats of exploration, one that sparked a global revolution in science and medicine, particularly in genomic-based diagnostics and therapeutics.

But our understanding of the human genome – and the full benefits to humanity to be obtained from this knowledge — remains far from complete. Many scientists now believe that to truly understand our genetic blueprint, it is necessary to “write” DNA and build human (and other) genomes from scratch. Such an endeavor will require research and development on a grand scale.


Well this is a very interesting signal in the continued domestication of DNA - still discovering some fundamentals.

Scientists discover new DNA structure that's not a double helix

In a paper published in Nature Chemistry, researchers from Australia describe the first-ever sighting of a DNA component—called the intercalated motif (i-motif)—within living human cells. The shape of the structure has been likened to a “twisted knot.”

“The i-motif is a four-stranded ‘knot’ of DNA,” said genomicist Marcel Dinger, who co-led the research. “In the knot structure, C [cytosine] letters on the same strand of DNA bind to each other—so this is very different from a double helix, where ‘letters’ on opposite strands recognize each other, and where Cs bind to Gs [guanines].”

To identify the i-motif, which had been previously identified in vitro but never in living cells, the researchers developed antibody fragments dubbed “iMabs” that could recognize and bind to i-motifs in cells. The researchers added fluorescent dyes to the iMabs to make them easy to spot.


This is another signal related to a deeper understanding of communication in the domains of bacteria.

Study reveals how bacteria communicate in groups to avoid antibiotics

In a new study published in the Journal of Biological Chemistry (JBC), researchers from the University of Notre Dame and the University of Illinois at Urbana-Champaign have found that the bacterium Pseudomonas aeruginosa, a pathogen that causes pneumonia, sepsis and other infections, communicates distress signals within a group of bacteria in response to certain antibiotics. This communication was found to vary across the colony and suggests that this bacterium may develop protective behaviors that contribute to its ability to tolerate some antibiotics.

"There is a general lack of understanding about how communities of bacteria, like the opportunistic pathogen P. aeruginosa, respond to antibiotics," said Nydia Morales-Soto, senior research scientist in civil and environmental engineering and earth sciences (CEEES) at the University of Notre Dame and lead author of the paper. "Most of what we know is from studies about stationary biofilm communities, whereas less is known about the process beforehand when bacteria are colonizing, spreading and growing. In this study, our research team specifically reviewed the behavior of bacteria during this period and what that may mean for antibiotic resistance."


Research continues to discover that biological communication at all levels is increasingly complex - from horizontal gene transfer to exchanges of all forms of biological components and signals. The mechanisms of individual-species-environment evolution involve more than we have imagined. This is worth the read.
“There are fundamental differences between viruses and vesicles: Viruses can replicate and vesicles cannot,” Margolis said. “But there are many variants in between. Where do viruses start, and where do extracellular vesicles start?”
“Cell-cell communication is one of the most ancient mechanisms that makes us who we are. Since vesicles resemble viruses, the question of course is whether the first extracellular vesicles were primitive viruses, and whether the viruses learned from extracellular vesicles or vice versa.”
Around 8 percent of the human genome is ultimately derived from viruses. Although some of this DNA is, in fact, “junk,” scientists are learning that much of it plays a role in our biology.
“Although these viruses aren’t good for individuals, they provide the raw materials for new genes. They’re a potential gold mine.”

It’s now clear that extracellular vesicles are far from simple cellular debris, and the viral genes littering our DNA aren’t exactly junk - researchers have only just begun to crack the mystery of what they can do.

Cells Talk in a Language That Looks Like Viruses

Live viruses may seem completely different from the message-carrying vesicles that cells release. But a vast population of particles intermediate between the two hints at their deep evolutionary connection.
Is it a live virus? An extracellular vesicle that delivers information about a cell? An incomplete and defective virus particle? A vesicle carrying viral components? Classifying the closely related particles that cells release can be a challenge.

For cells, communication is a matter of life and death. The ability to tell other members of your species — or other parts of the body — that food supplies are running low or that an invading pathogen is near can be the difference between survival and extinction. Scientists have known for decades that cells can secrete chemicals into their surroundings, releasing a free-floating message for all to read. More recently, however, scientists discovered that cells could package their molecular information in what are known as extracellular vesicles. Like notes passed by children in class, the information packaged in an extracellular vesicle is folded and delivered to the recipient.

The past five years have seen an explosion of research into extracellular vesicles. As scientists uncovered the secrets about how the vesicles are made, how they package their information and how they’re released, it became clear that there are powerful similarities between vesicles and viruses.

….this similarity is more than mere coincidence. It’s not just that viruses appear to hijack the cellular pathways used to make extracellular vesicles for their own production — or that cells have also taken on some viral components to use in their vesicles. Extracellular vesicles and viruses, Margolis argues, are part of a continuum of membranous particles produced by cells. Between these two extremes are lipid-lined sacs filled with a variety of genetic material and proteins — some from hosts, some from viruses — that cells can use to send messages to one another.


This is a very short article presenting about 7 graphs - well worth the view to see just how fast the energy mix has changed in the UK over the last five years.
For example, in 2012 coal was 43.2% of electricity generation - in 2017 it was down to 7%.

Electricity since 2012

On this page I chart how electricity is changing year on year. This is done using a series of charts with commentary provided through my blogs. All of the charts are automatically recalculated on a monthly basis.

A blog piece on the data as it stood at the beginning of April 2017 can be found here.


And another important signal of the change in global energy geopolitics and transportation.

Electric Vehicles Begin To Bite Into Oil Demand

The latest report from Bloomberg New Energy Finance (BNEF) shows that economics are driving the change, with the total cost of ownership of electric buses far outperforming the alternatives. The report says a 110kWh battery e-bus coupled with the most expensive wireless charging reaches parity with a diesel bus on total cost of ownership at around 60,000 km traveled per year (37,000 miles). This means that a bus with the smallest battery, even when coupled with the most expensive charging option, would be cheaper to run in a medium-sized city, where buses travel on average 170km/day (106 miles).

Today large cities with high annual bus mileages therefore choose from a number of electric options, all cheaper than diesel and CNG buses. The BNEF report says, ‘Even the most expensive electric bus at 80,000km per year has a TCO of $0.92/km, just at par with diesel buses. Compared to a CNG bus, it is around $0.11/km cheaper in terms of the TCO. This indicates that in a megacity, where buses travel at least 220km/day, using even the most expensive 350kWh e-bus instead of a CNG bus could bring around $130,000 in operational cost savings over the 15-year lifetime of a bus.’

For every 1,000 battery-powered buses on the road, about 500 barrels a day of diesel fuel will be displaced from the market, according to BNEF calculations. In 2018, the volume of oil-based fuel demand that buses remove from the market may rise 37% to 279,000 barrels a day, or approximately the equivalent of the oil consumption of Greece. By 2040, this number could rise as high as 8 million barrels per day (bpd).
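The figures quoted above hang together arithmetically; here is a minimal sanity-check sketch in Python (the numbers are BNEF's as quoted, the variable names are mine):

```python
# Sanity-checking the BNEF electric-bus figures quoted above.

KM_PER_DAY_MEDIUM_CITY = 170   # average daily bus mileage, medium-sized city
KM_PER_DAY_MEGACITY = 220      # average daily bus mileage, megacity
PARITY_KM_PER_YEAR = 60_000    # e-bus vs. diesel TCO parity threshold

# A medium-sized city's buses exceed the parity mileage, so the e-bus wins:
annual_km_medium = KM_PER_DAY_MEDIUM_CITY * 365
assert annual_km_medium > PARITY_KM_PER_YEAR  # 62,050 km/yr > 60,000 km/yr

# Megacity savings: $0.11/km cheaper than CNG, over a 15-year lifetime.
saving_per_km = 0.11
lifetime_years = 15
lifetime_saving = saving_per_km * KM_PER_DAY_MEGACITY * 365 * lifetime_years
print(f"lifetime saving ~ ${lifetime_saving:,.0f}")  # ~ $132,495, i.e. "around $130,000"

# Diesel displacement: 500 barrels/day per 1,000 buses implies the 2018
# figure of 279,000 barrels/day corresponds to roughly 558,000 e-buses.
buses_implied = 279_000 / 500 * 1_000
print(f"implied e-bus fleet ~ {buses_implied:,.0f} buses")
```

The "around $130,000" figure in the quote is simply the $0.11/km advantage compounded over a megacity's daily mileage for 15 years.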


This is an interesting signal for a couple of reasons. The obvious one is stated in the title - the invention of a new device advancing computational optics. Another key reason is the role played by interdisciplinary scientists in bridging different domains. The 9 min video by the key author doesn't really explain the invention of the particular device itself, but it illuminates the author's ability to bring different domains together.

Plasmonic modulator could lead to a new breed of electro-optic computer chips

Researchers have created a miniaturized device that can transform electronic signals into optical signals with low signal loss. They say the electro-optic modulator could make it easier to merge electronic and photonic circuitry on a single chip. The hybrid technology behind the modulator, known as plasmonics, promises to rev up data processing speeds. “As with earlier advances in information technology, this can dramatically impact the way we live,” Larry Dalton, a chemistry professor emeritus at the University of Washington, said in a news release. Dalton is part of the team that reported the advance today in the journal Nature.


New forms of computation? Maybe. This next article signals another shift in domesticating DNA - using CRISPR for fast, inexpensive diagnosis. The 3 min video provides an excellent, accessible explanation.

Mammoth Biosciences launches a CRISPR-powered search engine for disease detection

Most people tend to think of CRISPR as a groundbreaking gene-editing technology that can hunt down and snip away bits of DNA, like the cut and paste function on a keyboard. While many research projects tend to emphasize the potential of that process in replacing target bits of genetic material, for Mammoth Biosciences, the search function is the real game changer.

“Control + F is the exciting part,” Mammoth co-founder and CEO Trevor Martin told TechCrunch in an interview. “At core it’s just this amazing search engine that we can use to find things. The way that we search for things is just like Google.”


This is an interesting short piece with a 3 min video, 36 awesome pictures of Mars, and interesting links to other related articles - and it raises a question about the speed of evolution even without domesticating DNA.

Chernobyl's Mutated Species May Help Protect Astronauts

Some species in the radioactive site show resistances to radiation—and their genetic protections may one day be applied to humans.
A former power plant in what is today northern Ukraine, Chernobyl experienced a catastrophic nuclear reactor accident in April 1986, and it is still contaminated with the same kind of gamma radiation that astronauts will encounter in deep space. Creatures big and small, from wolves to microbes, continue to live inside the thousand-square-mile Exclusion Zone.

Mousseau has visited the site regularly since 2000, looking at hundreds of species to see how they react to the environment. Some, like the radiantly red firebug, mutate aesthetically, their normally symmetrical designs warped and fractured. Others, including certain species of birds and bacteria, have shown an increased tolerance and resistance to the radiation.

These differences may offer clues to help with human spaceflight.
“I think that within human genomes, there are secrets to biological mechanisms for resisting or tolerating the effects of radiation,” Mousseau says. “The trick is to figure out what those mechanisms are, and to maybe turn them on or enhance them in some way.”


This is cool - although there was an effort to develop similar prototypes for car tires about 10 years ago, those weren't 3D printed. Maybe coming to a bike store near you soon. The video is 1 min.

Company Introduces First 3-D Printed Flexible, Airless Bicycle Tires

This is a video of industrial 3D printer manufacturer BigRep creating the world's first 3-D printed airless bicycle tires with their new PRO FLEX TPU (thermoplastic polyurethane) based flexible filament. They do seem to hold a bike up, although I doubt they get any grip on the pavement and probably drift like plastic Big Wheels tires when you try to stop. Still, add an outer rubber tread and you might be onto something. Now -- I want you to get into something. "What are you saying?" I want you to ride in my bicycle basket. "Um, don't you remember what happened the last time somebody rode in your bicycle basket?" Of course, Toto and I went to Oz and had the time of our lives. "No, the LAST last time." Oh shit -- E.T.! I almost forgot about that little creeper. You know he was just phoning sex-chat lines, right?