
Friday Thinking 14 July 2017

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9





There’s also something else very important to understand about failure and success. One success can outweigh 100,000 failures. Venture capitalist Paul Graham of Y Combinator has described this as Black Swan Farming. When it comes to truly transformative ideas, they aren’t obviously great ideas, or else they’d already be more than just an idea, and when it comes to taking a risk on investing in a startup, the question is not so much if it will succeed, but if it will succeed BIG. What’s so interesting is that the biggest ideas tend to be seen as the least likely to succeed.

Now translate that to people themselves. What if the people most likely to massively change the world for the better, the Einsteins so to speak — the Black Swans, are oftentimes those least likely to be seen as deserving social investment? In that case, the smart approach would be to cast an extremely large net of social investment, in full recognition that even at such great cost, the ROI from the innovation of the Black Swans would far surpass the cost.

This happens to be exactly what Buckminster Fuller was thinking when he said, “We must do away with the absolutely specious notion that everybody has to earn a living. It is a fact today that one in ten thousand of us can make a technological breakthrough capable of supporting all the rest.” That is a fact, and it raises the question, “How do we make sure we invest in every single one of those people such that all of society maximizes its collective ROI?”

“I must not fear. Fear is the mind-killer. Fear is the little-death that brings total obliteration. I will face my fear.” — Frank Herbert, Dune

What it all comes down to is fear. FDR was absolutely right when he said the only thing we have to fear is fear itself. Fear prevents risk-taking, which prevents failure, which prevents innovation. If the great fears are of hunger and homelessness, and they prevent many people from taking risks who would otherwise take risks, then the answer is to simply take hunger and homelessness off the table. Don’t just hope some people are unafraid enough. Eliminate what people fear so they are no longer afraid.

If everyone received, as an absolute minimum, a sufficient amount of money each month to cover their basic needs for that month no matter what — an unconditional basic income — then the fear of hunger and homelessness is eliminated. It’s gone. And with it go the risks of failure once considered too steep to take a chance on.

Markets work best when everyone can vote with their dollars, and everyone has enough dollars to vote for products and services.

Universal Basic Income Will Reduce Our Fear of Failure




The core of Elsevier’s operation is in scientific journals, the weekly or monthly publications in which scientists share their results. Despite the narrow audience, scientific publishing is a remarkably big business. With total global revenues of more than £19bn, it weighs in somewhere between the recording and the film industries in size, but it is far more profitable. In 2010, Elsevier’s scientific publishing arm reported profits of £724m on just over £2bn in revenue. It was a 36% margin – higher than Apple, Google, or Amazon posted that year.

But Elsevier’s business model seemed a truly puzzling thing. In order to make money, a traditional publisher – say, a magazine – first has to cover a multitude of costs: it pays writers for the articles; it employs editors to commission, shape and check the articles; and it pays to distribute the finished product to subscribers and retailers. All of this is expensive, and successful magazines typically make profits of around 12-15%.

The way to make money from a scientific article looks very similar, except that scientific publishers manage to duck most of the actual costs. Scientists create work under their own direction – funded largely by governments – and give it to publishers for free; the publisher pays scientific editors who judge whether the work is worth publishing and check its grammar, but the bulk of the editorial burden – checking the scientific validity and evaluating the experiments, a process known as peer review – is done by working scientists on a volunteer basis. The publishers then sell the product back to government-funded institutional and university libraries, to be read by scientists – who, in a collective sense, created the product in the first place.

It is as if the New Yorker or the Economist demanded that journalists write and edit each other’s work for free, and asked the government to foot the bill.

A 2005 Deutsche Bank report referred to it as a “bizarre” “triple-pay” system, in which “the state funds most research, pays the salaries of most of those checking the quality of research, and then buys most of the published product”.

Is the staggeringly profitable business of scientific publishing bad for science?




This is an interesting article for anyone interested in the ‘you can’t manage what you can’t measure’ paradigm - it is also interesting for anyone interested in how new and emerging forms of data enable innovation in measurement.

Innovations in measurement and evaluation can help us unlock social change, report says

Charity think tank NPC has published a new report that highlights eight global trends in measurement and evaluation, which open new opportunities for charity organisations.

These innovations expand the measurement and evaluation toolkit, allowing for increased effectiveness and understanding of social interventions, something that is ever more critical in difficult times.

‘Global innovations in measurement and evaluation’ illustrates exciting developments that challenge traditional measurement and evaluation practice, making it easier and more useful for charities and social enterprises.

The report highlights how technology is enabling us to gather different types of data on bigger scales, and how increased data availability and processing power enables us to gain insights in ways not previously possible.  At the same time, organisations are trying harder to listen to and involve users, to assess change at a systemic level and to respond quickly to data.
Among the trends highlighted:
Big data and social media data
Remote sensing
Shared measurement and evaluation
Data linkage
Impact management


This is a good exploration of how big data and algorithmic intelligence are changing science research - these examples loom over the world of social science research as sensor technology for capturing real-time behavior begins to spread and disrupt older survey methodologies that rely on self-reported memory of past behavior. The sensors are already here, in our mobile and wearable devices - and their number will only increase. The article discusses a number of domains - including a few examples from social science.

AI is changing how we do science. Get a glimpse

With billions of users and hundreds of billions of tweets and posts every year, social media has brought big data to social science. It has also opened an unprecedented opportunity to use artificial intelligence (AI) to glean meaning from the mass of human communications, psychologist Martin Seligman has recognized. At the University of Pennsylvania’s Positive Psychology Center, he and more than 20 psychologists, physicians, and computer scientists in the World Well-Being Project use machine learning and natural language processing to sift through gobs of data to gauge the public’s emotional and physical health.

That’s traditionally done with surveys. But social media data are “unobtrusive, it’s very inexpensive, and the numbers you get are orders of magnitude greater,” Seligman says. It is also messy, but AI offers a powerful way to reveal patterns.
In one recent study, Seligman and his colleagues looked at the Facebook updates of 29,000 users who had taken a self-assessment of depression. Using data from 28,000 of the users, a machine-learning algorithm found associations between words in the updates and depression levels. It could then successfully gauge depression in the other users based only on their updates.
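
The study design is essentially a supervised text-regression pipeline: learn word-score associations on most of the users, then predict scores for the held-out users from their words alone. Here is a minimal sketch of that kind of pipeline - the inputs, features, and model are illustrative assumptions, not the World Well-Being Project's actual methods:

```python
# A minimal sketch of this kind of text-based screening pipeline, assuming
# hypothetical inputs: `updates` is a list of each user's concatenated
# Facebook status text, `scores` the matching depression self-assessments.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

def fit_and_evaluate(updates, scores, n_train=28_000):
    # Turn raw text into word-frequency features.
    vec = TfidfVectorizer(max_features=20_000, stop_words="english")
    X = vec.fit_transform(updates)

    # Learn word-score associations from the first 28,000 users...
    model = Ridge(alpha=1.0)
    model.fit(X[:n_train], scores[:n_train])

    # ...then gauge depression in the held-out users from words alone.
    predicted = model.predict(X[n_train:])
    return r2_score(scores[n_train:], predicted)
```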

In another study, the team predicted county-level heart disease mortality rates by analyzing 148 million tweets; words related to anger and negative relationships turned out to be risk factors. The predictions from social media matched actual mortality rates more closely than did predictions based on 10 leading risk factors, such as smoking and diabetes. The researchers have also used social media to predict personality, income, and political ideology, and to study hospital care, mystical experiences, and stereotypes. The team has even created a map coloring each U.S. county according to well-being, depression, trust, and five personality traits, as inferred from Twitter.

“There’s a revolution going on in the analysis of language and its links to psychology,” says James Pennebaker, a social psychologist at the University of Texas in Austin. He focuses not on content but style, and has found, for example, that the use of function words in a college admissions essay can predict grades. Articles and prepositions indicate analytical thinking and predict higher grades; pronouns and adverbs indicate narrative thinking and predict lower grades. He also found support for suggestions that much of the 1728 play Double Falsehood was likely written by William Shakespeare: Machine-learning algorithms matched it to Shakespeare’s other works based on factors such as cognitive complexity and rare words. “Now, we can analyze everything that you’ve ever posted, ever written, and increasingly how you and Alexa talk,” Pennebaker says. The result: “richer and richer pictures of who people are.”
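
Pennebaker's style-based approach is even simpler to picture: rather than modeling content, count the share of function words from different categories. A toy sketch, with made-up word lists standing in for the validated LIWC categories his lab actually uses:

```python
# Toy function-word style analysis. The word lists below are illustrative
# stand-ins, not Pennebaker's actual categories.
ANALYTIC = {"the", "a", "an", "of", "in", "on", "to", "with", "by", "for"}
NARRATIVE = {"i", "you", "he", "she", "they", "we",
             "really", "very", "just", "then", "so"}

def style_score(text: str) -> float:
    """Positive = article/preposition heavy (analytical thinking);
    negative = pronoun/adverb heavy (narrative thinking)."""
    words = text.lower().split()
    if not words:
        return 0.0
    analytic = sum(w in ANALYTIC for w in words)
    narrative = sum(w in NARRATIVE for w in words)
    return (analytic - narrative) / len(words)

print(style_score("The effect of the policy on growth depends on the rate"))
print(style_score("Then I really just went home and she was very upset"))
```

The first, article- and preposition-heavy sentence scores positive; the second, pronoun- and adverb-heavy one scores negative.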


What we measure is as important as how. I remember the first time I read a reinsurance report in the mid-2000s. A reinsurer is an insurance company that insures insurance companies - their report stated that the future held ‘fewer accidents - more disasters’. There were fewer accidents because we built things better and thus could build them bigger (e.g. planes, buildings, roads, cities, etc.) - there were more disasters because when an accident did happen, it was a bigger incident. This article is a signal worth listening to as we head into the phase transition (change in the conditions of change) emerging from the consequences of the digital environment.

Swiss Re shifts $130 billion investments to track ethical indices

Swiss Re (SRENH.S) is switching the entire $130 billion it holds in liquid assets to track ethical indices, the latest move towards principled investments by the insurance industry.

The world's second-largest reinsurer is 90 percent of the way through shifting its holdings from tracking traditional benchmarks, a process it expects to complete by the end of the third quarter in 2017.

It said taking environmental, social and governance (ESG) criteria into account reduced the risk of losses, especially for long-term investors.

"This is not only about doing good, we have done it because it makes economic sense," Swiss Re Chief Investment Officer Guido Fuerer told Reuters on Thursday.
"Equities and fixed income products from companies and sectors with a high ESG ratings have better risk-return ratios."

Institutional investors are increasingly looking at how companies perform on environmental, social and governance-related issues, given the potential for poor behavior to lead to a share price hit.


Talking about new forms of measurement, this is a very interesting signal about the future of work - as Edward Castronova noted in his 2007 book “Exodus to the Virtual World”, there are many aspects of online life that are far superior to many forms of work (e.g. McJobs). Jane McGonigal has likewise noted, in “Reality Is Broken”, that reality should be more like a game. When thinking about games we have to extend our imagining to what games can be and become - we have to remember game environments are designed platforms for crowdsourcing the exploration of problem and solution spaces.
“Games provide a sense of waking in the morning with one goal: I’m trying to improve this skill, teammates are counting on me, and my online community is relying on me,” said Jane McGonigal, a video game scholar and game designer. “There is a routine and daily progress that does a good job at replacing traditional work.”

Why Some Men Don’t Work: Video Games Have Gotten Really Good

If innovations in housework helped free women to enter the labor force in the 1960s and 1970s, could innovations in leisure — like League of Legends — be taking men out of the labor force today?

That’s the logic behind a new working paper released on Monday by the National Bureau of Economic Research. The paper — by the economists Erik Hurst, Mark Aguiar, Mark Bils and Kerwin Charles — argues that video games help explain why younger men are working fewer hours.

That claim got a lot of attention last year when the University of Chicago published a graduation speech given by Mr. Hurst at its business school, where he discussed some of his preliminary findings. He says the paper is now ready to be read by the public.

By 2015, American men 31 to 55 were working about 163 fewer hours a year than that same age group did in 2000. Men 21 to 30 were working 203 fewer hours a year. One puzzle is why the working hours for young men fell so much more than those of their older counterparts. The gap between the two groups grew by about 40 hours a year, or a full work week on average.

Other experts have pointed to a host of reasons — globalization, technological change, the shift to service work — that employers may not be hiring young men. Instead of looking at why employers don’t want young men, this group of economists considered a different question: Why don’t young men want to work?


This is an interesting signal - especially considering where it’s coming from and how long it’s been going on.

Alaska shows even people in the most conservative states prefer a basic income to lower taxes

Low taxes are good, but a basic income is better, at least in Alaska.
The Economic Security Project (ESP), a group backing efforts to collect data on unconditional cash stipends, recently commissioned a survey of 1,004 Alaskan voters to see how they felt about the Alaska Permanent Fund, a $60.1 billion state fund established in 1976 to collect revenue from Alaska’s oil and mineral leases. The money provides an annual stipend to Alaskans, as well as general revenue. Each October, the fund sends a dividend check to every Alaskan resident of up to $2,072 per person, or $8,288 for a family of four (it was reduced last year amid a budget crisis).

The survey by the ESP illustrates how a more robust basic income—a guaranteed minimum payment for all citizens—might play out in the rest of the country. Public support for the program has deepened in the last generation, despite the prospect of raising taxes.

In 1984, a survey of Alaskans found 71% would prefer to end the dividend if it meant raising taxes. By 2017, that ratio had nearly reversed, with only 36% of residents agreeing with that position. “Alaskans have become committed to the notion of dividends so much so that they are willing to pay taxes to preserve the Permanent Fund Dividend (PFD) system,” the survey states. The study found no major differences across respondents’ political views or income levels.


One more signal about the looming role of algorithmic intelligences (yes, the plural - because there will be an increasing number of different forms of such ‘alien’ intelligences) in transforming what humans do and how they do it. In this particular case the comparative graph of the results is very compelling.
‘The formation of new clusters is serendipitous, and the crystallisation process is serendipitous – it’s a random process – and we wanted to use a robot to see if we could explore those two processes together more efficiently,’ explains Cronin. ‘The results were quite striking.’
‘The robot extends the intuition of the human,’ says Cronin. ‘The robot frees the human from bias because it has more time to do reactions that a human wouldn’t otherwise choose to do. If you’re given a random set of reactions, you use your bias – or your training – to choose those reactions.’

Humans come out second best against efficient robot chemist

Chemists aren’t out of a job but the robot did perform well when it came to discovering and creating giant self-assembling structures
A robot chemist has been shown to be more efficient than its human counterparts – at least when it comes to discovering and crystallising gigantic self-assembling molecules. While these synthetic scientists aren’t about to replace the real thing any time soon, they could help humans to tackle their own blind spots and biases when it comes to research.

A team led by Lee Cronin at the University of Glasgow created a sophisticated algorithm connected to a liquid handling system able to conduct crystallisation experiments on giant polyoxometalates. The robot was then pitted against Cronin’s team with both given the starting materials, an experimental protocol for the synthesis and crystallisation process, and an initial data set detailing previous successful and failed crystallisation attempts. Using this, they were then required to establish their own experimental conditions and procedures in order to analyse the maximum amount of chemical space.

Although the human experimenters found more crystal points than their robotic counterparts, the algorithm-based method of searching chemical space was able to explore approximately six times more crystallisation space than the trained chemists. The robot’s crystallisation predictions were also five percentage points more accurate than the human chemists.
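
The setup described here is a closed-loop, active-learning search: the algorithm repeatedly fits a model to the outcomes so far, then spends its next experiment where the model is most uncertain. A minimal sketch of that loop, under assumed details (numeric encoding of conditions, a random-forest model, uncertainty sampling) that stand in for whatever the Glasgow team actually implemented:

```python
# A minimal closed-loop search sketch. Conditions are assumed to be numeric
# vectors; outcomes are 1 (crystals formed) or 0 (none). Model choice and
# selection rule are illustrative, not the published algorithm.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def explore(known_X, known_y, candidates, n_experiments, run_experiment):
    X, y = list(known_X), list(known_y)
    for _ in range(n_experiments):
        # Fit a model to every outcome observed so far.
        model = RandomForestClassifier(n_estimators=200).fit(X, y)
        # Pick the untried conditions the model is least certain about.
        p = model.predict_proba(candidates)[:, 1]
        i = int(np.argmin(np.abs(p - 0.5)))
        # Run that experiment (the robot's job) and learn from the result.
        X.append(candidates[i])
        y.append(run_experiment(candidates[i]))
        candidates = np.delete(candidates, i, axis=0)
    return model
```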


This is an important signal - confirming a trend evident for the last 150 years as living conditions improve, access to clean water and medical support increases, and medical sciences progress. Significant evidence suggests that we are not only living longer but also ‘weller’, as ‘age inflation’ indicates we are healthier by a decade compared to our parents - e.g. in the 1960s the chance of a 65-year-old dying in their 65th year was about 3%; today, to have the same probability of death, one has to be almost 75. Thus 50 is the new 40.

No detectable limit to how long people can live

New study finds no evidence that maximum lifespan has stopped increasing

Supercentenarians, such as Emma Morano and Jeanne Calment of France, who famously lived to be 122 years old, continue to fascinate scientists and have led them to wonder just how long humans can live. A study published in Nature last October concluded that the upper limit of human age is peaking at around 115 years.

Now, however, a new study in Nature by McGill University biologists Bryan G. Hughes and Siegfried Hekimi comes to a starkly different conclusion. By analyzing the lifespan of the longest-living individuals from the USA, the UK, France and Japan for each year since 1968, Hekimi and Hughes found no evidence for such a limit, and if such a maximum exists, it has yet to be reached or identified, Hekimi says.
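
The underlying analysis is straightforward to picture: collect the maximum reported age at death for each year, then test whether that series has plateaued. A sketch on hypothetical placeholder numbers (not the study's data):

```python
# Sketch of the basic trend test on invented yearly maxima, 1968 onward.
import numpy as np

years = np.arange(1968, 2018)
rng = np.random.default_rng(0)
max_age = 108 + 0.05 * (years - 1968) + rng.normal(0, 1.0, years.size)

# Fit a linear trend; a slope indistinguishable from zero would support
# a fixed limit, while a positive slope says the maximum is still rising.
slope, intercept = np.polyfit(years, max_age, 1)
print(f"maximum lifespan trend: {slope:+.3f} years per calendar year")
```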


Another signal to support the application and acceleration of our domestication of DNA.

This New Gene-Editing Technique Can Spot CRISPR’s Mistakes

Scientists have developed a tool that can test an entire genome against a CRISPR molecule to predict potential errors and interactions. This will allow doctors to ensure treatments are safer and more effective.
The CRISPR gene-editing tool is already in use by scientists all over the world who are racing to cure deadly diseases by editing the genomes of patients. However, as human trials for various treatments are slated to begin, we still face the hurdle of ensuring that any errors in CRISPR edits won’t cause problems. Scientists from The University of Texas at Austin may have come up with a possible solution. They’ve developed something that works like a predictive editor for CRISPR: a method for anticipating and catching the tool’s mistakes as it works, thereby allowing for the editing of disease-causing errors out of genomes.


This is a very important signal indicating one more step in developing a biological computational paradigm - where one can imagine not just bio-computers but the integration of bio-computation as an internal human enhancement. The gif is worth the view.
The modern world is increasingly generating massive amounts of digital data, and scientists see DNA as a compact and enduring way of storing that information. After all, DNA from thousands or even hundreds of thousands of years ago can still be extracted and sequenced in a lab.

Scientists Used CRISPR to Put a GIF Inside a Living Organism’s DNA

Harvard researchers embedded images in the genomes of bacteria to test the limits of DNA storage.
The promise of using DNA as storage means you could conceivably save every photo you’ve ever taken, your entire iTunes library, and all 839 episodes of Doctor Who in a tiny molecule invisible to the naked eye—with plenty of room to spare.

But what if you could keep all that digital information on you at all times, even embedded in your skin? Harvard University geneticist George Church and his team think it might be possible one day.

They’ve used the gene-editing system CRISPR to insert a short animated image, or GIF, into the genomes of living Escherichia coli bacteria. The researchers converted the individual pixels of each image into nucleotides, the building blocks of DNA.

They delivered the GIF into the living bacteria in the form of five frames: images of a galloping horse and rider, taken by English photographer Eadweard Muybridge, who produced the first stop-motion photographs in the 1870s. The researchers were then able to retrieve the data by sequencing the bacterial DNA. They reconstructed the movie with 90 percent accuracy by reading the pixel nucleotide code.
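
The basic arithmetic of DNA storage is easy to demonstrate: four bases can carry two bits each, so any byte stream maps to a strand of A/C/G/T. The sketch below is a toy codec along those lines - not the pixel-to-nucleotide scheme the Harvard team actually used, and it ignores the biochemical constraints (such as avoiding long single-base runs) that real encodings must respect:

```python
# Toy two-bits-per-base codec: enough to show why four letters can carry
# arbitrary digital data. NOT the Harvard team's scheme.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    bits = "".join(BITS_FOR_BASE[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

pixels = bytes([0, 128, 255, 64])      # four hypothetical pixel values
dna = encode(pixels)                   # 'AAAAGAAATTTTCAAA'
assert decode(dna) == pixels
```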


This is a great article related to one of my favorite research topics - signalling the growing evidence that the ‘self’ as we currently know ourselves is actually an ecology of societies. While in math and physics 1 + 1 must always equal 2, in biology the case is very different: many times 1 + 1 = 1. The section highlighted below is not intended as an argument against antibiotics - we need antibiotics, they save lives - but, as with all solutions, other problems arise.

Can Microbes Encourage Altruism?

If gut bacteria can sway their hosts to be selfless, it could answer a riddle that goes back to Darwin.
Parasites are among nature’s most skillful manipulators — and one of their specialties is making hosts perform reckless acts of irrational self-harm. There’s Toxoplasma gondii, which drives mice to seek out cats eager to eat them, and the liver fluke Dicrocoelium dendriticum, which motivates ants to climb blades of grass, exposing them to cows and sheep hungry for a snack. There’s Spinochordodes tellinii, the hairworm that compels crickets to drown themselves so the worm can access the water it needs to breed. The hosts’ self-sacrifice gains them nothing but serves the parasites’ hidden agenda, enabling them to complete their own life cycle.

Now researchers are beginning to explore whether parasitic manipulations may spur host behaviors that are selfless rather than suicidal. They are wondering whether microbes might be fundamentally responsible for many of the altruistic behaviors that animals show toward their own kind. Altruism may seem easy to justify ethically or strategically, but explaining how it could have persisted in a survival-of-the-fittest world is surprisingly difficult and has puzzled evolutionary theorists going all the way back to Darwin. If microbes in the gut or other tissues can nudge their hosts toward generosity for selfish reasons of their own, altruism may become less enigmatic.

A recently developed mathematical model and related computer simulations by a trio of researchers at Tel Aviv University appear to validate this theory. The researchers showed that transmissible microbes that promoted altruism in their hosts won the survival battle over microbes that did not — and when this happened, altruism became a stable trait in the host population. The research was published in Nature Communications earlier this year.
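
The flavor of such a model is easy to capture in a toy agent-based simulation: hosts meet in pairs, a carrier of the microbe pays a fitness cost to benefit its partner, the microbe sometimes jumps across during the encounter, and hosts reproduce in proportion to fitness. All parameters below are illustrative assumptions, not the published Tel Aviv model:

```python
# Toy simulation of microbe-induced altruism (illustrative parameters).
import random

def simulate(n_hosts=200, generations=100, cost=0.1, benefit=0.4,
             transmit=0.5, seed=1):
    random.seed(seed)
    carriers = [i < n_hosts // 10 for i in range(n_hosts)]  # 10% start infected
    for _ in range(generations):
        fitness = [1.0] * n_hosts
        order = random.sample(range(n_hosts), n_hosts)
        for a, b in zip(order[::2], order[1::2]):
            if carriers[a]:                      # microbe-induced altruism
                fitness[a] -= cost
                fitness[b] += benefit
                if random.random() < transmit:   # horizontal transmission
                    carriers[b] = True
        # Offspring are drawn in proportion to fitness and inherit
        # their parent's microbes.
        parents = random.choices(range(n_hosts), weights=fitness, k=n_hosts)
        carriers = [carriers[p] for p in parents]
    return sum(carriers) / n_hosts

print(f"share of hosts carrying the altruism microbe: {simulate():.2f}")
```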

Early experimental results point to at least some connection between antibiotic use and social behavior. When Bienenstock exposed mice to low-dose antibiotics in utero and soon after birth, the treated mice showed lower levels of sociability and higher levels of aggression than mice in a control group — results Bienenstock reported in April 2017. Further studies need to be done to confirm causation, Bienenstock noted, since it’s possible the results could be due to the antibiotics’ direct influence on the brain or other effects they may have on development. But “the very good chance is that this is an effect on the [gut] bacteria, which are producing materials which in turn are needed by the brain,” Bienenstock said. When these biological building blocks are in short supply, he believes, the brain’s normal social programs do not function optimally — which could, at least in theory, give rise to more selfish individuals.


Here’s another signal about the changing paradigm of our understanding of the gene pool, both within and perhaps among cells, individuals, species, and ecologies. Diversity is important at the core of the gene pool, as it can be argued that species are simply assemblages of instantiations of the gene pool in particular environments. As environments change and are changed by species, the pressure is to instantiate new assemblages that adapt and survive.
Co-first authors, Dr Takashi Nagano in the UK and Yaniv Lubling in Israel have collected and individually analysed information one-by-one from over 4000 cells for this study. Speaking about the work, Dr Nagano said: "We've never had access to this level of information about how genes are organised before. Being able to compare between thousands of individual cells is an extremely powerful tool and adds an important dimension to our understanding of how cells position their genes."

Detailed study reveals genes are constantly rearranged by cells

Moving genes about could help cells to respond to change according to scientists at the Babraham Institute in Cambridge, UK and the Weizmann Institute, Israel. Changing the location of a gene within a cell alters its activity. Like mixing music, different locations can make a gene 'louder' or 'quieter', with louder genes contributing more actively to the life of a cell.

Contrary to expectations, this latest study reveals that each gene doesn't have an ideal location in the cell nucleus. Instead, genes are always on the move. Published in the journal Nature, researchers examined the organisation of genes in stem cells from mice. They revealed that these cells continually remix their genes, changing their positions as they progress through different stages. This work, which has also inspired a musical collaboration, suggests that moving genes about in this way could help cells to fine-tune the volume of each gene to suit the cell's needs.

Scientists had believed that the location of genes in cells are relatively fixed with each gene having its rightful place. Different types of cells could organise their genes in different ways, but genes weren't thought to move around much except when cells divide. This is the first time that gene organisation in individual cells has been studied in detail. The results provide snapshots of gene organisation, with each cell arranging genes in unique ways.


This is a wonderful signal of new forms of prosthetics that can emerge with new materials and new mind-machine interfaces. The photos and gifs are worth the view.
"The origin of the word 'prosthesis' meant 'to add, put onto', so not to fix or replace, but to extend," said Clode. "The Third Thumb is inspired by this word origin, exploring human augmentation and aiming to reframe prosthetics as extensions of the body."
"It is part tool, part experience, and part self-expression," she added. "It instigates necessary conversation about the definition of 'ability'."

Controllable Third Thumb lets wearers extend their natural abilities

For her graduate work at the Royal College of Art, Dani Clode created a wearable third thumb that can help its user carry more objects, squeeze lemons or play complex chords on the guitar.

The Third Thumb is a motorised, controllable extra digit, designed for anyone who wants to extend their natural abilities.

A student of the school's product design masters, Clode created the device as a way to challenge conventional ideas about prosthetics – usually thought of as devices only for people with disabilities.


Here’s another signal for the “Moore’s Law is Dead - Long Live Moore’s Law” file - a new paradigm of chip architecture promising exponential gains in energy efficiency and performance.
Instead, three-dimensional integration is the most promising approach to continue the technology-scaling path set forth by Moore’s law, allowing an increasing number of devices to be integrated per unit volume, according to Jan Rabaey, a professor of electrical engineering and computer science at the University of California at Berkeley, who was not involved in the research.

Three-dimensional integration “leads to a fundamentally different perspective on computing architectures, enabling an intimate interweaving of memory and logic,” he says. “These structures may be particularly suited for alternative learning-based computational paradigms such as brain-inspired systems and deep neural nets, and the approach presented by the authors is definitely a great first step in that direction.”

Radical new vertically integrated 3D chip design combines computing and data storage

Aims to process and store massive amounts of data at ultra-high speed in the future
A radical new 3D chip that combines computation and data storage in vertically stacked layers — allowing for processing and storing massive amounts of data at high speed in future transformative nanosystems — has been designed by researchers at Stanford University and MIT.

The new 3D-chip design replaces silicon with carbon nanotubes (sheets of 2-D graphene formed into nanocylinders) and integrates resistive random-access memory (RRAM) cells.

Carbon-nanotube field-effect transistors (CNFETs) are an emerging transistor technology that can scale beyond the limits of silicon MOSFETs (conventional chips), and promise an order-of-magnitude improvement in energy-efficient computation. However, experimental demonstrations of CNFETs so far have been small-scale and limited to integrating only tens or hundreds of devices (see earlier 2015 Stanford research, “Skyscraper-style carbon-nanotube chip design…”).

The researchers integrated more than 1 million RRAM cells and 2 million carbon-nanotube field-effect transistors in the chip, making it the most complex nanoelectronic system ever made with emerging nanotechnologies, according to the researchers. RRAM is an emerging memory technology that promises high-capacity, non-volatile data storage, with improved speed, energy efficiency, and density, compared to dynamic random-access memory (DRAM).


The domains of modern theoretical and applied physics continue to produce mind-boggling theories and results (including quantum computing and new meta-materials). The idea of retrocausality - the future changing the past - is echoed in the concepts of modern psychology, where we seek to heal ourselves through insight and experience (including with the use of psychoactive plants and pharmaceuticals). However, this is an interesting area of theoretical physics.

Physicists provide support for retrocausal quantum theory, in which the future influences the past

Although there are many counterintuitive ideas in quantum theory, the idea that influences can travel backwards in time (from the future to the past) is generally not one of them. However, recently some physicists have been looking into this idea, called "retrocausality," because it can potentially resolve some long-standing puzzles in quantum physics. In particular, if retrocausality is allowed, then the famous Bell tests can be interpreted as evidence for retrocausality and not for action-at-a-distance—a result that Einstein and others skeptical of that "spooky" property may have appreciated.

In a new paper published in Proceedings of The Royal Society A, physicists Matthew S. Leifer at Chapman University and Matthew F. Pusey at the Perimeter Institute for Theoretical Physics have lent new theoretical support for the argument that, if certain reasonable-sounding assumptions are made, then quantum theory must be retrocausal.

First, to clarify what retrocausality is and isn't: It does not mean that signals can be communicated from the future to the past—such signaling would be forbidden even in a retrocausal theory due to thermodynamic reasons. Instead, retrocausality means that, when an experimenter chooses the measurement setting with which to measure a particle, that decision can influence the properties of that particle (or another particle) in the past, even before the experimenter made their choice. In other words, a decision made in the present can influence something in the past.


This is an awesome 1.5 min video about concept tires that could enable the future of transportation. Anyone familiar with the science fiction novel ‘Snow Crash’ by Neal Stephenson will recognize one of these tires. This is a fun view.

These Tires Can Even Climb Stairs



These photos are spectacular - the human habitus.

Urban jungles: NatGeo's cities travel photographer of the year – in pictures

From stunning skylines to architectural marvels, here are some of the entries for the magazine’s 2017 competition


This is a great signal for the future of algorithmic intelligences as art mediums. The video is interesting.
“A photographer goes out into the world and frames good spots, I go inside these neural networks, which are like their own multidimensional worlds, and say ‘Tell me how it looks at this coordinate, now how about over here?’” Klingemann says. With tongue in cheek, he describes himself as a “neurographer.”

A ‘Neurographer’ Puts the Art in Artificial Intelligence

Claude Monet used brushes, Jackson Pollock liked a trowel, and Cartier-Bresson toted a Leica. Mario Klingemann makes art using artificial neural networks.

In the past few years this kind of software—loosely inspired by ideas from neuroscience—has enabled computers to rival humans at identifying objects in photos. Klingemann, who has worked part-time as an artist in residence at Google Cultural Institute in Paris since early 2016, is a prominent member of a new school of artists who are turning this technology inside out. He builds art-generating software by feeding photos, video, and line drawings into code borrowed from the cutting edge of machine learning research. Klingemann curates what spews out into collections of hauntingly distorted faces and figures, and abstracts. You can follow his work on a compelling Twitter feed, here: https://twitter.com/quasimondo
