Thursday, December 7, 2017

Friday Thinking 8 Dec. 2017

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9







In a new study that is optimistic about automation yet stark in its appraisal of the challenge ahead, McKinsey says massive government intervention will be required to hold societies together against the ravages of labor disruption over the next 13 years. Up to 800 million people—including a third of the work force in the U.S. and Germany—will be made jobless by 2030, the study says.

The bottom line: The economy of most countries will eventually replace the lost jobs, the study says, but many of the unemployed will need considerable help to shift to new work, and salaries could continue to flatline. "It's a Marshall Plan size of task," Michael Chui, lead author of the McKinsey report, tells Axios.

In the eight-month study, the McKinsey Global Institute, the firm's think tank, found that almost half of those thrown out of work—375 million people, comprising 14% of the global work force—will have to find entirely new occupations, since their old ones will either no longer exist or need far fewer workers. China will have the highest such absolute numbers—100 million people changing occupations, or 12% of the country's 2030 work force.

The transition is comparable to the U.S. shift from a largely agricultural to an industrial-and-services economy beginning in the early 1900s. But this time, it's not young people leaving farms, but mid-career workers who need new skills. "There are few precedents in which societies have successfully retrained such large numbers of people," the report says, and that is the key question: how do you retrain people in their 30s, 40s and 50s for entirely new professions?

McKinsey: automation may wipe out 1/3 of America’s workforce by 2030




The static world of skills
We’re focusing on the wrong thing. Focusing on skills betrays a static view of the world. The assumption is that if we acquire certain skills, we will be protected from the onslaught of the robots and the rapidly changing world around us. It ignores the fact that the average half-life of a skill is now about five years and continuing to shrink.
It’s precisely that static view of the world that is our biggest barrier. We need to find ways to prepare ourselves for a world where learning is a lifetime endeavor. The question then becomes: what will help us to learn faster so that we can quickly acquire whatever skills are required in the moment?

Capabilities that drive learning
Supporting the development of skills and a deeper understanding of our contexts are more fundamental capabilities. These capabilities can take many different forms but, in my mind, the core capabilities are curiosity, imagination, creativity, critical thinking and social and emotional intelligence. If we cultivate these capabilities, we'll be able to quickly understand the evolving contexts we live in and acquire the skills that will help us to operate successfully in very specific contexts.

Re-thinking our institutions
At a broader societal level, the learning pyramid can help us to understand how our institutions will need to evolve to support life-long learning. If I’m right about the learning pyramid, our educational system will need to be re-thought and re-designed from the ground up. Rather than focusing on transmitting broad-based knowledge and building skills, our schools will need to shift their focus to cultivating capabilities and drawing out and nurturing the passion that is latent within all of us. Rather than giving out certificates verifying that specific knowledge or skills have been acquired, schools will need to expand their horizons and become life-long learning coaches - coaches that get to know each of us individually at a very deep level, and that help and challenge us to learn even faster throughout our lives by building deep, long-term, trust-based relationships.

John Hagel - Mastering the Learning Pyramid




...what AI is already doing and how quickly it is improving. The biggest advances have been in two broad areas: perception and cognition. In the former category some of the most practical advances have been made in relation to speech. Voice recognition is still far from perfect, but millions of people are now using it — think Siri, Alexa, and Google Assistant. The text you are now reading was originally dictated to a computer and transcribed with sufficient accuracy to make it faster than typing. A study by the Stanford computer scientist James Landay and colleagues found that speech recognition is now about three times as fast, on average, as typing on a cell phone. The error rate, once 8.5%, has dropped to 4.9%. What’s striking is that this substantial improvement has come not over the past 10 years but just since the summer of 2016.

Image recognition, too, has improved dramatically. You may have noticed that Facebook and other apps now recognize many of your friends’ faces in posted photos and prompt you to tag them with their names. An app running on your smartphone will recognize virtually any bird in the wild. Image recognition is even replacing ID cards at corporate headquarters. Vision systems, such as those used in self-driving cars, formerly made a mistake when identifying a pedestrian as often as once in 30 frames (the cameras in these systems record about 30 frames a second); now they err less often than once in 30 million frames. The error rate for recognizing images from a large database called ImageNet, with several million photographs of common, obscure, or downright weird images, fell from higher than 30% in 2010 to about 4% in 2016 for the best systems.

Note that machine learning systems hardly ever replace the entire job, process, or business model. Most often they complement human activities, which can make their work ever more valuable. The most effective rule for the new division of labor is rarely, if ever, “give all tasks to the machine.” Instead, if the successful completion of a process requires 10 steps, one or two of them may become automated while the rest become more valuable for humans to do. For instance, the chat room sales support system at Udacity didn’t try to build a bot that could take over all the conversations; rather, it advised human salespeople about how to improve their performance. The humans remained in charge but became vastly more effective and efficient. This approach is usually much more feasible than trying to design machines that can do everything humans can do. It often leads to better, more satisfying work for the people involved and ultimately to a better outcome for customers.

The Business of Artificial Intelligence





For anyone concerned with the 'threat' of AI - this is an excellent analysis - a 16 minute Must Read. What this article makes abundantly clear is that it is human civilization that evolves human intelligence - a debt that no individual can repay or honestly discount.
If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.

What would happen if we were to put a human — brain and body — into an environment that does not feature human culture as we know it? Would Mowgli the man-cub, raised by a pack of wolves, grow up to outsmart his canine siblings? To be smart like us? And if we swapped baby Mowgli with baby Einstein, would he eventually educate himself into developing grand theories of the universe? Empirical evidence is relatively scarce, but from what we know, children that grow up outside of the nurturing environment of human culture don’t develop any intelligence beyond basic animal-like survival behaviors. As adults, they cannot even acquire language.
Intelligence expansion can only come from a co-evolution of the mind, its sensorimotor modalities, and its environment.
Most of our intelligence is not in our brain, it is externalized as our civilization

The impossibility of intelligence explosion

In 1965, I. J. Good described for the first time the notion of “intelligence explosion”, as it relates to artificial intelligence (AI):
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
Decades later, the concept of an “intelligence explosion” — leading to the sudden rise of “superintelligence” and the accidental end of the human race — has taken hold in the AI community. Famous business leaders are casting it as a major risk, greater than nuclear war or climate change. Average graduate students in machine learning are endorsing it. In a 2015 email survey targeting AI researchers, 29% of respondents answered that intelligence explosion was “likely” or “highly likely”. A further 21% considered it a serious possibility.

The basic premise is that, in the near future, a first “seed AI” will be created, with general problem-solving abilities slightly surpassing that of humans. This seed AI would start designing better AIs, initiating a recursive self-improvement loop that would immediately leave human intelligence in the dust, overtaking it by orders of magnitude in a short time. Proponents of this theory also regard intelligence as a kind of superpower, conferring its holders with almost supernatural capabilities to shape their environment — as seen in the science-fiction movie Transcendence (2014), for instance. Superintelligence would thus imply near-omnipotence, and would pose an existential threat to humanity.

This science-fiction narrative contributes to the dangerously misleading public debate that is ongoing about the risks of AI and the need for AI regulation. In this post, I argue that intelligence explosion is impossible — that the notion of intelligence explosion comes from a profound misunderstanding of both the nature of intelligence and the behavior of recursively self-augmenting systems. I attempt to base my points on concrete observations about intelligent systems and recursive systems.


Here is another excellent article with a clear description of what machine learning is and what the inherent limitations are. Worth the read.
A fundamental feature of the human mind is our "theory of mind", our tendency to project intentions, beliefs and knowledge on the things around us. Drawing a smiley face on a rock suddenly makes it "happy"—in our minds.

Here's what you should remember: the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data. Doing this well is a game-changer for essentially every industry, but it is still a very long way from human-level AI.

The limitations of deep learning

Deep learning: the geometric view
The most surprising thing about deep learning is how simple it is. Ten years ago, no one expected that we would achieve such amazing results on machine perception problems by using simple parametric models trained with gradient descent. Now, it turns out that all you need is sufficiently large parametric models trained with gradient descent on sufficiently many examples. As Feynman once said about the universe, "It's not complicated, it's just a lot of it".

In deep learning, everything is a vector, i.e. everything is a point in a geometric space. Model inputs (they could be text, images, etc.) and targets are first "vectorized", i.e. turned into some initial input vector space and target vector space. Each layer in a deep learning model applies one simple geometric transformation to the data that goes through it. Together, the chain of layers of the model forms one very complex geometric transformation, broken down into a series of simple ones. This complex transformation attempts to map the input space to the target space, one point at a time. This transformation is parametrized by the weights of the layers, which are iteratively updated based on how well the model is currently performing. A key characteristic of this geometric transformation is that it must be differentiable, which is required in order for us to be able to learn its parameters via gradient descent. Intuitively, this means that the geometric morphing from inputs to outputs must be smooth and continuous—a significant constraint.

The whole process of applying this complex geometric transformation to the input data can be visualized in 3D by imagining a person trying to uncrumple a paper ball: the crumpled paper ball is the manifold of the input data that the model starts with. Each movement operated by the person on the paper ball is similar to a simple geometric transformation operated by one layer. The full uncrumpling gesture sequence is the complex transformation of the entire model. Deep learning models are mathematical machines for uncrumpling complicated manifolds of high-dimensional data.

That's the magic of deep learning: turning meaning into vectors, into geometric spaces, then incrementally learning complex geometric transformations that map one space to another. All you need are spaces of sufficiently high dimensionality in order to capture the full scope of the relationships found in the original data.


Here’s a great signal of one trajectory of AI in augmenting traditional reporting - how to harvest crowdsourced intel - for news.
...there is significant pressure on other news agencies to automate news production. And today, Reuters outlines how it has almost entirely automated the identification of breaking news stories. Xiaomo Liu and pals at Reuters Research and Development and Alibaba say the new system performs well. Indeed, it has the potential to revolutionize the news business. But it also raises concerns about how such a system could be gamed by malicious actors.

The system processes 12 million tweets every day, rejecting almost 80 percent of them as noise. The rest fall into about 6,000 clusters that the system categorizes as different types of news events. That’s all done by 13 servers running 10 different algorithms.

By comparison, Reuters employs some 2,500 journalists around the world who together generate about 3,000 news alerts every day, using a variety of sources, including Twitter. Of these, around 250 are written up as news stories.

How Reuters’s Revolutionary AI System Gathers Global News

Reuters is scooping its rivals using intelligent machines that mine Twitter for news stories.
“The advent of the internet and the subsequent information explosion has made it increasingly challenging for journalists to produce news accurately and swiftly.” So begin the research and development team at the global news agency Reuters in a paper on the arXiv this week.

For Reuters, the problem has been made more acute by the emergence of fake news as an important factor in distorting the perception of events.
Nevertheless, news agencies such as the Associated Press have moved ahead with automated news writing services. These report standard announcements such as financial news and certain sports results by pasting the data into pre-written templates: “X reported profit of Y million in Q3, in results that beat Wall Street forecasts ... ”

The new system is called Reuters Tracer. It uses Twitter as a kind of global sensor that records news events as they are happening. The system then uses various kinds of data mining and machine learning to pick out the most relevant events, determine their topic, rank their priority, and write a headline and a summary. The news is then distributed around the company’s global news wire.

The first step in the process is to siphon the Twitter data stream. Tracer examines about 12 million tweets a day, 2 percent of the total. Half of these are sampled at random; the other half come from a list of Twitter accounts curated by Reuters’s human journalists. They include the accounts of other news organizations, significant companies, influential individuals, and so on.
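The article doesn't describe Reuters's actual noise models or clustering algorithms, but the filter-then-cluster idea behind a Tracer-style pipeline can be sketched as a toy in Python - the noise markers, threshold, and sample tweets below are all invented for illustration:

```python
# Toy sketch of a Tracer-style pipeline: discard noise, then group the
# remaining tweets into candidate news events. Illustrative only; this is
# not Reuters's algorithm.

NOISE_MARKERS = {"lol", "follow", "giveaway", "rt"}  # hypothetical spam cues

def is_noise(tweet: str) -> bool:
    """Reject chatter and spam before clustering (the ~80% thrown away)."""
    tokens = set(tweet.lower().split())
    return bool(tokens & NOISE_MARKERS) or len(tokens) < 4

def cluster_by_overlap(tweets, threshold=0.5):
    """Greedy single-pass clustering: a tweet joins the first cluster whose
    representative shares enough vocabulary (Jaccard overlap), else it
    seeds a new candidate event."""
    clusters = []  # list of (representative token set, member tweets)
    for t in tweets:
        toks = set(t.lower().split())
        for rep, members in clusters:
            if len(toks & rep) / max(len(toks | rep), 1) >= threshold:
                members.append(t)
                break
        else:
            clusters.append((toks, [t]))
    return clusters

stream = [
    "Earthquake of magnitude 6.1 strikes off the coast of Japan",
    "Magnitude 6.1 earthquake strikes off the Japan coast agency says",
    "Central bank raises interest rates by a quarter point today",
    "lol follow me for a giveaway",
]

signal = [t for t in stream if not is_noise(t)]
events = cluster_by_overlap(signal)
print(len(events), "candidate news events from", len(stream), "tweets")
```

The real system replaces these crude heuristics with learned models across 13 servers and 10 algorithms, but the shape of the pipeline - siphon, filter, cluster, rank - is the same.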


Here’s a very important signal of the rapidly emerging new computational paradigm that will inevitably include new and evolving AI.

Topology yields miniaturisation breakthrough for quantum computing

Exploiting an exotic state of matter has led to a major step forward in the race to revolutionise computing.
The research that led to the 2016 Nobel Prize in Physics has found concrete expression and practical application, thanks to research by the University of Sydney and Microsoft.

The prize that year was awarded to American physicists David Thouless, Duncan Haldane and Michael Kosterlitz, ”for theoretical discoveries of topological phase transitions and topological phases of matter”.

In a paper in the journal Nature Communications, researchers led by David Reilly, director of the Microsoft Quantum Laboratory at Sydney Uni, demonstrate how topological insulating materials can be manipulated to produce a miniaturised version of an electronic engineering component called a microwave circulator.

Microwave circulators are already commonplace in large structures such as mobile phone relay stations and radar systems, but until now no one has been able to build one any smaller than roughly the size of a human hand – thus excluding them from use in computers.

All that is now about to change. By exploiting the properties of topological insulators, Reilly and his colleagues have succeeded in building a microwave circulator that is 1000 times smaller than its predecessors.

So small is the new device that potentially hundreds can be integrated onto a single computer chip – paving the way for the kind of precise manipulation of the large numbers of quantum bits (“qubits”) that are an essential prerequisite for quantum computing.


This article has some important signals of the future of work from the developing economies - this is worth the read for anyone interested in Platform Economies.

The Platform Economy

While developed countries in Europe, North America, and Asia are rapidly aging, emerging economies are predominantly youthful. Nigerian, Indonesian, and Vietnamese young people will shape global work trends at an increasingly rapid pace, bringing to bear their experience in dynamic informal markets on a tech-enabled gig economy.
Some 80% of the global population lives in emerging economies – defined by informal markets and fluid employment structures. The SHIFT: Commission on Work, Workers, and Technology invited groups in five cities across the United States to imagine four scenarios along two axes of change – more or less work, and more jobs or more tasks. Participants were divided as to the amount of future work, but almost all foresaw the continuing disaggregation of jobs into tasks in both low- and high-end work, from driving to lawyering. That is the reality in emerging economies today.

Examining work patterns in these diverse countries yields three key lessons. First, people layer multiple work streams and derive income from more than one source. Second, platform economies are emerging rapidly and build on traditional networks. Finally, these work patterns often go hand in hand with dramatic income inequality.

Flexibility and uncertainty define informal markets in developing countries. Those lucky men and women who have formal jobs (less than 40%) often have “side hustles” through which they sell their time, expertise, network, or ideas to others in an effort to hedge against an uncertain labor market. A Nigerian saying – “you have a 9 to 5, a 5 to 9 and a weekend job” – aptly describes the environment of layered work.

The same pattern is starting to emerge in developed countries. A report by the JPMorgan-Chase Institute concludes that platform jobs are largely a secondary source of income, used to offset dips in regular income.


This article presents a very concise summary of various national central banks' approaches to Bitcoin - it also hints at similar considerations for other cryptocurrencies, Blockchain, and distributed ledger technologies.

Here’s What the World's Central Banks Really Think About Bitcoin

Eight years since the birth of bitcoin, central banks around the world are increasingly recognizing the potential upsides and downsides of digital currencies.

The guardians of the global economy have two sets of issues to address. First is what to do, if anything, about emergence and growth of the private cryptocurrencies that are grabbing more and more attention -- with bitcoin now surging toward $10,000. The second question is whether to issue official versions.


This is a great article signaling how even some of the most established views of biology are being challenged with new paradigms. As we learn more about the deeper collaborations in our biology - new metaphors may arise enabling us to reason in new ways about old views. New concepts of gender and power perhaps.
“Female reproductive anatomy is more cryptic and difficult to study, but there’s a growing recognition of the female role in fertilization,” said Mollie Manier, an evolutionary biologist at George Washington University.
“We’ve been blinded by our preconceptions. It’s a different way to think about fertilization with very different implications about the process of fertilization,” Nadeau says.

Females' Eggs May Actively Select Certain Sperm

New evidence challenges the oldest law of genetics.
In the winner-takes-all game of fertilization, millions of sperm race toward the egg that’s waiting at the finish line. Plenty of sperm don’t even make it off the starting line, thanks to missing or deformed tails and other defects. Still others lack the energy to finish the long journey through the female reproductive tract, or they get snared in sticky fluid meant to impede all but the strongest swimmers. For the subset of a subset of spermatozoa that reach their trophy, the final winner would be determined by one last sprint to the end. The exact identity of the sperm was random, and the egg waited passively until the Michael Phelps of gametes finally arrived. Or so scientists have thought.

Joe Nadeau, principal scientist at the Pacific Northwest Research Institute, is challenging this dogma. Random fertilization should lead to specific ratios of gene combinations in offspring, but Nadeau has found two examples just from his own lab that indicate fertilization can be far from random: Certain pairings of gamete genes are much more likely than others. After ruling out obvious alternative explanations, he could only conclude that fertilization wasn’t random at all.

“It’s the gamete equivalent of choosing a partner,” Nadeau said.
His hypothesis—that the egg could woo sperm with specific genes and vice versa—is part of a growing realization in biology that the egg is not the submissive, docile cell that scientists long thought it was. Instead, researchers now see the egg as an equal and active player in reproduction, adding layers of evolutionary control and selection to one of the most important processes in life.


Here’s a hopeful signal - the speed of evolution may not be as slow as popular imagination conceives.

Galapagos study finds that new species can develop in as little as two generations

The arrival 36 years ago of a strange bird to a remote island in the Galapagos archipelago has provided direct genetic evidence of a novel way in which new species arise.

In this week's issue of the journal Science, researchers from Princeton University and Uppsala University in Sweden report that the newcomer belonging to one species mated with a member of another species resident on the island, giving rise to a new species that today consists of roughly 30 individuals.

The study comes from work conducted on Darwin's finches, which live on the Galapagos Islands in the Pacific Ocean. The remote location has enabled researchers to study the evolution of biodiversity due to natural selection.

The direct observation of the origin of this new species occurred during field work carried out over the last four decades by B. Rosemary and Peter Grant, two scientists from Princeton, on the small island of Daphne Major.


This is a must-read short article - a powerful signal of our emerging domestication of DNA - in fact, going beyond current DNA.
“We wanted to prove the concept that every step of information storage and retrieval could be mediated by an unnatural base pair,” he says. “It’s not a curiosity anymore.”

Semi-Synthetic Life Form Now Fully Armed and Operational

Could life have evolved differently? A germ with “unnatural” DNA letters suggests the answer is yes.
Every living thing on Earth stores the instructions for life as DNA, using the four genetic bases A, G, C, and T.
All except one, that is.

In the San Diego laboratory of Floyd Romesberg—and at a startup he founded—grow bacteria with an expanded genetic code. They have two more letters, an “unnatural” pair he calls X and Y.

Romesberg, head of a laboratory at the Scripps Research Institute, first amended the genes of the bacterium E. coli to harbor the new DNA components in 2014. Now, for the first time, the germs are using their expanded code to manufacture proteins with equally unusual components.

The bacterium is termed a “semi-synthetic” organism, since while it harbors an expanded alphabet, the rest of the cell hasn’t been changed. Even so, Peter Carr, a biological engineer at MIT’s Lincoln Laboratory, says it suggests that scientists are only beginning to learn how far life can be redesigned, a concept known as synthetic biology.


This is a significant signal of efforts to domesticate DNA for personal reasons - the enhancement of a human. This is only the beginning.

Personal CRISPR genetic experimentation

Tristan Roberts is injecting himself with an untested, experimental gene therapy.
Several individuals have publicly attempted to augment themselves with genes that will inhibit cell death or boost muscle growth, and self-experimentation is also happening in private.

Brian Hanley, a microbiologist, gave himself a gene therapy designed to increase his stamina and life span. Hanley designed a plasmid containing a gene coding for growth hormone–releasing hormone. A physician assisted in administering the plasmid to Hanley’s thigh using electroporation. The plasmids were administered twice: once in summer 2015 and a second, larger dose in July 2016.

Hanley said that the treatment has helped him. Results: testosterone up 20%, with a peak increase of 77%; white blood counts up 16%, with a peak of 40%; lipid profile improved, with HDL up to 76 (a rise of 20%), LDL down 20%, and triglycerides down 50% (down 60% at their lowest). Healing time is much faster, and pulse rate appears to have dropped by 10 beats per minute or more.

Josiah Zayner is using gene therapy to inhibit myostatin which would enable a muscle building effect several times stronger than steroids if it is successful.
In 2015, Liz Parrish, an entrepreneur without a background in biology claimed to have received a dose of gene therapy in Latin America.

A US National Institutes of Health (NIH) study showed the N6 antibody neutralized 98% of HIV strains in lab conditions. Some people naturally generate N6.
Ascendance Biomedical is using plasmid-delivered CRISPR gene therapy to enable people who lack the N6-producing mutation to produce N6.

So far the injections have not produced an HIV curing effect. Tristan will try again with 10 to 100 times larger dose of plasmid gene therapy.


This is a fascinating signal of both the domestication of DNA and the computational-manufacturing paradigm. There’s a two minute video.

World’s Smallest Tape Recorder Is Built From Microbes

Through a few clever molecular hacks, researchers at Columbia University Medical Center have converted a natural bacterial immune system into a microscopic data recorder, laying the groundwork for a new class of technologies that use bacterial cells for everything from disease diagnosis to environmental monitoring.

The researchers modified an ordinary laboratory strain of the ubiquitous human gut microbe Escherichia coli, enabling the bacteria to not only record their interactions with the environment but also time-stamp the events.

“Such bacteria, swallowed by a patient, might be able to record the changes they experience through the whole digestive tract, yielding an unprecedented view of previously inaccessible phenomena,” says Harris Wang, PhD, assistant professor in the Departments of Pathology & Cell Biology and Systems Biology at CUMC and senior author on the new work, described in today’s issue of Science. Other applications could include environmental sensing and basic studies in ecology and microbiology, where bacteria could monitor otherwise invisible changes without disrupting their surroundings.


So much real innovation arises from combining new and old technologies: doing old things in new ways, new things in old ways, and new things in new ways - and sometimes these three enable new contexts for doing old things in old ways.
Where do you see the major agricultural applications for PV systems like this?
Most of all, I see them in southern countries, where solar irradiation is often too strong. Many crops cannot be cultivated in such places because it is too hot and too arid. Using such cropland for photovoltaics also provides enough power to, for example, desalinate sea water and use it to irrigate fields. It would be possible to make the desert bloom – quite literally.

Agrophotovoltaics with great potential

As early as 1981 he published his stunning idea: using solar modules on agricultural land or in the hot and dry regions of the world. The shading, he predicted, would make the hard soil bloom. At the time, Professor Goetzberger was head of the Fraunhofer Institute for Solar Energy Systems. Now the scientists have built a first pilot installation near Lake Constance.

Why has it taken so long for your idea to be realised?
Adolf Goetzberger: I was ahead of my time. It has been that way with many of my ideas. But you know, it is better to be too early than to be too late. 35 years ago, photovoltaics was still very expensive, so we were looking for ways to get twice as much out of it, for example by combining power generation and agriculture.


This is ready for primetime - and may never power a car - but there are many applications for personal products and sensors that could make them indefinitely autonomous. The two min video provides a good explanation.

Physicists Just Found a Loophole in Graphene That Could Unlock Clean, Limitless Energy

By all measures, graphene shouldn't exist. The fact that it does comes down to a neat loophole in physics that sees an impossible 2D sheet of atoms act like a solid 3D material.

New research has delved into graphene's rippling, discovering a physical phenomenon on an atomic scale that could be exploited as a way to produce a virtually limitless supply of clean energy.

The team of physicists led by researchers from the University of Arkansas didn't set out to discover a radical new way to power electronic devices.

By Thibado's calculations, a single ten micron by ten micron piece of graphene could produce ten microwatts of power.

It might not sound impressive, but given that you could fit more than 20,000 of these squares on the head of a pin, a small amount of graphene at room temperature could feasibly power something small like a wrist watch indefinitely.
Better yet, it could power bioimplants that don't need cumbersome batteries.
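A quick back-of-the-envelope check of these figures, assuming a pin head roughly 2 mm in diameter (a detail not given in the article):

```python
import math

# Claimed output (Thibado's estimate): a 10 µm × 10 µm graphene patch yields 10 µW.
patch_side_m = 10e-6
patch_area_m2 = patch_side_m ** 2        # 1e-10 m² per patch
patch_power_w = 10e-6                    # 10 microwatts per patch

# Assumed pin-head diameter of ~2 mm (not stated in the article).
pin_radius_m = 1e-3
pin_area_m2 = math.pi * pin_radius_m ** 2

patches_on_pin = pin_area_m2 / patch_area_m2
total_power_w = patches_on_pin * patch_power_w

print(f"{patches_on_pin:,.0f} patches")  # ~31,416 - consistent with "more than 20,000"
print(f"{total_power_w:.2f} W total")    # ~0.31 W, far more than a wrist watch needs
```

Under that assumption the numbers hold up: a pin head's worth of patches would deliver a few tenths of a watt, orders of magnitude above the microwatt budgets of watches and bioimplants.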


Here is another signal about the looming phase transition in transportation and energy geopolitics. The true disruption that self-driving vehicles will enact is the fundamental transformation of mass transit infrastructure - there is no reason that every city can't provide a transportation platform for all of its citizens and residents.

Personal Sedan Sales in Jeopardy as U.S. Auto Market Transitions to “Islands” of Autonomous Mobility: KPMG Research

A $1 trillion market is emerging around mobility and selling miles
KPMG predicts that self-driving cars and mobility services will provide options that reduce consumer desire to own cars, particularly sedans. Pushing a button for mobility services competes with the utility of sedans, giving consumers the freedom to buy the car they really want to own, or to buy mobility by the trip. In fact, KPMG projects that sales of personally-owned sedans in the U.S. will drop precipitously - from 5.4 million units sold today to just 2.1 million units by 2030.
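As a rough sanity check, that projected fall from 5.4 million to 2.1 million units implies about a 7% compound annual decline, treating "today" as 2017 (an assumption; the report excerpt does not state the base year):

```python
# Sketch of the compound annual decline implied by KPMG's sedan projection.
units_today = 5.4e6    # U.S. sedan sales per year "today" (assumed 2017)
units_2030 = 2.1e6     # projected sales in 2030
years = 2030 - 2017

cagr = (units_2030 / units_today) ** (1 / years) - 1
print(f"Implied annual change: {cagr:.1%}")  # roughly -7.0% per year
```

In other words, the projection amounts to the sedan market shrinking by about a fourteenth every year for thirteen years, which is steep but not a cliff.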

In addition, according to the KPMG report, the transportation market will evolve from a national or regional one to 150-plus “islands of autonomy” – metropolitan areas that each have their own distinct mix of consumer travel needs delivered by autonomous mobility services.

“Across the world, a $1 trillion market is swiftly developing around a new and disruptive transportation mode: driverless vehicles coupled with mobility services,” said Gary Silberg, Automotive Sector leader at KPMG LLP. “However, the adoption of this new transportation mode will not be immediate, and it will not be everywhere. Instead, it will arrive metro market by metro market in what we call ‘islands of autonomy.’ Each island will need a unique mix of vehicles to meet unique demands, which will greatly impact the breakdown of the car park, especially sedans.”


This is another signal about autonomous vehicles. There is a 2 min video.
"We pitted our algorithms against a human, who flies a lot more by feel," said Rob Reid of JPL, the project's task manager. "You can actually see that the A.I. flies the drone smoothly around the course, whereas human pilots tend to accelerate aggressively, so their path is jerkier."

Drone Race: Human Versus Artificial Intelligence

Drone racing is a high-speed sport demanding instinctive reflexes -- but humans won't be the only competitors for long.
Researchers at NASA's Jet Propulsion Laboratory in Pasadena, California, put their work to the test recently. Timing laps through a twisting obstacle course, they raced drones controlled by artificial intelligence (A.I.) against a professional human pilot.
The race, held on Oct. 12, capped off two years of research into drone autonomy funded by Google. The company was interested in JPL's work with vision-based navigation for spacecraft -- technologies that can also be applied to drones. To demonstrate the team's progress, JPL set up a timed trial between their A.I. and world-class drone pilot Ken Loo.

The team built three custom drones (dubbed Batman, Joker and Nightwing) and developed the complex algorithms the drones needed to fly at high speeds while avoiding obstacles. These algorithms were integrated with Google's Tango technology, which JPL also worked on.

The drones were built to racing specifications and could easily go as fast as 80 mph (129 kph) in a straight line. But on the obstacle course set up in a JPL warehouse, they could only fly at 30 or 40 mph (48 to 64 kph) before they needed to apply the brakes.


This is an interesting development for robotics and many forms of exoskeletons. There's a 2 min video as well.
“We were very surprised by how strong the actuators [aka, “muscles”] were. We expected they’d have a higher maximum functional weight than ordinary soft robots, but we didn’t expect a thousand-fold increase. It’s like giving these robots superpowers,” says Daniela Rus, Ph.D., the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and one of the senior authors of the paper.

Artificial muscles give soft robots superpowers

Origami-inspired muscles are both soft and strong, and can be made for less than $1
Soft robotics has made leaps and bounds over the last decade as researchers around the world have experimented with different materials and designs to allow once rigid, jerky machines to bend and flex in ways that mimic, and can interact more naturally with, living organisms. However, increased flexibility and dexterity come with a trade-off: softer materials are generally not as strong or resilient as inflexible ones, which limits their use.

Now, researchers at the Wyss Institute at Harvard University and MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have created origami-inspired artificial muscles that give soft robots much-needed strength, allowing them to lift objects up to 1,000 times their own weight using only air or water pressure. The study is published this week in Proceedings of the National Academy of Sciences (PNAS).


The cyborg is here - it's just unevenly distributed. The image is worth the view, as is the 1 min video.

Controllable Cyborg Beetles for Swarming Search and Rescue

Robotics tries very hard to match the agility, versatility, and efficiency of animals. Some robots get very close in a few specific ways, but we’re still chasing the dream of robots that can match our biological friends. One way of getting around this problem is by leveraging biology in the design of robots (and we do see a lot of bioinspiration in a variety of applications), but a more direct approach is to just make the robots themselves mostly biological. We’ve reported on this in the past in the context of flying insects, but this new cyborg beetle from Nanyang Technological University in Singapore is the smallest (and most controllable) yet.


Philip Tetlock is an outstanding researcher who has examined the bounds of reliability in many forms of expertise - particularly political punditry. This may be of interest to anyone wishing to play with AI-enhanced foresight.

DOES (HUMAN + MACHINE) x GEOPOLITICAL FORECASTING = HYPERACCURACY?

The Hybrid Forecasting Competition (HFC) is a government-sponsored research program designed to test the limits of geopolitical forecasting. By combining the ingenuity of human analysts with cutting edge machine systems (including statistical models and algorithms), HFC will develop novel capabilities to help the U.S. Intelligence Community improve their forecasts in an increasingly uncertain world.

Be a part of this breakthrough research program and test your skills as a forecaster while helping to develop systems that could produce the most accurate forecasts ever.