Thursday, October 12, 2017

Friday Thinking 13 Oct. 2017

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9



In Leisure: The Basis of Culture, the German philosopher Josef Pieper foretold a time when “total work” would come to pass, a time when, as I see it, our lives would revolve around work, working, and–most important for my purposes–work-thoughts and work-feelings. To say that our entire lives would be wrapped up in work is to make a profound understatement.

The man in private equity working 80-120 hours per week when in the midst of finalizing a big deal. The female executive in Silicon Valley working at least 60 hours, not counting all the work from home she does. These are far cries from the late philosopher Bertrand Russell’s proposal, published in 1935, for a maximum four hours of work each day. The longer story of how we got here would need to begin with the Bourgeois Revolution. Such a history would seek to illuminate (i) the advent and then hegemony of commercial society together with (ii) the radical transvaluation of values leading to a newfound affirmation of the realm of production (work) and reproduction (the intimate sphere of the family). Today I simply wish to compare “total work” with philosophy.

Total work is always on the clock. Ever behind, always in a rush toward, or just behind, an approaching, encroaching deadline. Philosophy occurs when clock time falls away. It seeks to put us in the presence of eternity.

Total work assumes that the logic of the market must penetrate into all aspects of life. A man whom a cofounder and I interviewed yesterday asserted unequivocally his view that all human relationships are transactional. Philosophy denies the logic of the market, opening up a space defined by the gift.

Total work is the latest, and most potent, assertion that the vita activa is first, last, and everything. Philosophy is a proponent of the view that the vita contemplativa must come first. It is out of thought (whether considered or, later on, spontaneous thought) that good action arises.

Total work is solipsistic. The entire world, it believes, turns around it. It is so wrapped up in itself that there can, in its eyes, be no other. Philosophy privileges the two, even more so the other who speaks. Philosophy opens up time, eternal time, for the other.

Total work utterly and completely refuses the most basic metaphysical assumption that I believe is true: that life in general and human life in particular is a mystery. Carelessly does total work destroy, even before it begins, the very possibility of questioning what we most basically, fundamentally, ultimately care about. The horrible consequence is that, falling prey to total work, we can live our entire lives without ever having investigated why we’re here.

Total Work, the Chief Enemy of Philosophy

Many people, database experts among them, dismiss Big Data as a fad that's already come and gone, arguing that it was a meaningless term and that relational databases can do everything NoSQL databases can. That's not the point! The point of Big Data, as George Dyson observed, is that computing undergoes a fundamental phase shift when it crosses the Big Data threshold: when it is cheaper to store data than to decide what to do with it. The point of Big Data technologies is not to perversely use less powerful database paradigms, but to defer decision-making about data -- how to model, structure, process, and analyze it -- to when (and if) you need to, using the simplest storage technology that will do the job. An organization that chooses to store all its raw data, developing an eidetic corporate historical memory so to speak, creates informational potential and invests in its own future wisdom.
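The "defer decision-making" idea is easy to see in code. A minimal sketch, with made-up event records: raw data is stored as-is today, and structure is imposed only when a question finally arrives (schema-on-read):

```python
import json

# Hypothetical raw event log: cheap to store verbatim, no schema
# decided up front -- the essence of crossing the Big Data threshold.
raw_log = [
    '{"user": "a", "action": "view", "ms": 120}',
    '{"user": "b", "action": "buy", "price": 9.99}',
    '{"user": "a", "action": "view", "ms": 340}',
]

# Months later a question arrives: average view time.
# Only now do we impose structure on the stored records.
records = [json.loads(line) for line in raw_log]
views = [r for r in records if r["action"] == "view"]
avg_ms = sum(v["ms"] for v in views) / len(views)
print(avg_ms)  # 230.0
```

Nothing about the "buy" events was modeled or discarded; they sit in the log as informational potential for whatever question comes next.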

Next, there is machine learning. Here the connection is obvious. The more you have access to massive amounts of stored data, the more you can apply deep learning techniques to it (they really only work at sufficiently massive data scales) to extract more of the possible value represented by the information.

And finally, there are blockchains. Again, database curmudgeons (what is it about these guys??) complain that distributed databases can do everything blockchains can, more cheaply, and that blockchains are just really awful, low-capacity, expensive distributed databases (pro tip: anytime a curmudgeon makes an "X is just Y" statement, you should assume by default that the (X - Y) differences they are ignoring are the whole point of X). As with Big Data, they are missing the point. The essential feature of blockchains is not that they can poorly and expensively mimic the capabilities of distributed databases, but that they do so in a near-trustless, decentralized way, with strong irreversibility and immutability properties.
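The immutability property the curmudgeons wave away can be sketched in a few lines. A toy hash chain (not a real blockchain: no consensus, no network), just to show how each block's hash covers the previous block's hash, so editing history breaks every later link:

```python
import hashlib
import json

def block(data, prev_hash):
    """A minimal block: its hash covers both its own data and the
    previous block's hash, chaining the whole history together."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return {"data": data, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def valid(chain):
    """Recompute every hash; any edit to an earlier block invalidates
    it and every link after it -- that irreversibility is the point."""
    for i, b in enumerate(chain):
        payload = json.dumps({"data": b["data"], "prev": b["prev"]},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != b["hash"]:
            return False
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [block("genesis", "0")]
chain.append(block("alice pays bob", chain[-1]["hash"]))
chain.append(block("bob pays carol", chain[-1]["hash"]))
print(valid(chain))   # True

chain[1]["data"] = "alice pays mallory"   # tamper with history
print(valid(chain))   # False
```

An ordinary distributed database has no such property: an administrator can rewrite a row and nothing downstream notices. That (X - Y) difference is exactly what the "just a bad database" framing ignores.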

When you step back and consider these three practical trends, you can't help but speculate that we're constructing a planet-scale Maxwell's Historian. Planet-scale computing infrastructure that can potentially save everything, extract all the possible value out of it over time (through machine learning and other means), and optimally balance reversibility and irreversibility needs of computation to be as energy-efficient as theoretically possible.

Be careful with this idea though. These three trends, and the logic of Maxwell's Historian, don't indicate a particular determinate future for computation (ie, Maxwell's Historian is not another name for an AGI or Skynet). Rather, they mark out a design space of possibilities, and an "arc of history" so to speak. The arc of the computational universe bends towards Maxwell's Historian.

Maxwell's Historian

Soon after developing the switch, however, Aono’s group started to see irregular behavior. The more often the switch was used, the more easily it would turn on. If it went unused for a while, it would slowly turn off by itself. In effect, the switch remembered its history. Aono and his colleagues also found that the switches seemed to interact with each other, such that turning on one switch would sometimes inhibit or turn off others nearby.

A Brain Built From Atomic Switches Can Learn

This is a great and informal 10-minute interview with Jack Ma, CEO of Alibaba. Really worth viewing - interesting that he's saying things that were proposed in 2001 in HR 2020.

Jack Ma Predicts The Future

Billionaire Jack Ma's Predictions For The Next 10+ Years!

This is another important must-read signal of the future of currency, banking, and finance in the digital environment.
In the long run, the technology itself can replace national monies, conventional financial intermediation, and even "puts a question mark on the fractional banking model we know today."

IMF head foresees the end of banking and the triumph of cryptocurrency

In a remarkably frank talk at a Bank of England conference, the Managing Director of the International Monetary Fund has speculated that Bitcoin and cryptocurrency have as much of a future as the Internet itself. It could displace central banks, conventional banking, and challenge the monopoly of national monies.  

Christine Lagarde — a Paris native who has held her position at the IMF since 2011 — says the only substantial problems with existing cryptocurrency are fixable over time.

In a lecture that chastised her colleagues for failing to embrace the future, she warned that "Not so long ago, some experts argued that personal computers would never be adopted, and that tablets would only be used as expensive coffee trays. So I think it may not be wise to dismiss virtual currencies."

"Let us start with virtual currencies. To be clear, this is not about digital payments in existing currencies—through Paypal and other “e-money” providers such as Alipay in China, or M-Pesa in Kenya.

Virtual currencies are in a different category, because they provide their own unit of account and payment systems. These systems allow for peer-to-peer transactions without central clearinghouses, without central banks.

For now, virtual currencies such as Bitcoin pose little or no challenge to the existing order of fiat currencies and central banks. Why? Because they are too volatile, too risky, too energy intensive, and because the underlying technologies are not yet scalable. Many are too opaque for regulators; and some have been hacked.

But many of these are technological challenges that could be addressed over time. Not so long ago, some experts argued that personal computers would never be adopted, and that tablets would only be used as expensive coffee trays. So I think it may not be wise to dismiss virtual currencies.

This is another signal - still weak - but a view of a significant change in our scientific worldview - one that Stuart Kauffman has spoken about - the end of a physics worldview. This is worth the read, as is the Kauffman (et al.) paper.

Quantum mysteries dissolve if possibilities are realities

Spacetime events and objects aren’t all that exists, new interpretation suggests
So it would seem that the world needs more quantum interpretations like it needs more Category 5 hurricanes. But until some single interpretation comes along that makes everybody happy (and that’s about as likely as the Cleveland Browns winning the Super Bowl), yet more interpretations will emerge. One of the latest appeared recently (September 13) online at arXiv.org, the site where physicists send their papers to ripen before actual publication. You might say papers on the arXiv are like “potential publications,” which someday might become “actual” if a journal prints them.

And that, in a nutshell, is pretty much the same as the logic underlying the new interpretation of quantum physics. In the new paper, three scientists argue that including “potential” things on the list of “real” things can avoid the counterintuitive conundrums that quantum physics poses. It is perhaps less of a full-blown interpretation than a new philosophical framework for contemplating those quantum mysteries. At its root, the new idea holds that the common conception of “reality” is too limited. By expanding the definition of reality, the quantum’s mysteries disappear. In particular, “real” should not be restricted to “actual” objects or events in spacetime. Reality ought also be assigned to certain possibilities, or “potential” realities, that have not yet become “actual.” These potential realities do not exist in spacetime, but nevertheless are “ontological” — that is, real components of existence.

“This new ontological picture requires that we expand our concept of ‘what is real’ to include an extraspatiotemporal domain of quantum possibility,” write Ruth Kastner, Stuart Kauffman and Michael Epperson.
The original paper is here

Taking Heisenberg's Potentia Seriously

It is argued that quantum theory is best understood as requiring an ontological dualism of res extensa and res potentia, where the latter is understood per Heisenberg's original proposal, and the former is roughly equivalent to Descartes' 'extended substance.' However, this is not a dualism of mutually exclusive substances in the classical Cartesian sense, and therefore does not inherit the infamous 'mind-body' problem. Rather, res potentia and res extensa are defined as mutually implicative ontological extants that serve to explain the key conceptual challenges of quantum theory; in particular, nonlocality, entanglement, null measurements, and wave function collapse. It is shown that a natural account of these quantum perplexities emerges, along with a need to reassess our usual ontological commitments involving the nature of space and time.

And here’s a corresponding signal about the potential for even mathematics to be transformed with-and-by the computational environment. For anyone interested in math and formal methods of logic, this is a must-read - a post-set-theory development.
“The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes,” Voevodsky said. Proofs rely on other proofs; if one contains a flaw, all others that rely on it will share the error.

Will Computers Redefine the Roots of Math?

When a legendary mathematician found a mistake in his own work, he embarked on a computer-aided quest to eliminate human error. To succeed, he has to rewrite the century-old rules underlying all of mathematics.
Voevodsky, 48, is a permanent faculty member at the Institute for Advanced Study (IAS) in Princeton, N.J. He was born in Moscow but speaks nearly flawless English, and he has the confident bearing of someone who has no need to prove himself to anyone. In 2002 he won the Fields Medal, which is often considered the most prestigious award in mathematics.

Voevodsky pulled out his laptop and opened a program called Coq, a proof assistant that provides mathematicians with an environment in which to write mathematical arguments. Awodey, a mathematician and logician at Carnegie Mellon University in Pittsburgh, Pa., followed along as Voevodsky wrote a definition of a mathematical object using a new formalism he had created, called univalent foundations. It took Voevodsky 15 minutes to write the definition.

For nearly a decade, Voevodsky has been advocating the virtues of computer proof assistants and developing univalent foundations in order to bring the languages of mathematics and computer programming closer together. As he sees it, the move to computer formalization is necessary because some branches of mathematics have become too abstract to be reliably checked by people.

The problem of the inevitable suffering of disruption and displacement that technology can cause is really not ‘inevitable’ - optimism and institutional innovation can create social conditions that enable people to be more secure in the face of change. Surrendering to fear makes us weaker and more vulnerable to opportunistic manipulation.

Obama: 'The world has never been healthier, wealthier or less violent'

Former president urges optimism and focus on progress at Bill and Melinda Gates Foundation conference, despite shadow cast by Trump’s UN speech
There’s never been a better time to be alive, the former US president Barack Obama told an audience of musicians, activists, comedians, innovators and royalty, gathered at the Lincoln Center in Manhattan on 20 September.

Despite the “extraordinary challenges” the world is facing – from growing economic inequality and climate change to mass migration and terrorism – “if you had to choose any moment in history in which to be born, you would choose right now. The world has never been healthier, or wealthier, or better educated or in many ways more tolerant or less violent,” he said in his speech, at an event for the Bill and Melinda Gates Foundation.

Optimism was the buzzword at the Gates’ event and is the foundation’s philosophy. Its focus is to acknowledge and promote the work of people who are finding practical ways to change the world, whether that’s through projects to improve the life chances of young people living in deprived areas through education, or with new technology that allows vaccines to be kept cool without a fridge or ice packs in remote regions of the world.

This is an important signal of a serious trajectory - health, big data, emerging new technologies. The emergence of Cloud and even ‘Fog’ computing and the Internet-of-Things-Sensors, plus ever cheaper memory - means it’s not just big data - it’s the emergence of an ever increasing memory of everything that can continually be queried for patterns. The future is less about the answers we have and much more about the stream of questions we can generate.

How AI Will Keep You Healthy

An audacious Chinese entrepreneur wants to test your body for everything. But are computers really smart enough to make sense of all that data?
ICX wants to capture more data about your body than has ever before been possible. It starts with your DNA sequence and includes data from Fitbit-style wearables that measure your steps, heart rate, and sleep patterns. Add frequent blood tests to measure various proteins and enzymes that can, say, reflect the health of your heart or signal very early signs of cancer. Include monitoring of the ever-changing levels of metabolites produced by the body as it processes food; traditional blood tests on levels of cholesterol and glucose; heart data from an EKG; and information from your medical history. The goal: continuous monitoring of your health and suggestions of adjustments you might make in your diet and behavior before you slip from being healthy into the early stages of an illness.

ICX is part of a new wave of companies that figure they can find something meaningful in the data and enable medicine to stop merely reacting to an illness you have; these companies want to keep you healthy at a fraction of the cost. Unlocking this puzzle, with its millions of moving pieces, is where AI and other advanced computing techniques will have to come in. “AI is how we can take all of this information and tell you things that you don’t know about your health,” says Wang.

Assuming it works, putting all of this together will not be cheap. As CEO of ICX, Wang has raised $600 million in funding for the effort, a remarkable amount for a project offering high-tech tests for healthy people. “But he’ll need it, and probably more, with everything they want to test,” says Eric Schadt, a computational biologist and mathematician who recently stepped down as director of Mount Sinai’s Icahn Institute for Genomics and Multiscale Biology in New York. Schadt has launched his own health data company, called Sema4, which is scanning genomes and molecular biomarkers.

ICX is using its pile of cash in part to invest in or acquire companies that might contribute to Wang’s holistic vision. This includes a $161 million stake in Colorado-based SomaLogic, which is working on a chip that can measure 5,000 proteins in the blood; more than $100 million in PatientsLikeMe, a company in Cambridge, Massachusetts, that provides an online platform for more than 500,000 patients to share experiences, metrics, and feelings about their health and diseases; and $40 million in AOBiome, also of Cambridge, which sells spray-on microbes that it says make skin healthier. ICX also recently invested in HealthTell of San Ramon, California, which identifies antibodies from a blood sample as clues to the presence and progress of diseases including cancer and autoimmune disorders. ICX is also collaborating with several companies in China.

In terms of the Internet-of-Sensors and ‘Cloud-Fog’ computing - imagine if our mobile devices, on top of all of their capabilities, also included the ‘Tricorder’ of Star Trek fame.
"I think mobile health is going to mean medical diagnostic tests for nutrition or wellness, a service that the major smartphone companies can help provide," Cunningham said. "They are looking for ways that healthcare can fold in with their capabilities. We're hoping to find companies that are interested in differentiating their phone from others by having this capability."

Researchers develop spectroscopic 'science camera' system for smartphones

The latest versions of most smartphones contain at least two and sometimes three built-in cameras. Researchers at the University of Illinois would like to sell mobile device manufacturers on the idea of adding yet another image sensor as a built-in capability for health diagnostic, environmental monitoring, and general-purpose color sensing applications.

Three years ago, the National Science Foundation provided a pair of University of Illinois professors with a grant to develop technology called "Lab-in-a-Smartphone." Over that time, the research teams of Brian Cunningham, Donald Biggar Willett Professor of Engineering, and John Dallesasse, associate professor of electrical and computer engineering, have published papers detailing potential ways the mobile devices could provide health diagnostic tests and other measurements normally performed in a laboratory setting.

Their latest efforts demonstrate that mobile devices incorporating their sensor can provide accurate measurements of optical absorption spectra of colored liquids or the optically scattered spectra of solid objects. In other words, a mobile device incorporating the lab-in-a-smartphone "science camera" could accurately read liquid-based or paper-based medical tests in which the end result is a material that changes from one color to another in the presence of a specific analyte.
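The kind of measurement described can be sketched with the Beer-Lambert relation, absorbance A = log10(I0/I). The wavelengths and intensity readings below are made up for illustration and are not from the Illinois system; they just show how a spectrum of per-wavelength intensities becomes a color-change readout:

```python
import math

def absorbance(sample_intensity, reference_intensity):
    """Absorbance A = log10(I0 / I): how much light the colored
    liquid removed, relative to a blank reference measurement."""
    return math.log10(reference_intensity / sample_intensity)

# Hypothetical per-wavelength intensity readings (arbitrary units)
# from a blank cuvette (I0) and from the test liquid (I).
reference = {450: 1000.0, 550: 1000.0, 650: 1000.0}
sample    = {450:  900.0, 550:  100.0, 650:  800.0}

spectrum = {wl: absorbance(sample[wl], reference[wl]) for wl in reference}

# The strongest absorption marks the analyte's color change:
# here the liquid absorbs most around 550 nm.
peak_wl = max(spectrum, key=spectrum.get)
print(peak_wl)  # 550
```

A paper-based test strip would be read the same way, with scattered rather than transmitted light supplying the intensities.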

And more applications continue to emerge for real-time monitoring our world.

Farmers Should Start Using Artificial Intelligence. Here’s Why.

A team of researchers has developed an AI that can identify diseases in plants. Alongside other projects to use AI for farming and killing weeds, AI is quickly becoming a powerful tool in harvesting better crops and producing more food.

In their research published to arXiv, the team explains how they used a technique known as transfer learning to teach the AI how to recognize crop diseases and pest damage. As reported by Wired, they utilized TensorFlow, Google’s open source library, to build and train a neural network of their own, which involved showing the AI 2,756 images of cassava leaves from plants in Tanzania. Their efforts were a success, as the AI was able to correctly identify brown leaf spot disease with 98 percent accuracy. Best of all, the AI can be easily loaded onto a smartphone, and operate without accessing the cloud.
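Transfer learning itself can be caricatured in a toy sketch (hypothetical numbers and features, not the cassava model): a "pretrained" feature extractor stays frozen, and only a small classifier on top is fit to the new task's handful of labels:

```python
# Stand-in for a pretrained network's penultimate layer: maps a raw
# "image" (a list of pixel values) to two fixed features. In real
# transfer learning this would be a deep network trained elsewhere.
def frozen_features(img):
    return (sum(img) / len(img), max(img) - min(img))  # mean, range

# Small labeled set for the new task (e.g. healthy vs diseased leaf).
train = [([0.1, 0.2, 0.1], "healthy"),
         ([0.2, 0.1, 0.2], "healthy"),
         ([0.9, 0.2, 0.8], "diseased"),
         ([0.8, 0.3, 0.9], "diseased")]

# "Training" only the top layer: here, a nearest-centroid rule
# computed in the frozen feature space.
centroids = {}
for label in ("healthy", "diseased"):
    feats = [frozen_features(img) for img, l in train if l == label]
    centroids[label] = tuple(sum(f[i] for f in feats) / len(feats)
                             for i in range(2))

def classify(img):
    f = frozen_features(img)
    return min(centroids, key=lambda l: sum((f[i] - centroids[l][i]) ** 2
                                            for i in range(2)))

print(classify([0.85, 0.25, 0.85]))  # diseased
```

The payoff is the same one the researchers exploited: because the expensive representation is reused rather than learned, a few thousand labeled leaf images suffice instead of millions, and the resulting model is small enough to run on a phone.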

Here’s a signal about economic prognostication that shouldn’t surprise anyone who’s not trying to fit reality into an outmoded model. Jeremy Rifkin’s work in “The Zero Marginal Cost Society” signaled some of the issues discussed here, as did the collapse of coordination costs that the digital environment enables. Essentially, confirming a change in the conditions of change requires a new political-economic frame.
The alternative, which Yellen admirably admits, is that structural changes are invalidating past assumptions and patterns.

For some aspects of our lives, there is no apples-to-apples comparison with the past. With Moore’s Law and the compression of data and power, today’s smartphones are the equivalent of yesterday’s supercomputer that cost 1,000 times as much, guzzled electricity and demanded expensive cooling systems. Electrifying a grid that needed to fuel that and billions of incandescent bulbs was costly compared with the dollop of energy needed to power LEDs. That washing machine, with its smart chips monitoring the size of your load? That smart thermostat in your home dynamically adjusting heat and air-conditioning? They also reduce costs, and overall electric demand, even in their limited numbers so far.


During her speech to the National Association of Business Economics on Tuesday, Federal Reserve Chair Janet Yellen made a rather startling admission: The Fed may have “misspecified” its models for inflation and “misjudged” the strength of wages and the job market. Leaving aside the odd choice of words, Yellen—true to her training and temperament—proved herself more interested in understanding the world as it is rather than being right, a rarity in a policy world that often rewards hubris over wisdom.

But her acknowledgment that economic patterns, and inflation especially, are not unfolding as she and the Fed expected should be taken as a sign that the world has changed and that the Fed, and other policymakers, have not yet grasped the extent of those shifts. It is still fighting the last war, and that can be problematic.

To wit, Yellen said inflation and wages are not rising as expected. Nonetheless, she believes the Fed should continue on its path of raising interest rates, because diverging would risk inflation getting out of control once it starts to rise, as she believes it inevitably must.

To which the question should be: Really, must it? What if the combined and continuing effects of technology and a globalized market of goods and labor are so altering commerce and prices that the 20th century script is as outmoded as an IBM Selectric typewriter?

Another signal about the Blockchain and distributed ledger technologies.
While individual organizations in the public health network share the same overall mission, a complex mishmash of data usage agreements and government privacy rules dictate which members can access information and which ones can modify it. That slows things down. A number of additional, sometimes manual processes are needed to make sure the correct organization or person sent or received the right data, and that it was used correctly. A blockchain can automate these steps.

Why the CDC Wants in on Blockchain

Distributed ledgers could help public health workers respond faster to a crisis.
If someone in your home state contracts hepatitis A, a dangerous disease that attacks the liver, the Centers for Disease Control and Prevention needs to know about it. Health departments in neighboring states probably need to know about it, too, since the person may have contracted the virus from contaminated food or water in one of those states. The CDC, state and local health departments, and other organizations must routinely share public health data like this so they can control the spread of a range of infectious diseases. As straightforward as this may sound, though, it’s a massively complicated data-management challenge.

It’s also one that seems made for a blockchain, according to Jim Nasr, chief software architect at the CDC’s Center for Surveillance, Epidemiology, and Laboratory Services. For the past several months, Nasr has led a team working on several proofs of concept based on blockchain technology, with an eye toward building real applications next year. Most are geared toward better public health surveillance, which could include using a blockchain to more efficiently manage data during a crisis or to better track opioid abuse.

Here’s a signal emulating the first telephone conversation - but this time it’s based on an implementation of quantum mechanics.


Quantum communication, along with quantum information science and technology more broadly, is making an impact today and will continue to do so through the 21st century. This was demonstrated in the latest quantum-encrypted communication between the presidents of the academies of sciences of China and Austria.

Chunli Bai, the President of the Chinese Academy of Sciences, and Anton Zeilinger, the President of the Austrian Academy of Sciences, conversed in a secure, encrypted video conference call on Sept. 29. This was made possible through quantum technology.

China and Austria are on two continents, and the two presidents were roughly 7,400 kilometers apart, yet they communicated successfully using a quantum-cryptographically secured video call. It was conducted as a live experiment in the presence of scientists at the Austrian Academy of Sciences and in the Chinese capital of Beijing, together with media representatives, according to the Austrian Academy of Sciences.
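The quantum key distribution underpinning calls like this one can be caricatured classically. A toy simulation of the sifting step of the BB84 protocol (no actual quantum states, no eavesdropper; the seed and key length are arbitrary), just to show how two parties end up with a shared secret key:

```python
import random

def bb84_sift(n, seed=1):
    """Toy BB84 sifting, simulated classically: Alice encodes random
    bits in randomly chosen bases; Bob measures in his own random
    bases. After publicly comparing bases (never bits), they keep
    only the positions where the bases happened to match -- there,
    in the noiseless no-eavesdropper case, Bob's bit equals Alice's."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n)]
    alice_bases = [rng.choice("+x") for _ in range(n)]
    bob_bases   = [rng.choice("+x") for _ in range(n)]
    key = [bit for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)
           if ab == bb]
    return key

key = bb84_sift(16)
print(len(key))  # roughly half of the 16 positions survive sifting
```

What the toy omits is the physics that makes this secure: measuring a real quantum state in the wrong basis disturbs it, so an eavesdropper shows up as an elevated error rate when Alice and Bob compare a sample of their sifted bits.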

This is a significant signal of the emerging mind-computer-AI interface. Not just replacing lost capability, but enhancing and augmenting our sensorium.

Phone calls can be beamed right into your central nervous system

Just what you need in the age of ubiquitous surveillance: the latest cochlear implants will allow users to stream audio directly from their iPhone into their cochlear nerve. Apple and implant manufacturer Cochlear have made “Made for iPhone” connectivity available for any hearing implants that use the next-generation Nucleus 7 sound processor. The advance means that these implants can also stream music and Netflix shows.

The technology was first unveiled in 2014 when it was added to hearing aids such as the Starkey Halo and ReSound LiNX. But this is the first time it’s been linked into the central nervous system.

While some cochlear implants already offer Bluetooth connectivity, these often require users to wear extra dongles or other intermediary devices to pick up digital signals, and then rebroadcast them to the hearing aid as radio. This technology simply beams the signal right into the brain.

This is a fascinating suggestion of the proactive use of placebos - pre-cebos.
Only one of all the factors they looked at was predictive of higher flu antibody levels in the blood samples.
“We found that greater positive mood, whether measured repeatedly over a 6-week period around vaccination, or on the day of vaccination, significantly predicted greater antibody responses to influenza vaccination,”

Study Finds the Flu Shot Might Work Better if You’re Happy When You Get It

Curious Finding
A new study has found a link between being in a positive mood when you’re getting your flu shot and the vaccine’s protective effect.
It’s a curious finding, and these surprising results could really help researchers looking for new ways to boost the efficacy of the seasonal flu vaccine.

You’ve probably noticed that the annual flu shot isn’t 100 percent effective – not just because of the differences in virus strains attacking us, but also based on the person getting the shot and whether they develop a strong protective immune response.
Researchers from the University of Nottingham in the UK set out to assess how a range of known behavioural and psychological factors might be having an effect on the immune response to getting a flu jab.

“Patient behaviours and psychological well-being can influence immune responses to vaccination,” they write in the study.
Sleep, stress, physical activity, mood, and even nutrition can serve as these ‘immune modulators’, prompting researchers to look into whether these could be targeted to improve vaccine effectiveness.

This is a great signal of both the future of solar (and the phase transition of global energy geopolitics) and the future of farming - especially considering the advances in agri-robotics.

These Solar Farms Have A Secret Hiding Under Them: Mushrooms

Small farmers in Japan hope the tactic will help stabilize their incomes by letting them sell both energy and produce.

Small farms in Japan are struggling to survive. Rural populations are shrinking, and the average farmer is 67 years old. But two new farms will test a different business model to try to reinvigorate the sector: solar panels with mushrooms growing underneath them.

The farms, at two locations in northeastern Japan, will produce a combined 4,000 kilowatts of solar power that will be sold to a local utility, while the mushroom farms will yield an annual 40 tons of cloud-ear mushrooms, a crop that is typically imported from China.

“The environment needs to be dark and humid for mushrooms to spawn,” says Minami Kikuchi, who leads the “solar sharing” project that combines agriculture and solar power at Sustainergy, a renewable energy startup. “We simply created the suitable environment for them by making use of vacant space under the solar panels.” The company is working with Hitachi Capital, a leasing specialist that will provide the panels, and Daiwa House Industry, which will construct and maintain them.

Sustainergy believes that similar farms could potentially grow other crops that need little light, such as potatoes. In the U.S., researchers at the University of Massachusetts are exploring the possibility of growing a much wider range of crops; a farm in South Deerfield has spent the last two years growing plants like kale, broccoli, and Swiss chard under rows of nine-foot-high solar panels.

Farmers also have another option: grazing sheep or cattle on grass grown under solar panels. Sheep can take the place of lawnmowers, and as the grass sucks up carbon dioxide from the atmosphere, the farm can have a negative carbon footprint.

If anyone is in doubt that we have already passed the tipping point in renewable energy - this is a clear signal from one of the big auto manufacturers (even if it is late joining the party).
“General Motors believes the future is all-electric,” says Mark Reuss, the company’s head of product. “We are far along in our plan to lead the way to that future world.”


AFTER MORE THAN a century peddling vehicles that pollute the atmosphere, General Motors is ending its relationship with gasoline and diesel. This morning, the American automotive giant announced that it is working toward an all-electric, zero-emissions future. That starts with two new, fully electric models next year—then at least 18 more by 2023.

That product onslaught puts the company at the forefront of an increasingly large crowd of automakers proclaiming the age of electricity and promising to move away from gasoline- and diesel-powered vehicles. In recent months, Volvo, Aston Martin, and Jaguar Land Rover have announced similar moves. GM’s declaration, though, is particularly noteworthy because it’s among the very largest automakers on the planet. It sold 10 million cars last year, ranging from pickups to SUVs to urban runabouts.

Reuss did not give a date for the death knell of the GM gas- or diesel-powered car, saying the transition will happen at different speeds in different markets and regions. The new all-electric models will be a mix of battery electric cars and fuel cell-powered vehicles.

Another strong signal of the rapidly emerging change in global energy geopolitics.

'India's renewables to double by 2022, overtake EU expansion'

India's renewable energy capacity will more than double by 2022, which would be enough to overtake renewable expansion in the European Union for the first time, the International Energy Agency (IEA) said in a report.

The country's renewable energy installed capacity is 58.30 GW, according to recent government data. The government has an ambitious target of raising it to 175 GW by 2022, including 100 GW of solar and 60 GW of wind energy.

The IEA said solar PV and wind together represent 90 per cent of India's capacity growth, as auctions yielded some of the world's lowest prices for both technologies.

The analysis noted that China remains the undisputed leader of renewable electricity capacity expansion over the forecast period with over 360 GW of capacity coming online, or 40 per cent of the global total.

In fact, China already exceeded its 2020 solar PV target three years ahead of time and is set to achieve its onshore wind target in 2019. Still, the growing cost of renewable subsidies and grid integration issues remain two important challenges to further expansion, it added.

This is an important signal for Canada and for progress in 3D printing.

Atomize This: Metal Powder From This Canadian Plant Will Fire Up The 3D Printing Revolution

Canada’s Saint-Eustache doesn’t seem like an industrial powerhouse. But the remote Montreal suburb, once a vibrant car-building center, is experiencing a rebirth as a command post in the additive manufacturing revolution.

Canadian and Swedish dignitaries and a host of business executives traveled to Saint-Eustache for the opening of a new plant that will produce titanium powder. The fine, sandlike metal powder is what 3D printers fuse together, layer by layer, to build jet engine and gas turbine parts from the ground up. It’s also used in medicine to make hip replacements and skull implants.

Built by AP&C, a subsidiary of Swedish 3D-printing company Arcam, the new Saint-Eustache facility and another plant nearby will employ 200 people and make AP&C the world’s largest supplier of titanium powder with a production capacity of 1,500 tons.

How much is 1,500 tons? Arcam CEO Magnus Rene says the plants’ capacity will be enough to supply all the titanium used by the orthopedic industry, for example. “We can offer that industry all the powder they want for any part they need,” Rene said.

Ehteshami expects the additive industry — including materials, machines and services — to grow from $7 billion today to $80 billion in a decade. Arcam and Concept Laser — both of which GE Additive acquired controlling stakes in last year — shipped 200 printers in 2016. Ehteshami expects to ship 500 machines this year, and GE Additive’s revenues are estimated to reach $1 billion by 2020. “The numbers are just jumping, and the demand is growing as well,” Ehteshami said. “With companies like Arcam, which make printers, powder and also provide 3D printing expertise, we have a complete offering.”

For a very long time I’ve felt that design is the master discipline needed not just in products - but in organizations. As the saying goes - design isn’t just about how a product looks - but how it works: how it works with others, how it works with systems, and how it’s maintained.
This is an article worth reading - not just because it’s about Apple - but because it’s a great critique of design that lets itself be constrained by the need to provide excellent human experience - a key to scaling learning. The article signals a recognition of a tipping point for Apple - a company that holds a small share of the market and whose key claim to fame these last few years has been its profitability. It has been coasting on its past laurels for a number of years - and soon, maybe, people won’t feel that paying the premium for an Apple product is worth it.
It’s almost as if the company is being buried under the weight of its products. Unable to cut ties with past concepts (for instance, the abomination that is iTunes), unable to choose clear paths forward (USB-C or Lightning guys?), compromising core elements to make room for splashy features, and executing haphazardly to solve long-term issues.
Pundits will respond to these arguments by detailing Apple’s meteoric and sustained market-value gains. Apple fans will shout justifications for a stylus that must be charged by sticking it into the bottom of an iPad, a “back” button jammed weirdly into the status bar, a system of dongles for connecting oft-used devices, a notch that rudely juts into the display of a $1,000 phone. But the reality is that for all the phones Apple sells and for all the people who buy them, the company is stuck in idea-quicksand, like Microsoft in the early 2000s, or Apple in the 90s.


Let me just rip this band-aid off...
Plenty has been written about the mind-numbing, face-palming, irritating stupidity of the notch. And yet, I can’t stop thinking about it. I would love to say that this awful design compromise is an anomaly for Apple. But it would be more accurate to describe it as the norm.

Once upon a time, Apple could do little wrong. As one of the first mainstream computer companies to equally value design and technical simplicity, it upended our expectations about what PCs could be. “Macintosh works the way people work,” read one 1992 ad. Rather than requiring downloads and installations and extra memory to get things right (as often required by Windows machines), Apple made it so you could just plug in a mouse or start up a program and it would just... work. Marrying that functionality with the groundbreaking design the company has embodied since the early Macs, it’s easy to see how Apple became the darling of designers, artists, and the rest of the creative class. The work was downright elegant; unheard of for an electronics company.

It’s been a long time since Apple blew anyone away with its “innovation” (Tim Cook’s favorite buzzword). Most of what’s been released in the Cook era has been iterative of the Steve Jobs era: Larger iPads. Smaller iPads. Bigger iPhones. Smaller iPhones. A stylus was added to the iPad but it was only addressing what third parties had been doing more clumsily for years. Yes, the cameras got better, the screens sharper — but so did literally everyone else’s (in fact, Samsung’s phones have long showcased higher resolution, more sophisticated displays). The software got more complex, but not necessarily more usable. Yes, Siri went from totally useless to “maybe it'll work this time.” And Maps improved, but still doesn’t provide the kind of detail or accuracy seen in Google’s product.

And it's not just the hardware, or the UI. The ecosystem is unwell. iTunes and Apple Music and the Podcasts app coexist on devices for reasons only Eddy Cue understands, your purchases and files floating somewhere in their digital ether, untethered to a clear system or logic. The “TV” app maintains some awkward middle ground that attempts to lasso your subscription services, your purchased content, marketing suggestions, and the cable you probably still pay for. But none of these things seem to actually function fluidly. Example: you can buy movies and TV shows in the iTunes Store app but you have to watch them in the TV app? It’s fucking crazy.