Thursday, July 7, 2016

Friday Thinking 8 July 2016

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9

Content
Quotes


Articles
It’s Finally Time To Stop Correcting People’s Grammar, Linguist Says



As we come up on year 8 of the app economy, it’s absolutely remarkable to think about just how far we’ve come. Mobile has completely reshaped old industries, created new ones, and turned the entire computing world on its head. Companies from all sectors have met their end (or become shells of their former selves) for failing to think “mobile-first” — a term coined by Luke Wroblewski that has defined the age as much as “lean” and “design-thinking.” Most consumer-facing and many B2B verticals are being driven by companies that have designed or adapted their customer experiences to fit a smartphone-dominated world.


What’s Next?
“authentically mobile” …. services that not only are tailored for the mobile world, but that so thoroughly leverage the unique capabilities of mobile devices that they could literally not exist without them. Where mobile-first companies take the new, portable form factor and riff on things that were more or less possible but limited in some way on the desktop, authentically mobile companies are truly creating experiences that would either be impossible or entirely meaningless without a networked supercomputer in our pockets. A classic example of authentically mobile would be Uber, which, without a location-enabled computing device always on our person (on both sides of the 2-sided marketplace), would almost certainly not exist.

Ghost in the machine: Snapchat isn’t mobile-first — it’s something else entirely



Both the cryptoeconomics research community and the AI safety/new cyber-governance/existential risk community are trying to tackle what is fundamentally the same problem: how can we regulate a very complex and very smart system with unpredictable emergent properties using a very simple and dumb system whose properties once created are inflexible?

Vitalik Buterin - Why Cryptoeconomics and X-Risk Researchers Should Listen to Each Other More



It is difficult to imagine that the discipline which defined biology in the last century—that taught us so much and provided such benefit to the ambient society—is fundamentally flawed. But that is the case. Molecular biology expressly established itself within the (classical) Newtonian worldview. As such, its perspective was fundamentally reductionist. In other words, all things were explainable, completely and solely, as the sum of their various parts—which also meant that they could (in principle) be predicted a priori.


In this Newtonian world, the study of biology becomes a highly derived subdiscipline of the basic science of physics—in effect, an engineering enterprise; there is nothing “fundamental” about it. Biology becomes a study of machines made of assemblages of parts and the interactions among them, an exercise in describing, but not explaining, things as they are.


However, it is intuitively obvious that the essence of biology lies not in things as they are, but in things coming into existence. Biology is a study, not in being, but in becoming.
Carl R. Woese and Nigel Goldenfeld

How the Microbial World Saved Evolution from the Scylla of Molecular Biology and the Charybdis of the Modern Synthesis



The Darwinian interlude has lasted for two or three billion years. It probably slowed down the pace of evolution considerably. The basic biochemical machinery of life had evolved rapidly during the few hundreds of millions of years of the pre-Darwinian era, and changed very little in the next two billion years of microbial evolution. Darwinian evolution is slow because individual species, once established, evolve very little. With rare exceptions, Darwinian evolution requires established species to become extinct so that new species can replace them.


Now, after three billion years, the Darwinian interlude is over. It was an interlude between two periods of horizontal gene transfer. The epoch of Darwinian evolution based on competition between species ended about ten thousand years ago, when a single species, Homo sapiens, began to dominate and reorganize the biosphere. Since that time, cultural evolution has replaced biological evolution as the main driving force of change. Cultural evolution is not Darwinian. Cultures spread by horizontal transfer of ideas more than by genetic inheritance. Cultural evolution is running a thousand times faster than Darwinian evolution, taking us into a new era of cultural interdependence which we call globalization. And now, as Homo sapiens domesticates the new biotechnology, we are reviving the ancient pre-Darwinian practice of horizontal gene transfer, moving genes easily from microbes to plants and animals, blurring the boundaries between species. We are moving rapidly into the post-Darwinian era, when species other than our own will no longer exist, and the rules of Open Source sharing will be extended from the exchange of software to the exchange of genes. Then the evolution of life will once again be communal, as it was in the good old days before separate species and intellectual property were invented.

Freeman Dyson - Our Biotech Future




One of my favorite foresight experts is Kevin Kelly - his new book “The Inevitable” is a MUST READ (I say this even though I’m only ½ through it). This is a brief interview with him about it.

Wired Founder Kevin Kelly On the Technologies That Will Dominate Our Future

The optimistic futurist says we'll share more, own less and spend far more time on our devices
Kevin Kelly's title at Wired magazine is "senior maverick." While he co-founded the publication in 1993, he has been thinking about the future in an outside-the-box way his whole career. A former editor of the countercultural technology magazine Whole Earth Review, Kelly has championed the Quantified Self movement whereby humans use technology to track their daily lives, co-sponsored the first Hackers Conference back in the mid-1980s and been involved in The Long Now Foundation, a project to look at our long-range future as humans. He's also written several books, including the bestselling What Technology Wants, which looks at technology as its own biological system.


In his new book, The Inevitable: Understanding the 12 Technological Forces That Will Shape Our Future, Kelly sorts what he sees as the biggest coming trends into 12 categories—things like "screening" (turning more surfaces into screens) and "tracking" (using surveillance technologies more and more). We talked with Kelly about his predictions for the world to come, and how we can help shape technology for the better.


It’s becoming a truism that we are now in a ‘Beta’ world - one of accelerating change - where everything will be continually updated and upgraded - replaced or disrupted. A key consequence that Kevin Kelly highlights in his new book “The Inevitable” is that we are now and will always be ‘Newbies’ - barely will we become competent in some domain before new technology and knowledge come along and change it. The other consequence is what John Seely Brown outlines in this great 6 min video. We have to become good at real-time on-demand learning and unlearning.

What does it mean to “unlearn” and why should companies care?

Published on 29 Jun 2016
John Seely Brown, independent co-chairman, Center for the Edge, defines what it means to “unlearn” and uncovers the importance of learning collaboratively with others.


I have mentioned that one of the key trajectories of the 21st Century is the domestication of DNA. Here’s an old MUST READ article - written by Freeman Dyson in 2007. Given that the first complete human genome sequencing was only finished in 2003 - and genome sequencing is now in the $1K price range and continues to decrease - this article is more relevant and important to grasp than it was when it was written.

Our Biotech Future

It has become part of the accepted wisdom to say that the twentieth century was the century of physics and the twenty-first century will be the century of biology. Two facts about the coming century are agreed on by almost everyone. Biology is now bigger than physics, as measured by the size of budgets, by the size of the workforce, or by the output of major discoveries; and biology is likely to remain the biggest part of science through the twenty-first century. Biology is also more important than physics, as measured by its economic consequences, by its ethical implications, or by its effects on human welfare.


These facts raise an interesting question. Will the domestication of high technology, which we have seen marching from triumph to triumph with the advent of personal computers and GPS receivers and digital cameras, soon be extended from physical technology to biotechnology? I believe that the answer to this question is yes. Here I am bold enough to make a definite prediction. I predict that the domestication of biotechnology will dominate our lives during the next fifty years at least as much as the domestication of computers has dominated our lives during the previous fifty years.


This is of vital importance today - for anyone interested in learning, knowledge-understanding, and sense-making. There is too much to know and data is not just ‘big’ today - it’s becoming cosmic - we need to re-imagine how we visualize and analyse data. Needless to say the graphics are worth the view.

The visualizations transforming biology

Inventive graphic design and abstract models are helping researchers to make sense of a glut of data.
A smart visualization can transform biologists' understanding of their data. And now that it's possible to sequence every RNA molecule in a cell or fill a hard drive in a day with microscopy images, life scientists are increasingly seeking inventive visual ways of making sense of the glut of raw data that they collect.


Some of the visualizations that are currently exciting biologists were presented at a conference at the European Molecular Biology Laboratory in Heidelberg, Germany, in March. Called Visualizing Biological Data (VIZBI), the meeting was co-organized by Seán O'Donoghue, a bioinformatician at the Garvan Institute of Medical Research in Sydney, Australia. The gathering attracts an eclectic mix of lab researchers, computer scientists and designers and is now in its seventh year.


Here, Nature highlights some of O'Donoghue's picks of the visualizations that are set to transform biology.


Visualization is very important in another way - via the application of AI and machine learning. This is just one more example of how AI will transform science, work, and professional competences in almost any domain. In medicine, machine learning will be vital in developing personalized medicine that links the vast data of one’s genome, microbial profile and various treatments.
It’s not the first attempt to apply deep learning to health care. IBM’s Watson supercomputer, for instance, is currently drawing on 600,000 medical evidence reports and 1.5 million patient records and clinical trials to help doctors develop better treatment plans for cancer patients. Meanwhile U.K.-based startup Babylon is developing software that takes symptoms from a user in order to suggest a course of action.

DeepMind’s First Medical Research Gig Will Use AI to Diagnose Eye Disease

Google’s machine learning division plans to help doctors spot the early signs of visual degeneration by sifting through a million eye scans.
Every week, Moorfields Eye Hospital in London performs 3,000 optical coherence tomography scans to diagnose vision problems. The scans, which use scattered light to create high-resolution 3-D images of the retina, produce large quantities of data. Analyzing that data is a slow process. Understanding the images requires trained and experienced human eyes to identify problems specific to each case, leaving little or no time to identify broader, population-wide trends that could make early detection easier.


That’s just the kind of task that artificial intelligence can be used to tackle, though. So it’s perhaps not surprising that Google’s AI wing, DeepMind, has decided to partner with the hospital to apply machine learning to the problem as part of its health program. The arrangement will see DeepMind’s software study over a million eye scans—both optical coherence and more conventional images of the retina—in order to establish what happens in the eye during the early stages of eye disease.


This is a great summary of some of the potential of Blockchain technology - the visuals are a Must View for anyone interested in the near future disruptions of the domains of currency, records and other institutions.
Using the law of diffusion of innovation curve, blockchain is predicted to move past the ‘Innovators’ phase in 2016 and reach the 13.5 per cent of “early adopters” within financial services. The “tipping point”, according to Accenture, is then expected to happen in 2018 when the early majority of financial services begin to see the benefits of the early adopters and new models emerge. This growth phase is predicted by Accenture to last until 2025 when blockchain will finally become mainstream within financial services.

The future of blockchain in 8 charts

Blockchain, essentially a giant network, which records ownership and value, is being hailed as the second coming of the internet. We break down its future in 8 charts
Canadian writers and researchers, Alex and Don Tapscott, authors of the new book Blockchain Revolution, explain that blockchain goes way beyond the second coming of the internet. The pair, like so many others, stumbled across blockchain via the bitcoin association, quickly realizing the genie is out of the bottle.


For beginners, blockchain is essentially a database, a giant network, known as a distributed ledger, which records ownership and value, and allows anyone with access to view and take part. The network is updated and verified through consensus of all the parties involved. When something is added it cannot be altered and, if it looks valid to everyone, the update is approved.
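
To make the “cannot be altered” property concrete, here is a minimal sketch in Python - my own illustration, not code from any real blockchain project: each new record commits to a hash of the record before it, so tampering with an earlier entry invalidates everything that follows. Real blockchains add network-wide consensus among the parties on top of this chaining.

import hashlib
import json


def hash_block(block):
    # Deterministically hash a block's contents.
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()


class ToyLedger:
    def __init__(self):
        # A genesis block anchors the chain.
        self.chain = [{"index": 0, "data": "genesis", "prev_hash": "0" * 64}]

    def add(self, data):
        # Append a record that commits to the previous block's hash.
        prev = self.chain[-1]
        self.chain.append({"index": prev["index"] + 1,
                           "data": data,
                           "prev_hash": hash_block(prev)})

    def is_valid(self):
        # Every block must reference the hash of the block before it.
        return all(self.chain[i]["prev_hash"] == hash_block(self.chain[i - 1])
                   for i in range(1, len(self.chain)))


ledger = ToyLedger()
ledger.add("Alice pays Bob 5")
ledger.add("Bob pays Carol 2")
print(ledger.is_valid())                        # True
ledger.chain[1]["data"] = "Alice pays Bob 500"  # tamper with history
print(ledger.is_valid())                        # False - the change is detectable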


“The first generation brought us the internet of information. The second generation, powered by blockchain, is bringing us the internet of value, a new, distributed platform that can help us reshape the world of business and transform the old order of human affairs for the better. But like the internet in the late-1980s and early-1990s, this is still early days.”


Here is another way to quickly understand the blockchain - a 2 min video - very clear.

Understand the Blockchain in Two Minutes

Over the past decade, an alternative digital paradigm has slowly been taking shape at the edges of the internet.


This new paradigm is the blockchain. After incubating through millions of Bitcoin transactions and a host of developer projects, it is now on the tips of tongues of CEOs and CTOs, startup entrepreneurs, and even governance activists. Though these stakeholders are beginning to understand the disruptive potential of blockchain technology and are experimenting with its most promising applications, few have asked a more fundamental question: What will a world driven by blockchains look like a decade from now?


This article and its links are an amazing summary of the state of the research on blockchain (distributed ledger) technology - a must read for anyone interested in the emerging disruption and institutional innovation that this technology promises.
“Distributed ledger technology provides the framework for government to reduce fraud, corruption, error and the cost of paper-intensive processes. It has the potential to redefine the relationship between government and the citizen in terms of data sharing, transparency and trust. It has similar possibilities for the private sector.” - Professor Sir Mark Walport, Chief Scientific Adviser to HM Government


“Some organisations are exploring how blockchain, the backbone behind bitcoin, might provide a viable alternative to the current procedural, organisational, and technological infrastructure required to create institutionalised trust.” - Deloitte report

Industry research papers highlight blockchain technology’s disruptive potential

The potential uses for blockchains in all of their various forms are piling up, and everywhere you turn another multinational corporation, industry organization, central bank, or government has come out with a research paper extolling the benefits of blockchains, distributed ledger technology, and even Bitcoin itself.


BraveNewCoin (BNC) has put together a complete set of industry research papers on the subject, divided up by year and quarter, with useful summations of each paper. Over 140 research papers are listed on BNC so far; we highlight the largest trends below as an introduction to this important resource library.


Not having a blockchain strategy today is like not having an internet strategy at the turn of the century. If your corporation or group uses documents or keeps records in any official way, someone has come up with a way for blockchains to improve the efficiency and trustworthiness of your process.


The European Central Bank, a recurring author of a few of these papers, started the trend of separating Distributed Ledger Technology (DLT) from blockchain technology, defining the former as “a record of information, or database, that is shared across a network. It may be an open, publicly accessible database or access may be restricted to a specified group of users.” The same report then credited blockchains as “the technology that makes this possible.”
The papers are listed here



On a social level - it seems increasingly important that we understand how to design and govern commons and peer-production systems. This was written by Michel Bauwens, Berlin, October 23, 2015, for the Uncommons conference.

The Ten Commandments of Peer Production and Commons Economics

This is an important synthesis of ten years of research at the P2P Foundation, on the emerging practices of the new productive communities and the ethical entrepreneurial coalitions that create livelihoods for shared resources.


This article addresses the emerging practices that should inspire these entities of the ‘ethical’ economy. The main aim is to create new forms that go beyond the traditional corporate form and its extractive, profit-maximizing practices of value extraction. Instead of extractive forms of capital, we need generative forms that co-create value with and for the commoners.


I am using the form of commandments to explain the new practices. All of them have already emerged in various forms, but need to be generalized and integrated.


What the world and humanity, and all those beings that are affected by our activities require is a mode of production, and relations of production, that are “free, fair and sustainable” at the same time.


This is not new - but it is of vital importance that we understand just how fundamental Internet access is to being an engaged citizen and as an enabler to human flourishing.

UN council says blocking Internet access violates human rights

The United Nations Human Rights Council is taking aim at countries that block access to the Internet as a means to suppress free expression.


The resolution passed by the 47-member council last week “condemns unequivocally measures to intentionally prevent or disrupt access to or dissemination of information online in violation of international human rights law and calls on all states to refrain from and cease such measures.”


Russia, China, and Saudi Arabia were among a handful of authoritarian regimes that asked the UN to strike that passage from the digital rights decree. South Africa and India also called for its removal.


The non-binding resolution also asks nations to harness the power of the Internet to empower women and persons with disabilities, as well as promote literacy and achieve sustainability goals.


Well, this last week the first human fatality occurred in a vehicle (a Tesla) on autopilot. This is sad, but we must remember that accidents are inevitable; the question remains whether self-driving cars will be safer - thus far they still are.
That said - here’s a brilliant AI-ssistant that I think we will all want access to.

Chatbot lawyer overturns 160,000 parking tickets in London and New York

Free service DoNotPay helps appeal over $4m in parking fines in just 21 months, but is just the tip of the legal AI iceberg for its 19-year-old creator
An artificial-intelligence lawyer chatbot has successfully contested 160,000 parking tickets across London and New York for free, showing that chatbots can actually be useful.


Dubbed as “the world’s first robot lawyer” by its 19-year-old creator, London-born second-year Stanford University student Joshua Browder, DoNotPay helps users contest parking tickets in an easy to use chat-like interface.


The program first works out whether an appeal is possible through a series of simple questions, such as were there clearly visible parking signs, and then guides users through the appeals process.
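
To make that flow concrete, here is a hypothetical minimal sketch in Python - my own illustration of a question-driven triage, not DoNotPay’s actual code or questions: a handful of yes/no answers determines whether an appeal is worth drafting and which points a draft letter might raise.

def triage(answers):
    # Map yes/no answers about the ticket to plausible grounds for appeal.
    # `answers` holds booleans for hypothetical questions such as:
    #   signs_visible      - were the parking signs clearly visible?
    #   bay_marked         - was the parking bay clearly marked?
    #   paid_and_displayed - was a valid ticket purchased and displayed?
    grounds = []
    if not answers.get("signs_visible", True):
        grounds.append("Signage was unclear or obscured.")
    if not answers.get("bay_marked", True):
        grounds.append("Bay markings were inadequate.")
    if answers.get("paid_and_displayed", False):
        grounds.append("A valid ticket was purchased and displayed.")
    return grounds


# Example: the signs were hard to see and a valid ticket was displayed.
example = {"signs_visible": False, "bay_marked": True, "paid_and_displayed": True}
grounds = triage(example)
if grounds:
    print("An appeal looks possible. Points for the appeal letter:")
    for g in grounds:
        print(" -", g)
else:
    print("No obvious grounds for appeal were found.")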


The results speak for themselves. In the 21 months since the free service was launched in London and now New York, Browder says DoNotPay has taken on 250,000 cases and won 160,000, giving it a success rate of 64% appealing over $4m of parking tickets.


This is a very recent 35 min video of a keynote presentation by the head of Microsoft AI research - it is definitely worth the view (even though he doesn’t mention anything happening at Google). It’s a very good summary of the current state of AI research. The app demonstrated in the last 3 min is awesome.

Keynote AI in Support of People and Society

AI in Support of People and Society from Eric Horvitz (Microsoft Research)


One of the potential and very powerful uses of AI would be to enhance our capability to check facts - especially in the context of political, commercial and public discourse. Here’s an X-Prize-like challenge - aiming to harness innovators and inventors to help create such a capability. Of course one has to keep in mind that a great deal of politics and social discourse is shaped by frames and metaphors - and facts can’t fight those. Only another frame-metaphor can counter a frame-metaphor.

Fast & Furious Fact Check Challenge

Today’s “always on” environment, together with social media, really does give us the ability to hear anything said by anyone, anywhere, anytime. Ironically, this flood of material makes it difficult to know what is actually true! Knowing the believability and accuracy of what we read, hear and see is important around the world -- and no less important for us here in the world’s leading democracy.


Fact checking is the process of verifying what someone has said, and then receiving a rating about the accuracy of the ‘fact.’ Fact checking enables us to sort through a tidal wave of massive information and communication.  


Some fact checking services exist, but none are instant.  
Fact checking today is done mostly by qualified humans. It’s a laborious, time-consuming process that is not easy, quick, cheap or comprehensive. There simply aren’t enough journalism researchers with the skills to verify all the claims made by our political candidates and public figures. It often takes a day or more to verify the accuracy of statements, especially in the context that they were made. And as time elapses, the truth moves further and further away from us.


The critical time to know if political claims and statements are accurate is now -- as we read or view it.  Therefore, the breakthroughs sought in this prize are those that improve speed of results in fact checking.


This is a fascinating article that involves some deeper questions of what life is - and potentially brings a new way to understand a ‘digital virus’. :)
In describing this bare-bones view of life, Nobel Prize-winning physiologist Albert Szent-Györgyi once said, “Life is nothing but an electron looking for a place to rest.”
The microbes’ apparent ability to ingest electrons—known as direct electron transfer—is particularly intriguing because it seems to defy the basic rules of biophysics. The fatty membranes that enclose cells act as an insulator, creating an electrically neutral zone once thought impossible for an electron to cross. “No one wanted to believe that a bacterium would take an electron from inside of the cell and move it to the outside,”
One microbe, Methanococcus maripaludis, excretes an enzyme that sits on the electrode’s surface. The enzyme pairs an electron from the electrode with a proton from water to create a hydrogen atom, which is a well-established food source among methanogens.
Only a tiny fraction—perhaps 2 percent—of all the planet’s microorganisms can be grown in the lab.

The Electricity Eaters

Energy-sucking bacteria on rocks beneath the planet’s surface may provide a blueprint for life on other worlds.
Last year, biophysicist Moh El-Naggar and his graduate student Yamini Jangir plunged beneath South Dakota’s Black Hills into an old gold mine that is now more famous as a home to a dark matter detector. Unlike most scientists who make pilgrimages to the Black Hills these days, El-Naggar and Jangir weren’t there to hunt for subatomic particles. They came in search of life.


In the darkness found a mile underground, the pair traversed the mine’s network of passages in search of a rusty metal pipe. They siphoned some of the pipe’s ancient water, directed it into a vessel, and inserted a variety of electrodes. They hoped the current would lure their prey, a little-studied microbe that can live off pure electricity.


The electricity-eating microbes that the researchers were hunting for belong to a larger class of organisms that scientists are only beginning to understand. They inhabit largely uncharted worlds: the bubbling cauldrons of deep sea vents, mineral-rich veins deep beneath the planet’s surface, ocean sediments just a few inches below the deep seafloor. The microbes represent a segment of life that has been largely ignored, in part because their strange habitats make them incredibly difficult to grow in the lab.


Well this isn’t about electricity-eating microbes - but it is a very interesting hint - a preliminary view of the possibilities of the Internet of Things (IoT) and the future of everything - including agriculture and environmental governance.

IoT: The Internet of Tomatoes

ADI hopes its pioneering agricultural experiment will yield juicier, tastier tomatoes.
On the outside, New England–grown tomatoes look much like tomatoes grown anywhere else. But in terms of flavor, they’re rarely anybody’s first choice. In fact, New England tomatoes are more likely to end up in soup or as ketchup than sliced for sandwiches or drizzled with olive oil and served with basil and mozzarella.


Determined to find out why the region’s tomatoes are comparatively tasteless, Analog Devices Inc. (ADI) started its Internet of Tomatoes project. This precision agriculture experiment uses technologies such as micro-electromechanical systems (MEMS) and sensors to figure out whether environmental monitoring could improve flavor.


The project stems from a 2014 MEMS & Sensors Industry Group conference in Scottsdale, Arizona. There, keynote speaker Francis Gouillart, president of the Experience Co-Creation Partnership, an education and consulting firm, challenged attendees to use technology to improve basic human needs: water, food, energy, health care, education, and freedom. ADI decided to tackle the food factor by examining two facets of tomatoes that could be remedied: temperature measurements and growing-degree days, a heat index used to predict when a crop will reach maturity.


This is news about an old promise - the creation of artificial spider silk, previously attempted via producing spider silk protein in goat’s milk. That said - it may be closer to primetime than we can imagine. What’s different is the huge acceleration in efforts to domesticate DNA.

Synthetic spider silk could be the biggest technological advance in clothing since nylon

Spider silk’s qualities are nearly mythical. Its tensile strength is comparable to steel’s. Yet it is lighter, and can be as stretchy as a rubber band. Those traits in combination make it tougher than Kevlar.


After years of hype and false starts, including one now-bankrupt effort that involved genetically modified goats producing it in their milk, a few companies think they’ve figured it out. The two leading the pack are Spiber, a Japanese company, and a California-based startup called Bolt Threads. Bolt Threads believes it has the edge, and that spider silk is only the beginning of what it can do.


Bolt Threads doesn’t use spiders to make its silk. The principal ingredients are genetically modified yeast, water, and sugar. The raw silk is produced through fermentation, much like brewing beer, except instead of the yeast turning the sugar into alcohol, they turn it into the raw stuff of spider silk. Bolt Threads spins that into threads using a method similar to the wet-spinning process used to create cellulose-based fibers such as Lyocell. Levin says it’s molecularly the same as natural spider silk, except for a few deliberate variations that only a chemical biologist would recognize.


The company has more announcements forthcoming, but Bolt Threads may have some key advantages. Even though it won’t have a product on the market first, it’s already producing silk in kilos, and plans to start churning silk out on the order of metric tons this year. It plans to have some demo textiles and even a few products up for sale in the next 12 to 18 months, too. Levin says using yeast is also “significantly less expensive” than using E.coli. They expect their fabric to cost about as much as other premium fabrics, such as high-end wools or natural silk.


The IoT will be overwhelmingly about the network of sensors and the cosmic data they generate - here’s a glimpse of one such sensor - not yet autonomous, but in the next decade how many millions (billions) of them will there be, networked to the cloud? The Noosphere seems to be emerging more rapidly than we can imagine.
It took only a few hours to design, manufacture and test the tiny eye, which yielded "high optical performances and tremendous compactness", the researchers reported.

3D-printing: German engineers create injectable micro-camera

German engineers have created a camera no bigger than a grain of salt that could change the future of health imaging — and clandestine surveillance.


Using 3D printing, researchers from the University of Stuttgart built a three-lens camera, and fit it onto the end of an optical fibre the width of two hairs.


Such technology could be used as minimally-intrusive endoscopes for exploring inside the human body, the engineers reported in the journal Nature Photonics.
It could also be deployed in virtually invisible security monitors, or mini-robots with "autonomous vision".


The compound lens is just 100 micrometres (0.1 millimetres) wide, and 120 micrometres with its casing.


It can focus on images from a distance of 3.0mm, and relay them over the length of a 1.7-metre optical fibre to which it is attached. The "imaging system" fits comfortably inside a standard syringe needle, said the team, allowing for delivery into a human organ, or even the brain.


Well by now everyone has talked about the implications of Brexit - and who really knows what will happen - but this is an interesting development with many consequences.
“It is not an economic project, it is a geopolitical project — and it is very strategic,” says Nadège Rolland, an analyst at the National Bureau for Asian Research.

Why is China building a New Silk Road?

China is reviving the historic Silk Road trade route that runs between its own borders and Europe. Announced in 2013 by President Xi Jinping, the idea is that two new trade corridors – one overland, the other by sea – will connect the country with its neighbours in the west: Central Asia, the Middle East and Europe.


The project has proved expensive and controversial. So why is China doing it?
There are strong commercial and geopolitical forces at play here, first among which is China’s vast industrial overcapacity – mainly in steel manufacturing and heavy equipment – for which the new trade route would serve as an outlet. As China’s domestic market slows down, opening new trade markets could go a long way towards keeping the national economy buoyant.


Hoping to lift the value of cross-border trade to $2.5 trillion within a decade, President Xi Jinping has channelled nearly $1 trillion of government money into the project. He’s also encouraging state-owned enterprises and financial institutions to invest in infrastructure and construction abroad.


Here’s some good news regarding the state of our environment-atmosphere.
The scientists’ finely tuned climate model also found that the ozone hole over Antarctica in the month of September shrank by 4.5 million square kilometres, on average, between 2000 and 2015.

Antarctic ozone hole is on the mend

Global regulation of chlorine compounds is giving the atmosphere time to heal, even as volcanic eruptions interfere.
It's the beginning of the end for the Antarctic ozone hole. A new analysis shows that, on average, the hole — which forms every Southern Hemisphere spring, letting in dangerous ultraviolet light — is smaller and appears later in the year than it did in 2000.


The 1987 global treaty called the Montreal Protocol sought to reduce the ozone hole by banning chlorofluorocarbons, chlorine-containing chemicals — used as refrigerants in products such as air conditioners — that accelerated ozone loss in the stratosphere. The study shows that it worked.


“We as a planet have avoided what would have been an environmental catastrophe,” says Susan Solomon, an atmospheric scientist at the Massachusetts Institute of Technology in Cambridge, and a pioneer in the field of Antarctic ozone loss. “Yay us!”
She and her colleagues report the finding on 30 June in Science.


Art Meets Science
Art and science are in fact very old friends - maybe even simply two sides of a common coin. This is a fascinating convergence and assemblage for anyone who loves Jimi Hendrix and Moebius - and is interested in machine learning visuals - this is a must view.

Jimi Hendrix - Moebius (S)trip - Deepstyle(s)

Jimi painted in Jean "Moebius" Giraud style with neural network software.


This is a brilliant 6 min video history of virtual worlds - well worth the view for anyone interested in where gaming in virtual worlds has been.

Raph Koster's History of Virtual Worlds

Designer Raph Koster covers the history of designing virtual worlds during the GDC 2016 Flash Backward session.


I think this too is where art meets science and defeats pedantry - I love what the author says about language. This is a short, entertaining article.
What changed me was realizing that language isn’t some delicate cultural artifact but an integral part of being human. I found this out by reading what scholars of language — linguists, grammarians and cognitive scientists — say about the subject. It fascinated me. Language — which all human societies have in immense grammatical complexity — is far more interesting than pedantry.


I think language tuition is better focused on the need to express yourself to the right audience. Linguists refer to “register” — the different styles and ranges of formality we adopt for particular audiences. That’s not all there is to effective writing and speaking but it’s not stressed enough in usage guides.


I enjoy splitting infinitives, knowing that they not only accord with the grammar of Standard English but conform to its prosody as well (as they typically alternate stressed and unstressed syllables). Splitting infinitives is not just the right but the duty of the stylish writer.

It’s Finally Time To Stop Correcting People’s Grammar, Linguist Says

“Language — which all human societies have in immense grammatical complexity — is far more interesting than pedantry.”
...there are also those divisive subjects that, puzzlingly, manage to rouse anger in spite of being totally unimportant. Dietary preferences (#TeamBread). Fantasy football picks. And the most corrosive rage fuel: grammatical choices.


“Choices” is an important clarifier, at least according to Oliver Kamm, a linguist who recently wrote a book about the problem with pedantry, Accidence Will Happen.


A recovering pedant himself, he now speaks for the boldest form of descriptivism, arguing that if humans use a word outside of its traditional meaning, the new, creative use is now valid, simply by virtue of having been used at all. So, “literally” can mean “figuratively,” and “irregardless” can mean “regardless.” Adverbs — probably the most hotly debated part of speech — are welcome in Kamm’s world, as are split infinitives and sentences that start with “and.”
