
Friday Thinking 13 April 2018

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9






Let’s start with Facebook’s Surveillance Machine, by Zeynep Tufekci in last Monday’s New York Times. Among other things (all correct), Zeynep explains that “Facebook makes money, in other words, by profiling us and then selling our attention to advertisers, political actors and others. These are Facebook’s true customers, whom it works hard to please.”

Irony Alert: the same is true for the Times, along with every other publication that lives off adtech: tracking-based advertising. These pubs don’t just open the kimonos of their readers. They bring people’s bare digital necks to vampires ravenous for the blood of personal data, all for the purpose of returning “interest-based” advertising to those same people.

With no control by readers (beyond tracking protection which relatively few know how to use, and for which there is no one approach or experience), and damn little care or control by the publishers who bare those readers’ necks, who knows what the hell actually happens to the data? No one entity, that’s for sure.

Facebook left its API wide open, and had no control over personal data once those data left Facebook.

But there is a wider story coming: (thread…)
Every single big website in the world is leaking data in a similar way, through “RTB bid requests” for online behavioural advertising #adtech.

Every time an ad loads on a website, the site sends the visitor’s IP address (indicating physical location), the URL they are looking at, and details about their device, to hundreds, often thousands, of companies. A graphic in the original thread shows the process.
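
To make the mechanics concrete, here is a minimal sketch (in Python) of the kind of payload a site's ad exchange broadcasts when a page loads. Field names loosely follow the OpenRTB convention, but the exact schema, identifiers and values below are illustrative assumptions, not taken from any real exchange or request.

```python
import json

# A simplified, illustrative sketch of the kind of payload an ad exchange
# broadcasts to bidders when a page loads. Field names loosely follow the
# OpenRTB convention; the exact schema and values vary by exchange and are
# assumptions here, not copied from any real request.
bid_request = {
    "id": "req-8f3a2c",                       # exchange-generated request id
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
    "site": {
        "domain": "example-news-site.com",
        "page": "https://example-news-site.com/health/article-about-depression",
    },
    "device": {
        "ip": "203.0.113.42",                 # indicates approximate physical location
        "ua": "Mozilla/5.0 (iPhone; CPU iPhone OS 11_2 like Mac OS X) ...",
        "geo": {"country": "GBR"},
    },
    "user": {"id": "cookie-or-device-identifier"},
}

# Something very like this JSON blob goes out to hundreds or thousands of
# bidding companies before the winning ad is even chosen.
print(json.dumps(bid_request, indent=2))
```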

Client-server, by design, subordinates visitors to websites. It does this by putting nearly all responsibility on the server side, so visitors are just users or consumers, rather than participants with equal power and shared responsibility in a truly two-way relationship between equals.

It doesn’t have to be that way. Beneath the Web, the Net’s TCP/IP protocol—the gravity that holds us all together in cyberspace—remains no less peer-to-peer and end-to-end than it was in the first place. Meaning there is nothing to the Net that prevents each of us from having plenty of power on our own.

Facebook’s Cambridge Analytica problems are nothing compared to what’s coming for all of online publishing




Tin Hang Liu: I come from the traditional automotive industry. I used to work in one of the biggest design and engineering companies, making beautiful cars in Italy for the biggest auto OEMs. However, I decided to move to Silicon Valley when we realized that the car of the future was going to be nothing like a Ferrari or a Lamborghini.

Basically, we are moving towards a paradigm shift from car ownership to mobility becoming a service. Players like Uber, Lyft and Didi are defining a new massive trend in the automotive industry. People are less willing to buy cars, especially in big cities since we all have in our pockets powerful computers with connectivity that can take us from point A to B with a few taps. This has radically changed the scenario of how we have to design and engineer vehicles. This is why the objective of Open Motors is to create the best vehicle for services. Open data, open source and APIs play a huge role in this, because we believe that the cars of the future will be like computers on wheels.

Our core assumption, based on emerging trends around the world, is that there will be a shift from ownership to service. In this new paradigm, the way cars are made will definitely change because our priorities as consumers will change. You’ll care less for big car brands and start to pay more attention to the quality of the service you are getting from point A to B.

For us, the car of the future should be more like an aeroplane, designed and engineered for services and heavy usage, and the key part here is inscribing modularity right from the design - just as, after several flights, you can replace the turbines or the entire cockpit technology, or change the interior for better infotainment or more comfortable seats.

Why this innovator thinks the car of the future rides on open source




Demanding that a theory is falsifiable or observable, without any subtlety, will hold science back. We need madcap ideas.
So Newtonian gravity was ultimately thrown out, but not merely in the face of data that threatened it. That wasn’t enough. It wasn’t until a viable alternative theory arrived, in the form of Einstein’s general relativity, that the scientific community entertained the notion that Newton might have missed a trick. But what if Einstein had never shown up, or had been incorrect? Could astronomers have found another way to account for the anomaly in Mercury’s motion? Certainly – they could have said that Vulcan was there after all, and was merely invisible to telescopes in some way.

This might sound somewhat far-fetched, but again, the history of science demonstrates that this kind of thing actually happens, and it sometimes works – as Pauli found out in 1930. At the time, new experiments threatened one of the core principles of physics, known as the conservation of energy. The data showed that in a certain kind of radioactive decay, electrons could fly out of an atomic nucleus with a range of speeds (and attendant energies) – even though the total amount of energy in the reaction should have been the same each time. That meant energy sometimes went missing from these reactions, and it wasn’t clear what was happening to it.

The Danish physicist Niels Bohr was willing to give up energy conservation. But Pauli wasn’t ready to concede the idea was dead. Instead, he came up with his outlandish particle. ‘I have hit upon a desperate remedy to save … the energy theorem,’ he wrote. The new particle could account for the loss of energy, despite having almost no mass and no electric charge. But particle detectors at the time had no way of seeing a chargeless particle, so Pauli’s proposed solution was invisible.

Nonetheless, rather than agreeing with Bohr that energy conservation had been falsified, the physics community embraced Pauli’s hypothetical particle: what came to be known as a ‘neutrino’ (the little neutral one), once the Italian physicist Enrico Fermi refined the theory a few years later. The happy epilogue was that neutrinos were finally observed in 1956, with technology that had been totally unforeseen a quarter-century earlier: a new kind of particle detector deployed in conjunction with a nuclear reactor. Pauli’s ghostly particles were real; in fact, later work revealed that trillions of neutrinos from the Sun pass through our body every second, totally unnoticed and unobserved.

the choices we make between observationally identical theories have a big impact upon the practice of science. The American physicist Richard Feynman pointed out that two wildly different theories that have identical observational consequences can still give you different perspectives on problems, and lead you to different answers and different experiments to conduct in order to discover the next theory. So it’s not just the observable content of our scientific theories that matters. We use all of it, the observable and the unobservable, when we do science. Certainly, we are more wary about our belief in the existence of invisible entities, but we don’t deny that the unobservable things exist, or at least that their existence is plausible.

Some of the most interesting scientific work gets done when scientists develop bizarre theories in the face of something new or unexplained. Madcap ideas must find a way of relating to the world – but demanding falsifiability or observability, without any sort of subtlety, will hold science back. It’s impossible to develop successful new theories under such rigid restrictions. As Pauli said when he first came up with the neutrino, despite his own misgivings: ‘Only those who wager can win.’

What is good science?





This is a wonderful 1 hr 10 min conversation that includes thoughts on the future of work - it is well worth the view. Dan Ariely is the James B Duke Professor of Psychology and Behavioral Economics at Duke University.

Yuval Harari with Dan Ariely: Future Think—From Sapiens to Homo Deus

In his new book, Homo Deus: A Brief History of Tomorrow, Harari examines what might happen to the world when these old myths are coupled with new godlike technologies such as artificial intelligence and genetic engineering. What will happen to democracy if and when Google and Facebook come to know our likes and our political preferences better than we know them ourselves? What will happen to the welfare state when computers push humans out of the job market and create a massive new “useless class”? How might Islam, Christianity and Judaism handle genetic engineering? Will Silicon Valley end up producing new religions, rather than just novel gadgets?


This is one of the most important social signals to track in the next few years - there are many weak signals in many domains suggesting that some form of implementation of a ‘social credit’ or reputation-trustworthiness system - a way to ‘value values’ - is inevitable.
...it has been hard to distinguish future promises — or threats — from the realities of how social credit is being implemented. Rongcheng is one place where that future is visible. Three dozen pilot systems have been rolled out in cities across the country, and Rongcheng is one of them. According to Chinese officials and researchers, it’s the best example of the system working as intended. But it also illustrates that those intentions may not be as straightforward as they like to claim.
Rongcheng is a microcosm of what is to come. The national credit system planned for 2020 will be an “ecosystem” made up of schemes of various sizes and reaches, run by cities, government ministries, online payment providers, down to neighborhoods, libraries, and businesses, say Chinese researchers who are designing the national scheme. It will all be interconnected by an invisible web of information.

Life Inside China’s Social Credit Laboratory

The party’s massive experiment in ranking and monitoring Chinese citizens has already started.
In an attempt to ease bureaucracy, the city hall, a glass building that resembles a flying saucer, has been fashioned as a one-stop shop for most permits. Instead of driving from one office to another to get their paperwork in order, residents simply cross the gleaming corridors to talk to officials seated at desks in the open-space area.

At one of these stations, Rongcheng residents can pick up their social credit score.
In what it calls an attempt to promote “trustworthiness” in its economy and society, China is experimenting with a social credit system that mixes familiar Western-style credit scores with more expansive — and intrusive — measures. It includes everything from rankings calculated by online payment providers to scores doled out by neighborhoods or companies. High-flyers receive perks such as discounts on heating bills and favorable bank loans, while bad debtors cannot buy high-speed train or plane tickets.

By 2020, the government has promised to roll out a national social credit system. According to the system’s founding document, released by the State Council in 2014, the scheme should “allow the trustworthy to roam everywhere under heaven while making it hard for the discredited to take a single step.” But at a time when the Chinese Communist Party is aggressively advancing its presence across town hall offices and company boardrooms, this move has sparked fears that it is another step in the tightening of China’s already scant freedoms.

The system they have devised assigns 1,000 points at the beginning to each of Rongcheng’s 740,000 adult residents. From there, the math begins.

Get a traffic ticket and you lose five points. Earn a city-level award, such as for committing a heroic act, doing exemplary business, or helping your family in unusually tough circumstances, and your score gets boosted by 30 points. For a department-level award, you earn five points. You can also earn credit by donating to charity or volunteering in the city’s program.
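
As a toy illustration of the arithmetic described above - not the actual Rongcheng software, just the numbers quoted in the article - a resident's score ledger might look something like this:

```python
# A toy illustration of the point arithmetic quoted above - not the actual
# Rongcheng system, just the numbers reported in the article.
START_SCORE = 1000

ADJUSTMENTS = {
    "traffic_ticket": -5,
    "city_level_award": +30,       # heroic act, exemplary business, etc.
    "department_level_award": +5,
    # charity donations and volunteering also earn credit; amounts not specified
}

def apply_events(events, score=START_SCORE):
    """Apply a sequence of recorded events to a resident's score."""
    for event in events:
        score += ADJUSTMENTS.get(event, 0)
    return score

# Example: one traffic ticket and one city-level award.
print(apply_events(["traffic_ticket", "city_level_award"]))  # 1025
```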


This is a strong signal of an inevitable trajectory related to our data and identity - the question is not so much what - but how? How can data and identity be recorded for effective use and democratic enablement? Considering Canada’s fiasco with its government Phoenix pay system - the scale of this initiative puts Canada’s problem in dismal perspective.

‘Big Brother’ in India Requires Fingerprint Scans for Food, Phones and Finances

Seeking to build an identification system of unprecedented scope, India is scanning the fingerprints, eyes and faces of its 1.3 billion residents and connecting the data to everything from welfare benefits to mobile phones.

Civil libertarians are horrified, viewing the program, called Aadhaar, as Orwell’s Big Brother brought to life. To the government, it’s more like “big brother,” a term of endearment used by many Indians to address a stranger when asking for help.

For other countries, the technology could provide a model for how to track their residents. And for India’s top court, the ID system presents unique legal issues that will define what the constitutional right to privacy means in the digital age.


If we think that social credit is a dystopian future that only an authoritarian state could enact - this article is a very important weak signal of the shadows of peer-to-peer ratings. The original paper is available for free download.

Good luck leaving your Uber driver less than five stars

Uber asks riders to give their drivers a rating of one to five stars at the end of each trip. But very few people make use of this full scale. That’s because it’s common knowledge among Uber’s users that drivers need to maintain a certain minimum rating to work, and that leaving anything less than five stars could jeopardize their status.

Drivers are so concerned about their ratings that one Lyft driver in California last year posted a translation of the five-star system in his car, to educate less savvy passengers. Next to four stars he wrote: “This driver sucks, fire him slowly; it does not mean ‘average’ or above ‘average.’” In a tacit acknowledgement of this, Uber said in July that it would make riders add an explanation when they awarded a driver less than five stars.

How did Uber’s ratings become more inflated than grades at Harvard? That’s the topic of a new paper, “Reputation Inflation,” from NYU’s John Horton and Apostolos Filippas, and Collage.com CEO Joseph Golden. The paper argues that online platforms, especially peer-to-peer ones like Uber and Airbnb, are highly susceptible to ratings inflation because, well, it’s uncomfortable for one person to leave another a bad review.

The somewhat more technical way to say this is that there’s a “cost” to leaving negative feedback. That cost can take different forms: It might be that the reviewer fears retaliation, or that he feels guilty doing something that might harm the underperforming worker. If this “cost” increases over time—i.e., the fear or guilt associated with leaving a bad review increases—then the platform is likely to experience ratings inflation.
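
The paper's formal model is more involved, but a toy simulation makes the intuition visible: if the 'cost' of leaving negative feedback rises, more reviewers round up to five stars and the average rating inflates. Everything below (the thresholds and distributions) is an illustrative assumption, not the authors' model.

```python
import random

def simulate_average_rating(cost_of_negative, n_reviews=10_000, seed=0):
    """Toy model: each reviewer has an honest rating of 1-5. If the honest
    rating is below 5, they only post it when their personal tolerance for
    discomfort exceeds the current 'cost' of leaving negative feedback;
    otherwise they round up to 5 stars."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_reviews):
        honest = rng.randint(1, 5)
        tolerance = rng.random()              # how much guilt/fear this reviewer will bear
        if honest < 5 and tolerance < cost_of_negative:
            total += 5                        # inflate rather than pay the cost
        else:
            total += honest
    return total / n_reviews

# As the cost of negative feedback rises over time, average ratings inflate.
for cost in (0.0, 0.3, 0.6, 0.9):
    print(f"cost={cost:.1f}  average rating={simulate_average_rating(cost):.2f}")
```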


Here’s a more positive signal of the potential to provide some better solutions and approaches to digital identity. This may well be an excellent role for a distributed ledger technology.
The mistake that both governments and tech pioneers are making is failing to realize that trustworthy identity depends on jointly-issued credentials, where credentials and certification must be based on trustworthy assertions by the community of people and institutions in which we live. Identity credentials are really mechanisms for collecting and documenting trusted relationships, not self-certifying systems. Trustworthy self-sovereign frameworks should really be called joint sovereignty or community sovereignty frameworks.

Digital Identity Is Broken. Here’s a Way to Fix It

The bedrock of trust is a human community with frequent positive interactions
Most people today suffer from a strange sort of psychosis: we are uncertain of our identity. For although we are (mostly) certain of who we are in our own minds, the identity we use to interact with the government, obtain services, and pay for goods is unreliable. In poor parts of the world people simply don’t have trusted identity credentials that allow them to prove who they are, while in rich parts of the world we worry about identity theft, and crimes using fake or stolen identity credentials are rampant.

The core problem is that the identity credentials we use often are defined and certified by someone else, for instance, by the government or a company such as a bank.

This means that the certifying authority can unfairly coerce individuals needing to be credentialed, and it is difficult to escape “big brother” surveillance and protect individual privacy. The centralized nature of identity certification also means that fraud only requires altering a single database and that corruption stemming from inside the certifying authority is difficult to prevent.

How can we fix the current mess, where the basic building block of commerce and government - the identity of people and businesses - is so badly broken? The answer is we need a new-generation identity mechanism where credentials are issued by communities of people and businesses that know each other. Here, your entire community vouches for you, not a single bureaucracy or a single commercial player.


This is another important signal that every major city should pay careful attention to - we should all dedicate serious time to re-imagining the communities we want to design for our lives, now and in the future. We need a deep re-imagining of our “Communities for Life”.

A Right to the Digital City

A response to the Smart London ‘new deal for city data’ — Andrew Eland & Richard Pope
Summary
How to make London a smart city is, perhaps, the wrong question. A better question might be: how can London use digital tools to improve the lives of people who live, visit or work here?

How can we house them better, make them safer, healthier and maximise their happiness? What infrastructure needs to be in place to enable all this? How can we do this while giving people agency over the data about them? How can we give them democratic control over digital services?

Fundamentally, what should a digital city be like? To start to answer this, we think there are three things that the Mayor, Chief Digital Officer and the Smart London Board will need to do:

-  First is the adoption of open standards, the development of definitive data registries and open APIs. These are the foundations the GLA, London boroughs, private sector and others will need to build upon.
-  Next, the use of data is set against increasing public concern over the effects of technology on our society. If Londoners are going to trust a digital city, then privacy, transparency and accountability must be core principles. Consumer technology and Silicon Valley struggle in these domains, leaving the opportunity for London to define and promote the worldwide standards for this emerging field.
-  Finally, the real prize here is new and improved services for Londoners. Redesigning everyday things Londoners rely on — things like getting a school place, commenting on a planning application or joining a housing list — represents an opportunity to improve millions of lives. To do this, London will need to invest in the digital capability of the GLA, London Boroughs and the private sector.


What’s the face of AI in the near future? These images are interesting - worth a view.

What People See in 157 Robot Faces

The largest study of robot faces we've ever seen shows the right way to design an expressive robot
In recent years, an increasing number of robots have relied on screens rather than physical mechanisms to generate expressive faces. Screens are cheap, they’re easy to work with, and they allow for nearly unlimited creativity. Consequently, there’s an enormous variety of robot faces, with a spectrum of similarities and differences both obvious and subtle. However, there hasn’t been a comprehensive study of the entire design space, possibly because of how large it is, and this is bad, because there’s a lot to learn.

At the ACM/IEEE International Conference on Human Robot Interaction (HRI) last month, roboticists from the University of Washington in Seattle presented a paper entitled “Characterizing the Design Space of Rendered Robot Faces.” When they say “characterizing” and “design space,” they aren’t kidding: They looked at 157 different robot faces across 76 dimensions, did a pile of statistical analyses on them, and then conducted a set of surveys to figure out how people experience robot faces differently.

The researchers determined what faces to include in their dataset in the obvious way—if it’s a digital face that has a good picture that can be found through an Internet search, it’s in. The 157 resulting robots were coded across dimensions, including: presence of a particular element on the face (e.g. mouth, nose, eyebrows, cheeks/blush); the color of these elements and the face (and any additional features); and the size, shape, and placement of each element. Some properties are binary (does it have a mouth?), and others were discretized, like how close together the eyes are. The researchers also recorded a bunch of other useful stuff, including where the robot is made and what it was designed to do.
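
A sketch of what that kind of coding might look like as a data structure follows - the field names below are illustrative stand-ins, not the paper's actual 76 dimensions.

```python
from dataclasses import dataclass, asdict

# A sketch of the kind of coding scheme the paper describes: each rendered face
# becomes a row of binary, categorical, and discretized features. The field
# names below are illustrative stand-ins, not the paper's actual 76 dimensions.
@dataclass
class RobotFaceCoding:
    name: str
    has_mouth: bool                  # binary presence features
    has_nose: bool
    has_eyebrows: bool
    has_cheeks_or_blush: bool
    face_color: str                  # categorical features
    eye_color: str
    eye_spacing: int                 # discretized: 1 = close together ... 5 = far apart
    country_of_origin: str           # contextual metadata also recorded
    intended_use: str

example = RobotFaceCoding(
    name="HypotheticalBot",
    has_mouth=True, has_nose=False, has_eyebrows=True, has_cheeks_or_blush=False,
    face_color="black", eye_color="white", eye_spacing=3,
    country_of_origin="USA", intended_use="research",
)

# 157 rows like this one, one per face, are what the statistical analyses and
# surveys were run over.
print(asdict(example))
```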


Another signal that Moore’s Law is dead - long live Moore’s Law: not as a continued increase in the number of transistors on chips, but as a continued exponential increase in price-performance computational capability. An interesting metaphor for this new computational paradigm is the approaching ‘Programmable Matter’ as computer chip - from hard physical transistors to the grasping of the virtual ‘adjacent possibles’.
- A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing.
“This is a major technology disruption for the industry and our most significant engineering accomplishment since the invention of the FPGA,” says Victor Peng, president and CEO of Xilinx. “This revolutionary new architecture is part of a broader strategy that moves the company beyond FPGAs and supporting only hardware developers. The adoption of ACAP products in the data center, as well as in our broad markets, will accelerate the pervasive use of adaptive computing, making the intelligent, connected, and adaptable world a reality sooner.”
“This is what the future of computing looks like,” says Patrick Moorhead, founder, Moor Insights & Strategy. “We are talking about the ability to do genomic sequencing in a matter of a couple of minutes, versus a couple of days. We are talking about data centers being able to program their servers to change workloads depending upon compute demands, like video transcoding during the day and then image recognition at night. This is significant.”

Xilinx adaptive and intelligent computing will give 20 times better performance for deep learning

Xilinx, the leader in adaptive and intelligent computing, today announced a new breakthrough product category called adaptive compute acceleration platform (ACAP) that goes far beyond the capabilities of an FPGA. An ACAP is a highly integrated multi-core heterogeneous compute platform that can be changed at the hardware level to adapt to the needs of a wide range of applications and workloads. An ACAP’s adaptability, which can be done dynamically during operation, delivers levels of performance and performance per watt that are unmatched by CPUs or GPUs.

An ACAP is ideally suited to accelerate a broad set of applications in the emerging era of big data and artificial intelligence. These include: video transcoding, database, data compression, search, AI inference, genomics, machine vision, computational storage and network acceleration. Software and hardware developers will be able to design ACAP-based products for end point, edge and cloud applications. The first ACAP product family, codenamed “Everest,” will be developed in TSMC 7nm process technology and will tape out later this year.


This is an important signal to watch - with the potential to transform our approach to neurological-cognitive science and our understanding of the brain and perhaps consciousness.
“If the question of whether quantum processes take place in the brain is answered in the affirmative, it could revolutionize our understanding and treatment of brain function and human cognition,” said Matt Helgeson, a UCSB professor of chemical engineering and associate director at QuBrain.

Are We Quantum Computers?

Led by UCSB’s Matthew Fisher, an international collaboration of researchers will investigate the brain’s potential for quantum computation
Much has been made of quantum computing processes using ultracold atoms and ions, superconducting junctions and defects in diamonds, but could we be performing them in our own brains?

It’s a question UC Santa Barbara theoretical physicist Matthew Fisher has been asking for years. Now, as scientific director of the new Quantum Brain Project (QuBrain), he is seeking to put this inquiry through rigorous experimental tests.
“Might we, ourselves, be quantum computers, rather than just clever robots who are designing and building quantum computers?” Fisher asks.

Some functions the brain performs continue to elude neuroscience — the substrate that “holds” very long-term memories and how it operates, for example. Quantum mechanics, which deals with the behavior of nature at atomic and subatomic levels, may be able to unlock some clues. And that in turn could have major implications on many levels, from quantum computing and materials sciences to biology, mental health and even what it is to be human.

The idea of quantum computing in our brains is not a new one. In fact, it has been making the rounds for a while with some scientists, as well as those with less scientific leanings. But Fisher, a world-renowned expert in the field of quantum mechanics, has identified a precise — and unique — set of biological components and key mechanisms that could provide the basis for quantum processing in the brain. With $1.2 million in grant funding over three years from the Heising-Simons Foundation, Fisher will launch the QuBrain collaboration at UCSB. Composed of an international team of leading scientists spanning quantum physics, molecular biology, biochemistry, colloid science and behavioral neuroscience, the project will seek explicit experimental evidence to answer whether we might in fact be quantum computers.


And another signal of advancing knowledge of our own brains and minds. The 1 min video is worth the view.
MAPseq’s competitive edge in speed and cost for such investigations is considerable: According to Zador, the technique should be able to scale up to handle 100,000 neurons within a week or two for only $10,000 — far faster than traditional mapping would be, at a fraction of the cost.

New Brain Maps With Unmatched Detail May Change Neuroscience

A technique based on genetic bar codes can easily map the connections of individual brain cells in unprecedented numbers. Unexpected complexity in the visual system is only the first secret it has revealed.
Sitting at the desk in his lower-campus office at Cold Spring Harbor Laboratory, the neuroscientist Tony Zador turned his computer monitor toward me to show off a complicated matrix-style graph. Imagine something that looks like a spreadsheet but instead of numbers it’s filled with colors of varying hues and gradations. Casually, he said: “When I tell people I figured out the connectivity of tens of thousands of neurons and show them this, they just go ‘huh?’ But when I show this to people …” He clicked a button on screen and a transparent 3D model of the brain popped up, spinning on its axis, filled with nodes and lines too numerous to count.

What Zador showed me was a map of 50,000 neurons in the cerebral cortex of a mouse. It indicated where the cell bodies of every neuron sat and where they sent their long axon branches. A neural map of this size and detail has never been made before. Forgoing the traditional method of brain mapping that involves marking neurons with fluorescence, Zador had taken an unusual approach that drew on the long tradition of molecular biology research at Cold Spring Harbor, on Long Island. He used bits of genomic information to imbue a unique RNA sequence or “bar code” into each individual neuron. He then dissected the brain into cubes like a sheet cake and fed the pieces into a DNA sequencer. The result: a 3-D rendering of 50,000 neurons in the mouse cortex (with as many more to be added soon) mapped with single cell resolution.

This work, Zador’s magnum opus, is still being refined for publication. But in a paper recently published by Nature, he and his colleagues showed that the technique, called MAPseq (Multiplexed Analysis of Projections by Sequencing), can be used to find new cell types and projection patterns never before observed. The paper also demonstrated that this new high-throughput mapping method is strongly competitive in accuracy with the fluorescent technique, which is the current gold standard but works best with small numbers of neurons.
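
A highly simplified sketch of the bookkeeping behind barcode-based mapping: sequencing each dissected cube yields counts of the barcodes found there, and regrouping those counts per barcode gives a neuron-by-region projection matrix. The barcodes, regions and counts below are hypothetical, and the real pipeline's error correction, normalization and soma/axon disambiguation are ignored.

```python
from collections import defaultdict

def build_projection_matrix(cube_reads):
    """cube_reads maps {brain_region: {barcode: read_count}} from sequencing each
    dissected cube; regrouping by barcode yields {barcode: {region: count}},
    i.e. a neuron-by-region projection matrix."""
    projections = defaultdict(dict)
    for region, barcode_counts in cube_reads.items():
        for barcode, count in barcode_counts.items():
            projections[barcode][region] = count
    return dict(projections)

# Hypothetical sequencing output for three dissected cubes.
cube_reads = {
    "visual_cortex": {"ACGTTAGC": 120, "TTGACCGA": 3},
    "thalamus":      {"ACGTTAGC": 45},
    "striatum":      {"TTGACCGA": 88, "GGCATTCA": 12},
}

for barcode, regions in build_projection_matrix(cube_reads).items():
    print(barcode, "->", regions)
```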


This is a weak signal - but highly important. It relates to a couple of issues around crypto-currencies and blockchain-distributed-ledger technologies.
One key issue is the need to understand the distinction between the crypto-currency (e.g. Bitcoin) as a way to incentivize the maintenance of the distributed ledger versus the importance and potential of the ledger itself. The real potential of distributed ledger technologies and applications is best seen when people STOP associating them with currency.
For example - the other day someone noted to me that while Wikipedia was amazing - it was not economically successful! This surprised me - yes, Wikipedia has not made its founder a billionaire - but no other private approach to creating an encyclopaedia can compete - it has essentially disrupted all previous private efforts and is now the largest knowledge commons in the world (except for the Internet itself). The reason for its success is that it harnesses the intrinsic motivations of people - rather than relying on traditional extrinsic incentives of money.
Thus - the true potential of the blockchain will be realized when it emerges as the 21st-century institution of records.
The related issue is that the mechanism of incentivizing the maintenance of the blockchain through hard computational work (with the extrinsic reward of various currencies) creates an endless ‘arms race’ vulnerable to monopoly (i.e. centralization) capture.

Ethereum falls after rumors of a powerful mining chip surface

Rumors of a new ASIC mining rig from Bitmain have driven Ethereum prices well below their one-week high of $585. An ASIC – or application-specific integrated circuit – in the cryptocurrency world is a chip that designers create for the specific purpose of mining a single currency. Early Bitcoin ASICs, for example, drove adoption up and then, in some eyes, centralized Bitcoin mining in a few hands, thereby thwarting the decentralized ethos of die-hard cryptocurrency fans.

According to a CNBC report, analyst Christopher Rolland visited China where he unearthed rumors of a new ASIC chip dedicated to Ethereum mining.

Historically users have mined Ethereum using GPUs which, in turn, led to the unavailability of GPUs for gaming and graphics. However, an ASIC would change the mining equation entirely, resulting in a certain amount of centralization as big players – including Bitmain – create a higher barrier to entry for casual miners.

“Ethereum is one of the most profitable coins available for GPU mining,” said Mikhail Avady, founder of TryMining.com. “It’s going to affect a lot of the market. Without understanding the hash power of these Bitmain machines we can’t tell if it will make GPUs obsolete or not.”
“It can be seen as an attack on the network. It’s a centralization problem,” he said.


This is a thoughtful piece signalling concerns about the future of an independent Internet - in relation to freedom from being held hostage to business models of ‘surveillance capitalism’.

Facebook: Tear Down This Wall.

Instead of kicking data brokers off its platform, Facebook should empower its entire user base to be their own brokers of data.
Late last week Facebook announced it would eliminate all third-party data brokers from its platform. It framed this announcement as a response to the slow motion train wreck that is the Cambridge Analytica story. Just as it painted Cambridge as a “bad actor” for compromising its users’ data, Facebook has now vilified hundreds of companies who have provided it fuel for its core business model, a model that remains at the center of its current travails.

Why? Well, I hate to be cynical, but here’s my answer: Because Cambridge Analytica provided Facebook air cover to consolidate power over the open web. Put another way: Facebook is planning to profit from a scandal of their own making.
There’s a lot to this post, and I even considered writing it as a multi-part series. But instead, here’s a short guide to what is a relatively long post (for today’s Interwebs, anyway).
-  A primer on data brokers and their relationship to “third” and “first” party data.
-  Facebook’s role as the biggest data broker of them all
-  The core issue of trust, and
-  A straw man for a better model for Facebook and the web.
So, pour yourself a bourbon folks, and here we go…

Facebook Is The Biggest Data Broker In The World
As the largest data collection platform in the world, Facebook has more first party relationships than any other entity in human history. When you agree to use the service, you agree to the company’s terms of service, which cement a first party relationship, and give Facebook the right to use your data to drive its advertising business, among other things.
Facebook has just become the biggest data broker in the history of humanity. It just doesn’t want you to know that.


This is another important signal indicating an acceleration of knowledge and capability that is arising as we ‘cognify’ (add AI to) new processes of analysis.
“What we have seen here is that this kind of artificial intelligence can capture this expert knowledge,” says Pablo Carbonell, who designs synthesis-predicting tools at the University of Manchester, UK, and was not involved in the work. He describes the effort as “a landmark paper”.
The new AI tool, developed by Marwin Segler, an organic chemist and artificial-intelligence researcher at the University of Münster in Germany, and his colleagues, uses deep-learning neural networks to imbibe essentially all known single-step organic-chemistry reactions — about 12.4 million of them. This enables it to predict the chemical reactions that can be used in any single step. The tool repeatedly applies these neural networks in planning a multi-step synthesis, deconstructing the desired molecule until it ends up with the available starting reagents.

Need to make a molecule? Ask this AI for instructions

Artificial-intelligence tool that has digested nearly every reaction ever performed could transform chemistry.
Chemists have a new lab assistant: artificial intelligence. Researchers have developed a ‘deep learning’ computer program that produces blueprints for the sequences of reactions needed to create small organic molecules, such as drug compounds. The pathways that the tool suggests look just as good on paper as those devised by human chemists.

The tool, described in Nature on 28 March, is not the first software to wield artificial intelligence (AI) instead of human skill and intuition. Yet chemists hail the development as a milestone, saying that it could speed up the process of drug discovery and make organic chemistry more efficient.

Chemists have conventionally scoured lists of reactions recorded by others, and drawn on their own intuition to work out a step-by-step pathway to make a particular compound. They usually work backwards, starting with the molecule they want to create and then analysing which readily available reagents and sequences of reactions could be used to synthesize it — a process known as retrosynthesis, which can take hours or even days of planning.
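
In outline, the planning loop works backwards from the target, letting the trained networks propose plausible single-step disconnections until everything in hand is a purchasable reagent. The published system couples its networks to a Monte Carlo tree search; the sketch below uses plain recursion and a hypothetical hand-written rule table in place of the networks, purely to show the shape of the algorithm.

```python
# `propose_disconnections` stands in for the trained neural networks, which rank
# plausible single-step reactions for a target molecule; here it is a tiny
# hand-written rule table, and the search is plain depth-first recursion.

AVAILABLE_REAGENTS = {"benzene", "nitric_acid", "hydrogen", "acetic_anhydride"}  # hypothetical stock list

def propose_disconnections(molecule):
    """Placeholder for the model: return candidate sets of precursors."""
    toy_rules = {
        "acetanilide":  [{"aniline", "acetic_anhydride"}],
        "aniline":      [{"nitrobenzene", "hydrogen"}],
        "nitrobenzene": [{"benzene", "nitric_acid"}],
    }
    return toy_rules.get(molecule, [])

def plan_synthesis(target, depth=0, max_depth=6):
    """Work backwards from the target until everything is a purchasable reagent."""
    if target in AVAILABLE_REAGENTS:
        return []                              # nothing to make; it's in stock
    if depth >= max_depth:
        return None                            # give up on this branch
    for precursors in propose_disconnections(target):
        steps = []
        for p in sorted(precursors):
            sub_plan = plan_synthesis(p, depth + 1, max_depth)
            if sub_plan is None:
                break
            steps.extend(sub_plan)
        else:
            return steps + [f"{' + '.join(sorted(precursors))} -> {target}"]
    return None

for step in plan_synthesis("acetanilide"):
    print(step)
```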


One more step in the progress of domesticating DNA.

Crispr Enhanced to Find, Edit Tiny Mutations

A bioengineering lab at Harvard University designed a refinement for genome editing to identify and remove small genetic mutations that can lead to diseases or organisms resistant to current drugs. Researchers from the Wyss Institute, a biological engineering research center at Harvard, describe their process in the 19 March issue of Proceedings of the National Academy of Sciences (paid subscription required).

A team from the labs of geneticist George Church and systems biologist James Collins, on the faculty at MIT as well as Harvard, are seeking better methods for dealing with point mutations, variations in single nucleotide polymorphisms, or SNPs. These single changes in base pairs are the most common type of mutation and generally have little effect on organisms. But in some cases, as when inside genes or regions where genes are regulated, SNPs can play a larger role. One example is bacteria, where even minute variations in their DNA can make the microorganisms resistant to current antibiotics.

The researchers led by Alejandro Chavez, now on the faculty at Columbia University, devised their process as enhancements to the emerging genome editing technology Crispr, short for clustered regularly interspaced short palindromic repeats. Crispr is based on bacterial defense mechanisms that use RNA to identify and monitor precise locations in DNA. The actual editing of genomes with Crispr employs enzymes that cleave DNA strands at the desired points, with Crispr-associated protein 9, or Cas9, being the enzyme used most often.

The team’s Crispr-Cas9 technology is able to discriminate in genomic locations down to single SNPs, accomplished by more precisely engineering the RNA to guide Cas9 enzymes to highly specific locations. Without this capability, say the researchers, Crispr edits can result in gain-of-function mutations that can cause unwanted changes in cells or tissue. “By focusing instead on guide RNA features,” says Church in a Wyss Institute statement, “our approach dramatically enhances Cas9’s specificity up to a level where single nucleotide polymorphisms can be clearly distinguished and unwanted genetic variants erased.”
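
A toy illustration of the discrimination idea: a guide designed against the disease allele matches it perfectly but carries a single mismatch against the wild-type allele at the SNP position. This sketches only the sequence comparison, not the Wyss team's actual guide-design rules or the biochemistry of Cas9 binding and cleavage; the sequences are hypothetical.

```python
def mismatch_positions(guide, protospacer):
    """Return the 0-based positions where the guide and target sequence differ."""
    assert len(guide) == len(protospacer)
    return [i for i, (g, t) in enumerate(zip(guide, protospacer)) if g != t]

# Hypothetical 20-nt target sites differing by a single SNP (position 15).
wild_type_allele = "ACGTACGTACGTACGTACGT"
mutant_allele    = "ACGTACGTACGTACGAACGT"      # single-nucleotide variant

guide_rna = mutant_allele                      # guide designed against the mutant

print("vs mutant:   ", mismatch_positions(guide_rna, mutant_allele))     # [] -> targeted
print("vs wild type:", mismatch_positions(guide_rna, wild_type_allele))  # [15] -> spared
```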


This is a great signal of emerging brain-mind-computer interfaces enhancing human capacity.
“This is the first time scientists have been able to identify a patient’s own brain cell code or pattern for memory and, in essence, ‘write in’ that code to make existing memory work better, an important first step in potentially restoring memory loss,” said the study’s lead author Robert Hampson, Ph.D., professor of physiology/pharmacology and neurology at Wake Forest Baptist.

Prosthetic Memory System Successful in Humans, Study Finds

Scientists at Wake Forest Baptist Medical Center and the University of Southern California (USC) have demonstrated the successful implementation of a prosthetic system that uses a person’s own memory patterns to facilitate the brain’s ability to encode and recall memory.

In the pilot study, published in today’s Journal of Neural Engineering, participants’ short-term memory performance showed a 35 to 37 percent improvement over baseline measurements. The research was funded by the U.S. Defense Advanced Research Projects Agency (DARPA).

The study focused on improving episodic memory, which is the most common type of memory loss in people with Alzheimer’s disease, stroke and head injury. Episodic memory is information that is new and useful for a short period of time, such as where you parked your car on any given day. Reference memory is information that is held and used for a long time, such as what is learned in school.

“We showed that we could tap into a patient’s own memory content, reinforce it and feed it back to the patient,” Hampson said. “Even when a person’s memory is impaired, it is possible to identify the neural firing patterns that indicate correct memory formation and separate them from the patterns that are incorrect. We can then feed in the correct patterns to assist the patient’s brain in accurately forming new memories, not as a replacement for innate memory function, but as a boost to it.”


Here’s a prosthetic that may find many uses. The image is worth the look.
"The motivation for this was to build an IA device -- an intelligence-augmentation device," says Arnav Kapur, a graduate student at the MIT Media Lab, who led the development of the new system. "Our idea was: Could we have a computing platform that's more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?"

Computer system transcribes words users 'speak silently'

Electrodes on the face and jaw pick up otherwise undetectable neuromuscular signals triggered by internal verbalizations
Researchers have developed a computer interface that can transcribe words that the user verbalizes internally but does not actually speak aloud. Electrodes in the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations -- saying words 'in your head' -- but are undetectable to the human eye.


This is a lovely 3 min video - a small signal of emerging new art forms.

A neural network that keeps seeing art where we see mundane objects

When mundane objects such as cords, keys and cloths are fed into a live webcam, a machine-learning algorithm ‘sees’ brilliant colours and images such as seascapes and flowers instead. The London-based, Turkish-born visual artist Memo Akten applies algorithms to the webcam feed as a way to reflect on the technology and, by extension, on ourselves. Each instalment in his Learning to See series features a pre-trained deep-neural network ‘trying to make sense of what it sees, in context of what it’s seen before’. In Gloomy Sunday, the algorithm draws from tens of thousands of images scraped from the Google Arts Project, an extensive collection of super-high-resolution images of notable artworks. Set to the voice of the avant-garde singer Diamanda Galás, the resulting video has unexpected pathos, prompting reflection on how our minds construct images based on prior inputs, and not on precise recreations of the outside world.


Here is a strong signal of the future of art, science, learning and work. The real content of the digital environment is the interactive, immersive media. The Gifs and short video are worth the view.

CGI recreation of Damien Hirst painting takes viewers "inside" the artwork

This CGI animation, created by art studio Prudence Cuming Associates, takes viewers on a journey across the heavily textured surface of Damien Hirst's Veil of Faith painting.

London-based Prudence Cuming Associates created the movie to coincide with an exhibition of Hirst's latest series, The Veil Paintings, at the Gagosian Gallery in Los Angeles.

The company wanted to "imagine another dimension" of Hirst's work – one that gives the feeling of being inside the painting.

"Having worked with Science and Gagosian for many years, we have always been interested in what technology can do to get the viewer closer to the artwork," said Stuart Trood, CEO of HENI, the company that Prudence Cuming Associates forms part of.
