Thursday, February 23, 2017

Friday Thinking 24 Feb. 2017

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.)  that suggest we are in the midst of a change in the conditions of change - a phase-transition. That tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9



Imagine a world in which we didn’t exchange currency, but kept track of who had what on a huge public spreadsheet, distributed across the internet. Every 10 minutes, all the transactions that took place in that slice of time are fused together into a single block. Each block includes a chain linking it to previous blocks, hence the term ‘blockchain’. The end result is a universal record book that reliably logs everything that’s ever happened via a (theoretically) tamper-proof algorithm. We don’t need to trust human bankers to tell us who owns what, because we can all see what’s written in the mathematically verified blockchain.
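The linking mechanism described above can be made concrete with a small sketch. This is an illustrative toy (all names and amounts hypothetical), not Bitcoin’s actual data structures: each block commits to a hash of its predecessor, so altering any past entry invalidates every later link.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's contents (timestamp, transactions, link to predecessor)."""
    payload = {k: block[k] for k in ("timestamp", "transactions", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash):
    """Bundle a slice of transactions into a block linked to its predecessor."""
    block = {"timestamp": time.time(), "transactions": transactions, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain):
    """Recompute every link; tampering with any earlier block is detected."""
    return all(
        prev["hash"] == block_hash(prev) and curr["prev_hash"] == prev["hash"]
        for prev, curr in zip(chain, chain[1:])
    )

genesis = make_block([{"from": "alice", "to": "bob", "amount": 5}], prev_hash="0" * 64)
block2 = make_block([{"from": "bob", "to": "carol", "amount": 2}], genesis["hash"])
chain = [genesis, block2]
assert verify_chain(chain)

# Rewrite history: the recomputed hash no longer matches, so the chain fails.
genesis["transactions"][0]["amount"] = 500
assert not verify_chain(chain)
```

The "tamper-proof" property is just this: the record can be falsified only by recomputing every subsequent link, which the distributed network makes prohibitively expensive.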

But Bitcoin is just one version of the blockchain. The fundamental technology has the potential to replace a much wider range of human institutions in which we use trust to reach a consensus about a state of affairs. It could provide a definitive record for property transfers, from diamonds to Porsches to original Picassos. It could be used to record contracts, to certify the authenticity of valuable goods, or to securely store your health records (and keep track of anyone who’s ever accessed them).

But there’s a catch: what about the faithful ‘execution’ of a contract? Doesn’t that require trust as well? What good is an agreement, after all, if the text is there but people don’t respect it, and don’t follow through on their obligations?

The great cryptocurrency heist

According to Zurich’s Das Magazine, which profiled Kosinski in late 2016, “with a mere ten ‘likes’ as input his model could appraise a person’s character better than an average coworker. With seventy, it could ‘know’ a subject better than a friend; with 150 likes, better than their parents. With 300 likes, Kosinski’s machine could predict a subject’s behavior better than their partner. With even more likes it could exceed what a person thinks they know about themselves.”

It had taken Kosinski and his colleagues years to develop that model, but with his methods and findings now out in the world, there was little to stop SCL Elections from replicating them. It would seem they did just that.

Where traditional pollsters might ask a person outright how they plan to vote, Analytica relies not on what they say but on what they do, tracking their online movements and interests and serving up multivariate ads designed to change a person’s behavior by preying on individual personality traits. ...using 40-50,000 different ad variants every day, continuously measuring responses and then adapting and evolving based on that response

For Analytica, the feedback is instant and the response automated: Did this specific swing voter in Pennsylvania click on the ad attacking Clinton’s negligence over her email server? Yes? Serve her more content that emphasizes failures of personal responsibility. No? The automated script will try a different headline, perhaps one that plays on a different personality trait -- say the voter’s tendency to be agreeable toward authority figures. Perhaps: “Top Intelligence Officials Agree: Clinton’s Emails Jeopardized National Security.”
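The automated try-measure-adapt loop described here is, at its core, a multi-armed bandit. A toy sketch of the idea (the variant names and click rates are hypothetical, and this is not Analytica’s actual system):

```python
import random

# Hypothetical ad variants with simulated "true" click-through rates.
TRUE_RATES = {"fear_security": 0.09, "authority_appeal": 0.12, "fairness_angle": 0.05}

def epsilon_greedy(n_rounds=10_000, epsilon=0.1, seed=42):
    """Serve the best-performing variant most of the time, but keep
    exploring alternatives; estimates update after every impression."""
    rng = random.Random(seed)
    shown = {v: 0 for v in TRUE_RATES}
    clicks = {v: 0 for v in TRUE_RATES}
    for _ in range(n_rounds):
        if rng.random() < epsilon:   # explore: try a random variant
            variant = rng.choice(list(TRUE_RATES))
        else:                        # exploit: current best observed click rate
            variant = max(TRUE_RATES, key=lambda v: clicks[v] / shown[v] if shown[v] else 0.0)
        shown[variant] += 1
        clicks[variant] += rng.random() < TRUE_RATES[variant]  # simulated response
    return shown, clicks

shown, clicks = epsilon_greedy()
# Over time the loop concentrates impressions on whichever variant performs best.
```

The real system layers personality targeting on top, but the feedback skeleton - serve, measure, reweight - is the same.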

Not Just a Bubble, But Trapped in Your Own Ideological Matrix
Imagine that in 2020 you found out that your favorite politics page or group on Facebook didn’t actually have any other human members, but was filled with dozens or hundreds of bots that made you feel at home and made your opinions seem validated. Is it possible that you might never find out?

The Rise of the Weaponized AI Propaganda Machine

Alright, many people will say this can’t scale - but it can scale down - into self-organizing small teams. Worth the read and thought.
Yet Mr Kniberg stresses that not having to ask permission does not remove the need for staff to discuss issues or bounce ideas off each other.
Because they are all in charge, workers are more motivated, he argues. Crisp regularly measures staff satisfaction, and the average is about 4.1 out of five.

No CEO: The Swedish company where nobody is in charge

Do you really need someone to tell you what to do at work?
Three years ago, Swedish software consultancy Crisp decided that the answer was no.
The firm, which has about 40 staff, had already trialled various organisational structures, including the more common practice of having a single leader running the company.

Crisp then tried changing its chief executive annually, based on a staff vote, but eventually decided collectively that no boss was needed.
Yassal Sundman, a developer at the firm, explains: "We said, 'what if we had nobody as our next CEO - what would that look like?' And then we went through an exercise and listed down the things that the CEO does."

The staff decided that many of the chief executive's responsibilities overlapped with those of the board, while other roles could be shared among other employees.
"When we looked at it we had nothing left in the CEO column, and we said, 'all right, why don't we try it out?'" says Ms Sundman.

This is a must-read article for anyone interested in the future of political-economies, politics, and the power of AI for the ‘less than good’. Also this is a fascinating site to follow - articles created by Science Fiction writers. There’s a 12 min video as well.
Cambridge Analytica isn’t the only company that could pull this off -- but it is the most powerful right now. Understanding Cambridge Analytica and the bigger AI Propaganda Machine is essential for anyone who wants to understand modern political power, build a movement, or keep from being manipulated. The Weaponized AI Propaganda Machine it represents has become the new prerequisite for political success in a world of polarization, isolation, trolls, and dark posts.
We have entered a new political age. At Scout, we believe that the future of constructive, civic dialogue and free and open elections depends on our ability to understand and anticipate it.

The Rise of the Weaponized AI Propaganda Machine

There’s a new automated propaganda machine driving global politics. How it works and what it will mean for the future of democracy.
“This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go,” said professor Jonathan Albright.

Albright, an assistant professor and data scientist at Elon University, started digging into fake news sites after Donald Trump was elected president. Through extensive research and interviews with Albright and other key experts in the field, including Samuel Woolley, Head of Research at Oxford University’s Computational Propaganda Project, and Martin Moore, Director of the Centre for the Study of Media, Communication and Power at King’s College, it became clear to Scout that this phenomenon was about much more than just a few fake news stories. It was a piece of a much bigger and darker puzzle -- a Weaponized AI Propaganda Machine being used to manipulate our opinions and behavior to advance specific political agendas.

By leveraging automated emotional manipulation alongside swarms of bots, Facebook dark posts, A/B testing, and fake news networks, a company called Cambridge Analytica has activated an invisible machine that preys on the personalities of individual voters to create large shifts in public opinion. Many of these technologies have been used individually to some effect before, but together they make up a nearly impenetrable voter manipulation machine that is quickly becoming the new deciding factor in elections around the world.

Google has created many open source software projects, including Android. Here is another gift - not open source this time, but offered as a service. I wonder if this is possibly related to a blockchain capability - as a complement.
Spanner has provided Google with a competitive advantage in so many different markets. It underpins not only AdWords and Gmail but more than 2,000 other Google services, including Google Photos and the Google Play store. Google gained the ability to juggle online transactions at an unprecedented scale, and thanks to Spanner’s extreme form of data replication, it was able to keep its services up and running with unprecedented consistency.
Spanner could also be useful in the financial markets, allowing big banks to more efficiently track and synchronize trades happening across the planet.

Spanner, the Google Database That Mastered Time, Is Now Open to Everyone

About a decade ago, a handful of Google’s most talented engineers started building a system that seems to defy logic.

Called Spanner, it was the first global database, a way of storing information across millions of machines in dozens of data centers spanning multiple continents, and it now underpins everything from Gmail to AdWords, the company’s primary moneymaker. But it’s not just the size of this creation that boggles the mind. The real trick is that, even though Spanner stretches across the globe, it behaves as if it’s in one place.

Google can change company data in one part of this database—running an ad, say, or debiting an advertiser’s account—without contradicting changes made on the other side of the planet. What’s more, it can readily and reliably replicate data across multiple data centers in multiple parts of the world—and seamlessly retrieve these copies if any one data center goes down. For a truly global business like Google, such transcontinental consistency is enormously powerful.

Part of the trick is that they equipped Google’s data centers with a series of GPS receivers and atomic clocks. The GPS receivers, much like the one in your cell phone, grab the time from various satellites orbiting the globe, while the atomic clocks keep their own time. Then they shuttle their time readings to master servers in each data center. These masters constantly trade readings in an effort to settle on a common time.

A margin of error still exists, but thanks to so many readings, the masters can bootstrap a far more reliable timekeeping service. “This gives you faster-than-light coordination between two places,” says Peter Mattis, a former Google engineer who founded CockroachDB, a startup working to build an open source version of Spanner.

Google calls this timekeeping technology TrueTime, and only Google has it. Drawing on a celebrated research paper Google released in 2012, Mattis and CockroachDB have duplicated many other parts of Spanner—but not TrueTime. Google can pull this off only because of its massive global infrastructure.
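The interval-based timekeeping can be sketched as follows. This is a simplified illustration of the TT.now()/commit-wait idea from Google’s 2012 Spanner paper, with an assumed uncertainty bound, not the real TrueTime implementation:

```python
import time
from dataclasses import dataclass

EPSILON = 0.007  # assumed clock-uncertainty bound, in seconds

@dataclass
class TTInterval:
    earliest: float
    latest: float

def tt_now():
    """TrueTime never claims a single instant: it returns an interval
    guaranteed (via the GPS + atomic-clock masters) to contain true time."""
    t = time.time()
    return TTInterval(t - EPSILON, t + EPSILON)

def commit_wait(commit_ts):
    """Wait until commit_ts is definitely in the past on every clock,
    so timestamp order is guaranteed to match real-time order."""
    while tt_now().earliest <= commit_ts:
        time.sleep(EPSILON / 2)

interval = tt_now()
commit_ts = interval.latest   # choose a timestamp no earlier than true time
commit_wait(commit_ts)        # blocks for roughly 2 * EPSILON
assert tt_now().earliest > commit_ts
```

The smaller the uncertainty bound, the shorter the wait - which is why Google’s investment in atomic clocks translates directly into transaction throughput.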

This is a great article with lovely graphics describing the machine learning architecture in comparison with the regular CPU or the graphics processor (GPU). What is also interesting is the insight into new metaphors for describing computation as learning. We cannot grasp a truly unknown phenomenon without colonizing the experience with familiar metaphors - so when new metaphors arise we should pay attention to what types of thinking they enable. This new metaphor of computation - the Intelligent Processing Unit (IPU) as a graph processor - bears watching.
We have designed it to be extensible; the IPU will accelerate today’s deep learning applications, but the combination of Poplar and IPU provides access to the full richness of the computational graph abstraction for future innovation.

Inside an AI 'brain' - What does machine learning look like?

One aspect all recent machine learning frameworks have in common - TensorFlow, MXNet, Caffe, Theano, Torch and others - is that they use the concept of a computational graph as a powerful abstraction. A graph is simply the best way to describe the models you create in a machine learning system. These computational graphs are made up of vertices (think neurons) for the compute elements, connected by edges (think synapses), which describe the communication paths between vertices.

Unlike a scalar CPU or a vector GPU, the Graphcore Intelligent Processing Unit (IPU) is a graph processor. A computer that is designed to manipulate graphs is the ideal target for the computational graph models that are created by machine learning frameworks.
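The vertices-and-edges abstraction is easy to make concrete. Here is a hand-rolled sketch of a computational graph evaluating one artificial neuron (illustrative only, not Graphcore’s Poplar API):

```python
class Node:
    """A vertex in a computational graph: a compute element plus its input edges."""
    def __init__(self, op, *inputs):
        self.op = op          # the operation at this vertex (the "neuron")
        self.inputs = inputs  # incoming edges (the "synapses")

    def evaluate(self):
        # Values flow along the edges; each vertex computes from its inputs.
        return self.op(*(node.evaluate() for node in self.inputs))

def const(value):
    """A leaf vertex holding a constant."""
    return Node(lambda: value)

# Graph for a single neuron: relu(w1*x1 + w2*x2 + bias)
x1, x2 = const(2.0), const(-1.0)
w1, w2, bias = const(0.5), const(0.25), const(0.1)
mul = lambda a, b: a * b
add = lambda *terms: sum(terms)
relu = lambda v: max(0.0, v)

out = Node(relu, Node(add, Node(mul, w1, x1), Node(mul, w2, x2), bias))
print(out.evaluate())  # 0.85
```

Frameworks like TensorFlow build exactly this kind of structure, at vastly larger scale, before executing it; a graph processor targets that structure directly rather than flattening it into scalar or vector operations.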

We’ve found one of the easiest ways to describe this is to visualize it. Our software team has developed an amazing set of images of the computational graphs mapped to our IPU. These images are striking because they look so much like a human brain scan once the complexity of the connections is revealed – and they are incredibly beautiful too.

This is a must-read article for anyone interested in blockchain technology and the current state of Ethereum. The key lesson relates to the initial basic assumption that it is possible to have a ‘trustless’ system. The article provides a good analysis of Ethereum’s history of lessons in this regard. That said, the blockchain remains as promising as ever as a disruptive technology of distributed trust.
This is another signal - of the need for a new institution - a global Auditor General of Algorithms - to ensure a sort of ‘truth in algorithm’ - that they are doing what they say they are doing.

The great cryptocurrency heist

Blockchains don’t offer us a trustless system, but rather a reassignment of trust
It might make for good marketing copy, but the fact of the matter is that blockchain technology is larded through with trust. First, you need to trust the protocol of the cryptocurrency and/or DAO. This isn’t as simple as saying ‘I trust the maths’, for some actual human (or humans) wrote the code and hopefully debugged it, and we are at least trusting them to get it right, no? Well, in the case of The DAO, no, maybe they didn’t get it right.

Second, you have to trust the ‘stakeholders’ (including miners) not to pull the rug out from under you with a hard fork. One of the objections to the hard fork was that it would create a precedent that the code would be changeable. But this objection exposes an unmentioned universal truth: the immutability of the blockchain is entirely a matter of trusting other humans not to fork it. Ethereum Classic Classic would be no more immutable than Ethereum Classic, which was no more immutable than Ethereum. At best, the stakeholders – humans all – were showing that they were more trustworthy qua humans about not forking around with the blockchain. But at the same time, they obviously could change their minds about forking at any time. In other words, if Ethereum Classic is more trustworthy, it’s only because the humans behind it are.

Third, if you are buying into Ethereum or The DAO or any other DAO, you are being asked to trust the people who review the algorithm and tell you what it does and whether it’s secure. But those people – computer scientists, say – are hardly incorruptible. Just as you can bribe an accountant to say that the books are clean, so too can you bribe a computer scientist. Moreover, you’re putting your trust in whatever filters you applied to select that computer scientist. (University or professional qualifications? A network of friends? The testimonials of satisfied customers – which is to say, the same method by which people selected Bernie Madoff as their financial advisor.)

Finally, even if you had it on divine authority that the code of a DAO was bug-free and immutable, there are necessary gateways of trust at the boundaries of the system. For example, suppose you wrote a smart contract to place bets on sporting events. You still have to trust the news feed that tells you who won the match to determine the winner of the bet. Or suppose you wrote a smart contract under which you were to be delivered a truck full of orange juice concentrate. The smart contract can’t control whether or not the product is polluted by lemons or some other substance. You have to trust the humans in the logistics chain, and the humans at the manufacturing end, to ensure your juice arrives unadulterated.
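The sports-bet example can be sketched to show exactly where trust re-enters: the contract logic below is deterministic and ‘immutable’, yet the payout is entirely at the mercy of the oracle feed (names hypothetical; Python standing in for a contract language):

```python
def settle_bet(stake_a, stake_b, pick_a, pick_b, oracle):
    """Deterministic 'smart contract' logic: fixed code, but the outcome
    hinges on whatever the external results feed reports."""
    winner = oracle()                      # the trust boundary of the system
    pot = stake_a + stake_b
    if winner == pick_a:
        return {"a": pot, "b": 0}
    if winner == pick_b:
        return {"a": 0, "b": pot}
    return {"a": stake_a, "b": stake_b}    # refund if no recognized result

honest_feed = lambda: "team_red"
corrupt_feed = lambda: "team_blue"

# Identical immutable code, opposite payouts -- the feed decides.
print(settle_bet(10, 10, "team_red", "team_blue", honest_feed))   # {'a': 20, 'b': 0}
print(settle_bet(10, 10, "team_red", "team_blue", corrupt_feed))  # {'a': 0, 'b': 20}
```

No amount of code auditing helps here: the vulnerability sits outside the code, in the humans and sensors that report the state of the world.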
For anyone looking for a 2 min video explanation of the blockchain, here it is.

Understand the Blockchain in Two Minutes

Over the past decade, an alternative digital paradigm has slowly been taking shape at the edges of the internet.

This new paradigm is the blockchain. After incubating through millions of Bitcoin transactions and a host of developer projects, it is now on the tips of tongues of CEOs and CTOs, startup entrepreneurs, and even governance activists. Though these stakeholders are beginning to understand the disruptive potential of blockchain technology and are experimenting with its most promising applications, few have asked a more fundamental question: What will a world driven by blockchains look like a decade from now?

There is a lot that has been written about video games and their impact on players. For many, however, video games are also works of art - like photography or films - and perhaps even more capable than any previous art medium.

Video Games Do Guilt Better Than Any Other Art

The idea that motion pictures can be works of art has been around since the 1920s, and it hasn’t really been disputed since. It’s easy to see why—cinema shares characteristics with theater in terms of acting, direction, music, set design, narrative, and so on. Now we have whole academic departments dedicated to film appreciation, to understanding the emotional and intellectual responses—deep feelings of awe and reverence, among others—that movies can elicit.

But video games aren’t assumed to be as artistic as cinema or theater, if at all. In 2010, for instance, the late film critic Roger Ebert wrote an essay titled, “Video Games Can Never Be Art.” But with the increasing sophistication, and variety, of video games today, it’s becoming more and more clear that they are forms of art; or, at least, they evoke many of the same intellectual and emotional responses that artworks do. What’s more, creating large-scale titles is like creating big-budget films or operas, since they require huge teams of people. An enormous amount of the cost of a big-budget video game is paid to people the industry classifies as “artists.” (When their jobs have such titles as set and lighting design, music composition and performance, acting, animating, and painting, what else should we call them?)

There have been many arguments against views like Ebert’s, and I won’t rehash them here. But perhaps it’s not enough to say, as the philosopher Aaron Smuts does, that video games are on equal artistic footing with any other so-called art. It might be that video games can actually do more as art than other forms.
I’m talking about guilt.

This is a fascinating and very important article - one of a number of recent articles discussing livelihood on social media platforms. Reading this article a little beneath the surface - beyond the ‘scandal’ (in the world of humor and satire we must tread with care, between vigilance against hate speech and the role of art in making us reflect seriously) - and extrapolating to other social media platforms highlights the need for a new institution: one addressing ‘creator rights’ that would constrain social media platforms or make them more accountable to democratic principles. Livelihood in the emerging digital environment needs to be included in a digital charter of citizen rights.

YouTube’s Monster: PewDiePie and His Populist Revolt

Felix Kjellberg, known to his fans as PewDiePie, is by far YouTube’s biggest star. His videos, a mix of video-game narration, humorous rants and commentary, have cumulatively been viewed billions of times, and more than 53 million people subscribe to his channel. He has been called “the king of YouTube” and countless variations thereon, and he has remained unchallenged on that perch for years, making millions of dollars and leveraging his popularity into outside ventures.

But Monday night, The Wall Street Journal reported that the Disney-owned Maker Studios, a longtime partner of Kjellberg’s, would no longer have anything to do with him; later, YouTube announced that it was canceling a show developed with Kjellberg, and removing his channel from its lucrative “Google Preferred” advertising program. At issue was a series of recent comedy videos….

With more than a billion users, YouTube has become not merely a platform but almost a kind of internet nation-state: the host of a gigantic economy and a set of cultures governed by a new and novel sort of corporation, sometimes at arm’s length and other times up close. It’s a system Kjellberg has spent recent months antagonizing in a broader and less-inflammatory way, even as he continued to thrive within it. He bemoaned its structure and the way it had changed; he balked at its limits and took joy in causing offense and flouting rules. Over time, he grew into an unlikely, disorienting and insistently unserious political identity: He became YouTube’s very own populist reactionary.

In a similar stream of thinking.
“Nation-state hacking has evolved into attacks on civilians in times of peace,” said Smith at the RSA Conference in San Francisco, echoing the language of the Geneva Convention. “We need to call on the world’s governments to come together [as] they came together in 1949 in Switzerland.” Smith, who is also Microsoft's chief legal officer, has recently lobbied for legal reforms to update privacy and security protections for the Internet era

Do We Need a Digital Geneva Convention?

Microsoft calls for an international treaty to prevent companies and citizens from getting tangled up in nation-state cyberattacks.
The Geneva Convention, signed by war-weary nations in August 1949, now binds 196 countries to protect civilians in war zones. Microsoft’s president, Brad Smith, argues that the U.S. and other countries now need to draw up a digital equivalent to protect civilians and companies caught in the crossfire of constant cyberwar.

In recent years, computing and security companies have uncovered or been the victims of malware and network attacks that appear linked with military or intelligence agencies. Smith told an audience at the world’s largest security conference Tuesday that international diplomacy is needed to mitigate the negative effects on private companies and citizens.

Fascinating that games can induce guilt, yet employers readily continue to deploy surveillance systems to control their workers. This is not quite the same as sociometric badges used with employee consent.

Your Cubicle Has Ears—and Eyes, and a Brain

Sensors and AI can keep tabs on employees better than any boss.
Employers have long wanted to know how their workers spend their time. New office surveillance technology is now making the task far easier.

Bloomberg reports that an increasing number of companies are outfitting offices with sensors to keep track of employees. These sensors are hidden in lights, on walls, under desks—anywhere that allows them to measure things like where people are and how much they are talking or moving.

The raw data is just the beginning. New Scientist recently reported that a startup called StatusToday uses software to crunch information on everything from key card swipes to what applications people are using on their computers to understand how employees—and the business as a whole—operate.

Advocates suggest that insights from these kinds of initiatives can streamline companies and spot potential problems before they happen. Perhaps only two-thirds of desks get used at any moment, so the company can downsize the amount of office space they lease. Or maybe an employee looks at a lot of sensitive data and schedules a large number of external meetings, so the system flags them as a potential security risk. These are, after all, the problems that keep senior management awake at night.
Of course, such schemes can also be read as creepy, Big Brother-style surveillance.

And our surveillance of our own systems is becoming ever more inadequate - a metaphor, perhaps, for detecting any form of ‘activism’ or nefarious intent.
The simplest advice for online safety comes via cybersecurity journalist Brian Krebs: First, if you didn’t go looking for it, don’t install it. Second, if you installed it, update it. Third, if you no longer need it, get rid of it! Mostly, use common sense: You wouldn’t eat a piece of candy off the ground. Yet in 2008, a U.S. soldier sparked one of the largest data breaches in military history by using a USB stick he found in the parking lot outside his base.

You Can’t Depend on Antivirus Software Anymore

Malware has become too sophisticated.
In 2005, Panda Software reported that a new strain of malware was discovered every 12 minutes. In 2016, the cybersecurity company McAfee says it found four every second.

And those were just the strains the companies could detect. For malware—the umbrella term for parasitic software like viruses, worms, and Trojans that infiltrate and interfere with computer functions—hasn’t only proliferated: It’s evolved to better evade detection.

Faced with this tsunami of sophisticated malware, antivirus software like McAfee, once practically synonymous with personal cybersecurity, has struggled to keep pace. In 2014, a senior vice president at Symantec (the company that created McAfee competitor Norton Antivirus) went so far as to publicly say he thought that antivirus software was “dead.” At the time, he estimated that the technology only caught about 45 percent of cyberattacks.

The tipping point has been passed - the question is whether the US will keep up the pace of progress despite efforts by incumbents.
“What these numbers tell you is that the solar industry is a force to be reckoned with,” said Abigail Ross Hopper, SEIA’s president and CEO. “Solar's economically winning hand is generating strong growth across all market segments nationwide, leading to more than 260,000 Americans now employed in solar.”

US Solar Market Grows 95% in 2016, Smashes Records

In its biggest year to date, the United States solar market nearly doubled its annual record, topping out at 14,626 megawatts of solar PV installed in 2016.

This represents a 95 percent increase over the previous record of 7,493 megawatts installed in 2015. GTM Research and the Solar Energy Industries Association (SEIA) previewed this data in advance of their upcoming U.S. Solar Market Insight report, set to be released on March 9.

For the first time ever, U.S. solar ranked as the No. 1 source of new electric generating capacity additions on an annual basis. In total, solar accounted for 39 percent of new capacity additions across all fuel types in 2016.
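The headline growth figure follows directly from the installation numbers quoted above:

```python
installed_2016 = 14_626  # MW of solar PV installed in 2016
installed_2015 = 7_493   # MW, the previous annual record (2015)

growth = (installed_2016 - installed_2015) / installed_2015
print(f"{growth:.0%}")  # 95%
```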

This may be a game changer for the hydrogen economy - a way to supply fuel cells for homes, buildings and cars with hydrogen at the point of use. Another forward move in a new energy geopolitics.

Four-stroke engine cycle produces hydrogen from methane and captures CO2

When is an internal combustion engine not an internal combustion engine? When it's been transformed into a modular reforming reactor that could make hydrogen available to power fuel cells wherever there's a natural gas supply available.

By adding a catalyst, a hydrogen separating membrane and carbon dioxide sorbent to the century-old four-stroke engine cycle, researchers have demonstrated a laboratory-scale hydrogen reforming system that produces the green fuel at relatively low temperature in a process that can be scaled up or down to meet specific needs. The process could provide hydrogen at the point of use for residential fuel cells or neighborhood power plants, electricity and power production in natural-gas powered vehicles, fueling of municipal buses or other hydrogen-based vehicles, and supplementing intermittent renewable energy sources such as photovoltaics.

Known as the CO2/H2 Active Membrane Piston (CHAMP) reactor, the device operates at temperatures much lower than conventional steam reforming processes, consumes substantially less water and could also operate on other fuels such as methanol or bio-derived feedstock. It also captures and concentrates carbon dioxide emissions, a by-product that now lacks a secondary use—though that could change in the future.

This is a short article discussing some further progress in understanding autoimmune conditions, genes and microbial profiles. Worth the read.

The very microbes that helped us evolve now make us sick

Between the mid-1990s and the mid-2000s alone, the likelihood of having a classmate with a food allergy increased by 20 per cent in the United States. In fact, over the past five decades, the incidence of all allergies and autoimmune diseases – caused by your body attacking itself – has skyrocketed. What could explain our sudden hypersensitivity to our surroundings and ourselves? Since evolution operates on the timescale of millennia, the culprits lie not in our genes but somewhere within our environment.

One thing that has changed in public health is our awareness of germs and how they spread. In response to that insight, over the past half-century our implementation of hygiene practices has spared us from debilitating infections and enormous human misery. But the new vigilance might have altered the development of our immune system, the collection of organs that fight infections and internal threats to our health.

So what has changed? In short, it’s the standard for what constitutes a good microbe versus a bad one. ‘Take bacterial species that increase nutrient absorption from food,’ Medzhitov says. These were immensely beneficial at a time where you had to go days without eating. Today in the parts of the world with an overabundance of food, having such bacteria in your intestine contributes to obesity. ‘Microbes that cause intestinal inflammation are another example of what we call bad microbes because they induce [detrimental immune] responses. But in the past, these microbes could have protected you from intestinal pathogens,’ he adds.    

Here’s some great news about Tuberculosis.

Two new drug therapies might cure every form of tuberculosis

Tuberculosis, the world’s leading infectious killer, may have finally met its match. Two new drug therapies may be able to cure all forms of tuberculosis – even the ones most difficult to treat.
“We will have something to offer every single patient,” says Mel Spigelman, president of the TB Alliance, the organisation coordinating trials of the two treatments. “We are on the brink of turning TB around.”

It presently takes six months of drug treatment to cure ordinary TB, and two years to cure people whose infections are resistant to drugs. People may need to take up to 20 tablets a day, plus injections.
Together, the new treatments, called BPaMZ and BPaL, could make treating TB much simpler and more effective.

BPaMZ involves taking four drugs once a day. Trials carried out in 240 people across 10 countries in Africa suggest that it cures almost all cases of ordinary TB in four months, and most people with drug-resistant TB in about six months. In the majority of cases, the TB bacterium had disappeared from sputum within two months.
“The alliance has never before seen such rapid action against TB bacteria,” says Spigelman.

Meanwhile, BPaL, a therapy that involves taking three drugs once a day, has so far cured 40 of 69 patients with “extremely-drug-resistant TB” – the most difficult form to treat. What’s more, it achieved this within six months. The 29 remaining participants in this trial are still to be assessed.
The TB Alliance says that BPaMZ has the potential to treat 99 per cent of people who catch TB each year, while BPaL could treat the remainder.

This is still in the ‘promise’ stage but given the speed of our domestication of DNA - could we see Mammoths by 2020?
“Our aim is to produce a hybrid elephant-mammoth embryo,” said Prof George Church. “Actually, it would be more like an elephant with a number of mammoth traits. We’re not there yet, but it could happen in a couple of years.”

Woolly mammoth on verge of resurrection, scientists reveal

Scientist leading ‘de-extinction’ effort says Harvard team could create hybrid mammoth-elephant embryo in two years
The woolly mammoth vanished from the Earth 4,000 years ago, but now scientists say they are on the brink of resurrecting the ancient beast in a revised form, through an ambitious feat of genetic engineering.

Speaking ahead of the American Association for the Advancement of Science (AAAS) annual meeting in Boston this week, the scientist leading the “de-extinction” effort said the Harvard team is just two years away from creating a hybrid embryo, in which mammoth traits would be programmed into an Asian elephant.

While we all accept the complexity of natural systems that is instrumental in enabling living systems to emerge and evolve, it seems that every year that complexity becomes … well, more complex.

Astonishing geomagnetic spike hit the ancient kingdom of Judah

If this were to happen again today, the electrical grid could be a smoking ruin.
Earth's geomagnetic field wraps the planet in a protective layer of energy, shielding us from solar winds and high-energy particles from space. But it's also poorly understood, subject to weird reversals, polar wandering, and rapidly changing intensities. Now a chance discovery from an archaeological dig near Jerusalem has given scientists a glimpse of how intense the magnetic field can get—and the news isn't good for a world that depends on electrical grids and high-tech devices.

In a recent paper for Proceedings of the National Academy of Sciences, an interdisciplinary group of archaeologists and geoscientists reported their discovery. They wanted to analyze how the planet's geomagnetic field changes during relatively short periods, and they turned to archaeology for a simple reason. Ancient peoples worked a lot with ceramics, which means heating clay to the point where the iron oxide particles in the dirt can float freely, aligning themselves with the Earth's current magnetic field.

What they found was startling. Sometime late in the 8th century BCE, there was a rapid fluctuation in the field’s intensity over a period of about 30 years—first the intensity increased to over 20 percent above baseline, then plunged to 27 percent below baseline. Though the overall trend at that time was a gradual decline in the field’s intensity similar to what we see today, this spike was basically off the charts.