Thursday, January 19, 2017

Friday Thinking 20 Jan. 2017

Hello all – Friday Thinking is a humble curation of my foraging in the digital environment. My purpose is to pick interesting pieces, based on my own curiosity (and the curiosity of the many interesting people I follow), about developments in some key domains (work, organization, social-economy, intelligence, domestication of DNA, energy, etc.) that suggest we are in the midst of a change in the conditions of change - a phase-transition - such that tomorrow will be radically unlike yesterday.

Many thanks to those who enjoy this.
In the 21st Century curiosity will SKILL the cat.
Jobs are dying - work is just beginning.

“Be careful what you ‘insta-google-tweet-face’”
Woody Harrelson - Triple 9

Content
Quotes:

Articles:
A 1958 TV Show Had an Unsavory Character Named “Trump” Who Promised to Build a Wall & Save the World


The way we want to make sense of the world around us often has to do with causality. The question we ask is what caused “something” to happen. There is a variable, the “it,” that happened, that is now to be explained. In scientific study this variable is regarded as dependent. An independent variable, or variables, that cause it are then sought. This is also the if-then model of management. In organizations, a familiar explanation for success is that a particular manager or a particular culture caused it. But scholars are increasingly pointing out the fact that this view of the relationship between cause and effect is much too simplistic and leads to a limited or even faulty understanding of what was really going on.

What emerges is, paradoxically, predictable and unpredictable, knowable and unknowable at the same time. This does not mean dismissing planning, or management, as pointless, but means that the future always contains surprises that no one can control.

Each of us is forming plans and making decisions about our next steps all the time. “What each of us does affects others and what they do affects each of us.”

An indefinite number of variables influence what is going on. Almost daily, we experience the inability of leaders to choose what happens to their organizations — or to their countries. The links between cause and effect are lost because the tiniest overlooked, or unknown, variable can escalate into a major force. And afterwards you typically can’t trace it back. There is no trail that leads you to an independent variable.

The future of a complex system is emerging through perpetual creation. Complexity is a movement in time that is both knowable and unknowable. Uncertainty is a basic feature. Although the specific paths are unpredictable, there is a pattern. The pattern is never exactly the same, but there is always some similarity to what has happened earlier.

In the end it is about the combination and interaction of the elements that are present and how all of them participate in co-creating what is happening. The big new idea is to reconfigure agency in a way that brings these relationships into the center. The task today is to see action within these connections and interdependencies. We need to move towards temporality, to understand what is happening in time. An organization is not a whole consisting of parts. An organization, or a country, is a continuously developing or stagnating pattern in time.

The individual mind arises continuously in communication between people. The focus should now be on cooperation and emergent interaction based on interdependence and responsiveness to what is actually happening. It means looking at communication, not through it, to see what we are creating together — and what we could create together. Ilya Prigogine wrote in his book “The End of Certainty” that the future is not given, but under perpetual construction:

“Life is about unpredictable novelty where the possible is always richer than the real.”

Esko Kilpi - The Essential Skill of Pattern Recognition



Because biographies of famous scientists tend to edit out their mistakes, we underestimate the degree of risk they were willing to take. And because anything a famous scientist did that wasn't a mistake has probably now become the conventional wisdom, those choices don't seem risky either.

Biographies of Newton, for example, understandably focus more on physics than alchemy or theology. The impression we get is that his unerring judgment led him straight to truths no one else had noticed. How to explain all the time he spent on alchemy and theology? Well, smart people are often kind of crazy.

But maybe there is a simpler explanation. Maybe the smartness and the craziness were not as separate as we think. Physics seems to us a promising thing to work on, and alchemy and theology obvious wastes of time. But that's because we know how things turned out. In Newton's day the three problems seemed roughly equally promising. No one knew yet what the payoff would be for inventing what we now call physics; if they had, more people would have been working on it. And alchemy and theology were still then in the category Marc Andreessen would describe as "huge, if true."

Newton made three bets. One of them worked. But they were all risky.

Paul Graham - The Risk of Discovery



Pushing people into ever-higher levels of formal education at the start of their lives is not the way to cope. Just 16% of Americans think that a four-year college degree prepares students very well for a good job. Although a vocational education promises that vital first hire, those with specialised training tend to withdraw from the labour force earlier than those with general education—perhaps because they are less adaptable.

At the same time on-the-job training is shrinking. In America and Britain it has fallen by roughly half in the past two decades. Self-employment is spreading, leaving more people to take responsibility for their own skills. Taking time out later in life to pursue a formal qualification is an option, but it costs money and most colleges are geared towards youngsters.

The market is innovating to enable workers to learn and earn in new ways. Providers from General Assembly to Pluralsight are building businesses on the promise of boosting and rebooting careers. Massive open online courses (MOOCs) have veered away from lectures on Plato or black holes in favour of courses that make their students more employable. At Udacity and Coursera self-improvers pay for cheap, short programmes that bestow “microcredentials” and “nanodegrees” in, say, self-driving cars or the Android operating system. By offering degrees online, universities are making it easier for professionals to burnish their skills. A single master’s programme from Georgia Tech could expand the annual output of computer-science master’s degrees in America by close to 10%.

Equipping people to stay ahead of technological change




This is a MUST READ for anyone concerned with the future of the Internet and the digital environment - concerned that it serve all of us as a 21st century platform for democratic governance and an open society.

Tim Wu: ‘The internet is like the classic story of the party that went sour’

The influential tech thinker has charted the history of the attention industry: enterprises that harvest our attention to sell to advertisers. The internet, he argues, is the latest communications tool to have fallen under its spell
In 2010, for example, he published The Master Switch: The Rise and Fall of Information Empires, a sobering history of the great communications technologies of the 20th century – the telephone, movies, broadcast radio and television. In telling the history, Wu perceived a recurring cycle in the evolution of these technologies. Each started out as open, chaotic, diverse and intensely creative; each stimulated utopian visions of the future, but in the end they all wound up “captured” by industrial interests.

The cue for his new book, The Attention Merchants, is an observation the Nobel prize-winning economist Herbert Simon made in 1971. “In an information-rich world,” Simon wrote, “the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.”

Wu’s book is a history of the attention industry, that is to say the enterprises that harvest human attention and sell it to advertisers. This is not quite the same thing as a history of advertising, which is a more ancient business. His story begins therefore not with the paid advertisements that featured in early newspapers and printed pamphlets, but with two separate developments.

This is a long analysis of the business creed “Maximize Shareholder Value” (MSV), but it is well worth the read - clearly and engagingly written. For anyone who doesn’t quite understand the profoundly negative impact this creed has had on our economy, and who wants to understand another contributor to the massive economic inequity we see today - this is worth your time.

Resisting The Lure Of Short-Termism: Kill 'The World's Dumbest Idea'

There is only one valid definition of a business purpose: to create a customer.
– Peter Drucker, The Practice of Management (1954)
When pressures are mounting to deliver short-term results, how do successful CEOs resist those pressures and achieve long-term growth? The issue is pressing: low global economic growth is putting stress on the political and social fabric in Europe and the Americas and populist leaders are mobilizing widespread unrest. “By succumbing to false solutions, born of disillusion and rage,” writes Martin Wolf in the Financial Times this week, “the west might even destroy the intellectual and institutional pillars on which the postwar global economic and political order has rested.”

The first step in resisting the pressures of short-termism is to correctly identify their source. The root cause is remarkably simple—the view, which is widely held both inside and outside the firm, that the very purpose of a corporation is to maximize shareholder value as reflected in the current stock price (MSV). This notion, which even Jack Welch has called “the dumbest idea in the world,” got going in the 1980s, particularly in the U.S., and is now regarded in much of the business world, the stock market and government as an almost-immutable truth of the universe. As The Economist even declared in March 2016, MSV is now “the biggest idea in business.”


This is an excellent article providing a weak signal of the types of re-imagined communities that the digital environment, the unprecedented change in age demographics, the domestication of DNA and so much more could make possible.

Students living in nursing homes - a solution to our ageing populations?

In today’s society both young and old increasingly find themselves living in a bubble of like-minded and similar-aged peers. This is especially true of university students who leave home at 18 to live with people of the same age – who have quite often had similar life experiences.

Given this, the programme reported here - a Dutch nursing home offering free rent to university students in exchange for 30 hours a month of their time “acting as neighbours” to its aged residents - is unusual.

The programme has seen students in their early twenties sharing lives with residents in their eighties and nineties. As part of their volunteer agreement, the students also spend time teaching residents new skills – like how to email, use social media, Skype, and even graffiti art.

And though the impact on students seems yet to be formally researched, from my own experience of running a similar project at the University of Exeter, I know that it is overwhelmingly positive – giving young people a sense of connection with older generations, and significantly increasing the likelihood that they will continue to volunteer after university.

Since 2011, student volunteers from the university’s Department of English and Film have donated their time to bring conversation, literature, and friendship to the residents of over ten residential care homes across the city. And since the project’s inception, it is estimated that around 250 active volunteers have reached over 500 elderly residents – at least half of whom have dementia.


This is a significant breakthrough in materials.

MIT Invented The Material We'll Need To Build In Space

It's ten times stronger than steel but is only 5% as dense, and it could revolutionize architecture on Earth, too.
The space elevator—a theoretical mode of transportation where transport modules move up and down a long cable that connects Earth to space—has long been the stuff of futuristic fantasy. It's shown up in books, movies, and scientific journals, while researchers have tried to uncover a material strong enough and light enough to make such a structure possible. Now, a team of MIT scientists has designed one of the strongest lightweight materials in existence, taking us one step closer to realizing that sci-fi dream—and creating a formula for a material that could revolutionize architecture and infrastructure right here on Earth, too.

The material is composed of graphene, a two-dimensional form of carbon that's considered to be the strongest of all known materials. But because the 2D form of graphene is so thin—it's only one atom thick—it's impractical for building purposes. The team's breakthrough is in creating a 3D geometry out of graphene using a combination of heat and pressure. As detailed in a paper published today in the journal Science Advances, they developed computational models of the form and then recreated it with graphene. The kicker? During testing, they found that the samples of the porous material were ten times stronger than steel, even though they were only 5% as dense.
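To see why those two numbers matter together: for something like a space-elevator tether, the figure of merit is strength-to-weight, not raw strength. A minimal sketch of the arithmetic using only the article's two figures (my own calculation, not one from the paper):

```python
# My arithmetic on the article's two figures, not a number from the paper:
# 10x steel's strength at 5% of steel's density implies a
# strength-to-weight ratio of 10 / 0.05 = 200x that of steel.
strength_vs_steel = 10.0  # reported strength relative to steel
density_vs_steel = 0.05   # reported density relative to steel

specific_strength = strength_vs_steel / density_vs_steel
print(f"strength-to-weight vs. steel: {specific_strength:.0f}x")  # 200x
```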


Another breakthrough that may herald new forms of flexible integrated circuits.

New Breakthrough Can Lower Cost of Printed Electronics

While knowing that nanowires are the most conductive shape was valuable (if unsurprising) information to the researchers, most important was finding out the extent of their conductivity. “The nanowires had a 4,000 times higher conductivity than the more commonly used silver nanoparticles that you would find in printed antennas for RFID tags,” said Benjamin Wiley, assistant professor of chemistry at Duke.

In fact, the electrons flowed with such ease through the nanowire films that they could be functional without the particles being melted together. “If you use nanowires, then you don’t have to heat the printed circuits up to such high temperature and you can use cheaper plastics or paper,” said Wiley.

With heat no longer a factor limiting production, these circuits could be printed on cheaper materials, thus opening up the technology to additional applications. According to the researchers, these applications could include solar cells, touch-screens, batteries, LEDs, and even bio-electric devices.


We haven’t yet reached the $500 human genome sequence cost - but we are getting ever closer.

The Cost of Sequencing a Human Genome

Advances in the field of genomics over the past quarter-century have led to substantial reductions in the cost of genome sequencing. The underlying costs associated with different methods and strategies for sequencing genomes are of great interest because they influence the scope and scale of almost all genomics research projects. As a result, significant scrutiny and attention have been given to genome-sequencing costs and how they are calculated since the beginning of the field of genomics in the late 1980s. For example, NHGRI has carefully tracked costs at its funded 'genome sequencing centers' for many years (see Figure 1). With the growing scale of human genetics studies and the increasing number of clinical applications for genome sequencing, even greater attention is being paid to understanding the underlying costs of generating a human genome sequence.

Based on the data collected from NHGRI-funded genome-sequencing groups, the cost to generate a high-quality 'draft' whole human genome sequence in mid-2015 was just above $4,000; by late in 2015, that figure had fallen below $1,500. The cost to generate a whole-exome sequence was generally below $1,000. Commercial prices for whole-genome and whole-exome sequences have often (but not always) been slightly below these numbers.

Innovation in genome-sequencing technologies and strategies does not appear to be slowing. As a result, one can readily expect continued reductions in the cost for human genome sequencing. The key factors to consider when assessing the 'value' associated with an estimated cost for generating a human genome sequence - in particular, the amount of the genome (whole versus exome), quality, and associated data analysis (if any) - will likely remain largely the same. With new DNA-sequencing platforms anticipated in the coming years, the nature of the generated sequence data and the associated costs will likely continue to be dynamic. As such, continued attention will need to be paid to the way in which the costs associated with genome sequencing are calculated.


This is exciting for all of us who are tired of visits to the various dental experts we have to see - well, at least to dentists concerned with tooth decay and damage.
“The simplicity of our approach,” says Sharpe in a university statement, “makes it ideal as a clinical dental product for the natural treatment of large cavities, by providing both pulp protection and restoring dentin.” He adds that “using a drug that has already been tested in clinical trials for Alzheimer’s disease provides a real opportunity to get this dental treatment quickly into clinics.”

Neurology Drug Signals Shown to Help Tooth Repair

Chemical signals from a drug designed to treat Alzheimer’s disease and other neurological disorders were shown in tests with mice to stimulate stem cells that repair teeth. A team from King’s College London, led by Dental Institute professor Paul Sharpe, describes its findings in the 9 January issue of the journal Scientific Reports.

Sharpe and his lab colleagues study regenerative medicine, particularly the role of stem cells in growing new dental tissue for repairing teeth. In this study, the researchers are seeking improvements to current methods for fixing damage to teeth caused by cavities - methods that rely largely on fillings and cements made with silicon- or calcium-based materials. Repairing further decay in the cavities, for example, requires removing original fillings and surrounding tooth material, often exposing the inner tooth pulp and increasing the risk of infection.

The researchers tested their hypothesis in a proof-of-concept study on mice induced with tooth damage. The team infused low doses of tideglusib and two other GSK-3 inhibitors into commercial biodegradable medical sponge material placed inside the damaged teeth. The team found that after 4 weeks the treatments had restored as much as twice the amount of dentin as in comparable mouse teeth given untreated sponges or conventional mineral aggregates. After 6 weeks, the researchers found the sponges used to deliver the treatments had completely degraded and dentin had filled the damage sites.


Changing agricultural practices and approaches matters for many reasons - adaptation to climate change, and more nature-friendly ways to protect and improve what we need to grow.

Spray-On RNA Protects Plants from Viruses for Weeks

The technique could be faster and more versatile than developing genetically modified crops from scratch.
Scientists have demonstrated that they can use a crop spray to silence genes in plants, rendering the plants resistant to a virus for several weeks.

A team at the University of Queensland in Australia has developed a technique that allows it to deposit RNA onto the leaves of crops. The spray makes use of microscopic sheets of clay, into which RNA is loaded. As the sheets stick to the leaf of a plant and gradually break down, the RNA is taken up by the plants and then interferes with a gene inside to stop it from functioning.

In a paper published today in Nature Plants, the team shows that a single application of such a spray can stop tobacco plants from succumbing to the pepper mild mottle virus for as long as 20 days. The team explains that the technique works because clay adheres well to the leaves, ensuring that the RNA remains in contact with the plant for as long as possible.

As New Scientist points out, the potential for such a spray isn’t limited to just preventing disease, either. It could be used to help crops tolerate drought, trigger ripening, or activate some other genetically controlled trait.


This could be very worrying for the future of human design.

Eggs from Skin Cells? Here’s Why the Next Fertility Technology Will Open Pandora’s Box

Experts warn that a potential IVF breakthrough could have unintended social consequences.
Imagine you are Brad Pitt. After you stay one night in the Ritz, someone sneaks in and collects some skin cells from your pillow.

But that’s not all. Using a novel fertility technology, your movie star cells are transformed into sperm and used to make a baby. And now someone is suing you for millions in child support.

Such a seemingly bizarre scenario could actually be possible, say three senior medical researchers who today have chosen to alert the public to the social risks of in vitro gametogenesis, a technique they say could allow any type of cell to be reprogrammed into a sperm or egg.

The technology has already been demonstrated in mice by Japanese scientists and is likely to be extended to humans soon, according to the authors of an editorial in Science Translational Medicine, who warn that it could open a Pandora’s box of “vexing policy challenges” and ethical dilemmas.


This is not a new threat - many people have been aware of the possibilities for a long time. What is perhaps more significant is that as our population has grown and become increasingly urbanized, our civilization has become more vulnerable to catastrophic disruption - should such an event occur (perhaps ‘when’ is a more precise phrasing). Of course - who knows what the ‘new’ White House will do. :)
Our civilization is now truly global in scale and scope, and it behooves us to comport ourselves accordingly and take cognizance of all the potential threats to our survival—whether destructive wars, disease pandemics, climatic change, or the threat of extraterrestrial impactors. Hopefully, the survival of humanity will not be threatened by the impact of an NEO anytime soon; but if it is, we can rest a little easier knowing that measures are being taken and there is finally a sound plan in place.

The White House Now Has a Plan for Handling an Asteroid Disaster

In 2013, a meteor broke apart over Russia, sparking panic and injuring over 1,400 people. Called the “Chelyabinsk Event,” the culprit meteor was estimated to be around 17-20 m (56-65 ft) in diameter and released approximately 500 kilotons of energy. In fact, before the atmosphere absorbed most of the energy, the meteor had about 29 times the energy of the atomic bomb blast at Hiroshima.

Every year, there could be dozens (if not more) of objects like comets and asteroids that come very close to Earth. In fact, as of January 3 of this year, 15,420 Near-Earth Objects (NEOs) had been discovered. On March 21, 2016, an NEO under 1 LD (lunar distance) from Earth was estimated to be 35-86 m (114.8-282.1 ft) in diameter. If that object had broken through the atmosphere and landed on Earth, the damage and devastation would have been immense.

Yet even more threatening is the possibility of asteroid 2013 TV135 impacting the Earth. This asteroid has only a 1 in 63,000 chance of impacting within the next 100 years, but if it does, it would be absolutely catastrophic. The asteroid measures in at about 400 m (1,300 ft), which is around 4% of the size of the asteroid that is said to have killed the dinosaurs. So, while no dino-killer, the asteroid would certainly pack enough of a wallop to give the Earth one cosmic migraine.
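Those figures hang together. As a quick sanity check of the arithmetic - a sketch assuming the commonly cited ~16-kiloton Hiroshima yield and ~10 km Chicxulub impactor diameter, neither of which comes from the article itself:

```python
# Sanity-checking the article's figures (my arithmetic). The Hiroshima
# yield (~16 kt) and the ~10 km "dino-killer" diameter are commonly
# cited estimates I am supplying, not numbers from the article.
chelyabinsk_kt = 500                   # energy released (from the article)
hiroshima_kt = 16                      # commonly cited yield estimate
print(chelyabinsk_kt / hiroshima_kt)   # ~31x, close to the quoted "29 times"

tv135_m = 400                          # 2013 TV135 diameter (from the article)
chicxulub_m = 10_000                   # commonly cited impactor diameter
print(tv135_m / chicxulub_m)           # 0.04, i.e. the article's "around 4%"
```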


Here’s a great signal for what’s to come, both for military use and for many other applications.

US military tests swarm of mini-drones launched from jets

Three F/A-18 Super Hornets were used to release the Perdix drones last October.
The drones, which have a wingspan of 12in (30cm), operate autonomously and share a distributed brain.
A military analyst said the devices, able to dodge air defence systems, were likely to be used for surveillance.
Video footage of the test was published online by the Department of Defense.


The EU is advancing its use of renewable energy at an accelerating pace, and this new initiative will speed the take-up of electric vehicles.

European carmakers hope to catch Tesla with faster e-car chargers

Europe's biggest carmakers are drawing on the full force of the continent's industrial prowess to build a network of ultra-fast charging stations as they look to stoke demand for electric cars and break Tesla's stranglehold on the market.
BMW, Volkswagen (VOWG_p.DE), Ford and Daimler plan to build about 400 next-generation charging stations in Europe that can reload an electric car in minutes instead of hours.

The long time it takes to charge batteries is one of the main disadvantages of electric cars compared to conventional cars with gasoline tanks that can be filled up in seconds.
Until now, drivers of electric cars have had to leave their vehicles plugged in for hours at a charging station for a journey between cities, making many long-range journeys impractical.

Installing new, faster chargers would spur the overall market, and also help the traditional car manufacturers close the gap with Tesla, the Silicon Valley-based e-car leader, which maintains its own network of charging stations. Tesla's chargers are the fastest in the industry, and are incompatible with existing electric cars made by rivals.

The carmakers are roping in experts from the European power and engineering industry, including Germany's Innogy, E.ON and Siemens and Portugal's Efacec, which are all working on the technology, people familiar with the matter told Reuters.
The new 350 kilowatt (kW) chargers would be nearly three times as powerful as Tesla's.
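To make those power figures concrete, here is a rough, idealized charging-time comparison. The 90 kWh battery size and the ~120 kW Supercharger rating are my assumptions (the latter consistent with the article's "nearly three times" claim), and real charging slows as a battery fills, so treat these as best-case times:

```python
# Idealized charging times for a hypothetical 90 kWh battery.
# Assumptions are mine, not Reuters': ~120 kW for a Tesla Supercharger
# (consistent with 350 kW being "nearly three times as powerful"), and
# no charging-curve taper, so real-world times would be longer.
battery_kwh = 90.0

for name, power_kw in [("7 kW home wallbox", 7.0),
                       ("~120 kW Tesla Supercharger", 120.0),
                       ("350 kW next-generation charger", 350.0)]:
    minutes = battery_kwh / power_kw * 60
    print(f"{name}: about {minutes:.0f} minutes")
# The 350 kW figure is what turns an inter-city stop into "minutes
# instead of hours": roughly 15 minutes versus 45, versus ~13 hours at home.
```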


Here’s a great article from the Economist on a key challenge of accelerating technological and scientific change.
Unfortunately, as our special report in this issue sets out, the lifelong learning that exists today mainly benefits high achievers—and is therefore more likely to exacerbate inequality than diminish it. If 21st-century economies are not to create a massive underclass, policymakers urgently need to work out how to help all their citizens learn while they earn. So far, their ambition has fallen pitifully short.

Equipping people to stay ahead of technological change

It is easy to say that people need to keep learning throughout their careers. The practicalities are daunting
WHEN education fails to keep pace with technology, the result is inequality. Without the skills to stay useful as innovations arrive, workers suffer—and if enough of them fall behind, society starts to fall apart. That fundamental insight seized reformers in the Industrial Revolution, heralding state-funded universal schooling. Later, automation in factories and offices called forth a surge in college graduates. The combination of education and innovation, spread over decades, led to a remarkable flowering of prosperity.

Today robotics and artificial intelligence call for another education revolution. This time, however, working lives are so lengthy and so fast-changing that simply cramming more schooling in at the start is not enough. People must also be able to acquire new skills throughout their careers.

The classic model of education—a burst at the start and top-ups through company training—is breaking down. One reason is the need for new, and constantly updated, skills. Manufacturing increasingly calls for brain work rather than metal-bashing (see Briefing). The share of the American workforce employed in routine office jobs declined from 25.5% to 21% between 1996 and 2015. The single, stable career has gone the way of the Rolodex.


Speaking of MOOCs - here’s an article on new research findings.
“We explored 290 Harvard and MIT online courses, a quarter-million certifications, 4.5 million participants, and 28 million participant-hours,” Ho says.
During these four years, 2.4 million unique users participated in one or more MITx or HarvardX open online courses, and 245,000 learner certificates were issued upon successfully completing a course. On average, 1,554 new, unique participants enrolled per day over four years. A typical MOOC certificate earner spends 29 hours interacting with online courseware.
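Those headline numbers are roughly self-consistent - a quick back-of-envelope check (my own arithmetic; the four-year day count is an approximation, so the small gap is unsurprising):

```python
# Back-of-envelope check of the reported MOOC figures (my arithmetic).
participants = 2_400_000  # unique users over four years (reported)
certificates = 245_000    # learner certificates issued (reported)
per_day = 1_554           # new unique participants per day (reported)
days = 4 * 365            # roughly four years

print(f"certification rate: {certificates / participants:.1%}")  # ~10.2%
print(f"implied participants: {per_day * days:,}")  # ~2.27M vs the reported 2.4M
```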

Study of MOOCs offers insights into online learner engagement and behavior

New report, based on four years of data from edX, represents one of the largest surveys of massive open online courses to date.
In 2012, MIT and Harvard University launched open online courses on edX, a non-profit learning platform co-founded by the two institutions. Four years later, what have we learned about these online “classrooms” and the global community of learners who take them?

Today, a joint research team from MIT and Harvard announced the release of a comprehensive report on learner engagement and behavior in 290 massive open online courses (MOOCs).

Building on their prior work — 2014 and 2015 benchmark reports describing their first two years of open online courses — the team’s new study reviews four years of data and represents one of the largest surveys of MOOCs to date.

"Strong collaboration has enabled MIT and Harvard researchers to jointly examine nearly 30 million hours of online learner behavior and the growth of the MOOC space," says study co-author Isaac Chuang, MIT senior associate dean of digital learning and professor of physics and of electrical engineering and computer science. "Our latest report features data from four full years of MITx and HarvardX courses, exploring in-depth information on demographics and behavior of course participants.”
The report is the latest product stemming from a cross-institutional research effort led by Chuang and study co-author Andrew Ho, chair of the Vice Provost for Advances in Learning (VPAL) Research Committee and professor of education at Harvard.


Here’s a dramatic shift in recommendations by the American Academy of Pediatrics.
Based on an extensive study of digital parenting, Alicia Blum-Ross and Sonia Livingstone conclude that instead of quantity, parents need to focus on “context (where, when and how digital media are accessed), content (what is being watched or used), and connections (whether and how relationships are facilitated or impeded).” Here are three ways to put this recommendation into action by letting go of fear, maintaining balance and hunting for extraordinary learning.

How Dropping Screen Time Rules Can Fuel Extraordinary Learning

Last fall, the American Academy of Pediatrics (AAP) finally backed down from their killjoy “screen time” rules that had deprived countless kids of the freedom to pursue their interests and explore digital worlds. No screens in the first 2 years, no more than 2 hours a day. After pushing their famous 2x2 rule for almost two decades, they now advocate against a one-size-fits-all approach and suggest that parents can be “media mentors” and not just time cops. But damage has been done.

For almost as long as the AAP 2x2 rules have been in place, I’ve been studying how multimedia, digital games, and the Internet can fuel extraordinary forms of learning and mobilization. Young people are growing up in a new era of information abundance where they can google anything and connect with specialized expert communities online. However, our research also indicates that most kids are not truly tapping the power of online learning. In part I blame the 2x2 guidelines for holding kids back, and putting parents in the role of policing rather than coaching media engagement.

By focusing on quality over quantity, families can move away from fear, maintain a healthy balance, and seek out extraordinary learning.
Screen Time is an Outdated Concept


This is a short but very interesting piece on why people play video games. Well worth the read.
The authors of a 2014 paper examining the role of self-determination in virtual worlds concluded that video games offer us a trio of motivational draws: the chance to “self-organize experiences and behavior and act in accordance with one’s own sense of self”; the ability to “challenge and to experience one’s own effectiveness”; and the opportunity to “experience community and be connected to other individuals and collectives.”

How Video Games Satisfy Basic Human Needs

For these researchers, incredibly, enjoyment is not the primary reason why we play video games.
For the British artificial intelligence researcher and computer game designer Richard Bartle, the kaleidoscopic variety of human personality and interest is reflected in the video game arena. In his 1996 article “Hearts, Clubs, Diamonds, Spades: Players Who Suit MUDs,” he identified four primary types of video game player (the Killers, Achievers, Explorers, and Socializers). The results of his research were, for Bartle, one of the creators of MUD, the formative multiplayer role-playing game of the 1980s, obvious. “I published my findings not because I wanted to say, ‘These are the four player types,’” he recently told me, “but rather because I wanted to say to game designers: ‘People have different reasons for playing your games; they don’t all play for the same reason you do.’”

Bartle’s research showed that, in general, people were consistent in these preferred ways of being in online video game worlds. Regardless of the game, he found that “Socialisers,” for example, spend the majority of their time forming relationships with other players. “Achievers” meanwhile focus fully on the accumulation of status tokens (experience points, currency or, in Grand Theft Auto’s case, gleaming cars and gold-plated M16s).

In a 2012 study, titled “The Ideal Self at Play: The Appeal of Video Games That Let You Be All You Can Be,” a team of five psychologists more closely examined the way in which players experiment with “type” in video games. They found that video games that allowed players to play out their “ideal selves” (embodying roles that allow them to be, for example, braver, fairer, more generous, or more glorious) were not only the most intrinsically rewarding, but also had the greatest influence on our emotions. “Humans are drawn to video and computer games because such games provide players with access to ideal aspects of themselves,” the authors concluded. Video games are at their most alluring, in other words, when they allow a person to close the distance between how they are, and how they wish to be.


Practicing in the virtual world to make the real world virtual - this is potentially a very significant development, for anyone. A remote prosthetic - add some haptic feedback and our ‘proprioceptive’ sense is transformed. There’s a 2-minute video and lots of interesting pictures. Our imagination is the limit to how we incorporate (literally) this technology.

Brain-controlled robot lets physically challenged see the world

For those with severe motor disabilities, mind-controlled prostheses have long offered a sliver of hope that they might one day be able to regain some semblance of autonomy. While we've seen numerous examples of such prostheses over the years, most involve brain surgery and are still not ready for commercialization. As scientists continue to tinker with neuro circuits, Melbourne-based startup Aubot has skipped past all these complications to launch the Teleport, the world's first commercially available telepresence robot that can be controlled by thought.


And speaking of games - and AI - here’s the latest development.

Poker Is the Latest Game to Fold Against Artificial Intelligence

Two research groups have developed poker-playing AI programs that show how computers can out-hustle the best humans.
In a landmark achievement for artificial intelligence, a poker bot developed by researchers in Canada and the Czech Republic has defeated several professional players in one-on-one games of no-limit Texas hold’em poker.

Perhaps most interestingly, the academics behind the work say their program overcame its human opponents by using an approximation approach that they compare to “gut feeling.”

“If correct, this is indeed a significant advance in game-playing AI,” says Michael Wellman, a professor at the University of Michigan who specializes in game theory and AI. “First, it achieves a major milestone (beating poker professionals) in a game of prominent interest. Second, it brings together several novel ideas, which together support an exciting approach for imperfect-information games.”

Later this week, a tournament at a Pittsburgh casino will see several world-class poker players play the same version of poker against a program developed at Carnegie Mellon University (CMU). Tuomas Sandholm, a professor of computer science at CMU who is leading the effort, says the human players involved are considerably stronger than those tested by the University of Alberta researchers, and 120,000 hands will be played over 20 days, providing greater statistical significance to the results. The tournament could confirm that AI has indeed mastered a game that has long seemed far too complex and subtle for computers.
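Both groups' systems build on the counterfactual regret minimization (CFR) family of algorithms. For flavor only, here is a minimal regret-matching sketch of my own for rock-paper-scissors - a toy stand-in for CFR's core update rule, not the actual DeepStack or CMU code:

```python
import random

# Toy regret matching (my sketch, not the actual poker bots' code):
# the strategy shifts toward actions it "regrets" not having played,
# and the averaged strategy converges toward a best response.
ACTIONS = 3  # 0 = rock, 1 = paper, 2 = scissors

def payoff(a, b):
    """+1 if action a beats action b, -1 if it loses, 0 on a tie."""
    if a == b:
        return 0
    return 1 if (a - b) % 3 == 1 else -1

def current_strategy(regrets):
    """Mix over actions in proportion to positive accumulated regret."""
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / ACTIONS] * ACTIONS

regrets = [0.0] * ACTIONS
strategy_sum = [0.0] * ACTIONS
opponent = [0.5, 0.3, 0.2]  # a fixed, exploitable opponent (plays rock half the time)

for _ in range(100_000):
    strategy = current_strategy(regrets)
    for a in range(ACTIONS):
        strategy_sum[a] += strategy[a]
    mine = random.choices(range(ACTIONS), weights=strategy)[0]
    theirs = random.choices(range(ACTIONS), weights=opponent)[0]
    for a in range(ACTIONS):
        # Regret = how much better action a would have done than the action played.
        regrets[a] += payoff(a, theirs) - payoff(mine, theirs)

total = sum(strategy_sum)
print([round(s / total, 3) for s in strategy_sum])  # drifts toward paper
```

The real systems go much further - they handle hidden information over enormous game trees, with DeepStack approximating counterfactual values with a deep neural network (the "gut feeling" the researchers describe) - but regret minimization is the shared foundation.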


For Fun - I think
I try to stay away from overt politics - but this is just too good - only a 4-minute video - maybe some of us will remember this episode.

A 1958 TV Show Had an Unsavory Character Named “Trump” Who Promised to Build a Wall & Save the World

You’re not watching an episode of The Twilight Zone. No, this clip is from the 1950s western TV series Trackdown. Or, to be more precise, it comes from a 1958 episode called “The End of the World.” The clip features Lawrence Dobkin playing the role of “Walter Trump,” a fraud who rides into town claiming, writes Snopes, “that only he could prevent the end of the world by building a wall.” The episode ends, in case you’re wondering, with the fear-mongering Trump getting placed under arrest. Freedom and sanity are restored. Hurray.
