Rob Walker’s Workologist column included this bit of insight in “I’m Here to Work. Not to Be Your Friend.”, a sentiment that runs counter to a great deal of workplace ‘we’re a family’ philosophizing:
Often, we put too much emphasis on the idea of being “friends” with our co-workers, Dr. [Amy Cooper] Hakim [co-author of Working With Difficult People] says. “Our job at work is to work,” she says. “I actually argue against having true friends in the workplace, aside from maybe a handful — people you would actually want to be friends with if you didn’t work at that company.”
That doesn’t mean you should be rude or unprofessional, but rather that it’s healthier to think of colleagues as what Dr. Hakim calls “friendlies” — relationships that neither require nor assume everything that goes with a true friendship. This mentality will help you separate the personal and the professional. In this case, perhaps you can let everyday coldness slide (you don’t have to be best pals) — but not unanswered email or ignored tasks, which are tangible work problems.
There’s some evidence that having a few ‘true friends’ at work increases retention and the likelihood of finding work purposeful, but forcing a false camaraderie in the workplace decreases professionalism and can overly personalize disagreements and problem-solving. So aim for being friendlies, not friends.
As AI continues to eat the world, even software is being consumed. In this MIT Technology Review story, various research organizations have started to teach AI to build AI software.
Tom Simonite, AI Software Learns to Make AI Software
Researchers are finding that they can make software that can learn to do one of the trickiest parts of their own jobs — the task of designing machine-learning software.
In one experiment, researchers at the Google Brain artificial intelligence research group had software design a machine-learning system to take a test used to benchmark software that processes language. What it came up with surpassed previously published results from software designed by humans.
In recent months several other groups have also reported progress on getting learning software to make learning software. They include researchers at the nonprofit research institute OpenAI (which was cofounded by Elon Musk), MIT, the University of California, Berkeley, and Google’s other artificial intelligence research group, DeepMind.
If self-starting AI techniques become practical, they could increase the pace at which machine-learning software is implemented across the economy. Companies must currently pay a premium for machine-learning experts, who are in short supply.
And the justification for this investment? We don’t have enough people to do this work, so let’s hand it over to the machines. This puts AI development on the same footing as packing orders in an Amazon warehouse, or driving a taxi in cities where driverless cars arrive this year.
Fascinating research shows that the way we discover novelties — music, art, video, books — operates like the creation of innovations.
Emerging Technology from the arXiv, Mathematical Model Reveals the Patterns of How Innovations Arise
“Though the creative power of the adjacent possible is widely appreciated at an anecdotal level, its importance in the scientific literature is, in our opinion, underestimated,” say [Vittorio] Loreto and co[lleagues] [from Sapienza University of Rome].
Nevertheless, even with all this complexity, innovation seems to follow predictable and easily measured patterns that have become known as “laws” because of their ubiquity. One of these is Heaps’ law, which states that the number of new things increases at a rate that is sublinear. In other words, it is governed by a power law of the form V(n) = k·n^β, where β is between 0 and 1.
Words are often thought of as a kind of innovation, and language is constantly evolving as new words appear and old words die out.
This evolution follows Heaps’ law. Given a corpus of words of size n, the number of distinct words V(n) is proportional to n raised to the β power. In collections of real words, β turns out to be between 0.4 and 0.6.
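Heaps’ law is easy to check empirically. The sketch below is a rough illustration (not from the paper): it draws tokens from a hypothetical Zipf-like vocabulary and fits β as the slope of log V(n) against log n. The exponent you get depends on the synthetic generator’s assumptions, so don’t expect it to land in the 0.4–0.6 range of real corpora:

```python
import math
import random

random.seed(42)

# Hypothetical corpus generator (an assumption for illustration):
# draw 50,000 tokens from a 10,000-word vocabulary whose rank-r
# word has weight 1/r**1.2, a Zipf-like distribution.
vocab_size = 10_000
weights = [1 / r ** 1.2 for r in range(1, vocab_size + 1)]
tokens = random.choices(range(vocab_size), weights=weights, k=50_000)

# Count distinct words V(n) at growing corpus sizes n.
seen, points = set(), []
for n, tok in enumerate(tokens, start=1):
    seen.add(tok)
    if n % 5_000 == 0:
        points.append((n, len(seen)))

# Fit beta as the least-squares slope of log V(n) vs log n.
xs = [math.log(n) for n, _ in points]
ys = [math.log(v) for _, v in points]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
       / sum((x - mx) ** 2 for x in xs)
print(f"estimated Heaps exponent: beta = {beta:.2f}")
```

Swapping in a real corpus for the synthetic `tokens` list is the obvious next step; the fitting code stays the same.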
Another well-known statistical pattern in innovation is Zipf’s law, which describes how the frequency of an innovation is related to its popularity. For example, in a corpus of words, the most frequent word occurs about twice as often as the second most frequent word, three times as frequently as the third most frequent word, and so on. In English, the most frequent word is “the” which accounts for about 7 percent of all words, followed by “of” which accounts for about 3.5 percent of all words, followed by “and,” and so on.
This frequency distribution is Zipf’s law and it crops up in a wide range of circumstances, such as the way edits appear on Wikipedia, how we listen to new songs online, and so on.
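The arithmetic behind those ratios is easy to verify. In an idealized Zipf distribution where the rank-r item has weight 1/r, the top item is exactly twice as frequent as the second and three times the third. (This is a toy sketch with an assumed vocabulary size; the ~7 percent share of “the” is an empirical fact about English, and the share an idealized model produces depends on the vocabulary size you choose.)

```python
# Idealized Zipf distribution: the relative frequency of the
# rank-r word is (1/r) / H_N, where H_N normalizes the weights
# over an assumed vocabulary of N word types.
N = 50_000
H_N = sum(1 / r for r in range(1, N + 1))

def freq(rank):
    return (1 / rank) / H_N

print(freq(1) / freq(2))   # ratio of ranks 1 and 2: 2, up to float error
print(freq(1) / freq(3))   # ratio of ranks 1 and 3: 3, up to float error
print(f"top word's share: {freq(1):.1%}")
```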
These patterns are empirical laws — we know of them because we can measure them. But just why the patterns take this form is unclear. And while mathematicians can model innovation by simply plugging the observed numbers into equations, they would much rather have a model which produces these numbers from first principles.
Enter Loreto and his pals (one of whom is the Cornell University mathematician Steve Strogatz). These guys have created a model that explains these patterns for the first time.
They begin with a well-known mathematical sandbox called Polya’s urn. It starts with an urn filled with balls of different colors. A ball is withdrawn at random, inspected, and placed back in the urn with a number of other balls of the same color, thereby increasing the likelihood that this color will be selected in the future.
This is a model that mathematicians use to explore rich-get-richer effects and the emergence of power laws. So it is a good starting point for a model of innovation. However, it does not naturally produce the sublinear growth that Heaps’ law predicts.
That’s because the Polya urn model allows for all the expected consequences of innovation (of discovering a certain color) but does not account for all the unexpected consequences of how an innovation influences the adjacent possible.
So Loreto, Strogatz, and co have modified Polya’s urn model to account for the possibility that discovering a new color in the urn can trigger entirely unexpected consequences. They call this model “Polya’s urn with innovation triggering.”
The exercise starts with an urn filled with colored balls. A ball is withdrawn at random, examined, and replaced in the urn.
If this color has been seen before, a number of other balls of the same color are also placed in the urn. But if the color is new — it has never been seen before in this exercise — then a number of balls of entirely new colors are added to the urn.
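A minimal simulation makes the mechanism concrete. The sketch below is my own rough rendering of the scheme just described, not the authors’ code, with assumed parameters: `rho` copies reinforce a drawn color, and a never-before-seen color triggers `nu + 1` balls of brand-new colors.

```python
import random

def urn_with_triggering(steps, rho=8, nu=4, seed=1):
    """Sketch of Polya's urn with innovation triggering.

    rho: reinforcement -- copies of the drawn color returned each draw.
    nu:  on a never-before-seen color, nu + 1 balls of entirely new
         colors enter the urn, expanding the 'adjacent possible'.
    """
    rng = random.Random(seed)
    urn = list(range(nu + 1))      # initial balls, all distinct colors
    next_color = nu + 1
    seen = set()
    checkpoints = {}
    for step in range(1, steps + 1):
        ball = rng.choice(urn)
        urn.extend([ball] * rho)   # rich-get-richer reinforcement
        if ball not in seen:       # a novelty triggers new possibilities
            seen.add(ball)
            urn.extend(range(next_color, next_color + nu + 1))
            next_color += nu + 1
        if step in (1_000, 10_000, 100_000):
            checkpoints[step] = len(seen)
    return checkpoints

distinct = urn_with_triggering(100_000)
for n, d in sorted(distinct.items()):
    print(f"after {n:>7} draws: {d} distinct colors seen")
```

If I recall the paper’s analysis correctly, when nu < rho the number of distinct colors grows roughly like n^(nu/rho), i.e. sublinearly in the Heaps sense; varying the two parameters shifts the exponent.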
Loreto and co then calculate how the number of new colors picked from the urn, and their frequency distribution, changes over time. The result is that the model reproduces Heaps’ and Zipf’s laws as they appear in the real world — a mathematical first. “The model of Polya’s urn with innovation triggering presents for the first time a satisfactory first-principle based way of reproducing empirical observations,” say Loreto and co.
The team has also shown that its model predicts how innovations appear in the real world. The model accurately predicts how edit events occur on Wikipedia pages, the emergence of tags in social annotation systems, the sequence of words in texts, and how humans discover new songs in online music catalogues.
Interestingly, these systems involve two different forms of discovery. On the one hand, there are things that already exist but are new to the individual who finds them, such as online songs; and on the other are things that never existed before and are entirely new to the world, such as edits on Wikipedia.
Loreto and co call the former novelties — they are new to an individual — and the latter innovations — they are new to the world.
Curiously, the same model accounts for both phenomena. It seems that the pattern behind the way we discover novelties — new songs, books, etc. — is the same as the pattern behind the way innovations emerge from the adjacent possible.
So, can our understanding of ‘innovation triggering’ provide a means to increase innovative thinking? It seems that a/ it should, and b/ it will look a lot like active curation of ‘the adjacent possible’ in any domain of inquiry. I hope to investigate this a bit more, as I find more research on the topic.
Are we creating inequality as a direct consequence of our response to and embrace of a culture of competitiveness?
What seems to be provoking the most outrage right now is not inequality as such, which has, after all, been rising in the UK (give or take Tony Blair’s second term) since 1979, but the sense that the economic game is now being rigged. If we can put our outrage to one side for a second, this poses a couple of questions, for those interested in the sociology of legitimation. Firstly, how did mounting inequality succeed in proving culturally and politically attractive for as long as it did? And secondly, how and why has that model of justification now broken down?
In some ways, the concept of inequality is unhelpful here. There has rarely been a political or business leader who has stood up and publicly said, “society needs more inequality”. And yet, most of the policies and regulations which have driven inequality since the 1970s have been publicly known. Although it is tempting to look back and feel duped by the pre-2008 era, it was relatively clear what was going on, and how it was being justified. But rather than speak in terms of generating more inequality, policy-makers have always favoured another term, which effectively comes to the same thing: competitiveness.
My new book, The Limits of Neoliberalism: Sovereignty, Authority & The Logic of Competition, is an attempt to understand the ways in which political authority has been reconfigured in terms of the promotion of competitiveness. Competitiveness is an interesting concept, and an interesting principle on which to base social and economic institutions. When we view situations as ‘competitions’, we are assuming that participants have some vaguely equal opportunity at the outset. But we are also assuming that they are striving for maximum inequality at the conclusion. To demand ‘competitiveness’ is to demand that people prove themselves relative to one another.
I was interviewed by my old pal, Brian Solis, for the Digital Outliers series, sponsored by BMC:
In this episode, host Brian Solis interviews Stowe Boyd, a futurist and editor-in-chief at Work Futures, about how companies are missing the mark on digital transformation by thinking about it from an industrial approach instead of a humanities-based approach.
In a wide-ranging discussion, Boyd shares his thoughts on the inherent bias in social media and how we have to do more than be aware to counteract it. He also shares how orgs can capitalize on the groundswell created by change agents, no matter where they sit in the organization, and move from an industrialization mindset into a future of work mindset.
I’m researching decision making for a project.
Chip Heath and Dan Heath, The 10/10/10 Rule For Tough Decisions
One tool we can use was invented by Suzy Welch, a business writer for publications such as Bloomberg Businessweek and O magazine. It’s called 10/10/10, and Welch describes it in a book of the same name. To use 10/10/10, we think about our decisions on three different time frames:
- How will we feel about it 10 minutes from now?
- How about 10 months from now?
- How about 10 years from now?
Edelman communications group has been surveying the public to see what the level of trust is for institutions like government and media. Edelman reports a ‘global implosion’ in trust this year.
This year’s survey recorded the largest-ever drop of trust in business, government, the media, and NGOs. A majority of people in two-thirds of countries surveyed now distrust these institutions, with all-time low levels of trust recorded in a number of places.
The report is timed to coincide with the start of the World Economic Forum in Davos. The world leaders, corporate chieftains, and media moguls who gather at the exclusive Alpine jamboree won’t find much solace in its findings.
The credibility of corporate CEOs, the most common Davos delegate, fell off a cliff, with only 37% of people saying they trust them, a 12-point drop from the previous year. CEO credibility fell in every country surveyed — an impressive feat. Still, they remain more trusted than government leaders (29%), so that’s something.
The media also takes a beating in Edelman’s latest survey, with 43% of respondents expressing trust in the press, down from 48% the year before. Trust in media fell to an all-time low in 17 of the 28 countries polled.
A lot of people are thinking about changing jobs. Why? Bad bosses.
According to a recent survey of 3,300 employees across 14 countries by Dale Carnegie Training, 26% of U.S. employees say they will look for a new job within the next 12 months, and 15% are already actively looking for a new job. In total, more than 40% of all employees are at risk of leaving their job in the coming year.
According to this study, a primary reason for leaving is poor management. Researchers found that an employee is nearly 10 times more likely to be very satisfied with their job when they are led by someone they feel is honest and trustworthy. Those who feel that their superiors do not exhibit such behaviors are four times more likely to be looking for a different job. This squares with results from a 2015 report by Weber Shandwick that found a chief executive’s reputation influences employees’ decisions to remain at their company.
Claire Cain Miller reports on a troubling element of Obama’s farewell speech: how will we help people on the downside of technological disruptions in work?
Claire Cain Miller, A Darker Theme in Obama’s Farewell: Automation Can Divide Us
Technological change will soon be a problem for a much bigger group of people, if it isn’t already. Fifty-one percent of all the activities Americans do at work involve predictable physical work, data collection and data processing. These are all tasks that are highly susceptible to being automated, according to [a report McKinsey published in July using data from the Bureau of Labor Statistics and O*Net to analyze the tasks that constitute 800 jobs].
Twenty-eight percent of work activities involve tasks that are less susceptible to automation but are still at risk, like unpredictable physical work or interacting with people. Just 21 percent are considered safe for now, because they require applying expertise to make decisions, do something creative or manage people.
The service sector, including health care and education jobs, is considered safest. Still, a large part of the service sector is food service, which McKinsey found to be the most threatened industry, even more than manufacturing. Seventy-three percent of food service tasks could be automated, it found.
In December, the White House released a report on automation, artificial intelligence and the economy, warning that the consequences could be dire: “The country risks leaving millions of Americans behind and losing its position as the global economic leader.”
No one knows how many people will be threatened, or how soon, the report said. It cited various researchers’ estimates that from 9 percent to 47 percent of jobs could be affected.
In the best case, it said, workers will have higher wages and more leisure time. In the worst, there will be “significantly more workers in need of assistance and retraining as their skills no longer match the demands of the job market.”
Technology delivers its benefits and harms in an unequal way. That explains why even though the economy is humming, it doesn’t feel like it for a large group of workers.
The real problem is that things are changing too fast for people to be retrained in roles that are actually ahead of the technological power curve. Maybe we need to start retraining people to work with robotic mules on organic vegetable farms?