How Automated Sentiment Analysis Brings Emotions Into Automated Decision-Making

We know that emotions are a substantial source of decisions and actions for humans. Being able to identify emotions is an essential technology that is now maturing, and I was not really aware of the extent of progress in that area. An excellent summary is given in this Quartz Obsession post ‘Sentiment analysis‘.

More and more companies use sentiment analysis to drive their decisions about communication. The first applications analyzed social network posts, crudely scoring the types of words people were using. Today automated sentiment analysis also applies to images and videos – emotions can be detected from selfies and pictures, and even from micro-expressions in videos, adding a lot of context and more subtle ways to detect people's emotional condition.
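To make the early word-based approach concrete, here is a minimal sketch of lexicon-based sentiment scoring; the word lists and example posts are hypothetical, and real systems are of course far more sophisticated.

```python
# Minimal sketch of lexicon-based sentiment scoring, the crude word-counting
# approach used in early social-media sentiment analysis.
# The word lists and example posts below are illustrative only.

POSITIVE_WORDS = {"love", "great", "happy", "excellent", "good"}
NEGATIVE_WORDS = {"hate", "awful", "sad", "terrible", "bad"}

def sentiment_score(text: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    positives = sum(w in POSITIVE_WORDS for w in words)
    negatives = sum(w in NEGATIVE_WORDS for w in words)
    total = positives + negatives
    return 0.0 if total == 0 else (positives - negatives) / total

print(sentiment_score("I love this product, the service was excellent!"))  # 1.0
print(sentiment_score("Awful experience, I hate waiting."))                # -1.0
```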

It goes further – affective computing, for example, is defined as “computing that relates to, arises from, or deliberately influences emotion or other affective phenomena” (see the affective computing MIT group page). This clearly states that we are at the stage where computing will start influencing emotions – or at least detecting our emotional state and proposing to change it. This is potentially dangerous, and this space needs to be watched closely. Social networks and brands will now not only inadvertently influence our emotions, but probably also do so deliberately!


How to Work to Make a Statement in the World

Following up on the post ‘How Dangerous It Is to Define One’s Identity by Work: Workism‘, I found this interesting counterpoint from Mitch Joel: ‘Loving The Work That You Do‘. He reacts to the Atlantic article about workism. As we spend a lot of our time working, it is best to do it for a purpose – to make a statement in the world!

“The article in The Atlantic isn’t taking a shot at anybody who is engaged, in love and passion about their job. The Atlantic is just questioning how hard we are all working, what we’re working for and – maybe – if we’re letting our work define our life’s purpose much more than we should.”

Why work eight-plus hours a day (and be away from your family and friends) if it’s just a job? Why do all of that heavy lifting if you don’t love it, when you could be with your family (or practicing a musical instrument, or doing art, or… you get the idea)? What is the point of working if it doesn’t make a statement?

Let’s work to make a statement. I find this expression quite magical. Make a dent in the world, make a statement.


How Dangerous It Is to Define One’s Identity by Work: Workism

Following up from our previous post ‘How the Balance of Work Time Between Rich and Poor has Dramatically Changed‘, this excellent Atlantic article highlights concerns about workism: ‘Workism Is Making Americans Miserable‘. Workism is about work becoming an identity and pervading all aspects of our lives, becoming an obsession.

“The economists of the early 20th century did not foresee that work might evolve from a means of material production to a means of identity production. They failed to anticipate that, for the poor and middle class, work would remain a necessity; but for the college-educated elite, it would morph into a kind of religion, promising identity, transcendence, and community. Call it workism.”

Specifically American aspects are quoted in the post, with which I do not totally agree: in my travels to the US I have observed that while American engineers don’t take holidays, their working day is generally shorter and ends at a fixed time, which may not be the case in other countries.

Nevertheless, work having become a means of personal identity is an important issue; the article also develops the case of millennials trapped in student debt exhausting themselves at work. Advice like ‘find your passion’, used to justify long hours at work, should be treated with caution.

In any case, if work is your only identity, you face a problem, because you may get retrenched or face a substantial setback some day. It is better to have one’s identity defined in a more robust way.


How the Balance of Work Time Between Rich and Poor has Dramatically Changed

In previous centuries, and roughly until the mid-20th century, the few rich were idle, bored, and spent their time on sports and other leisure, while the poor worked extra-long hours in dire conditions just to survive. It is amazing how this has changed – nowadays it is almost the reverse: the upper middle class and rich people generally work intensely, while the working class has on average more leisure, or at least more time away from formal work.

Early 20th century leisure: a tea party

Although this might be changing for the working class with the gig economy, with working hours going up again for some categories, the most striking observation concerns the wealthy part of the population, which generally works very intensely nowadays – with the possible exception of some super-rich billionaires.

Wealthy and upper-middle-class people are nowadays most often very highly paid and very busy professionals, with little time available for leisure – even more so when family requirements take up the precious remaining time. Of course, this has changed significantly from previous times, when wealth was often primarily derived from marriage and inheritance.

This trend for the wealthy to work ever more is now called “workism” by some: a psychological, almost religious, need to work a substantial part of the time, rooted in personal values or education.

This trend seems to reinforce itself with the Fourth Revolution, which is in strong need of highly educated engineers and is at the same time automating manual labor tasks. The wealthy and educated will work more and be more constantly online, while the less educated will have more time for themselves.

Amazing how, in less than a century, the balance of work has shifted!


How Newspaper Paywalls Would Be a Paradox for Spreading Quality News

Following up on our investigation of the economics of writing (see previous post ‘How Writers’ Income Sources Are Changing‘), an interesting perspective is given in the article ‘How Paywalls are Making Us Dumber‘. Its thesis revolves around the paradox that great journalistic content needs to be compensated – hence paywalls and subscriptions – but that this prevents many people from accessing it, leaving space open for much less reliable news and even fake news, which are all freely accessible.

I believe it is useful to come back to a historical perspective. Since the beginning, journalism has been financed either by tycoons – sometimes well-meaning, sometimes intent on manipulation – or by advertising, and sometimes by both. In certain countries there are also government subsidies, with all the problems these create for a free press. In general, income from readers and subscriptions has always remained limited, due to the costs of physical printing and distribution.

Therefore, it does not seem to me that the overall balance of journalism financing is shifting dramatically: Jeff Bezos – a tycoon – recently had to take over the Washington Post, and the same happens in France for the main newspapers. Nor are fake news and opinionated papers anything new; they have been around for a long time. The current newspaper crisis is mainly linked to the fact that advertising dollars have migrated elsewhere, and newspapers are struggling to compensate.

Of course, the reach of the internet and the easier spread of fake news are a concern, as is the difficulty of regulating them effectively. Still, there are many sources of quality information that remain sufficiently open to benefit from. Financing a news outlet only with income from readers has always been an illusion. At the same time, it does not seem to me that paywalls are really a problem, in particular if they let readers access a few key articles per month for free when investigating a specific subject. What do you think?


How Writers’ Income Sources Are Changing

In this excellent New York Times article ‘Does It Pay to Be a Writer?‘, the problem of writer compensation is analyzed in depth. There has been a significant shift in writers’ compensation patterns over the last decades, and an even faster one in the last few years.

There are fewer opportunities to write for a living – “Writing for magazines and newspapers was once a solid source of additional income for professional writers, but the decline in freelance journalism and pay has meant less opportunity for authors to write for pay.”

And those opportunities pay much less: “‘In the 20th century, a good literary writer could earn a middle-class living just writing,’ said Mary Rasenberger, executive director of the Authors Guild, citing William Faulkner, Ernest Hemingway and John Cheever. Now, most writers need to supplement their income with speaking engagements or teaching. Strictly book-related income — which is to say royalties and advances — are also down, almost 30 percent for full-time writers since 2009.” A US survey is also quoted, showing that the median compensation for writers has fallen by 40% in a few years.

Writing has thus become more of a commodity. On the other hand, we also need to put into perspective the claim that writers can’t make a living – historically, most writers have had to supplement their income with other activities, except for those writing bestsellers or famous for some other reason. Still, income from pure writing gets lower in the gig economy: a century ago it was possible to make a decent living just writing pieces for newspapers, which is no longer the case today.

This reinforces the observation that writing today must be part of a broader package of activities – some will be journalists, professional speakers, consultants or other professionals for whom writing is a way to communicate with part of the public and spread their message.


How it Becomes Important to be Informed When a Bot Answers

In this post ‘Truth in bots’, Seth Godin makes the point that since bots are increasingly common in answering messages and even voice phone calls, we should be informed that it is a bot answering rather than a human before we start the interaction.

You’re talking to a next-generation bot on the phone, and it’s only a minute or two into the interaction that you realize you’re being fooled by an AI, not a caring human. Wouldn’t it be more efficient (and reassuring) to know this in advance?

It would not only be ethical but, as Seth Godin rightly mentions, it would also influence the level of emotional investment we put into the conversation – and give us some clues as to the adequacy of the response.

Not that conversations with humans in service centers are necessarily more helpful or engaging, but at least it would be useful to know whom we are speaking to.

As bots become more and more widespread, this is becoming an essential aspect of service. Why is it not becoming a standard more quickly? It is not bad to have a bot answer – it can sometimes be better. So why avoid giving this information?


How the ‘Buy Slow, Sell Fast’ Advice of Stockbrokers is Wrong

In this Marginal Revolution post ‘The Buying Slow but Selling Fast Bias‘ by Alex Tabarrok, a long-quoted piece of stockbroker wisdom is shown to be wrong. ‘Buy Slow, Sell Fast’ has been shown by data scientists not to be the best strategy: it should rather be ‘Buy Slow, Sell Slow’.

According to the research quoted in the post, buying slowly and deliberately is not a problem; it is rather on the selling side that acting fast is suboptimal. The research article states: “We use a unique data set to show that financial market experts – institutional investors with portfolios averaging $573 million – exhibit costly, systematic biases. A striking finding emerges: while investors display clear skill in buying, their selling decisions underperform substantially – even relative to strategies involving no skill such as randomly selling existing positions – in terms of both benchmark-adjusted and risk-adjusted returns. We present evidence consistent with limited attention as a key driver of this discrepancy, with investors devoting more attentional resources to buy decisions than sell decisions.”
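To illustrate what a ‘no skill’ benchmark of randomly selling existing positions could look like, here is a minimal, hypothetical sketch – not the paper’s actual methodology or data – comparing the return forgone by an actual sell decision with the average return forgone by random sells.

```python
import random

# Illustrative sketch only (hypothetical portfolio and returns, not the paper's data):
# selling a position means forgoing its subsequent return, so we compare the
# return forgone by the actual sell against a "no skill" random-sell benchmark.

def random_sell_benchmark(portfolio, future_returns, trials=10_000):
    """Average forgone return if the position to sell were chosen at random."""
    total = 0.0
    for _ in range(trials):
        total += future_returns[random.choice(portfolio)]
    return total / trials

# Hypothetical holdings and their (unknown at decision time) returns over the next year.
portfolio = ["A", "B", "C"]
future_returns = {"A": 0.12, "B": -0.03, "C": 0.25}

actual_sell = "C"  # suppose the manager sold what turned out to be the best performer
print("Return forgone by actual sell:", future_returns[actual_sell])
print("Return forgone by random sells:", round(random_sell_benchmark(portfolio, future_returns), 3))
# If actual sells consistently forgo more return than random sells, selling skill is negative.
```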

Coming back to the ‘thinking fast and slow’ approach, now familiar thanks to Daniel Kahneman, this tends to demonstrate that in most cases a slow, reflective approach is better than a fast, reactive one – and that this shows even in the testosterone-laden world of financial trading!

Even in stressful situations, it pays off to think slow or think twice before taking a decision!


How to Regulate the Algorithms that Shape our Lives

The need to control the algorithms that increasingly shape our lives, so as to avoid bias, is now recognized (see our previous posts ‘How Algorithms Can Become Weapons of Math Destruction‘ and ‘How We Need to Audit the Key Algorithms That Drive our Lives‘). A proposal is contained in the Quartz post ‘We should treat algorithms like prescription drugs‘.

“For decades, pharma and biotech companies have tested drugs through meticulously fine-tuned clinical trials. Why not take some of those best practices and use them to create algorithms that are safer, more effective, and even more ethical?”. In addition, a strong regulator enforces checks and verifies that testing has been done properly before allowing a drug onto the market. Then, a surveillance network feeds back unexpected effects of a drug, which may lead to reconsidering its use or to refining the conditions for which it is really useful.

One interesting aspect of this analogy is the recognition that algorithms, like drugs, have side effects. In a systemic view of the world, an algorithm that aims to solve a problem may – no, will – create unforeseen effects on other aspects, especially if its use becomes widespread.

As the article mentions, drug regulators have already started regulating devices that use algorithms for medical purposes (for example, sugar regulation apps for diabetics). This may produce a framework that could be extended to other types of algorithms.

Still, regulating algorithms may be a huge endeavor, and setting up this framework will take time and effort – and require developing new ways to efficiently evaluate algorithms for bias and unexpected effects. An interesting field of research for the years to come!
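As a tiny illustration of what evaluating an algorithm for bias could mean in practice, here is a hedged sketch of one simple check – comparing positive-decision rates across two groups – using hypothetical data and an illustrative threshold; a real audit would of course involve many more metrics.

```python
# Minimal sketch of one possible bias check (hypothetical decisions and threshold):
# compare an algorithm's positive-decision rate across two groups, a quantity
# often called the "demographic parity" gap. Real audits use many more metrics.

def positive_rate(decisions):
    """Share of positive (e.g. 'approve') outcomes in a list of 0/1 decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in positive-decision rates between two groups."""
    return abs(positive_rate(decisions_a) - positive_rate(decisions_b))

# Hypothetical decisions produced by an algorithm for two groups of applicants.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [0, 1, 0, 0, 0, 1, 0, 0]

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative threshold, not a regulatory standard
    print("Potential bias: positive-decision rates differ substantially between groups.")
```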


Why It is Better to Use ‘Disruption’ Rather than ‘Innovation’

I like this short video by Charlene Li, ‘Truth Drop: Disruption vs Innovation‘. She explains why ‘innovation’ is not the right word, because it makes the work sound as if it is going to be easy. So she recommends systematically using ‘disruption’ instead.

Basically, she states that “innovation is a false promise. It says that it’s going to be easy, we’re going to find the answer in a certain timeline with an investment.”

On the other hand, “Disruption though is honest. It says, ‘If we’re going to create growth, create change, it’s going to be hard, it’s going to be painful and the journey ahead is going to be filled with obstacles and boulders that we have to climb over.’”

I will listen to the advice and use ‘disruption’ rather than ‘innovation’ the next time I talk about digital transformation!


How Various Meanings Are Given to the ‘Fourth Revolution’ Concept

This excellent post by Quartz ‘We’re thinking about the fourth industrial revolution all wrong‘ gives some perspective on terminology. It compares what is now generally understood as the “Fourth Industrial Revolution” (1st: steam engine; 2nd: oil & electricity; 3rd: internet; 4th: digital) with what we call in this blog the Fourth Revolution (1st: language; 2nd: writing; 3rd: broadcasting (printing, etc.); 4th: cheap 2-way communication).


I like this article very much, of course, because it argues that we should not be myopic and that the real change is akin to what we have been describing in this blog since the beginning. What is usually meant by the “Fourth Industrial Revolution” is in fact just a way to name a trend towards the digital, but the real change is very cheap, global, 2-way communication. In the article, it is described as “the period of industrialized intelligence, rising with the mental-energy-saving inventions of the mid-20th century and continuing through today. Much as the industrial revolution dehumanized biological strength with machines, the displacement of biological intelligence with computers represents the dehumanization of intellectual labour. Projecting current techniques a few years forward suggests that autonomous systems will eventually be capable of outcompeting humans in every area where intelligence is the key component of production.”

To avoid falling into the trap of overestimating the importance of present trends, it is always worth taking a deep historical perspective. In any case, the current transformation is really a revolution, and probably much deeper than the concept of a “Fourth Industrial Revolution” would imply: we are now beyond the Industrial Age!


How the World is Really Improving

The world is improving. There is much less poverty today than there was a few years or decades ago, and this progress is much more visible. Yet, amazingly, there is substantial controversy around this positive message. For example, in this article ‘Bill Gates tweeted out a chart and sparked a huge debate about global poverty‘, this controversy is laid out at length.

The controversial chart from Bill Gates

The interesting aspect of the controversy is that most of the counterarguments are based on moving the goalposts: while there was a global standard for defining poverty, some argue that it is no longer sufficient and should now be raised substantially. Of course, we don’t define poverty in the same manner in developed countries and in less developed countries. Of course we need to improve further. Yet why move the goalposts when the situation is improving?

In addition, many studies show that in most respects the story of the chart is true, and that all segments of the poor are seeing their situation improve. Not so long ago I read Jack London’s enlightening book ‘The People of the Abyss‘, about his experience in the poorer districts of London at the end of the 19th century. To say the least, the situation has improved greatly!

It is good to be demanding on the subject of poverty, but let’s not underestimate the substantial progress that has been made. It deserves some celebration, even if it is never enough.
