How Writers’ Income Sources Are Changing

In this excellent New York Times article ‘Does It Pay to Be a Writer?’, the problem of writer compensation is analyzed in depth. A significant shift in writers’ compensation patterns has been at work over the last decades, and it has accelerated in the last few years.

There are fewer opportunities to write for a living – “Writing for magazines and newspapers was once a solid source of additional income for professional writers, but the decline in freelance journalism and pay has meant less opportunity for authors to write for pay.”

And those opportunities pay much less: “In the 20th century, a good literary writer could earn a middle-class living just writing,” said Mary Rasenberger, executive director of the Authors Guild, citing William Faulkner, Ernest Hemingway and John Cheever. Now, most writers need to supplement their income with speaking engagements or teaching. Strictly book-related income – which is to say royalties and advances – is also down, almost 30 percent for full-time writers since 2009. The article also cites a US survey showing that median writer compensation has fallen by 40% in just a few years.

Writing has thus become more of a commodity. On the other hand, we need to qualify the claim that writers can’t make a living – historically, most writers have had to supplement their income with other activities, except those writing bestsellers or famous for some other reason. Still, income from pure writing is falling in the gig economy: a century ago it was possible to make a decent living just writing pieces for newspapers, which is no longer the case today.

This reinforces the observation that writing today must be part of a broader package of activity – some will be journalists, professional speakers, consultants, or other professionals for whom writing is a way to communicate with part of the public and spread their message.

How Songs Are Becoming Shorter – and How This Reflects Our Increasingly Frequent Switching

Did you notice that songs are on average getting shorter? That there are fewer words in their titles? That is just part of a series of trends that reflect our modern usage of digital access. More details can be found in this excellent Medium post ‘Music is Getting Shorter’. A more general discussion of this issue is contained in this excellent post by Mitch Joel, ‘Welcome To Toggle Economics’.

The controversy lies in whether this trend is due to the economics of music distribution (streaming) or to our shorter attention spans. The latter explanation has received much attention. Mitch Joel argues, however, that there are indications our attention span can remain substantial when we want it to (witness book reading), and that what really makes the difference is our ability to switch more and more frequently from one center of attention to another.

And it is becoming less and less difficult to switch from one internet browser tab to the next, or from one phone app to another.

Whether it is an intrinsically shorter attention span, or the disturbance of too much choice and too easy a possibility to switch, when it comes to streaming services such as music the result is clear: we crave shorter durations. And the trend is just starting!

How Digital Detox May Not Be Effective

“Digital detox” is a growing trend, a way to unplug from our increasingly hectic, 24/7 way of life and recover our balance and ‘real connections’. This extreme process is increasingly trendy (although it simply means unplugging from our screens and the internet), and it certainly reflects some level of anxiety. Yet the effectiveness of the process is disputed, or at least not scientifically proven, as exposed in this Quartz post ‘Digital detoxes are a solution looking for a problem’.

The point is to examine whether digital detox really improves mental health, as other detoxes do (note, by the way, that the terminology assumes digital is an addiction).

The article mentions quite a number of excellent references on the impact of digital and social networks on mood and other factors such as sleep. It is clear that in some ways social networks impact mood, in particular because people tend to post only the good things that happen to them. Still, the size of the impact on mental health is disputed.

I like the thesis of the article, which takes the view that, as always when a new technology is introduced, its effect on health is disputed and adequate usage rules must be invented (one will remember the famous 19th-century articles claiming that riding trains above a certain speed would result in certain death).

My view is that digital services are part of our way of life and provide significant services that improve our lives (for example, navigating an unknown town or getting the latest information on local transportation). They also make life more hectic. On the other hand, excessive usage is certainly harmful. Cutting off entirely is no longer an option; however, making sure we keep spaces with lower usage, such as on weekends, is certainly a good idea for balance. There is so much still to be learnt in this respect that it will take years to really understand what is harmful and adapt our behavior. Let’s use digital in a measured way in the meantime!

How to Deal with the Conundrum of Smart or Safe Cities

Smart Cities is a big trend that nowadays influences a lot of cities’ development policies. It aims to bring many benefits to citizens and to large city administrations at the same time. In parallel, the concept of the Safe City has emerged – using the same data to improve citizen safety through increased surveillance.

As always, technology comes with advantages and drawbacks. Just as the Internet allowed incredible advances in two-way communication, it also came with easier surveillance capabilities. Smart cities will thus also come with increased surveillance capabilities, in the name of public safety.

Ethics is becoming an increasing concern in our society, as a way to exercise democratic control over modern surveillance capabilities. It has to be stressed that surveillance is not a recent issue – for a long time, autocratic governments have controlled and opened private correspondence and spied on their citizens. As it becomes increasingly easy to implement surveillance programs, setting up adequate ethical and independent control rules becomes even more essential.

Maintaining the balance between privacy and the benefits of increased digitalization and sensor data is an essential challenge we face in the next few years. At the same time, fear of surveillance should not prevent us from benefiting from smart advances. The creation of new institutions to guarantee ethical treatment of the data is a challenge we all need to address.

How We Believe The First Explanation We Hear

I had not realized it so forcefully, and I have been struck by it ever since I became conscious of the effect: we tend to believe whatever first explanation of an event or phenomenon we hear, until some more persuasive explanation is forced upon us – and even then we have a tough time changing our beliefs. There is incredible power in the first convincing explanation we receive.

(Image: something many believed not so long ago – the flat Earth)

This may explain, for example, why it can be difficult to move on from explanations provided by our parents or social entourage when we were young, or from explanations provided by our cultural environment. It takes being exposed to obvious observations that the initial explanation is insufficient or inaccurate to change our mind.

This effect of course has a serious impact on our daily life: there is a premium on the first explanation we hear. Whether it is fake, unscientific or an attempt at manipulation does not matter – if it is credible, we will take it for granted until a better explanation is imposed upon us. This explains in part the power of fake news and social media, and conversely the importance of subscribing to reliable information sources.

Being more conscious of this ‘first explanation effect’ is also useful for being less reluctant to change our view when we are offered more credible alternative explanations. Be more aware!

How the Science Behind Popular Psychological Effects is Often Wrong

Surprise: the science behind our previous blog post ‘How To Play With the Psychological Lunch Effect’ was wrong! An excellent post, ‘Impossibly Hungry Judges’, explains in detail why, with all the necessary links to the underlying papers. Still, many people use this study as a reference (and we did too, as it is entrenched in popular knowledge!).

This is just another example of why we need to take all those popular psychological studies with a pinch of salt. In this case, as shown in the paper ‘Overlooked factors in the analysis of parole decisions’, many other factors explain the ordering of cases within the morning and afternoon sessions, and explain the timing much better: easier cases supported by lawyers are considered first, etc.

We know intuitively that timing and lunch may play a role, but the correlation was just too strong to be true. It is probably much weaker. This is a reminder of how careful we need to be in distinguishing correlation from causation, and whenever we read about a surprising psychological study!

How Increasingly Difficult It Can Be To Prove Causation vs Correlation

Following up on the post ‘How the Van der Waerden Theorem Shows the Limits of Big Data’: since Big Data will produce an increasing number of spurious correlations, the issue of distinguishing causation from correlation will become increasingly important.

This Medium article ‘Understanding Causality and Big Data: Complexities, Challenges, and Tradeoffs’ does a good job of explaining the issues at stake. It also explains in a clear manner when causation is really needed, and when correlation is sufficient.

The most important point, in my view, is that with the increasing complexity of our world (a direct consequence of our increasing interconnection), proving causation will become increasingly difficult. It does not help that we increasingly try to derive causation from smaller effects, which are on the borderline of statistical significance. The causation chain can have some very indirect links that make it difficult to determine what is causing what. I believe the current debates about the effects of certain chemicals used in the natural environment (such as pesticides) demonstrate exactly this issue: in a complex ecosystem, proving a causal link is very difficult even when there is correlation.

Substantial theoretical and practical progress in methods for determining causation is an important need for the world today. I hope enough focus and effort is dedicated to this problem.

How AI is Being Used to Spot Lies and False Declarations

Following up on our review of the changes brought by AI in the field of justice (see for example the post ‘How Predictive Justice Software Starts Being Used’), this interesting Quartz post ‘Police are using artificial intelligence to spot written lies’ addresses how AI can detect fake statements made to insurers or the police.

Certain patterns can certainly be identified to assess the probability that a statement is untrue, but the immediate question is of course to what extent this may be used. Is it only to prioritize those declarations that warrant further investigation, or would it lead to straight rejection?
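
As a purely illustrative sketch – not the actual system described in the Quartz article – here is how such a text classifier might look in Python with scikit-learn, assuming a hypothetical labeled corpus of truthful and deceptive statements:

```python
# Purely illustrative sketch, not the system from the article.
# Assumes a hypothetical labeled corpus (1 = deceptive, 0 = truthful).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

statements = [
    "My car was stolen from the parking lot overnight.",          # truthful
    "I lost a brand-new laptop, three phones and a gold watch.",  # deceptive
]
labels = [0, 1]  # invented labels, for illustration only

# Word/bigram frequencies feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(statements, labels)

# The output is a probability that a new statement is deceptive, which could
# be used to prioritize claims for human review rather than to reject them.
print(model.predict_proba(["Everything of value I owned was in that suitcase."]))
```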

One can also expect a whole new industry of AI statement coaching to emerge in the near future, with coaches and counter-AI programs made available to check the veracity score of an initial statement and modify it to appear more credible… The interesting part here is that we are increasingly moving into a world of conformity, because AI will instantaneously flag anything out of the ordinary.

How AI Trainer Is the New Trendy Gig for Students and Young Professionals

I was not aware until recently that “AI trainer” is the new gig for students seeking some extra money and for young professionals. But as AI-based services and ‘deep-learning’ products multiply, there is a need to help them learn faster from existing data. The job consists of feeding data to the software and manually correcting the outcome to help the algorithm learn faster.
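
A minimal sketch of what that correction loop looks like, with a hypothetical model_predict function standing in for the deployed model (everything here is invented for illustration):

```python
# Minimal human-in-the-loop labeling sketch -- invented for illustration;
# real labeling platforms are far more elaborate.
def model_predict(item):
    # Stand-in for the deployed model's current best guess (hypothetical).
    return "cat"

unlabeled = ["image_001.jpg", "image_002.jpg", "image_003.jpg"]
training_set = []

for item in unlabeled:
    guess = model_predict(item)
    # The human trainer confirms or corrects each prediction.
    answer = input(f"{item}: model says '{guess}' -- Enter to accept, or type a correction: ")
    training_set.append((item, answer.strip() or guess))

# The corrected pairs are fed back as training data for the next round.
print(f"{len(training_set)} newly labeled examples ready for retraining.")
```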

(Image: an AI-teaching sweatshop in Asia)

For some basic image recognition, there are even sweatshops set up in low-cost countries to teach the algorithms. For more complex matters, the work is generally performed on the AI company’s premises by graduate students in the relevant specialty.

Of course, the intent of a deep-learning algorithm is to replicate what it has been taught in a scalable manner, thereby automating the job that was previously performed by junior personnel. But the irony is that it also creates a new job: teaching it, on the basis of an initial set of data, how to respond and what to produce.

Let’s not be astonished if the first job of many new graduates in the years to come is ‘AI teacher’!

How the Van der Waerden Theorem Shows the Limits of Big Data

The Van der Waerden theorem states, basically, that if a string of data is long enough, it will always contain regularly spaced repetitions. This means that when there is enough data, there will always be regularities – and they will not be meaningful; they are a mathematical inevitability.
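
For reference, the standard statement of the theorem is the following:

```latex
% Van der Waerden's theorem (standard statement)
\textbf{Theorem.} For all positive integers $r$ and $k$ there exists a number
$W(r,k)$ such that, whenever the integers $\{1, 2, \dots, W(r,k)\}$ are colored
with $r$ colors, there is a monochromatic arithmetic progression of length $k$:
integers $a,\; a+d,\; a+2d,\; \dots,\; a+(k-1)d$ all of the same color.
```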

This theorem just means that for a big enough heap of data, we will find correlations that in fact do not have any meaning: these are spurious correlations.

Hence we can expect that with big enough data, Big Data analysis will throw up heaps of correlations that have no meaning at all.
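
A toy illustration of the phenomenon in Python (all data here is random noise by construction):

```python
# Toy demo: independent random series still produce impressive-looking
# pairwise correlations once there are enough of them.
import numpy as np

rng = np.random.default_rng(0)
n_series, n_points = 1000, 20
data = rng.standard_normal((n_series, n_points))  # no real relationships

corr = np.corrcoef(data)          # all pairwise correlation coefficients
np.fill_diagonal(corr, 0)         # ignore each series' self-correlation
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"Strongest 'pattern': series {i} vs {j}, r = {corr[i, j]:.2f}")
# With 1000 series there are ~500,000 pairs, so correlations above 0.8
# appear by chance alone.
```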

But we can also expect that some people and organizations will take action based on those correlations, and that it may sometimes be deeply counter-productive.

Those who will succeed in the world of big data are those able to sift the many spurious correlations from the few real insights that can be gained from analysis. This will not be easy, because intuition may not be of great help. A thorough scientific analysis will be required, involving reproducing the findings on various independent data sources – and that will be difficult to do fully.
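
Continuing the toy demo above, that replication discipline is easy to illustrate: re-test the ‘discovered’ pair on a fresh, independent sample and watch the correlation evaporate.

```python
# Continuing the demo above: re-test the "discovered" pair on fresh data.
fresh = rng.standard_normal((n_series, n_points))   # an independent sample
r_replicated = np.corrcoef(fresh[i], fresh[j])[0, 1]
print(f"On independent data: r = {r_replicated:.2f}")  # typically near zero
```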

Let’s thus brace for many spurious correlations to be announced as discoveries only to be disproved some years later!

How Predictive Justice Software Starts Being Used

In this Bloomberg article ‘This AI Startup Generates Legal Papers Without Lawyers, and Suggests a Ruling’, the operation of an Argentinian start-up, Prometea, is described. It uses AI to produce suggested rulings from past decisions, dramatically increasing productivity and reducing the backlog of cases.

The productivity boost is significant: “The Buenos Aires office says its 15 lawyers can now clear what used to be six months’ worth of cases in just six weeks.”

At the moment it only produces drafts that are still reviewed by humans, but the results are apparently very encouraging, and many countries and organizations seem to be interested in the system, including the UN.

Predictive justice is coming, at least for simple cases. Our judicial systems will certainly try to resist, but that’s the trend of history.

How Algorithms Are More Effective Than Human Decisions – Even If Bias Still Needs to Be Managed

As a counterpoint to the ideas behind the “Weapons of Math Destruction” concept – that algorithms can reinforce inequality and prejudice (refer to our post ‘How Algorithms Can Become Weapons of Math Destruction’) – the HBR paper ‘Want Less-Biased Decisions? Use Algorithms’ argues that algorithms in fact lead to less bias.

“Critiques and investigations [about the perverse effects of algorithms] are often insightful and illuminating, and they have done a good job in disabusing us of the notion that algorithms are purely objective. But there is a pattern among these critics, which is that they rarely ask how well the systems they analyze would operate without algorithms. And that is the most relevant question for practitioners and policy makers: How do the bias and performance of algorithms compare with the status quo? Rather than simply asking whether algorithms are flawed, we should be asking how these flaws compare with those of human beings.”

The paper then quotes a number of studies showing that automation dramatically reduces mistakes and some biases in human decision-making. An effort still needs to be made to ensure algorithms are not biased; however, with growing public awareness a lot of activity is happening in that field, including the publication of the source code of some key algorithms. The paper thus takes a rather positive view of the subject. Let’s keep tabs on how it evolves over the next few months!
