“Because biographies of famous scientists tend to edit out their mistakes, we underestimate the degree of risk they were willing to take. And because anything a famous scientist did that wasn’t a mistake has probably now become the conventional wisdom, those choices don’t seem risky either,” writes Paul Graham in an excellent short post, ‘The Risk of Discovery’.
“Biographies of Newton, for example, understandably focus more on physics than alchemy or theology. The impression we get is that his unerring judgment led him straight to truths no one else had noticed. How to explain all the time he spent on alchemy and theology? Well, smart people are often kind of crazy.” (and it seems Newton’s dog helped burn his alchemy writings as well).
There are at least two interesting learning points from this reflection:
People who truly seek new truths at the border of knowledge will seem a bit crazy and will investigate potential avenues, some of which might not be fruitful in the end. And they will question mainstream knowledge, which can be dangerous for them.
History highlights only what becomes the new mainstream knowledge, forgetting the rest and deleting it from collective memory. That is reductive, because we don’t know what will become mainstream in the future.
So it is quite normal to take risks if we strive to advance science and find new truths. Taking risks is part of the endeavor. Let’s not let that stop us!
The gist of the argument and of the findings is that “creativity calls on persistence and problem-solving skills, not positivity”. Hence, creativity tends to be found in tougher environments, where problem-solving is paramount to survival.
A similar argument is made about expatriation: exposure to other cultures promotes creativity because problem-solving abilities are significantly challenged when moving to another country, and this, combined with exposure to other ways of thinking, creates fertile soil for creativity.
On the other hand, some protection needs to be afforded to allow for the time and reflection that creativity involves. Extremely tough environments will not afford that. There must be an optimal spot somewhere between perfect bliss and total disruption.
Conclusion: to achieve a creative environment, provide a protective setting but don’t pamper people too much!
I stumbled upon this great story of a Japanese farmer using Artificial Intelligence (AI) – or at least some form of learning algorithm – to develop a cucumber sorting machine from cheap, widely available technology – see for example the Quartz post ‘The ultimate promise of artificial intelligence lies in sorting cucumbers’. For the technical geeks, the full description of the contraption is here on a Google blog.
The inspiring part of the story is of course how a Japanese farmer could leverage such technology cheaply, using available cloud services and inexpensive Raspberry Pi-type devices. This shows the amazing possibilities offered today, at low cost, to those able to grasp them. Reading the more technical blog, we understand that it still took some effort to get the machine learning in place; but the amazing thing is how tinkering with readily available components can transform anybody’s life.
Technology for the masses is coming. Let’s not underestimate its potential.
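To make the sorting idea concrete, here is a minimal sketch of an automated grading rule. This is not the farmer’s actual system (he classified photos with a trained neural network); the features, thresholds, and grades below are purely illustrative assumptions.

```python
# Toy sketch of automated cucumber grading. The real system classified
# photos with machine learning; here we grade from two hypothetical
# measured features instead, with made-up thresholds.

def grade_cucumber(length_cm, straightness):
    """Return a quality grade from two hypothetical features.

    straightness is in [0, 1], where 1.0 is perfectly straight.
    Thresholds are illustrative, not the farmer's actual criteria.
    """
    if length_cm >= 20 and straightness >= 0.9:
        return "A"  # long and straight: top grade
    if length_cm >= 15 and straightness >= 0.7:
        return "B"  # acceptable
    return "C"      # everything else

batch = [(22, 0.95), (16, 0.8), (10, 0.5)]
print([grade_cucumber(l, s) for l, s in batch])  # → ['A', 'B', 'C']
```

The point of the real project was precisely that such hand-written rules don’t scale to judging shape from photos, which is why a learning algorithm was worth the effort.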
“Human behavior doesn’t always conform to what seems sensible to us, and what seems sensible to us isn’t necessarily valuable in evaluating how a person thinks or acts.”
This makes any kind of judgment of people’s behavior difficult. As explained in the book, in certain situations suspension of judgment is required. That is the case, for example, during coaching, or during interviews to determine trustworthiness.
Big Data is trendy, and the grail of Big Data is the ability to predict behaviors and ultimately influence them. But the world is complex, and whatever power we put behind Big Data, there will be a hard limit to what can be inferred.
The best-known complex system is the weather. In spite of the tremendous increase in computing power over the last decades, our prediction capacity remains limited to a week or so. That is because, in a complex system, prediction capability is inherently limited by our knowledge of the initial conditions, not by the system’s equations or by the computing power we put behind them.
So the grail of Big Data is in fact elusive – it will never be possible to predict behaviors beyond a certain limit, which remains to be determined in practice.
Big Data will never allow the long-term prediction we hope for. That will be a disappointment for many. It is also another sign of our freedom.
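The sensitivity to initial conditions can be demonstrated with a toy model. The logistic map with parameter r = 4 is a classic chaotic system: two trajectories starting from almost identical values diverge exponentially, so any measurement error eventually destroys the forecast, no matter how much computing power we apply.

```python
# Why long-term prediction fails in chaotic systems: two almost
# identical initial conditions diverge exponentially. The logistic
# map x -> r*x*(1-x) with r = 4 is a classic chaotic toy model.

def logistic_orbit(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.400000, 50)
b = logistic_orbit(0.400001, 50)  # tiny "measurement error" of 1e-6

# After a few dozen steps the gap between the trajectories has grown
# from one millionth to the order of the system's whole range [0, 1]:
print(abs(a[50] - b[50]))
```

More data or a faster computer does not help: halving the initial error only buys one extra doubling time of predictability.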
A few (scary) experiments show that today we are no longer anonymous in a crowd. Our face can easily be recognized thanks to technology, in particular the technology used by social networks that pushes us to tag the faces of friends.
A Russian photography student has carried out an experiment to show how easy it is to identify complete strangers. Twenty-one-year-old Egor Tsvetkov took photos of people in public places and then tracked them down on the Russian social media site VKontakte using a facial recognition app. The experiment, ‘Your Face Is Big Data’, was published online (link in Russian). The results are quite impressive!
We can expect this technology to become widely available, so we’re probably not anonymous any more when we walk around or take the tube. Something to take into account in our daily life… and in our privacy settings on our favorite social networks!
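In principle, photo-to-profile matching works by reducing each face to an embedding vector and finding the profile whose vector is closest. The sketch below illustrates the idea with made-up 3-dimensional toy vectors; real systems use embeddings of hundreds of dimensions produced by a neural network, and the names and values here are purely hypothetical.

```python
import math

# Toy illustration of face matching by nearest embedding.
# Each "face" is a small made-up vector; a real face-recognition
# system would produce these vectors with a neural network.

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

# Hypothetical enrolled profiles and their embeddings.
profiles = {
    "alice": [0.9, 0.1, 0.3],
    "boris": [0.2, 0.8, 0.5],
}

def identify(photo_embedding):
    """Return the profile whose embedding is most similar to the photo."""
    return max(profiles, key=lambda name: cosine(profiles[name], photo_embedding))

print(identify([0.85, 0.15, 0.25]))  # closest to "alice"
```

The unsettling part is the scale: once a network has millions of tagged faces, the same nearest-neighbor lookup works on a stranger photographed in the metro.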
New technology allows us to automatically perform tasks that would otherwise require significant cognitive power. It may even lead to changes in our brain functions as we fail to exercise some of them.
One of the best examples is GPS. Driving with GPS, following instructions without an overview of what we are doing, diminishes our navigation capabilities, at least for lack of training. This is even noted in this Bloomberg article: ‘How GPS Came to Be—and How It May Be Altering Our Brains’.
I have often remarked that people who systematically use GPS become utterly lost geographically if they happen not to have the small device, and can’t even guess which part of town they are in. And as GPS is now ubiquitous in our phones, we always have it close by. But in the end, we lose our sense of orientation and our ability to map out our surroundings and build a consistent picture of the geography.
It might not be a big hurdle (until the day the GPS doesn’t work!), but keeping a good sense of orientation is, I believe, a good capability to have. Maybe someday we’ll have remediation practice – in any case, our technology has started to transform us.
An interesting segment of the comments is that the machine won using strategies no human had used before, which some found beautiful (see this Wired article). Interestingly, after three stunning defeats, the human Lee Sedol was quickly able to beat the machine at its own game. A graphical analysis of what happened is given in this great Quartz article, well worth reading: ‘Google’s AI won the game Go by defying millennia of basic human instinct’.
Is AlphaGo actual Artificial Intelligence? There are even articles denying it, such as ‘Why AlphaGo is not AI’.
My take on this momentous event is that it shows, again, that the machine can help us develop new abilities and look at things differently. It probably still cannot equal humans in learning ability, but it does provoke thought and supports us by finding new ways to consider problems. And that is possibly the main message of this experiment.
There is a lot of buzz nowadays about Artificial Intelligence (AI) starting to be present in our lives: virtual assistants and the like. However, as this excellent Bloomberg article shows – ‘The Humans Hiding Behind the Chatbots’ – this AI is still very much human-powered. We are not really talking to a clever machine alone, but to a system that is still heavily facilitated by humans.
AI will certainly, at some point in the future, become a real feature of our environment. For the moment we mainly observe systems that increase human productivity in responding to requests. The line between these and a real AI system that is merely administered is fuzzy, but we can safely affirm that real, independent AI is not there yet.
In fact, while precise numbers are not known, moderation on social networks is certainly one of the first tasks that could be handed over to AI, yet at the moment it is still very much human-powered (using workers from low-wage countries). And that may remain the most economical option for a while.
In collaborative networks, forums and wikis, actual production only relies on a small percentage of users. This is confirmed in a business environment in a post from the Harvard Business Review ‘Collaborative Overload‘: “In most cases, 20% to 35% of value-added collaborations come from only 3% to 5% of employees“.
The reasons are multiple:
Collaborative systems act as complex systems and hence contributions follow a ‘long tail’ curve: major contributors produce a large part of the value (though the aggregated value of everyone else’s contributions should not be neglected)
Most users generate interactions of low value to the community
Most users are swamped by daily urgencies and do not have the time to make longer-term contributions.
This small percentage has an interesting implication for organizations’ internal collaborative networks: they can only work if there are enough potential users that the core group of 3–5% generating most of the value reaches a critical mass. That is why a minimum of a few hundred to a few thousand potential users is necessary for a successful internal collaborative network.
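The long-tail effect can be sketched numerically. Assume, for illustration only, that contributions follow a Zipf-like curve where the user of rank i contributes in proportion to 1/i (the exponent is my assumption, not a figure from the HBR study):

```python
# Sketch of why a small core produces most of the value, assuming
# a Zipf-like long tail: user of rank i contributes ~ 1/i.
# The 1/i exponent is an illustrative assumption, not measured data.

n_users = 1000
contributions = [1 / rank for rank in range(1, n_users + 1)]
total = sum(contributions)

top_5_percent = sum(contributions[: n_users // 20])  # top 50 users
print(f"Top 5% of users produce {top_5_percent / total:.0%} of the value")
```

With this particular curve the top 5% produce roughly 60% of the total; a steeper or flatter exponent changes the exact share, but the qualitative picture – a small core dominating – is robust, and it shows why small communities lack a viable core.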
The entire HBR paper is quite an interesting read, as it focuses on the emotional drain on the key collaboration contributors and the fact that their contribution is often not recognized enough.
Mark Zuckerberg (yes, Facebook’s) analyzed the history of online collaboration on Facebook and concluded that the amount of information shared on the internet roughly doubles every year.
And this is not going to stop with the substantial increase of mobile devices and their ubiquity in particular in emerging and developing countries.
We generally underestimate the power of the exponential, but this is huge! It means that within a limited number of years the increase will be dramatic. The size of the data must be increasing even faster, as videos tend to replace simple pictures or music.
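A quick calculation makes the point: doubling every year compounds to a thousand-fold increase in a decade and a million-fold in two.

```python
# The power of the exponential: a quantity that doubles every year
# grows by 2**n after n years.

def growth_factor(years, doubling_time_years=1):
    """Cumulative growth after 'years' with the given doubling time."""
    return 2 ** (years / doubling_time_years)

for years in (1, 5, 10, 20):
    print(f"after {years:2d} years: x{growth_factor(years):,.0f}")
# After 10 years the factor is 1024; after 20 years, over a million.
```

This is why laws of this kind, as long as they hold, dwarf any linear intuition we have about growth.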
Zuckerberg’s law matters because it describes what is really happening with the Fourth Revolution better than laws focused on hardware capability. We are in an era of exponential increase in exchange and sharing between individuals. And this matters.