The Internet of Things is spreading, multiplying the number of smart devices and intruding ever deeper into our privacy. Those who succeed in that market will be those who master the technologies that prevent fraud and excessive privacy intrusion.
The Internet of Things faces huge hacking risks, for two reasons:
IoT devices are relatively easy to hack because they do not usually include a software upgrade mechanism, and because they are built on standard chips that expose many more functionalities than the device actually needs.
The consequences of hacking can also be much more visible, because these devices control the physical space.
In the early days of the internet, many services collapsed because they could not manage spam and fraud. For example, many PayPal competitors died of this scourge, while PayPal survived by implementing strong anti-fraud features from the start.
The same will happen in the IoT: the survivors will be those that develop and implement technology that prevents, as much as possible, the hacking and subversion of their devices. This should be a key research angle for anyone who wants to succeed in this field.
After search-centric companies, and then mobile-centric companies, here come AI-centric companies! Following a trend also visible at IBM, the new strategic impetus at Google is the inclusion of Artificial Intelligence in all its services, with dramatic quality improvements.
This interesting NYTimes article ‘The Great AI Awakening‘ is worth reading. It highlights in particular the work of a division at Google called “Google Brain”, which focuses on using neural networks for deep machine learning and on improving the quality of outcomes. According to the article, for the ‘Translate’ application in particular, “the AI system has demonstrated overnight improvements roughly equal to the total gains the old one had accrued over its entire lifetime” (i.e. since 2006).
The article also gives an interesting account of the historical moves that have made machine learning based on neural networks mainstream in the past few years.
Let’s brace for similar improvements in a whole range of services that we increasingly use in our daily lives!
The reality is that 95%+ of our daily interactions with people remain at too superficial a level to figure out what it is they know that we don’t. The issue is then how to set up those conversations in a way that enriches our experience and theirs.
It all comes down to connecting in the right manner, demonstrating interest in the person, their interests and their aspirations. It also comes down to a benevolent attitude that does not seek immediate advantage or profit from the relationship.
Of course that takes time, so we can’t do that for everyone we meet, but we can certainly do better.
Benevolence is important. I had first written the opening sentence of this post as “how to benefit from this knowledge”. But the point is not to benefit, it is to share!
Let’s try to learn more about the world by connecting better with more people, learning exciting new stuff we did not even know existed and sharing our knowledge too!
“Everyone you Will Ever Meet Knows Something that You Don’t” is a quote by Bill Nye, an American science educator. It is a very powerful statement that I find, by experience, to be quite true. However, we can only find out what that something is if we take the time to establish the right connection.
The reason for this situation is of course the variety of individual experiences and interests.
We often tend to dismiss the knowledge that is available around us, while daily experience shows how fruitful it can be. For example, in the workplace, leveraging the interests and knowledge of the people who make up the team, or the extended team, is a very effective way to increase effectiveness. It is too often forgotten, in particular in the race for efficiency.
Let’s never forget that anyone around us, however menial their occupation may be, has something to teach us.
The new trend seems to be Artificial-Intelligence-powered chatbots. In an interesting experiment, a university professor replaced one of his teaching assistants with such a chatbot. The students did not detect it and actually wanted to nominate it as the best assistant in the class! (see the Innovations article ‘What happened when a professor built a chatbot to be his teaching assistant‘)
Actually, it would be interesting to have a test or some kind of identification that would enable us to determine whether we are interacting with a chatbot or a human. But we need to be aware that it will increasingly be the rule that our interface is with robots. In five years’ time they might even handle the majority of the customer service we face. That will sometimes produce astonishing results, for sure.
Since I publish a lot, I am aware that many times when people comment on my posts, they seem not to have read them. They react to the title, or to some idea more or less closely linked to the post’s topic.
It is difficult in the Collaborative Age to read everything that is thrown at us. But we could at least check first that what we are sharing makes sense. Perhaps another basic behavioral rule that will slowly emerge and be taught to future generations?
Big Data is trendy, and the grail of Big Data is to be able to predict behaviors and ultimately influence them. But the world is complex, and whatever power we put behind Big Data, there will be a hard limit to what can be inferred.
The best-known complex system is the weather. In spite of the tremendous increase in computing power over the last decades, our prediction capacity remains limited to a week or so. That is because, in a complex system, prediction capability is inherently limited by the system itself and by our knowledge of the initial conditions, not by its equations or by the computing power we put behind them.
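To illustrate (with a toy example of my own, not from the original post), the logistic map is a classic minimal chaotic system: two starting points that differ by one part in a billion end up on completely different trajectories after a few dozen iterations, no matter how much computing power is thrown at the problem. The function name and numbers below are purely illustrative.

```python
# A minimal sketch of sensitivity to initial conditions using the logistic map.
def logistic_map(x0, r=4.0, steps=30):
    """Iterate x -> r * x * (1 - x) and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two initial conditions that differ by one part in a billion.
a = logistic_map(0.200000000)
b = logistic_map(0.200000001)

for step in (0, 10, 20, 30):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")
# After a few dozen steps the two trajectories bear no resemblance to each
# other: the limit comes from our knowledge of the starting point, not from
# the equations or the computer.
```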
So the grail of Big Data is in fact elusive: it will never be possible to predict behaviors beyond a certain limit, which still has to be determined in practice.
Big Data will never allow the long-term prediction we hope for. It will be a disappointment for many. It is also another sign of our freedom.
When you write software, to avoid bugs you assign each variable a default value, which the program is then supposed to update.
What happens when the default value never gets updated? Something like the nightmare of living at the default location of a mapping application, like what happened to a quiet farm in Kansas. The story told in this Fusion article ‘How an internet mapping glitch turned a random Kansas farm into a digital hell‘ is really thought-provoking. Just because the farm happens to sit near the center of the country, it was used as the default location for IP addresses that cannot be placed more precisely. The article contains many other similar stories of misplaced geographical locations of IP addresses.
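As a minimal sketch of the mechanism (hypothetical names and data, not the actual code of any geolocation database), here is how an unflagged default quietly becomes “the answer” for every lookup that fails:

```python
# A never-updated default silently leaking into output.
DEFAULT_LOCATION = (39.8, -98.6)  # rough geographic center of the USA

def locate_ip(ip: str, geo_db: dict) -> tuple:
    """Return the coordinates for an IP, falling back to the default."""
    # If the lookup fails, the default is returned and nothing marks it as a
    # guess: every unknown IP ends up "living" at the same physical spot.
    return geo_db.get(ip, DEFAULT_LOCATION)

geo_db = {"203.0.113.7": (48.86, 2.35)}      # one known address (example range)
print(locate_ip("203.0.113.7", geo_db))      # (48.86, 2.35) -> a real lookup
print(locate_ip("198.51.100.23", geo_db))    # (39.8, -98.6) -> just the default
```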
The same thing happens all the time on our favorite online maps when they show the center of a long avenue in response to an address search: that center can be quite remote from the actual location we are looking for!
I am not even speaking of people driven to entirely wrong locations by their GPS because they did not check that they had selected the right town, or that there was actually a road!
Even in this century of overwhelming information, some checks are required before believing what the machine says. Stay vigilant!
There are two schools of thought regarding how truthful the information from the man on site can be. One school follows Winston Churchill: “Never trust the man on the spot“. Another school believes that local knowledge sometimes offers better insight than what is available at headquarters.
Which is the right approach? It all depends on what information we want to get.
Information about actual progress and the actual situation on the ground is best retrieved from the site. Far-away management does not work and leads to unrealistic assessments of the situation. I observe this effect all too often in large projects.
On the other hand, do not expect the people on site to have a very worthwhile assessment of the whole strategic or even tactical picture. They can only have a limited view of the whole because of their position, and the breadth of the subjects they can apprehend depends on their scope: a local representative in a particular country will often have a much better assessment of the political situation of that country, and of what can or cannot be done there, than global headquarters, while a local representative on a single site can only apprehend very local issues. In general I have observed that the local representative can usually be trusted on a scope slightly larger than their assignment.
In general, I tend to trust the people on site more, except when the topic is clearly beyond their observation range.
The Churchill quote is from H. R. McMaster’s Dereliction of Duty (a recommended read about how US politicians and top military leaders got embroiled in the Vietnam War).
The testimonies about content moderation are quite breathtaking, and the decisions about whether to keep videos that have shocking content but are important from a political perspective (like the killing of people during demonstrations) are an example of the tough calls to make.
And because “The stakes of moderation can be immense. As of last summer, social media platforms — predominantly Facebook — accounted for 43 percent of all traffic to major news sites. Nearly two-thirds of Facebook and Twitter users access their news through their feeds“, this determines what people will ultimately see of the world.
Of course, before this there was journalism, with a limited number of sources and effective censorship by governments. What has changed is that moderation is now handled privately and is not subject to democratic control. I would anticipate that at some stage guidelines might be defined by governments (e.g. related to anti-terror campaigns), but for the moment it is an issue to keep in mind.
Mobile is eating the world, and the proof is in a quite famous presentation by Benedict Evans from the venture capital firm Andreessen Horowitz, which was updated in 2016.
I share here some highlights which have particularly struck me.
First, mobile will represent a roughly 5-10x increase in the number of users and devices compared to previous ecosystems, quite comparable to the earlier move to personal computers, as one of the charts in the presentation shows. And this means everyone has a supercomputer in their pocket. The presentation goes on to argue that this will also lead to a significant increase in productivity. That might be true in theory, but I think it is debatable, as mobile devices are also a great source of lost time! (see our post on How Mobile Phones Distract Us – A Real Life Example).
Mobile is an ecosystem, and most people use apps to access the internet’s data. In practice, people don’t use many apps on their mobile devices, so there are far fewer valuable online properties, but each of them is considerably more valuable. One third of users access the internet through Facebook alone!
Finally, we are only at the beginning of the disruption brought by mobile devices to the business models that pervaded the world before, so hold on!
The Fourth Revolution and the availability of data and data processing allow us to go one step beyond looking at averages. We can now observe data distributions and draw finer conclusions than those based simply on averages.
This seems only a small step, but right now our institutions and large companies still have not figured it out. It is the same in projects: we only consider average performance. Averages only make sense when aggregating a large number of instances, and we lose so much information in doing so!
In particular, we miss critical information on what is working better and on where we can look to learn what impedes better performance. We miss information on variability, which is so important. Read Seth Godin’s post ‘On average, averages are stupid’ for a great illustration of this effect.
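As a small illustration (invented numbers, not from the post or from Seth Godin), two datasets can share exactly the same average while telling completely different stories:

```python
# Why an average hides the information a distribution reveals.
import statistics

# Two hypothetical teams' task durations, in days.
team_a = [10, 10, 10, 10, 10, 10]   # very consistent delivery
team_b = [2, 3, 4, 5, 16, 30]       # same average, huge variability

for name, durations in (("A", team_a), ("B", team_b)):
    print(f"Team {name}: mean={statistics.mean(durations):.1f} days, "
          f"stdev={statistics.pstdev(durations):.1f}, "
          f"min={min(durations)}, max={max(durations)}")
# Both teams average 10 days, yet they behave completely differently:
# only the distribution shows where to look for impediments and lessons.
```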
Let’s get beyond averages and take advantage of the wealth of available data to make better-informed decisions!